4 Implementing Text Classification Using Perceptron and Logistic Regression

In the previous chapters we have discussed the theory behind the perceptron and logistic regression, including mathematical explanations of how and why they are able to learn from examples. In this chapter we will transition from math to code. Specifically, we will discuss how to implement these models in the Python programming language. All the code that we will introduce throughout this book is available online as well: http://clulab.github.io/gentlenlp/. The reader who is not familiar with the Python programming language is encouraged to first read Appendix A, for a brief introduction to the language, and Appendix B, for a discussion on how computers encode and preprocess text. Once done, please return here. To get a better understanding of how these algorithms work under the hood, we will start by implementing them from scratch. However, as the book progresses, we will introduce some of the popular tools and libraries that make Python the language of choice for machine learning, e.g., PyTorch,1 and Hugging Face's transformers.2 The code for all the examples in the book is provided in the form of Jupyter notebooks.3 Important fragments of these notebooks will be presented in the implementation chapters so that the reader has the whole picture just by reading the book. However, we strongly encourage you to download the notebooks and execute them yourself. We also encourage you to modify them to conduct your own experiments!

1 https://pytorch.org
2 https://huggingface.co
3 https://jupyter.org/

4.1 Binary Classification

We begin this chapter with binary classification. That is, we aim to train classifiers that assign one of two labels to a given text. As the example for this task, we will train a review classifier using the Large Movie Review Dataset (Maas et al., 2011).4 We tackle this task by first implementing a binary perceptron classifier, followed by a binary logistic regression one. We will implement the latter both from scratch as well as using PyTorch, so the reader has a clearer understanding of how PyTorch works "under the hood."

4 https://ai.stanford.edu/~amaas/data/sentiment/

4.1.1 Large Movie Review Dataset

This dataset contains movie reviews and their associated scores (between 1 and 10) as provided by IMDb.5 Maas et al. (2011) converted these scores to binary labels by assigning each review a positive or negative label if the review score was above 6 or below 5, respectively. Reviews with scores 5 and 6 were considered too neutral and thus excluded. We follow the same protocol in this chapter. The dataset is divided into two even partitions called train and test, each containing 25,000 reviews. The dataset also provides additional unlabeled reviews, but we will not use those here. Each partition contains two directories called pos and neg where the positive and negative examples are stored. Each review is stored in an independent text file, whose name is composed of an id unique to the partition and the score associated with the review, separated by an underscore. An example of a positive and a negative review is shown in Table 4.1.

5 https://www.imdb.com/

Table 4.1 Two examples of movie reviews from IMDb. The first is a positive review of the movie Puss in Boots (1988). The second is a negative review of the movie Valentine (2001). These reviews can be found at https://www.imdb.com/review/rw0606396/ and https://www.imdb.com/review/rw0721861/, respectively.

Filename: train/pos/24_8.txt | Score: 8/10 | Binary Label: Positive
Review Text: Although this was obviously a low-budget production, the performances and the songs in this movie are worth seeing. One of Walken's few musical roles to date. (he is a marvelous dancer and singer and he demonstrates his acrobatic skills as well - watch for the cartwheel!) Also starring Jason Connery. A great children's story and very likable characters.

Filename: train/neg/141_3.txt | Score: 3/10 | Binary Label: Negative
Review Text: This stalk and slash turkey manages to bring nothing new to an increasingly stale genre. A masked killer stalks young, pert girls and slaughters them in a variety of gruesome ways, none of which are particularly inventive. It's not scary, it's not clever, and it's not funny. So what was the point of it?

4.1.2 Bag-of-words Model

As discussed in Section 2.2, we will encode the text to classify as a bag of words. That is, we encode each review as a list of numbers, with each position in the list corresponding to a word in our vocabulary, and the value stored in that position corresponding to the number of times the word appears in the review. For example, say we want to encode the following two reviews:

Review 1: "I liked the movie. My friend liked it too."
Review 2: "I hated it. Would not recommend."

First, we need to create a vocabulary that maps each word to an id that uniquely identifies it.
Each of these numbers will be used as the index in a list, so they must start at zero and grow by one for each word in the vocabulary. For example, one possible vocabulary that encodes the previous reviews is:

{'would': 0, 'hated': 1, 'my': 2, 'liked': 3, 'not': 4, 'it': 5, 'movie': 6, 'recommend': 7, 'the': 8, 'I': 9, 'too': 10, 'friend': 11}

Using this mapping, we can encode the two reviews as follows:

Review 1: [0,0,1,2,0,1,1,0,1,1,1,1]
Review 2: [1,1,0,0,1,1,0,1,0,1,0,0]

Note that the word liked (fourth position) in the first review has a value of two. This is because this word appears twice in that review. This is a small example with a vocabulary of only 12 terms. Of course, the same process needs to be implemented for our whole training dataset. For this purpose we will use scikit-learn's CountVectorizer class.6 Using the CountVectorizer class simplifies things, allowing us to get started quickly with a bag-of-words approach. However, note that it makes several simplifying assumptions (e.g., text is lowercased, and punctuation and single character tokens are removed). Some of these may not be adequate for other tasks.

6 https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html

First, we need to obtain the filenames for the reviews in the training set:
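The fragment below, taken from the chap4_perceptron notebook that accompanies this section, collects the training filenames with glob:

from glob import glob

pos_files = glob('data/aclImdb/train/pos/*.txt')
neg_files = glob('data/aclImdb/train/neg/*.txt')
print('number of positive reviews:', len(pos_files))
print('number of negative reviews:', len(neg_files))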
Once we have acquired the filenames for the training reviews, we need to read them using the CountVectorizer. In order for the CountVectorizer to open and read the files for us, we make use of the input='filename' constructor parameter (otherwise it would expect the string content directly). The CountVectorizer provides three methods that will be useful for us: a method called fit() that is used to acquire the vocabulary, a method transform() that converts the text into the bag-of-words representation, and a method fit_transform() that conveniently acquires the vocabulary and transforms the data in a single step.
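The corresponding fragment from the accompanying notebook acquires the vocabulary and builds the matrix in one step:

from sklearn.feature_extraction.text import CountVectorizer

# initialize CountVectorizer, indicating that we will give it a list of filenames to read
cv = CountVectorizer(input='filename')
# learn the vocabulary and return the sparse document-term matrix
doc_term_matrix = cv.fit_transform(pos_files + neg_files)
doc_term_matrix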
The resulting object is referred to as a document-term matrix, where each row corresponds to a document, and each column corresponds to a term in the vocabulary. As the output above indicates, the resulting matrix has 25,000 rows (one for each review), and 74,849 columns (one for each term). You may also note that this matrix is sparse, with 3,445,861 stored elements. A regular matrix of shape 25,000×74,849 would have 1,871,225,000 elements. However, most of the elements in a document-term matrix are zeros because only a few words from the vocabulary appear in each document. A sparse matrix takes advantage of this fact by storing only the non-zero cells in order to reduce the memory required to store it. Thus, sparse matrices are convenient, especially when dealing with lots of data. Nevertheless, to simplify the downstream code in this example, we will convert it into a dense matrix, i.e., a regular two-dimensional NumPy array. Finally, we also need the labels of the reviews. We assign a label of one to positive reviews, and a label of zero to negative ones. Note that the first half of the reviews are positive and the second half are negative. The label at the ith position of the y_train array corresponds to the review encoded in the ith row of the X_train matrix.

4.1.3 Perceptron

Now that we have defined our task and the data processing pipeline, we will implement a perceptron classifier that classifies the movie reviews as positive or negative. The entire code discussed in this section is available in the chap4_perceptron notebook. Recall from Section 2.4 that the perceptron is composed of a weight vector w and a bias term b. These will be represented as a NumPy array w of the same length as our document vectors, and a variable b for the bias term. Both will be initialized with zeros. The parameters w and b are learned through the following algorithm, which implements Algorithm 2 from Chapter 2:
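The training loop below is taken (lightly condensed) from the chap4_perceptron notebook; it assumes the document-term matrix and filename lists built above:

import numpy as np
from tqdm.notebook import tqdm

# dense document-term matrix and labels (positive reviews first, then negative)
X_train = doc_term_matrix.toarray()
y_train = np.concatenate([np.ones(len(pos_files)), np.zeros(len(neg_files))])

# initialize the model: weights and bias are populated with zeros
n_examples, n_features = X_train.shape
w = np.zeros(n_features)
b = 0

n_epochs = 10
indices = np.arange(n_examples)
for epoch in range(n_epochs):
    n_errors = 0
    # randomize the order in which training examples are seen in this epoch
    np.random.shuffle(indices)
    for i in tqdm(indices, desc=f'epoch {epoch+1}'):
        x = X_train[i]
        y_true = y_train[i]
        # the perceptron decision based on the current model
        score = x @ w + b
        y_pred = 1 if score > 0 else 0
        # update the model only if the prediction was incorrect
        if y_true == y_pred:
            continue
        elif y_true == 1 and y_pred == 0:
            w = w + x
            b = b + 1
            n_errors += 1
        elif y_true == 0 and y_pred == 1:
            w = w - x
            b = b - 1
            n_errors += 1
    # converged: all training examples were classified correctly
    if n_errors == 0:
        break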
There are a couple of details to point out. Line 3 of Algorithm 2 indicates that we need to repeat the training loop until convergence. Theoretically, convergence is defined as predicting all training examples correctly. This is an ambitious requirement, which is not always possible in practice, so in this code we also include a stop condition if we reach a maximum number of epochs. Another crucial difference between our implementation here and the theoretical Algorithm 2 is that we randomize the order in which the training examples are seen at the beginning of each epoch. This simple (but highly recommended!) change is necessary to avoid the introduction of spurious biases due to the arbitrary order of the examples in the original training partition.7 We accomplish this by storing the indices corresponding to the X_train matrix rows in a NumPy array, and shuffling these indices at the beginning of each epoch. We shuffle the indices instead of the examples so that we can preserve the mapping between examples and labels.

The training loop aligns closely with Algorithm 2. We start by iterating over each example in our training data, storing the current example in the variable x,8 and its corresponding label in the variable y_true. Next, we compute the perceptron decision function shown in Algorithm 1. Note that NumPy (as well as PyTorch) uses Python's @ operator to indicate vector or matrix multiplication, depending on its operand types. Here we use it to calculate the dot product of the example x and the weights w. To this we add the bias b to obtain the predicted score, whose sign is used to assign a positive or negative predicted label. If the prediction is correct, then no update is needed, and we can move on to the next training example. However, if the prediction is incorrect, then we need to adjust w and b, as described in Algorithm 2.

Sidebar 4.1 The tqdm function

This is our first exposure to the tqdm function. tqdm is a progress bar library that promises to "make your loops show a smart progress meter."9 The name tqdm comes from the Arabic word taqaddum, which can mean "progress." Using tqdm is as simple as wrapping it around the collection to be traversed.

After training, we evaluate the model's performance on the held-out test partition. The test data is loaded similarly to the training partition, but with one notable difference: we use CountVectorizer's transform() method instead of the fit_transform() method so that the vocabulary is not adjusted for the test data. We won't show the loading of the test partition here since it is so similar to the code already shown, but it is available in the Jupyter notebook that accompanies this section.

7 As an extreme example, consider a dataset where all the positive examples appear first in the training partition. This would cause the perceptron to artificially inflate the weights of the features that occur in these examples, a situation from which the learning algorithm may struggle to recover.
8 We use typewriter font when we discuss variables in the code, to distinguish code from the theoretical discussion in the other chapters.
9 https://github.com/tqdm/tqdm

Using the model to assign labels to all the test data is easily done in one step – we simply multiply the entire test data document-term matrix by the previously learned weights and add the bias. Scores greater than zero indicate a positive review, and those less than zero are negative. At this point we can evaluate the classifier's performance, which we will do using precision, recall, and F1 scores for binary classification (described in Section 2.3). For this purpose, we implement a function called binary_classification_report that computes these metrics and returns them as a dictionary. We call this function to compare the predicted labels to the true labels, and obtain the evaluation scores. Our F1 score here is 86.8%, which is much higher than the baseline that assigns labels randomly, which yields an F1 score of about 50%. This is a good result, especially considering the simplicity of the perceptron! In the next sections and chapters, we will discuss a battery of strategies to considerably improve this performance.

4.1.4 Binary Logistic Regression from Scratch

Using the same task, dataset, and evaluation, we will now implement a logistic regression classifier, as described in Algorithm 5 from Chapter 3. To give the reader hands-on experience with the implementation of the gradient calculations for logistic regression, we start by implementing it from scratch using NumPy. All the code shown in this section is available in the chap4_logistic_regression_numpy notebook.

In the perceptron implementation, we represented the weights and the bias as two different variables. Here, however, we will use a different approach that will allow us to unify them into a single vector variable. Specifically, we take advantage of the similarity between the derivative of the cost function with respect to the weights (Equation 3.14) and the derivative of the cost with respect to the bias (Equation 3.15):

dCi(w, b)/dwj = (σi − yi)xij    (3.14 revisited)
dCi(w, b)/db = σi − yi    (3.15 revisited)

Note that the two derivative formulas are identical except that the former has a multiplication by xij, while the latter does not. However, since σi − yi = (σi − yi) · 1, we can multiply the derivative of the cost with respect to the bias by one without changing the semantics. This gives us an opportunity to combine the computations, doing them both in a single pass. The idea is that we can treat the bias as a weight corresponding to a feature that always has a value of one. To do this, we create a NumPy array of ones of the same length as the number of examples in our training set (i.e., the number of rows in the data matrix). Then we add this array as a new column to the data matrix, using NumPy's column_stack function. Next, we need to initialize our model. This time we will use a single NumPy array w of the same length as the number of columns in the data matrix. The weight vector w is initialized randomly with values between 0 and 1.

Before implementing the learning algorithm, we need an implementation of the logistic function. Recall that the logistic function is σ(x) = 1/(1 + e^(−x)) (3.1 revisited). This function can be easily implemented in NumPy as follows:
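A minimal version consistent with the text (the exact code is in the chap4_logistic_regression_numpy notebook; the function name here is illustrative):

import numpy as np

def logistic(x):
    # naive implementation: np.exp(-x) can overflow for large negative x
    return 1 / (1 + np.exp(-x))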
However, this naive implementation may produce an overflow warning during training, similar to: RuntimeWarning: overflow encountered in exp. The term overflow indicates that the result of evaluating exp(-x) is a number so large that it can't be represented by a float (specifically, we're using float64 numbers). We will avoid this issue by not calling exp with values that will overflow. NumPy provides the function finfo that can be consulted to find the limits of floating point numbers. The log of the largest floating point number is the largest number for which exp() will not overflow, so we will use it as a threshold to filter out problematic values.

We now have everything we need to implement Algorithm 4. The steps to follow for each example are: (1) use the model to make a prediction, (2) calculate the gradient of the loss function with respect to the model parameters, and (3) update the model parameters using the gradient. The size of the update is controlled by the learning rate.
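A from-scratch sketch of these steps, consistent with the description above (variable names and the learning rate are illustrative; the full implementation is in the chap4_logistic_regression_numpy notebook):

import numpy as np

# largest argument for which np.exp() does not overflow float64
max_exp = np.log(np.finfo(np.float64).max)

def logistic(x):
    # clip the argument so that np.exp(-x) never overflows
    x = np.clip(x, -max_exp, max_exp)
    return 1 / (1 + np.exp(-x))

# treat the bias as one more weight: append a column of ones to the data matrix
X_train_b = np.column_stack((X_train, np.ones(X_train.shape[0])))
# initialize the weights randomly with values between 0 and 1
w = np.random.random(X_train_b.shape[1])

learning_rate = 1e-1  # illustrative value
n_epochs = 10
indices = np.arange(X_train_b.shape[0])
for epoch in range(n_epochs):
    np.random.shuffle(indices)
    for i in indices:
        x = X_train_b[i]
        y = y_train[i]
        # (1) prediction for the current example
        sigma = logistic(x @ w)
        # (2) gradient of the loss (Equations 3.14 and 3.15)
        gradient = (sigma - y) * x
        # (3) update the parameters, scaled by the learning rate
        w = w - learning_rate * gradient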
Once the model has been trained, we evaluate it on the test dataset using our binary_classification_report function from the previous section. Loading and preprocessing the test dataset follows the same steps as with the previous classifier, so we omit the code for brevity. The resulting performance is comparable with that of the perceptron: the difference in F1 scores between the two classifiers (84.9% here vs. 86.8% for the perceptron) is not significant. This parity is probably attributable to the fact that the signal distinguishing the two classes is easy to learn, so the simpler perceptron training algorithm is sufficient in this case. Nevertheless, this task is useful in showing how to implement the logistic regression model from scratch, i.e., by implementing the gradient calculation and parameter updates manually. Next, we will implement the same model again using PyTorch, highlighting how this machine learning library simplifies the process.

4.1.5 Binary Logistic Regression Utilizing PyTorch

While it is fairly straightforward to compute the derivatives for logistic regression and implement them directly in NumPy, this will not scale well to arbitrary neural architectures. Fortunately, there are libraries that automate the computation of the derivatives of the cost function (assuming it is differentiable!) for any neural network, and use the resulting gradients to perform gradient descent or other more sophisticated optimization procedures. To this end, we will use the PyTorch deep learning library.10 The corresponding notebook for this section is chap4_logistic_regression_pytorch_bce.

10 https://pytorch.org/

Our model for logistic regression corresponds to PyTorch's Linear layer. When we instantiate this layer, we specify the size of the inputs (the size of our vocabulary) and the size of the output, i.e., the number of output neurons (which is one because we're doing binary classification). The loss function we use is the binary cross-entropy loss (see Chapter 3), which is implemented as BCEWithLogitsLoss in PyTorch. In PyTorch, the gradients obtained from the loss function are applied to the model by an optimizer object, which implements and applies an optimization algorithm. Here we will use the vanilla stochastic gradient descent optimizer; we set its learning rate to 0.1. This is equivalent to the discussion in Section 3.2. Similarly to the manual implementation, the steps required to train the model for a given training example are: (1) ensure the gradients are set to zeros, (2) apply the model to obtain a prediction, (3) calculate the loss, (4) compute the gradient of the loss by back-propagation, and (5) update the model parameters.
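A condensed sketch of these five steps (the full version is in the chap4_logistic_regression_pytorch_bce notebook; tensor construction and the number of epochs here are illustrative):

import numpy as np
import torch
from torch import nn, optim

n_examples, n_features = X_train.shape
model = nn.Linear(n_features, 1)       # one output neuron for binary classification
loss_func = nn.BCEWithLogitsLoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)

n_epochs = 10
for epoch in range(n_epochs):
    for i in np.random.permutation(n_examples):
        # features and label for one example, as float tensors
        x_i = torch.tensor(X_train[i], dtype=torch.float32)
        y_i = torch.tensor([y_train[i]], dtype=torch.float32)
        optimizer.zero_grad()          # (1) clear the gradients
        y_pred = model(x_i)            # (2) prediction (a raw score, or logit)
        loss = loss_func(y_pred, y_i)  # (3) loss
        loss.backward()                # (4) back-propagation
        optimizer.step()               # (5) parameter update

# predicted labels for the test set: positive scores correspond to the positive class
# (X_test is the dense test document-term matrix, built as in Section 4.1.3)
with torch.no_grad():
    y_test_pred = model(torch.tensor(X_test, dtype=torch.float32)).squeeze(1) > 0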
Recall that in our previous implementation everything was hardcoded: applying the model, computing the gradients, and optimizing the model parameters. Here, however, the implementation of the logistic regression is expressed at a higher level of abstraction. This means that we are describing the logical steps without specifying a particular implementation. Instead, implementation details are the responsibility of the chosen model, loss function, and optimizer. Thus, we could even choose a different model, loss function, and/or optimizer, and use the same training steps with little or no modification. This decoupling of the training logic from the implementation details is one of the main advantages of libraries such as PyTorch. As shown in the code above, calling the model as a function, with the feature vectors as inputs, produces the predicted scores. Once again, a positive score corresponds to a positive label. When we evaluate this implementation on the test dataset, we obtain results that are in line with our previous models. Writing the perceptron and the logistic regression from scratch is a good exercise, as it exposes us to the fundamentals of implementing machine learning algorithms. However, this becomes cumbersome for more complex neural architectures. For this reason, from this point on, we will use PyTorch for all our coding examples.

4.2 Multiclass Classification

So far, in this chapter we have discussed implementing binary classifiers. Next, we will modify these binary classifiers to perform multiclass classification, following the discussion in Section 3.5.

4.2.1 AG News Dataset

Before explaining the actual training/testing code, we have to choose a new dataset that is suitable for multiclass classification. To this end, we will use the AG News Classification Dataset (Zhang et al., 2015), a subset of the larger AG corpus of news articles collected from thousands of different news sources.11 The classification dataset consists of four classes, and the data is equally balanced across all classes (30,000 articles per class for train, and 1,900 articles per class for testing). The goal of the task is to classify each article as one of the four classes: World, Sports, Business, or Sci/Tech.

11 http://groups.di.unipi.it/~gulli/AG_corpus_of_news_articles.html

4.2.2 Preparing the Dataset

The AG News Dataset is distributed as two CSV files (one for training and one for testing), each containing three columns: the class index, the title, and the description. The dataset also provides a text file that maps the above class indexes to more descriptive class labels. Because of the tabular nature of the dataset, pandas, a Python library for tabular data analysis,12 is a natural choice for loading and transforming it. To this end, our Jupyter notebook (chap4_multiclass_logistic_regression) demonstrates the sequence of steps required to handle the data, as well as model training and evaluation. First, we show how to load the CSV, add column names, and inspect the result:

12 https://pandas.pydata.org
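A sketch of this loading step (the file path is illustrative; the AG News CSV files have no header row, so we supply the column names ourselves):

import pandas as pd

train_df = pd.read_csv(
    'data/ag_news_csv/train.csv',
    header=None,
    names=['class index', 'title', 'description'],
)
train_df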
[dataframe preview: 120,000 rows × 3 columns (class index, title, description); for example, row 0 contains class index 3, the title "Wall St. Bears Claw Back Into the Black (Reuters)", and its Reuters description]

Since the class labels themselves are in a separate file, we manually add them to the pandas data structure (called dataframe in pandas' terminology) to increase the interpretability of the data. We use the class index column as a starting point, and use its map method to create a new column with the corresponding labels (technically a new Series object) that is added to the dataframe using its insert method, which allows us to insert the column in a specific position. Note that the label indices are one-based, so we subtract one to align them with their labels.
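A sketch of this step (the label list follows the class description in Section 4.2.1; names are illustrative):

# class labels corresponding to the one-based class indices 1-4
labels = ['World', 'Sports', 'Business', 'Sci/Tech']

# map each class index to its label and insert the new column
# right after the class index column (position 1)
class_labels = train_df['class index'].map(lambda i: labels[i - 1])
train_df.insert(1, 'class', class_labels)
train_df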
[dataframe preview: 120,000 rows × 4 columns, now including the new class column (class index, class, title, description)]

Next we will preprocess the text. First we lowercase the title and description, and then we concatenate them into a single string. Then we remove some spurious backslashes from the text. Once this is done, the preprocessed text is added to the dataframe as a new column. Note that pandas allows these steps to be applied to all rows simultaneously.
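A sketch of this preprocessing (column names as above; the exact cleanup in the chap4_multiclass_logistic_regression notebook may differ slightly):

# lowercase the title and description, concatenate them, and drop the spurious backslashes
title = train_df['title'].str.lower()
description = train_df['description'].str.lower()
train_df['text'] = (title + ' ' + description).str.replace('\\', ' ', regex=False)
train_df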
[dataframe preview: 120,000 rows × 5 columns, now including the preprocessed text column]

At this point, the text is ready to be tokenized. For this purpose we will use NLTK's word_tokenize function. This function can be applied to the whole column at once using the pandas map function, which returns a new column that we add to the dataframe. However, here we actually use the progress_map function, which provides a visual progress bar. This visual feedback is especially helpful for tasks that take more time to complete.
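A sketch of the tokenization step (assumes the NLTK tokenizer models have already been downloaded):

from nltk.tokenize import word_tokenize
from tqdm.notebook import tqdm

# register progress_map() / progress_apply() on pandas objects
tqdm.pandas()
train_df['tokens'] = train_df['text'].progress_map(word_tokenize)
train_df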
[dataframe preview: 120,000 rows × 6 columns, now including the tokens column]

From the tokens we just created, we then create a vocabulary for our corpus. Here, we only keep the words that occur at least 10 times, decreasing the memory needed and reducing the likelihood that our vocabulary contains noisy tokens. Note that each row in the tokens column contains a list of tokens. In order to create the vocabulary, we will need to convert the Series of lists of tokens into a Series of tokens using the explode() pandas method. Then we will use the value_counts() method to create a Series object in which the index contains the tokens and the values are the number of times they appear in the corpus. The next step is removing the tokens with a count lower than our chosen threshold. Finally, we create a list with the remaining tokens, as well as a dictionary that maps tokens to token ids (i.e., the index of the token in the list). We include in the vocabulary a special token [UNK] that will be used as a placeholder for tokens that do not appear in our vocabulary after the frequency pruning.

Using this vocabulary, we construct a feature vector for each news article in the corpus. This feature vector will be encoded as a dictionary, with keys corresponding to token ids, and values corresponding to the number of times the token appears in the article. As above, the feature vectors will be stored as a new column in the dataframe.
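A sketch of these two steps (the position of [UNK] in the vocabulary and the helper name are illustrative):

# corpus-level token counts, then frequency pruning
token_counts = train_df['tokens'].explode().value_counts()
frequent_tokens = token_counts[token_counts >= 10].index.tolist()

# vocabulary list and token-to-id mapping, with a placeholder for unknown tokens
vocabulary = ['[UNK]'] + frequent_tokens
token_to_id = {token: i for i, token in enumerate(vocabulary)}
unk_id = token_to_id['[UNK]']

def make_feature_vector(tokens):
    # bag-of-words dictionary: token id -> number of occurrences in the article
    features = {}
    for token in tokens:
        token_id = token_to_id.get(token, unk_id)
        features[token_id] = features.get(token_id, 0) + 1
    return features

train_df['features'] = train_df['tokens'].map(make_feature_vector)
train_df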
[dataframe preview: 120,000 rows × 7 columns, now including the features column of token-id count dictionaries]

The final preprocessing step is converting the features and the class indices into PyTorch tensors. Recall that we need to subtract one from the class indices to make them zero-based. At this point, the data is fully processed and we are ready to begin training.

4.2.3 Multiclass Logistic Regression Using PyTorch

The model itself is a single linear layer whose input size corresponds to the size of our vocabulary, and its output size corresponds to the number of classes in our corpus. PyTorch's Linear layer includes a bias by default, so there is no need to handle that manually the way we did for our perceptron example. The code for training this model (which implements Algorithm 6) is almost identical to that of the binary logistic regression. However, since we have to calculate a score for each of the four different classes, we need to replace the previous BCEWithLogitsLoss with CrossEntropyLoss, which applies a softmax over the scores to obtain probabilities for each class. For each example, the model predicts 4 scores – one for each label. The label with the highest score is selected using the argmax function. We evaluate the predictions of our model for each class using Scikit-learn's classification_report, which handles the results of multiclass classification.
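A condensed sketch of this pipeline (the full version is in the chap4_multiclass_logistic_regression notebook; the per-example dense encoding, the number of epochs, and the learning rate here are illustrative):

import torch
from torch import nn, optim

vocabulary_size = len(vocabulary)
n_classes = 4

def to_dense(features, size):
    # convert a {token_id: count} dictionary into a dense feature vector
    x = torch.zeros(size)
    for token_id, count in features.items():
        x[token_id] = count
    return x

# class indices in the CSV are one-based, so subtract one to make them zero-based
y_train = torch.tensor(train_df['class index'].values - 1, dtype=torch.long)
feature_dicts = train_df['features'].tolist()

model = nn.Linear(vocabulary_size, n_classes)
loss_func = nn.CrossEntropyLoss()       # applies a softmax over the four scores
optimizer = optim.SGD(model.parameters(), lr=0.1)

n_epochs = 5
for epoch in range(n_epochs):
    for i in torch.randperm(len(feature_dicts)).tolist():
        x = to_dense(feature_dicts[i], vocabulary_size)
        optimizer.zero_grad()
        scores = model(x).unsqueeze(0)                   # shape (1, n_classes)
        loss = loss_func(scores, y_train[i].unsqueeze(0))
        loss.backward()
        optimizer.step()

# the predicted label for an article is the class with the highest score
with torch.no_grad():
    example_scores = model(to_dense(feature_dicts[0], vocabulary_size))
    predicted_label = labels[example_scores.argmax().item()]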
4.3 Summary

In this chapter, we used movie review and news article classification to illustrate the implementation of the previously described algorithms for the binary perceptron, binary logistic regression, and multiclass logistic regression. For the binary logistic regression, we made a direct comparison between the lower-level NumPy implementation and a higher-level version that made use of PyTorch. We hope that through this series of exercises the reader has noted several key takeaways. First, data preparation is important and should be done thoughtfully. Certain tasks (e.g., text normalization or sentence splitting) are going to be frequently needed if you continue with NLP, so using or creating generic functions can be very helpful. However, what works for one dataset and one language may not be suitable for another scenario. For example, in our case, we selected different tokenizers for each of our tasks to account for the different registers of English, as well as removing diacritics during normalization. Second, when it comes to implementing machine learning algorithms, it is often easier to use a higher-level library such as PyTorch instead of NumPy. For example, with the former, the gradients are calculated by the library, whereas in NumPy we have to code them ourselves. This becomes cumbersome quickly. For example, even the derivative of the softmax is non-trivial. Third, PyTorch imposes a training structure that remains largely the same, regardless of what models are being trained. That is, at a high level, the same steps are always required: clearing the current gradients, predicting output scores for the provided inputs, calculating the loss, and optimizing. These features make PyTorch a very powerful and convenient deep learning library; we will continue to use it throughout the remainder of the book to implement more complex neural architectures.
#!/usr/bin/env python
# coding: utf-8

# # Binary Text Classification with Perceptron

# In[1]:

import random
import numpy as np
from tqdm.notebook import tqdm

# set this variable to a number to be used as the random seed
# or to None if you don't want to set a random seed
seed = 1234

if seed is not None:
    random.seed(seed)
    np.random.seed(seed)

# The dataset is divided in two directories called `train` and `test`.
# These directories contain the training and testing splits of the dataset.

# In[2]:

get_ipython().system('ls -lh data/aclImdb/')

# Both the `train` and `test` directories contain two directories called `pos` and `neg` that contain text files with the positive and negative reviews, respectively.

# In[3]:

get_ipython().system('ls -lh data/aclImdb/train/')

# We will now read the filenames of the positive and negative examples.

# In[4]:

from glob import glob

pos_files = glob('data/aclImdb/train/pos/*.txt')
neg_files = glob('data/aclImdb/train/neg/*.txt')

print('number of positive reviews:', len(pos_files))
print('number of negative reviews:', len(neg_files))

# Now, we will use a [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html) to read the text files, tokenize them, acquire a vocabulary from the training data, and encode it in a document-term matrix in which each row represents a review, and each column represents a term in the vocabulary. Each element $(i,j)$ in the matrix represents the number of times term $j$ appears in example $i$.

# In[5]:

from sklearn.feature_extraction.text import CountVectorizer

# initialize CountVectorizer indicating that we will give it a list of filenames that have to be read
cv = CountVectorizer(input='filename')
# learn vocabulary and return sparse document-term matrix
doc_term_matrix = cv.fit_transform(pos_files + neg_files)
doc_term_matrix

# Note in the message printed above that the matrix is of shape (25000, 74849).
# In other words, it has 1,871,225,000 elements.
# However, only 3,445,861 elements were stored.
# This is because most of the elements in the matrix are zeros.
# The reason is that the reviews are short and most words in the English language don't appear in each review.
# A matrix that only stores non-zero values is called *sparse*.
#
# Now we will convert it to a dense numpy array:

# In[6]:

X_train = doc_term_matrix.toarray()
X_train.shape

# We will also create a numpy array with the binary labels for the reviews.
# One indicates a positive review and zero a negative review.
# The label `y_train[i]` corresponds to the review encoded in row `i` of the `X_train` matrix.

# In[7]:

# training labels
y_pos = np.ones(len(pos_files))
y_neg = np.zeros(len(neg_files))
y_train = np.concatenate([y_pos, y_neg])
y_train

# Now we will initialize our model, in the form of an array of weights `w` of the same size as the number of features in our dataset (i.e., the number of words in the vocabulary acquired by [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html)), and a bias term `b`.
# Both are initialized to zeros.

# In[8]:

# initialize model: the feature vector and bias term are populated with zeros
n_examples, n_features = X_train.shape
w = np.zeros(n_features)
b = 0

# Now we will use the perceptron learning algorithm to learn the values of `w` and `b` from our training data.

# In[9]:

n_epochs = 10
indices = np.arange(n_examples)

for epoch in range(n_epochs):
    n_errors = 0
    # randomize the order in which training examples are seen in this epoch
    np.random.shuffle(indices)
    # traverse the training data
    for i in tqdm(indices, desc=f'epoch {epoch+1}'):
        x = X_train[i]
        y_true = y_train[i]
        # the perceptron decision based on the current model
        score = x @ w + b
        y_pred = 1 if score > 0 else 0
        # update the model if the prediction was incorrect
        if y_true == y_pred:
            continue
        elif y_true == 1 and y_pred == 0:
            w = w + x
            b = b + 1
            n_errors += 1
        elif y_true == 0 and y_pred == 1:
            w = w - x
            b = b - 1
            n_errors += 1
    if n_errors == 0:
        break

# The next step is evaluating the model on the test dataset.
# Note that this time we use the [`transform()`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html#sklearn.feature_extraction.text.CountVectorizer.transform) method of the [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html), instead of the [`fit_transform()`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html#sklearn.feature_extraction.text.CountVectorizer.fit_transform) method that we used above. This is because we want to use the learned vocabulary in the test set, instead of learning a new one.

# In[10]:

pos_files = glob('data/aclImdb/test/pos/*.txt')
neg_files = glob('data/aclImdb/test/neg/*.txt')
doc_term_matrix = cv.transform(pos_files + neg_files)
X_test = doc_term_matrix.toarray()
y_pos = np.ones(len(pos_files))
y_neg = np.zeros(len(neg_files))
y_test = np.concatenate([y_pos, y_neg])

# Using the model is easy: multiply the document-term matrix by the learned weights and add the bias.
# We use Python's `@` operator to perform the matrix-vector multiplication.

# In[11]:

y_pred = (X_test @ w + b) > 0

# Now we print an evaluation of the prediction results using a `binary_classification_report()` function modeled after scikit-learn's [`classification_report()`](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.classification_report.html) function.

# In[12]:

def binary_classification_report(y_true, y_pred):
    # count true positives, false positives, true negatives, and false negatives
    tp = fp = tn = fn = 0
    for gold, pred in zip(y_true, y_pred):
        if pred == True:
            if gold == True:
                tp += 1
            else:
                fp += 1
        else:
            if gold == False:
                tn += 1
            else:
                fn += 1
    # calculate precision and recall
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    # calculate f1 score
    fscore = 2 * precision * recall / (precision + recall)
    # calculate accuracy
    accuracy = (tp + tn) / len(y_true)
    # number of positive labels in y_true
    support = sum(y_true)
    return {
        "precision": precision,
        "recall": recall,
        "f1-score": fscore,
        "support": support,
        "accuracy": accuracy,
    }

# In[13]:

binary_classification_report(y_test, y_pred)
4 Implementing Text Classification Using Perceptron and Logistic Regression In the previous chapters we have discussed the theory behind the perceptron and logistic regression, including mathematical explanations of how and why they are able to learn from examples. In this chapter we will transition from math to code. Specifically, we will discuss how to implement these models in the Python programming language. All the code that we will introduce throughout this book is available online as well: http://clulab.github.io/gentlenlp/. The reader who is not familiar with the Python programming language is encouraged to read first Appendix A, for a brief introduction to the language, and Appendix B, for a discussion on how computers encode and preprocess text. Once done, please return here. To get a better understanding of how these algorithms work under the hood, we will start by implementing them from scratch. However, as the book progresses, we will introduce some of the popular tools and libraries that make Python the language of choice for machine learning, e.g., PyTorch,1 and Hugging Face’s transformers.2 The code for all the examples in the book is provided in the form of Jupyter notebooks.3 Important fragments of these notebooks will be presented in the implementation chapters so that the reader has the whole picture just by reading the book. However, we strongly encourage you to download the notebooks and execute them yourself. We also encourage you to modify them to conduct your own experiments! 1 https://pytorch.org
2 https://huggingface.co 3 https://jupyter.org/ 55 56 Implementing Text Classification Using Perceptron and LR 4.1 Binary Classification We begin this chapter with binary classification. That is, we aim to train classifiers that assign one of two labels to a given text. As the example for this task, we will train a review classifier using the the Large Movie Review Dataset (Maas et al., 2011).4 We tackle this task by implementing first a binary perceptron classifier, followed by a binary logistic regression one. We will implement the latter both from scratch as well as using PyTorch, so the reader has a clearer understanding on how PyTorch works “under the hood.” 4.1.1 Large Movie Review Dataset This dataset contains movie reviews and their associated scores (between 1 and 10) as provided by IMDb.5 converted these scores to binary labels by assigning each review a positive or negative label if the review score was above 6 or below 5, respectively. Reviews with scores 5 and 6 were considered too neutral and thus excluded. We follow the same protocol in this chapter. The dataset is divided in two even partitions called train and test, each containing 25,000 reviews. The dataset also provides additional unlabeled reviews, but we will not use those here. Each partition contains two directories called pos and neg where the positive and negative examples are stored. Each review is stored in an independent text file, whose name is composed of an id unique to the partition and the score associated with the review, separated by an underscore. An example of a positive and a negative review is shown in Table 4.1. 4.1.2 Bag-of-words Model As discussed in Section 2.2, we will encode the text to classify as a bag of words. That is, we encode each review as a list of numbers, with each position in the list corresponding to a word in our vocabulary, and the value stored in that position corresponding to the number of times the word appears in the review. For example, say we want to encode the following two reviews: 4 https://ai.stanford.edu/~amaas/data/sentiment/ 5 https://www.imdb.com/ Maas et al. 4.1 Binary Classification 57 Table 4.1 Two examples of movie reviews from IMDb. The first is a positive review of the movie Puss in Boots (1988). The second is a negative review of the movie Valentine (2001). These reviews can be found at https://www.imdb.com/review/rw0606396/ and https://www.imdb.com/review/rw0721861/, respectively. Filename Score Binary Label train/pos/24_8.txt 8/10 Positive train/neg/141_3.txt 3/10 Negative Review Text Although this was obviously a low-budget production, the performances and the songs in this movie are worth seeing. One of Walken’s few musical roles to date. (he is a marvelous dancer and singer and he demonstrates his acrobatic skills as well - watch for the cartwheel!) Also starring Jason Connery. A great children’s story and very likable characters. This stalk and slash turkey manages to bring nothing new to an increasingly stale genre. A masked killer stalks young, pert girls and slaughters them in a variety of gruesome ways, none of which are particularly inventive. It’s not scary, it’s not clever, and it’s not funny. So what was the point of it? Review 1: Review 2: "I liked the movie. My friend liked it too. " "I hated it. Would not recommend. " First, we need to create a vocabulary that maps each word to an id that uniquely identifies it. 
Each of these numbers will be used as the index in a list, so they must start at zero and grow by one for each word in the vocabulary. For example, one possible vocabulary that encodes the previous reviews is: {'would': 0, 'hated': 1, 58 Implementing Text Classification Using Perceptron and LR 'my': 2, 'liked': 3, 'not': 4, 'it': 5, 'movie': 6, 'recommend': 7, 'the': 8, 'I': 9, 'too': 10, 'friend': 11} Using this mapping, we can encode the two reviews as follows: Review1: [0,0,1,2,0,1,1,0,1,1,1,1] Review2: [1,1,0,0,1,1,0,1,0,1,0,0] Note that the word liked (fourth position) in the first review has a value of two. This is because this word appears twice in that review. This is a small example with a vocabulary of only 12 terms. Of course, the same process needs to be implemented for our whole training dataset. For this purpose we will use scikit-learn’s CountVectorizer class.6 Using the CountVectorizer class simplifies things, allowing us to get started quickly with a bag-of-words approach. However, note that it makes several simplifying assumptions (e.g., text is lowercased, and punctuation and single character tokens are removed). Some of these may not be adequate to other tasks. First, we need to obtain the filenames for the reviews in the training set: Once we have acquired the filenames for the training reviews, we need
to read them using the CountVectorizer. In order for the CountVectorizer to open and read the files for us, we make use of the input='filename' constructor parameter (otherwise it would expect the string content directly). The CountVectorizer provides three methods that will be use-
ful for us: a method called fit() that is used to acquire the vocabulary,
a method transform() that converts the text into the bag-of-words representation, and a method fit_transform() that conveniently acquires the vocabulary and transforms the data in a single step. The resulting object is referred to as a document-term matrix, where each row corre- 6 https://scikitlearn.org/stable/modules/generated/sklearn.feature_ extraction.text.CountVectorizer.html 4.1 Binary Classification 59 sponds to a document, and each column corresponds to a term in the vocabulary. As the output above indicates, the resulting matrix has 25,000 rows (one for each review), and 74,849 columns (one for each term). Also you may note that this matrix is sparse, with 3,445,861 stored elements. A regular matrix of shape 25,000×74,849 would have 1,871,225,000 elements. However, most of the elements in a document-term matrix are zeros because only a few words from the vocabulary appear in each document. A sparse matrix takes advantage of this fact by storing only the non-zero cells in order to reduce the memory required to store it. Thus, sparse matrices are convenient, especially when dealing with lots of data. Nevertheless, to simplify the downstream code in this example, we will convert it into a dense matrix, i.e., a regular two-dimensional NumPy array. Finally, we also need the labels of the reviews. We assign a label of one to positive reviews, and a label of zero to negative ones. Note that the first half of the reviews are positive and the second half are negative. The label at the ith position of the y_train array corresponds to the review encoded in the ith row of the X_train matrix. 4.1.3 Perceptron Now that we have defined our task and the data processing pipeline, we will implement a perceptron classifier that classifies the movie reviews as positive or negative. The entire code discussed in this section is available in the chap4_perceptron notebook. Recall from Section 2.4 that the perceptron is composed of a weight vector w and a bias term b. These will be represented as a NumPy array w of the same length as our document vectors, and a variable b for the bias term. Both will be initialized with zeros. The parameters w and b are learned through the following algorithm, which implements Algorithm 2 from Chapter 2: There are a couple of details to point out. Line 3 of Algorithm 2 indicates that we need to repeat the training loop until convergence. Theoretically, convergence is defined as predicting all training examples correctly. This is an ambitious requirement, which is not always possible in practice, so in this code we also include a stop condition if we reach a maximum number of epochs. Another crucial difference between our implementation here and the theoretical Algorithm 2, is that we randomize the order in which the training examples are seen at the beginning of 60 Implementing Text Classification Using Perceptron and LR each epoch. This simple (but highly recommended!) change is necessary to avoid the introduction of spurious biases due to the arbitrary order of the examples in the original training partition.7 We accomplish this by storing the indices corresponding to the X_train matrix rows in a NumPy array, and shuffling these indices at the beginning of each epoch. We shuffle the indices instead of the examples so that we can preserve the mapping between examples and labels. The training loop aligns closely with Algorithm 2. 
We start by iterating over each example in our training data, storing the current example in the variable x,8 and its corresponding label in the variable y_true. Next, we compute the perceptron decision function shown in Algorithm 1. Note that NumPy (as well as PyTorch) uses Python’s @ operator to indicate vector or matrix multiplication, depending on its operand types. Here we use it to calculate the dot product of the example x and the weights w. To this we add the bias b to obtain the predicted score, whose sign is used to assign a positive or negative predicted label. If the prediction is correct, then no update is needed, and we can move on to the next training example. However, if the prediction is incorrect, then we need to adjust w and b, as described in Algorithm 2. Sidebar 4.1 The tqdm function This is our first exposure to the tqdm function. tqdm is a progress bar that “make your loops show a smart progress meter.”9 The name tqdm comes from the Arabic word taqaddum which can mean “progress.” Using tqdm is as simple as wrapping it around the collection to be traversed. After training, we evaluate the model’s performance on the heldout test partition. The test data is loaded similarly to the training partition, but with one notable difference; we use CountVectorizer’s transform() method instead of the fit_transform() method so that the vocabulary is not adjusted for the test data. We won’t show here the loading of the test partition since it is so similar to the code already shown, but it is available in the Jupyter notebook that accompanies this section. . 7   As an extreme example, consider a dataset where all the positive examples appear first in the training partition. This would cause the perceptron to artificially inflate the weights of the features that occur in these examples, a situation from which the learning algorithm may struggle to recover. 
 . 8  We use typewriter font when we discuss variables in the code, to distinguish code from the theoretical discussion in the other chapters. 
 9 https://github.com/tqdm/tqdm 4.1 Binary Classification 61 Using the model to assign labels to all the test data is easily done in one step – we simply multiply the entire test data document-term matrix by the previously learned weights and add the bias. Scores greater than zero indicate a positive review, and those less than zero are negative. At this point we can evaluate the classifier’s performance, which we will do using precision, recall, and F1 scores for binary classification (described in Section 2.3). For this purpose, we implement a function called binary_classification_report that computes these metrics and returns them as a dictionary: We call this function to compare the predicted labels to the true labels, and obtain the evaluation scores. Our F1 score here is 86.8%, which is much higher than the baseline that assigns labels randomly, which yields an F1 score of about 50%. This is a good result, especially considering the simplicity of the perceptron! In the next sections and chapters, we will discuss a battery of strategies to considerably improve this performance. 4.1.4 Binary Logistic Regression from Scratch Using the same task, dataset, and evaluation, we will now implement a logistic regression classifier, as described in Algorithm 5 from Chapter 3. To give the reader hands-on experience with the implementation of the gradient calculations for logistic regression, we start by implementing it from scratch using NumPy. All the code shown in this section is available in the chap4_logistic_regression_numpy notebook. In the perceptron implementation, we represented the weights and the bias as two different variables. Here, however, we will use a different approach that will allow us to unify them into a single vector variable. Specifically, we take advantage of the similarity between the derivative of the cost function with respect to the weights (Equation 3.14) and the derivative of the cost with respect to the bias (Equation 3.15). d Ci(w, b) = (σi − yi)xij (3.14 revisited) dwj d Ci(w, b) = σi − yi (3.15 revisited) db Note that the two derivative formulas are identical except that the former has a multiplication by xij, while the latter does not. However, 62 Implementing Text Classification Using Perceptron and LR since σi − yi = (σi − yi)1 we can multiply the derivative of the cost with respect to the bias by one without changing the semantics. This gives an opportunity for combining the computations, doing them both in a single pass. The idea is that we can treat the bias as a weight corresponding to a feature that always has a value of one. As can be seen above, we created a NumPy array of ones of the same length as the number of examples in our training set (i.e., the number of rows in the data matrix). Then we add this array as a new column to the data matrix, using NumPy’s column_stack function. Next, we need to initialize our model. This time we will use a single NumPy array w of the same length as the number of columns in the data matrix. The weight vector w is initialized randomly with values between 0 and 1: Before implementing the learning algorithm, we need an implementation of the logistic function. 
Before implementing the learning algorithm, we need an implementation of the logistic function. Recall that the logistic function is

\sigma(x) = \frac{1}{1 + e^{-x}} \qquad \text{(3.1 revisited)}

This function can be easily implemented in NumPy. However, a naive implementation, 1 / (1 + np.exp(-x)), may produce an overflow warning during training. The term overflow indicates that the result of evaluating exp(-x) is a number so large that it can't be represented by a float (specifically, we're using float64 numbers). We will avoid this issue by not calling exp with values that will overflow. NumPy provides the function finfo that can be consulted to find the limits of floating point numbers. The log of the largest floating point number is the largest number for which exp() will not overflow, so we will use it as a threshold to filter out problematic values:
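The accompanying notebook implements this guard as follows:

def sigmoid(z):
    # exp(-z) would overflow for large negative z, where the logistic
    # function is effectively zero anyway
    if -z > np.log(np.finfo(float).max):
        return 0.0
    return 1 / (1 + np.exp(-z))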
We now have everything we need to implement Algorithm 4. The steps to follow for each example are: (1) use the model to make a prediction, (2) calculate the gradient of the loss function with respect to the model parameters, and (3) update the model parameters using the gradient. The size of the update is controlled by the learning rate.
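In the accompanying notebook, these steps take the following form:

lr = 1e-1      # learning rate
n_epochs = 10

indices = np.arange(n_examples)
for epoch in range(n_epochs):
    # randomize the order in which training examples are seen in this epoch
    np.random.shuffle(indices)
    for i in tqdm(indices, desc=f'epoch {epoch+1}'):
        x = X_train[i]
        y = y_train[i]
        # derivative of the cost function for this example
        deriv_cost = (sigmoid(x @ w) - y) * x
        # update the weights (the bias is the last element of w)
        w = w - lr * deriv_cost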
Once the model has been trained, we evaluate it on the test dataset using our binary_classification_report function from the previous section. Loading and preprocessing the test dataset follows the same steps as with the previous classifier, so we omit the code for brevity. The performance is comparable with that of the perceptron: the difference in F1 scores between the two classifiers (84.9% here vs. 86.8% for the perceptron) is not significant. This parity is probably attributable to the fact that the signal distinguishing the two classes is easy to learn, so the simpler perceptron training algorithm is sufficient in this case. Nevertheless, this task is useful in showing how to implement the logistic regression model from scratch, i.e., by implementing the gradient calculation and parameter updates manually. Next, we will implement the same model again using PyTorch, highlighting how this machine learning library simplifies the process.

4.1.5 Binary Logistic Regression Utilizing PyTorch

While it is fairly straightforward to compute the derivatives for logistic regression and implement them directly in NumPy, this approach will not scale well to arbitrary neural architectures. Fortunately, there are libraries that automate the computation of the derivatives of the cost function (assuming it is differentiable!) for any neural network, and use the resulting gradients to perform gradient descent or other more sophisticated optimization procedures. To this end, we will use the PyTorch deep learning library.10 The corresponding notebook for this section is chap4_logistic_regression_pytorch_bce.

10 https://pytorch.org/

Our model for logistic regression corresponds to PyTorch's Linear layer. When we instantiate this layer, we specify the size of the inputs (the size of our vocabulary) and the size of the output, i.e., the number of output neurons (which is one, because we're doing binary classification). The loss function we use is the binary cross-entropy loss (see Chapter 3), which is implemented as BCEWithLogitsLoss in PyTorch. In PyTorch, the gradients obtained from the loss function are applied to the model by an optimizer object, which implements and applies an optimization algorithm. Here we will use the vanilla stochastic gradient descent optimizer, with its learning rate set to 0.1. This is equivalent to the discussion in Section 3.2.
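The sketch below shows one way to express this setup; it is not the book's exact listing, and vocab_size (the size of the vocabulary acquired by CountVectorizer) is an assumption:

import torch
from torch import nn, optim

# one output neuron on top of the bag-of-words features
model = nn.Linear(in_features=vocab_size, out_features=1)

# binary cross-entropy loss that operates directly on raw scores (logits)
loss_func = nn.BCEWithLogitsLoss()

# vanilla stochastic gradient descent with a learning rate of 0.1
optimizer = optim.SGD(model.parameters(), lr=0.1)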
Similarly to the manual implementation, the steps required to train the model for a given training example are: (1) ensure the gradients are set to zeros, (2) apply the model to obtain a prediction, (3) calculate the loss, (4) compute the gradient of the loss by back-propagation, and (5) update the model parameters.
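A sketch of this loop (not the book's exact listing) is shown below, assuming X_train and y_train have already been converted to PyTorch tensors of float features and float 0/1 labels:

n_epochs = 10
for epoch in range(n_epochs):
    for x, y_true in zip(X_train, y_train):
        # (1) clear any gradients accumulated in the previous step
        optimizer.zero_grad()
        # (2) apply the model to obtain the predicted score (a logit)
        y_pred = model(x)
        # (3) calculate the loss for this example
        loss = loss_func(y_pred[0], y_true)
        # (4) compute the gradient of the loss by back-propagation
        loss.backward()
        # (5) update the model parameters
        optimizer.step()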
Recall that in our previous implementation everything was hardcoded: applying the model, computing the gradients, and optimizing the model parameters. Here, however, the implementation of the logistic regression is expressed at a higher level of abstraction. This means that we are describing the logical steps without specifying a particular implementation. Instead, implementation details are the responsibility of the chosen model, loss function, and optimizer. Thus, we could even choose a different model, loss function, and/or optimizer, and use the same training steps with little or no modification. This decoupling of the training logic from the implementation details is one of the main advantages of libraries such as PyTorch.

As shown in the code above, calling the model as a function, with the feature vectors as inputs, produces the predicted scores. Once again, a positive score corresponds to a positive label. When we evaluate this implementation on the test dataset, we obtain results that are in line with our previous models.

Writing the perceptron and the logistic regression from scratch is a good exercise, as it exposes us to the fundamentals of implementing machine learning algorithms. However, this becomes cumbersome for more complex neural architectures. For this reason, from this point on, we will use PyTorch for all our coding examples.

4.2 Multiclass Classification

So far in this chapter we have discussed implementing binary classifiers. Next, we will modify these binary classifiers to perform multiclass classification, following the discussion in Section 3.5.

4.2.1 AG News Dataset

Before explaining the actual training/testing code, we have to choose a new dataset that is suitable for multiclass classification. To this end, we will use the AG News Classification Dataset (Zhang et al., 2015), a subset of the larger AG corpus of news articles collected from thousands of different news sources.11 The classification dataset consists of four classes, and the data is equally balanced across all classes (30,000 articles per class for training, and 1,900 articles per class for testing). The goal of the task is to classify each article as one of the four classes: World, Sports, Business, or Sci/Tech.

11 http://groups.di.unipi.it/~gulli/AG_corpus_of_news_articles.html

4.2.2 Preparing the Dataset

The AG News Dataset is distributed as two CSV files (one for training and one for testing), each containing three columns: the class index, the title, and the description. The dataset also provides a text file that maps the above class indexes to more descriptive class labels. Because of the tabular nature of the dataset, pandas, a Python library for tabular data analysis,12 is a natural choice for loading and transforming it. To this end, our Jupyter notebook (chap4_multiclass_logistic_regression) demonstrates the sequence of steps required to handle the data, as well as model training and evaluation. First, we show how to load the CSV, add column names, and inspect the result:
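A sketch of this step is shown below; the file path and the variable name train_df are assumptions about how the data is stored and named, not the notebook's exact code:

import pandas as pd

# the AG News CSV files have no header row, so we provide the column names
train_df = pd.read_csv(
    'data/ag_news_csv/train.csv',
    header=None,
    names=['class index', 'title', 'description'],
)
train_df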
[dataframe output: 120000 rows × 3 columns (class index, title, description); e.g., row 0 has class index 3, title "Wall St. Bears Claw Back Into the Black (Reuters)", and a description beginning "Reuters - Short-sellers, Wall Street's dwindli..."]
Since the class labels themselves are in a separate file, we manually add them to the pandas data structure (called dataframe in pandas' terminology) to increase the interpretability of the data. We use the class index column as a starting point, and use its map method to create a new column with the corresponding labels (technically a new Series object) that is added to the dataframe using its insert method, which allows us to insert the column in a specific position. Note that the label indices are one-based, so we subtract one to align them with their labels.

12 https://pandas.pydata.org
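A sketch of these two steps might look as follows (the label list is an assumption based on the dataset's description of its four classes):

# map the one-based class indices to their labels and insert the new column
# right after the class index column
labels = ['World', 'Sports', 'Business', 'Sci/Tech']
classes = train_df['class index'].map(lambda i: labels[i - 1])
train_df.insert(1, 'class', classes)
train_df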
[dataframe output: 120000 rows × 4 columns (class index, class, title, description); the new class column contains labels such as Business, World, and Sports]

Next we will preprocess the text. First we lowercase the title and description, and then we concatenate them into a single string. Then we remove some spurious backslashes from the text. Once this is done, the preprocessed text is added to the dataframe as a new column. Note that pandas allows these steps to be applied to all rows simultaneously.
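A sketch of this preprocessing is shown below; details such as how the backslashes are replaced may differ from the notebook:

# lowercase and concatenate the title and description, then remove the
# spurious backslashes left over in the original text
title = train_df['title'].str.lower()
description = train_df['description'].str.lower()
train_df['text'] = (title + ' ' + description).str.replace('\\', ' ', regex=False)
train_df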
[dataframe output: 120000 rows × 5 columns (class index, class, title, description, text); the new text column contains strings such as "wall st. bears claw back into the black (reute..."]

At this point, the text is ready to be tokenized. For this purpose we will use NLTK's word_tokenize function. This function can be applied to the whole column at once using the pandas map function, which returns a new column that we add to the dataframe. However, here we actually use the progress_map function, which provides a visual progress bar. This visual feedback is especially helpful for tasks that take more time to complete.
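A sketch of the tokenization step (it assumes NLTK's punkt tokenizer models are installed, e.g., via nltk.download('punkt')):

from nltk.tokenize import word_tokenize
from tqdm.notebook import tqdm

# register progress_map / progress_apply on pandas objects
tqdm.pandas()

# tokenize every row of the text column, with a progress bar
train_df['tokens'] = train_df['text'].progress_map(word_tokenize)
train_df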
[dataframe output: 120000 rows × 6 columns (class index, class, title, description, text, tokens); the new tokens column contains lists such as [wall, st., bears, claw, back, into, the, blac...]]

From the tokens we just created, we then create a vocabulary for our corpus. Here, we only keep the words that occur at least 10 times, decreasing the memory needed and reducing the likelihood that our vocabulary contains noisy tokens. Note that each row in the tokens column contains a list of tokens. In order to create the vocabulary, we first convert the Series of lists of tokens into a Series of tokens using the explode() pandas method. Then we use the value_counts() method to create a Series object in which the index consists of the tokens and the values are the number of times they appear in the corpus. The next step is removing the tokens with a count lower than our chosen threshold. Finally, we create a list with the remaining tokens, as well as a dictionary that maps tokens to token ids (i.e., the index of the token in the list). We include in the vocabulary a special token [UNK] that will be used as a placeholder for tokens that do not appear in our vocabulary after the frequency pruning.

Using this vocabulary, we construct a feature vector for each news article in the corpus. This feature vector will be encoded as a dictionary, with keys corresponding to token ids and values corresponding to the number of times the token appears in the article. As above, the feature vectors will be stored as a new column in the dataframe.
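A sketch of the vocabulary construction and feature extraction just described; the exact handling of the [UNK] token and the assignment of token ids are assumptions, not the notebook's exact code:

# count how often each token appears in the corpus
counts = train_df['tokens'].explode().value_counts()

# keep tokens that occur at least 10 times, and reserve a spot for [UNK]
threshold = 10
vocabulary = ['[UNK]'] + counts[counts >= threshold].index.tolist()
token_to_id = {tok: i for i, tok in enumerate(vocabulary)}
unk_id = token_to_id['[UNK]']

def make_feature_vector(tokens):
    # bag-of-words features as a dictionary mapping token ids to counts
    features = {}
    for tok in tokens:
        tok_id = token_to_id.get(tok, unk_id)
        features[tok_id] = features.get(tok_id, 0) + 1
    return features

train_df['features'] = train_df['tokens'].progress_map(make_feature_vector)
train_df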
[dataframe output: 120000 rows × 7 columns (class index, class, title, description, text, tokens, features); the new features column contains dictionaries such as {427: 2, 563: 1, 1607: 1, ...}]

The final preprocessing step is converting the features and the class indices into PyTorch tensors. Recall that we need to subtract one from the class indices to make them zero-based. At this point, the data is fully processed and we are ready to begin training.

4.2.3 Multiclass Logistic Regression Using PyTorch

The model itself is a single linear layer whose input size corresponds to the size of our vocabulary, and whose output size corresponds to the number of classes in our corpus. PyTorch's Linear layer includes a bias by default, so there is no need to handle that manually the way we did for our perceptron example. The code for training this model (which implements Algorithm 6) is almost identical to that of the binary logistic regression. However, since we have to calculate a score for each of the four different classes, we need to replace the previous BCEWithLogitsLoss with CrossEntropyLoss, which applies a softmax over the scores to obtain probabilities for each class. For each example, the model predicts four scores, one for each label. The label with the highest score is selected using the argmax function. We evaluate the predictions of our model for each class using scikit-learn's classification_report, which handles the results of multiclass classification.
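A sketch of the model, loss, and prediction step described above; the conversion of the feature dictionaries into dense tensors is omitted, and the names X_test and y_test are assumptions:

import torch
from torch import nn, optim
from sklearn.metrics import classification_report

# a single linear layer mapping bag-of-words features to one score per class
model = nn.Linear(in_features=len(vocabulary), out_features=4)
loss_func = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)

# training proceeds as in the binary case, using CrossEntropyLoss instead

# at prediction time, the label with the highest score wins
with torch.no_grad():
    scores = model(X_test)                # shape: (number of examples, 4)
    y_pred = torch.argmax(scores, dim=1)  # predicted class indices

print(classification_report(y_test, y_pred.numpy(), target_names=labels))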
4.3 Summary

In this chapter, we used movie review and news article classification to illustrate the implementation of the previously described algorithms for the binary perceptron, binary logistic regression, and multiclass logistic regression. For the binary logistic regression, we made a direct comparison between the lower-level NumPy implementation and a higher-level version that made use of PyTorch.

We hope that through this series of exercises the reader has noted several key takeaways. First, data preparation is important and should be done thoughtfully. Certain tasks (e.g., text normalization or sentence splitting) are going to be frequently needed if you continue with NLP, so using or creating generic functions can be very helpful. However, what works for one dataset and one language may not be suitable for another scenario. For example, in our case, we selected different tokenizers for each of our tasks to account for the different registers of English, as well as removing diacritics during normalization. Second, when it comes to implementing machine learning algorithms, it is often easier to use a higher-level library such as PyTorch instead of NumPy. For example, with the former, the gradients are calculated by the library, whereas in NumPy we have to code them ourselves; this becomes cumbersome quickly, since even the derivative of the softmax is non-trivial. Third, PyTorch imposes a training structure that remains largely the same, regardless of which models are being trained. That is, at a high level, the same steps are always required: clearing the current gradients, predicting output scores for the provided inputs, calculating the loss, and optimizing. These features make PyTorch a very powerful and convenient deep learning library; we will continue to use it throughout the remainder of the book to implement more complex neural architectures.
We start by iterating over each example in our training data, storing the current example in the variable x,8 and its corresponding label in the variable y_true. Next, we compute the perceptron decision function shown in Algorithm 1. Note that NumPy (as well as PyTorch) uses Python’s @ operator to indicate vector or matrix multiplication, depending on its operand types. Here we use it to calculate the dot product of the example x and the weights w. To this we add the bias b to obtain the predicted score, whose sign is used to assign a positive or negative predicted label. If the prediction is correct, then no update is needed, and we can move on to the next training example. However, if the prediction is incorrect, then we need to adjust w and b, as described in Algorithm 2. Sidebar 4.1 The tqdm function This is our first exposure to the tqdm function. tqdm is a progress bar that “make your loops show a smart progress meter.”9 The name tqdm comes from the Arabic word taqaddum which can mean “progress.” Using tqdm is as simple as wrapping it around the collection to be traversed. After training, we evaluate the model’s performance on the heldout test partition. The test data is loaded similarly to the training partition, but with one notable difference; we use CountVectorizer’s transform() method instead of the fit_transform() method so that the vocabulary is not adjusted for the test data. We won’t show here the loading of the test partition since it is so similar to the code already shown, but it is available in the Jupyter notebook that accompanies this section. . 7   As an extreme example, consider a dataset where all the positive examples appear first in the training partition. This would cause the perceptron to artificially inflate the weights of the features that occur in these examples, a situation from which the learning algorithm may struggle to recover. 
 . 8  We use typewriter font when we discuss variables in the code, to distinguish code from the theoretical discussion in the other chapters. 
 9 https://github.com/tqdm/tqdm 4.1 Binary Classification 61 Using the model to assign labels to all the test data is easily done in one step – we simply multiply the entire test data document-term matrix by the previously learned weights and add the bias. Scores greater than zero indicate a positive review, and those less than zero are negative. At this point we can evaluate the classifier’s performance, which we will do using precision, recall, and F1 scores for binary classification (described in Section 2.3). For this purpose, we implement a function called binary_classification_report that computes these metrics and returns them as a dictionary: We call this function to compare the predicted labels to the true labels, and obtain the evaluation scores. Our F1 score here is 86.8%, which is much higher than the baseline that assigns labels randomly, which yields an F1 score of about 50%. This is a good result, especially considering the simplicity of the perceptron! In the next sections and chapters, we will discuss a battery of strategies to considerably improve this performance. 4.1.4 Binary Logistic Regression from Scratch Using the same task, dataset, and evaluation, we will now implement a logistic regression classifier, as described in Algorithm 5 from Chapter 3. To give the reader hands-on experience with the implementation of the gradient calculations for logistic regression, we start by implementing it from scratch using NumPy. All the code shown in this section is available in the chap4_logistic_regression_numpy notebook. In the perceptron implementation, we represented the weights and the bias as two different variables. Here, however, we will use a different approach that will allow us to unify them into a single vector variable. Specifically, we take advantage of the similarity between the derivative of the cost function with respect to the weights (Equation 3.14) and the derivative of the cost with respect to the bias (Equation 3.15). d Ci(w, b) = (σi − yi)xij (3.14 revisited) dwj d Ci(w, b) = σi − yi (3.15 revisited) db Note that the two derivative formulas are identical except that the former has a multiplication by xij, while the latter does not. However, 62 Implementing Text Classification Using Perceptron and LR since σi − yi = (σi − yi)1 we can multiply the derivative of the cost with respect to the bias by one without changing the semantics. This gives an opportunity for combining the computations, doing them both in a single pass. The idea is that we can treat the bias as a weight corresponding to a feature that always has a value of one. As can be seen above, we created a NumPy array of ones of the same length as the number of examples in our training set (i.e., the number of rows in the data matrix). Then we add this array as a new column to the data matrix, using NumPy’s column_stack function. Next, we need to initialize our model. This time we will use a single NumPy array w of the same length as the number of columns in the data matrix. The weight vector w is initialized randomly with values between 0 and 1: Before implementing the learning algorithm, we need an implementation of the logistic function. 
Recall that the logistic function is σ(x) = 1 (3.1 revisited) 1+e−x This function can be easily implemented in NumPy as follows: However, this naive implementation may produce the following warning during training: The term overflow indicates that the result of evaluating exp(-x) is a number so large that it can’t be represented by a float (specifically, we’re using float64 numbers). We will avoid this issue by not calling exp with values that will overflow. NumPy provides the function finfo that can be consulted to find the limits of floating point numbers: The log of the largest floating point number is the largest number for which exp() will not overflow, so we will use it as a threshold to filter out problematic values: We now have everything we need to implement Algorithm 4. The steps to follow for each example are: (1) use the model to make a prediction, (2) calculate the gradient of the loss function with respect to the model parameters, and (3) update the model parameters using the gradient. The size of the update is controlled by the learning rate. Once the model has been trained, we evaluate it on the test dataset using our binary_classification_report function from the previous section. Loading and preprocessing the test dataset follows the same 4.1 Binary Classification 63 steps as with the previous classifier. We omit the code for brevity. These are the results: The performance is comparable with that of the perceptron. The difference in F1 scores between the two classifiers (84.9% here vs. 86.8% for the perceptron) is not significant. Classifier parity is probably attributable to the fact that the signal distinguishing the two classes being easy to learn and the simpler perceptron training algorithm being sufficient in this case. Nevertheless, this task is useful in showing how to implement the logistic regression model from scratch, i.e., by implementing the gradient calculation and parameter updates manually. Next, we will implement the same model again using PyTorch, highlighting how this machine learning library simplifies the process. 4.1.5 Binary Logistic Regression Utilizing PyTorch While it is fairly straightforward to compute the derivatives for logistic regression and implement then directly in NumPy, this will not scale well to arbitrary neural architectures. Fortunately, there are libraries that automate the computation of the derivatives of the cost function (assuming it is differentiable!) for any neural network, and use the resulting gradients to perform gradient descent or other more sophisticated optimization procedures. To this end, we will use the PyTorch deep learning library10. The corresponding notebook for this section is chap4_logistic_regression_pytorch_bce. Our model for logistic regression corresponds to PyTorch’s Linear layer. When we instantiate this layer, we specify the size of the inputs (the size of our vocabulary) and the size of the output, i.e., the number of output neurons (which is one because we’re doing binary classification). The loss function we use is the binary cross-entropy loss (see Chapter 3), which is implemented as BCEWithLogitsLoss in PyTorch. In PyTorch, the gradients obtained from the loss function are applied to the model by an optimizer object, which implements and applies an optimization algorithm. Here we will use the vanilla stochastic gradient descent optimizer; we set its learning rate to 0.1. This is equivalent to the discussion in Section 3.2. 
Similarly to the manual implementation, the steps required to train the model for a given training example are: (1) ensure the gradients are set to zeros, (2) apply the model to obtain a prediction, (3) calculate 10 https://pytorch.org/ 64 Implementing Text Classification Using Perceptron and LR the loss, (4) compute the gradient of the loss by back-propagation, and (5) update the model parameters. Recall that in our previous implementation everything was hardcoded: applying the model, computing the gradients, and optimizing the model parameters. Here, however, the implementation of the logistic regression is expressed at a higher level of abstraction. This means that we are describing the logical steps without specifying a particular implementation. Instead, implementation details are the responsability of the chosen model, loss function, and optimizer. Thus, we could even choose a different model, loss function, and/or optimizer, and use the same training steps with little or no modification. This decoupling of the training logic from the implementation details is one of the main advantages of libraries such as PyTorch. As shown in the code above, calling the model as a function, with the feature vectors as inputs, produces the predicted scores. Once again, a positive score corresponds to a positive label. When we evaluate this implementation on the test dataset, we obtain results that are in line with our previous models: Writing the perceptron and the logistic regression from scratch is a good exercise, as it exposes us to the fundamentals of implementing machine learning algorithms. However, this becomes cumbersome for more complex neural architectures. For this reason, from this point on, we will use PyTorch for all our coding examples. 4.2 Multiclass Classification So far, in this chapter we have discussed implementing binary classifiers. Next, we will modify these binary classifiers to perform multiclass classification, following the discussion in Section 3.5. 4.2.1 AG News Dataset Before explaining the actual training/testing code, we have to choose a new dataset that is suitable for multiclass classification. To this end, we will use the AG News Classification Dataset (Zhang et al., 2015), a subset of the larger AG corpus of news articles collected from thousands of different news sources.11 The classification dataset consists of four 11 http://groups.di.unipi.it/~gulli/AG_corpus_of_news_articles.html 4.2 Multiclass Classification 65 classes, and the data is equally balanced across all classes (30,000 articles per class for train, and 1,900 articles per class for testing). The goal of the task is to classify each article as one of the four classes: World, Sports, Business, or Sci/Tech. 4.2.2 Preparing the Dataset The AG News Dataset is distributed as two CSV files (one for training and one for testing), each containing three columns: the class index, the title, and the description. The dataset also provides a text file that maps the above class indexes to more descriptive class labels. Because of the tabular nature of the dataset, pandas, a Python library
for tabular data analysis,12 is a natural choice for loading and transform-
ing it. To this end, our Jupyter notebook (chap4_multiclass_logistic_regression) demonstrates the sequence of steps required to handle the data, as well
as model training and evaluation. First, we show how to load the CSV,
add column names, and inspect the result: class index . 0  3 
 . 1  3 
 . 2  3 
 . 3  3 
 . 4  3 
 ... ... . 119995  1 
 . 119996  2 
 . 119997  2 
 . 119998  2 
 . 119999  2 
 title Wall St. Bears Claw Back Into the Black (Reuters) Carlyle Looks Toward Commercial Aerospace (Reu... Oil and Economy Cloud Stocks' Outlook (Reuters) Iraq Halts Oil Exports from Main Southern Pipe... Oil prices soar to all-time record, posing new... ... Pakistan's Musharraf Says Won't Quit as Army C... Renteria signing a top-shelf deal Saban not going to Dolphins yet Today's NFL games Nets get Carter from Raptors description Reuters - Short-sellers, Wall Street's dwindli... Reuters - Private investment firm Carlyle Grou... Reuters - Soaring crude prices plus worries\ab... Reuters - Authorities have halted oil export\f... AFP - Tearaway world oil prices, toppling reco... ... KARACHI (Reuters) - Pakistani President Perve... Red Sox general manager Theo Epstein acknowled... The Miami Dolphins will put their courtship of... PITTSBURGH at NY GIANTS Time: 1:30 p.m. Line: ... INDIANAPOLIS -- All-Star Vince Carter was trad... 120000 rows × 3 columns Since the class labels themselves are in a separate file, we manually add them to the pandas data structure (called dataframe in pandas’ terminology) to increase the interpretability of the data. We use the class index column as a starting point, and use its map method to create a new column with the corresponding labels (technically a new Series object) that is added to the dataframe using its insert method, which allows us to insert the column in a specific position. Note that the label indices are one-based, so we subtract one to align them with their labels. 12 https://pandas.pydata.org 66 Implementing Text Classification Using Perceptron and LR class index . 0  3 
 . 1  3 
 . 2  3 
 . 3  3 
 . 4  3 
 ... ... . 119995  1 
 . 119996  2 
 . 119997  2 
 . 119998  2 
 . 119999  2 
 class Business Business Business Business Business ... World Sports Sports Sports Sports title Wall St. Bears Claw Back Into the Black (Reuters) Oil and Economy Cloud Stocks' Outlook (Reuters) Oil prices soar to all-time record, posing new... Pakistan's Musharraf Says Won't Quit as Army C... Saban not going to Dolphins yet Nets get Carter from Raptors description Reuters - Short-sellers, Wall Street's dwindli... Reuters - Soaring crude prices plus worries\ab... AFP - Tearaway world oil prices, toppling reco... KARACHI (Reuters) - Pakistani President Perve... The Miami Dolphins will put their courtship of... INDIANAPOLIS -- All-Star Vince Carter was trad... Iraq Halts Oil Exports from Main Southern Pipe... Reuters - Authorities have halted oil export\f... ... ... Renteria signing a top-shelf deal Red Sox general manager Theo Epstein acknowled... 120000 rows × 4 columns Carlyle Looks Toward Commercial Aerospace (Reu... Reuters - Private investment firm Carlyle Grou... Today's NFL games PITTSBURGH at NY GIANTS Time: 1:30 p.m. Line: ... Next we will preprocess the text. First we lowercase the title and description, and then we concatenate them into a single string. Then we remove some spurious backslashes from the text. Once this is done, the preprocessed text is added to the dataframe as a new column. Note that pandas allows these steps to be applied to all rows simultaneously. class index class title Wall St. Bears Claw Back Into the Black (Reuters) Oil and Economy Cloud Stocks' Outlook (Reuters) Oil prices soar to all-time record, posing new... ... Pakistan's Musharraf Says Won't Quit as Army C... Saban not going to Dolphins yet Nets get Carter from Raptors description Reuters - Short-sellers, Wall Street's dwindli... Reuters - Soaring crude prices plus worries\ab... AFP - Tearaway world oil prices, toppling reco... ... KARACHI (Reuters) - Pakistani President Perve... The Miami Dolphins will put their courtship of... INDIANAPOLIS -- All-Star Vince Carter was trad... text wall st. bears claw back into the black (reute... oil and economy cloud stocks' outlook (reuters... oil prices soar to all-time record, posing new... ... pakistan's musharraf says won't quit as army c... saban not going to dolphins yet the miami dolp... nets get carter from raptors indianapolis -- a... . 0  3 Business 
 . 1  3 Business 
 . 2  3 Business 
 . 3  3 Business 
 . 4  3 Business 
 ... ... ... . 119995  1 World 
 . 119996  2 Sports 
 . 119997  2 Sports 
 . 119998  2 Sports 
 . 119999  2 Sports 
 120000 rows × 5 columns Carlyle Looks Toward Commercial Reuters - Private investment firm Carlyle carlyle looks toward commercial Aerospace (Reu... Grou... aerospace (reu... Iraq Halts Oil Exports from Main Southern Pipe... Reuters - Authorities have halted oil export\f... iraq halts oil exports from main southern pipe... Renteria signing a top-shelf deal Red Sox general manager Theo Epstein renteria signing a top-shelf deal red sox acknowled... gene... Today's NFL games PITTSBURGH at NY GIANTS Time: 1:30 p.m. today's nfl games pittsburgh at ny giants Line: ... time... At this point, the text is ready to be tokenized. For this purpose we will use NLTK’s word_tokenize function. This function can be applied to the whole column at once using the pandas map function, which returns a new column which we add to the dataframe. However, here we actually use the progress_map function, which provides a visual progress bar. This visual feedback is especially helpful for tasks that take more time to complete. 4.2 Multiclass Classification 67 class index . 0  3 
 . 1  3 
 . 2  3 
 . 3  3 
 . 4  3 
 ... ... . 119995  1 
 . 119996  2 
 . 119997  2 
 . 119998  2 
 . 119999  2 
 class Business Business Business Business Business ... World Sports Sports Sports Sports title Wall St. Bears Claw Back Into the Black (Reuters) Oil and Economy Cloud Stocks' Outlook (Reuters) Oil prices soar to all-time record, posing new... ... Pakistan's Musharraf Says Won't Quit as Army C... Saban not going to Dolphins yet Nets get Carter from Raptors description Reuters - Short-sellers, Wall Street's dwindli... Reuters - Soaring crude prices plus worries\ab... AFP - Tearaway world oil prices, toppling reco... ... KARACHI (Reuters) - Pakistani President Perve... The Miami Dolphins will put their courtship of... INDIANAPOLIS -- All-Star Vince Carter was trad... text wall st. bears claw back into the black (reute... oil and economy cloud stocks' outlook (reuters... oil prices soar to all-time record, posing new... ... pakistan's musharraf says won't quit as army c... saban not going to dolphins yet the miami dolp... nets get carter from raptors indianapolis -- a... tokens [wall, st., bears, claw, back, into, the, blac... [oil, and, economy, cloud, stocks, ', outlook,... [oil, prices, soar, to, all-time, record, ,, p... ... [pakistan, 's, musharraf, says, wo, n't, quit,... [saban, not, going, to, dolphins, yet, the, mi... [nets, get, carter, from, raptors, indianapoli... 120000 rows × 6 columns Carlyle Looks Toward Commercial Reuters - Private investment firm carlyle looks toward commercial [carlyle, looks, toward, Aerospace (Reu... Carlyle Grou... aerospace (reu... commercial, aerospace... Iraq Halts Oil Exports from Main Reuters - Authorities have halted iraq halts oil exports from main [iraq, halts, oil, exports, from, Southern Pipe... oil export\f... southern pipe... main, southe... Renteria signing a top-shelf deal Red Sox general manager Theo renteria signing a top-shelf deal [renteria, signing, a, top-shelf, Epstein acknowled... red sox gene... deal, red, s... Today's NFL games PITTSBURGH at NY GIANTS today's nfl games pittsburgh at [today, 's, nfl, games, Time: 1:30 p.m. Line: ... ny giants time... pittsburgh, at, ny, gi... From the tokens we just created, we then create a vocabulary for our corpus. Here, we only keep the words that occur at least 10 times, decreasing the memory needed and reducing the likelihood that our vocabulary contains noisy tokens. Note that each row in the tokens column contains a list of tokens. In order to create the vocabulary, we will need to convert the Series of lists of tokens into a Series of tokens using the explode() Pandas method. Then we will use the value_counts() method to create a Series object in which the index are the tokens and the values are the number of times they appear in the corpus. The next step is removing the tokens with a count lower than our chosen threshold. Finally, we create a list with the remaining tokens, as well as a dictionary that maps tokens to token ids (i.e., the index of the token in the list). We include in the vocabulary a special token [UNK] that will be used as a placeholder for tokens that do not appear in our vocabulary after the frequency pruning. Using this vocabulary, we construct a feature vector for each news article in the corpus. This feature vector will be encoded as a dictionary, with keys corresponding to token ids, and values corresponding to the number of times the token appears in the article. As above, the feature vectors will be stored as a new column in the dataframe. 68 Implementing Text Classification Using Perceptron and LR class index class title Wall St. 
[Output: the updated dataframe — 120,000 rows × 7 columns, now including the features column.]

The final preprocessing step is converting the features and the class indices into PyTorch tensors. Recall that we need to subtract one from the class indices to make them zero-based. At this point, the data is fully processed and we are ready to begin training.

4.2.3 Multiclass Logistic Regression Using PyTorch

The model itself is a single linear layer whose input size corresponds to the size of our vocabulary, and whose output size corresponds to the number of classes in our corpus. PyTorch's Linear layer includes a bias by default, so there is no need to handle it manually the way we did for our perceptron example. The code for training this model (which implements Algorithm 6) is almost identical to that of the binary logistic regression. However, since we have to calculate a score for each of the four classes, we need to replace the previous BCEWithLogitsLoss with CrossEntropyLoss, which applies a softmax over the scores to obtain probabilities for each class. For each example, the model predicts four scores, one for each label. The label with the highest score is selected using the argmax function. We evaluate the predictions of our model for each class using scikit-learn's classification_report, which handles the results of multiclass classification.

4.3 Summary

In this chapter, we used movie review and news article classification to illustrate the implementation of the previously described algorithms for the binary perceptron, binary logistic regression, and multiclass logistic regression. For the binary logistic regression, we made a direct comparison between the lower-level NumPy implementation and a higher-level version that made use of PyTorch. We hope that through this series of exercises the reader has noted several key takeaways. First, data preparation is important and should be done thoughtfully. Certain tasks (e.g., text normalization or sentence splitting) are going to be frequently needed if you continue with NLP, so using or creating generic functions can be very helpful. However, what works for one dataset and one language may not be suitable for another scenario. For example, in our case, we selected different tokenizers for each of our tasks to account for the different registers of English, as well as removing diacritics during normalization. Second, when it comes to implementing machine learning algorithms, it is often easier to use a higher-level library such as PyTorch instead of NumPy.
For example, with the former, the gradients are calculated by the library, whereas in NumPy we have to code them ourselves. This becomes cumbersome quickly. For example, even the derivative of the softmax is non-trivial. Third, PyTorch imposes a training structure that remains largely the same, regardless of what models are being trained. That is, at a high level, the same steps are always required: clearing the current gradients, predicting output scores for the provided inputs, calculating the loss, and optimizing. These features make PyTorch a very powerful and convenient deep learning library; we will continue to use it throughout the remainder of the book to implement more complex neural architectures.
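To make this recurring structure concrete, here is a self-contained sketch of that training step; the layer sizes and the random example below are placeholders, and any model, loss function, and optimizer could be plugged into the same five lines:

import torch
from torch import nn, optim

# stand-in components: 5 input features, 3 classes, one random example
model = nn.Linear(5, 3)
loss_func = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)
x = torch.randn(1, 5)
y_true = torch.tensor([2])

model.zero_grad()                  # clear the current gradients
y_pred = model(x)                  # predict output scores for the input
loss = loss_func(y_pred, y_true)   # calculate the loss
loss.backward()                    # backpropagate to compute gradients
optimizer.step()                   # optimize the model parameters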
#!/usr/bin/env python # coding: utf-8 # # Multiclass Text Classification with # # Logistic Regression Implemented with PyTorch and CE Loss # First, we will do some initialization. # In[1]: import random import torch import numpy as np import pandas as pd from tqdm.notebook import tqdm # enable tqdm in pandas tqdm.pandas() # set to True to use the gpu (if there is one available) use_gpu = True # select device device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu') print(f'device: {device.type}') # random seed seed = 1234 # set random seed if seed is not None: print(f'random seed: {seed}') random.seed(seed) np.random.seed(seed) torch.manual_seed(seed) # We will be using the AG's News Topic Classification Dataset. # It is stored in two CSV files: `train.csv` and `test.csv`, as well as a `classes.txt` that stores the labels of the classes to predict. # # First, we will load the training dataset using [pandas](https://pandas.pydata.org/) and take a quick look at how the data. # In[2]: train_df = pd.read_csv('data/ag_news_csv/train.csv', header=None) train_df.columns = ['class index', 'title', 'description'] train_df # The dataset consists of 120,000 examples, each consisting of a class index, a title, and a description. # The class labels are distributed in a separated file. We will add the labels to the dataset so that we can interpret the data more easily. Note that the label indexes are one-based, so we need to subtract one to retrieve them from the list. # In[3]: labels = open('data/ag_news_csv/classes.txt').read().splitlines() classes = train_df['class index'].map(lambda i: labels[i-1]) train_df.insert(1, 'class', classes) train_df # Let's inspect how balanced our examples are by using a bar plot. # In[4]: pd.value_counts(train_df['class']).plot.bar() # The classes are evenly distributed. That's great! # # However, the text contains some spurious backslashes in some parts of the text. # They are meant to represent newlines in the original text. # An example can be seen below, between the words "dwindling" and "band". # In[5]: print(train_df.loc[0, 'description']) # We will replace the backslashes with spaces on the whole column using pandas replace method. # In[6]: title = train_df['title'].str.lower() descr = train_df['description'].str.lower() text = title + " " + descr train_df['text'] = text.str.replace('\\', ' ', regex=False) train_df # Now we will proceed to tokenize the title and description columns using NLTK's word_tokenize(). # We will add a new column to our dataframe with the list of tokens. # In[7]: from nltk.tokenize import word_tokenize train_df['tokens'] = train_df['text'].progress_map(word_tokenize) train_df # Now we will create a vocabulary from the training data. We will only keep the terms that repeat beyond some threshold established below. 
# In[8]: threshold = 10 tokens = train_df['tokens'].explode().value_counts() tokens = tokens[tokens > threshold] id_to_token = ['[UNK]'] + tokens.index.tolist() token_to_id = {w:i for i,w in enumerate(id_to_token)} vocabulary_size = len(id_to_token) print(f'vocabulary size: {vocabulary_size:,}') # In[9]: from collections import defaultdict def make_feature_vector(tokens, unk_id=0): vector = defaultdict(int) for t in tokens: i = token_to_id.get(t, unk_id) vector[i] += 1 return vector train_df['features'] = train_df['tokens'].progress_map(make_feature_vector) train_df # In[10]: def make_dense(feats): x = np.zeros(vocabulary_size) for k,v in feats.items(): x[k] = v return x X_train = np.stack(train_df['features'].progress_map(make_dense)) y_train = train_df['class index'].to_numpy() - 1 X_train = torch.tensor(X_train, dtype=torch.float32) y_train = torch.tensor(y_train) # In[11]: from torch import nn from torch import optim # hyperparameters lr = 1.0 n_epochs = 5 n_examples = X_train.shape[0] n_feats = X_train.shape[1] n_classes = len(labels) # initialize the model, loss function, optimizer, and data-loader model = nn.Linear(n_feats, n_classes).to(device) loss_func = nn.CrossEntropyLoss() optimizer = optim.SGD(model.parameters(), lr=lr) # train the model indices = np.arange(n_examples) for epoch in range(n_epochs): np.random.shuffle(indices) for i in tqdm(indices, desc=f'epoch {epoch+1}'): # clear gradients model.zero_grad() # send datum to right device x = X_train[i].unsqueeze(0).to(device) y_true = y_train[i].unsqueeze(0).to(device) # predict label scores y_pred = model(x) # compute loss loss = loss_func(y_pred, y_true) # backpropagate loss.backward() # optimize model parameters optimizer.step() # Next, we evaluate on the test dataset # In[12]: # repeat all preprocessing done above, this time on the test set test_df = pd.read_csv('data/ag_news_csv/test.csv', header=None) test_df.columns = ['class index', 'title', 'description'] test_df['text'] = test_df['title'].str.lower() + " " + test_df['description'].str.lower() test_df['text'] = test_df['text'].str.replace('\\', ' ', regex=False) test_df['tokens'] = test_df['text'].progress_map(word_tokenize) test_df['features'] = test_df['tokens'].progress_map(make_feature_vector) X_test = np.stack(test_df['features'].progress_map(make_dense)) y_test = test_df['class index'].to_numpy() - 1 X_test = torch.tensor(X_test, dtype=torch.float32) y_test = torch.tensor(y_test) # In[13]: from sklearn.metrics import classification_report # set model to evaluation mode model.eval() # don't store gradients with torch.no_grad(): X_test = X_test.to(device) y_pred = torch.argmax(model(X_test), dim=1) y_pred = y_pred.cpu().numpy() print(classification_report(y_test, y_pred, target_names=labels))
Each of these numbers will be used as the index in a list, so they must start at zero and grow by one for each word in the vocabulary. For example, one possible vocabulary that encodes the previous reviews is:

{'would': 0, 'hated': 1, 'my': 2, 'liked': 3, 'not': 4, 'it': 5, 'movie': 6, 'recommend': 7, 'the': 8, 'I': 9, 'too': 10, 'friend': 11}

Using this mapping, we can encode the two reviews as follows:

Review 1: [0, 0, 1, 2, 0, 1, 1, 0, 1, 1, 1, 1]
Review 2: [1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0]

Note that the word liked (fourth position) in the first review has a value of two. This is because this word appears twice in that review. This is a small example with a vocabulary of only 12 terms. Of course, the same process needs to be implemented for our whole training dataset. For this purpose we will use scikit-learn's CountVectorizer class.6 Using the CountVectorizer class simplifies things, allowing us to get started quickly with a bag-of-words approach. However, note that it makes several simplifying assumptions (e.g., text is lowercased, and punctuation and single-character tokens are removed). Some of these may not be adequate for other tasks. First, we need to obtain the filenames for the reviews in the training set:
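Following the accompanying notebook, this can be done with Python's glob module (the data/aclImdb path is where the notebook expects the unpacked dataset):

from glob import glob

pos_files = glob('data/aclImdb/train/pos/*.txt')
neg_files = glob('data/aclImdb/train/neg/*.txt')
print('number of positive reviews:', len(pos_files))
print('number of negative reviews:', len(neg_files))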
Once we have acquired the filenames for the training reviews, we need to read them using the CountVectorizer. In order for the CountVectorizer to open and read the files for us, we make use of the input='filename' constructor parameter (otherwise it would expect the string content directly). The CountVectorizer provides three methods that will be useful for us: a method called fit() that is used to acquire the vocabulary, a method transform() that converts the text into the bag-of-words representation, and a method fit_transform() that conveniently acquires the vocabulary and transforms the data in a single step. The resulting object is referred to as a document-term matrix, where each row corresponds to a document, and each column corresponds to a term in the vocabulary.

6 https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html
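Continuing from the filenames gathered above, the notebook builds the document-term matrix in a single call:

from sklearn.feature_extraction.text import CountVectorizer

# the vectorizer will receive filenames and read the files itself
cv = CountVectorizer(input='filename')
# acquire the vocabulary and build the sparse document-term matrix in one step
doc_term_matrix = cv.fit_transform(pos_files + neg_files)
# the representation of a sparse matrix reports its shape and stored elements
print(repr(doc_term_matrix))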
As the output above indicates, the resulting matrix has 25,000 rows (one for each review), and 74,849 columns (one for each term). Also, you may note that this matrix is sparse, with 3,445,861 stored elements. A regular matrix of shape 25,000 × 74,849 would have 1,871,225,000 elements. However, most of the elements in a document-term matrix are zeros because only a few words from the vocabulary appear in each document. A sparse matrix takes advantage of this fact by storing only the non-zero cells in order to reduce the memory required to store it. Thus, sparse matrices are convenient, especially when dealing with lots of data. Nevertheless, to simplify the downstream code in this example, we will convert it into a dense matrix, i.e., a regular two-dimensional NumPy array. Finally, we also need the labels of the reviews. We assign a label of one to positive reviews, and a label of zero to negative ones. Note that the first half of the reviews are positive and the second half are negative. The label at the ith position of the y_train array corresponds to the review encoded in the ith row of the X_train matrix.

4.1.3 Perceptron

Now that we have defined our task and the data processing pipeline, we will implement a perceptron classifier that classifies the movie reviews as positive or negative. The entire code discussed in this section is available in the chap4_perceptron notebook. Recall from Section 2.4 that the perceptron is composed of a weight vector w and a bias term b. These will be represented as a NumPy array w of the same length as our document vectors, and a variable b for the bias term. Both will be initialized with zeros. The parameters w and b are learned through the training loop sketched below, which implements Algorithm 2 from Chapter 2. There are a couple of details to point out. Line 3 of Algorithm 2 indicates that we need to repeat the training loop until convergence. Theoretically, convergence is defined as predicting all training examples correctly. This is an ambitious requirement, which is not always possible in practice, so in this code we also include a stop condition if we reach a maximum number of epochs. Another crucial difference between our implementation here and the theoretical Algorithm 2 is that we randomize the order in which the training examples are seen at the beginning of each epoch. This simple (but highly recommended!) change is necessary to avoid the introduction of spurious biases due to the arbitrary order of the examples in the original training partition.7 We accomplish this by storing the indices corresponding to the X_train matrix rows in a NumPy array, and shuffling these indices at the beginning of each epoch. We shuffle the indices instead of the examples so that we can preserve the mapping between examples and labels. The training loop aligns closely with Algorithm 2. We start by iterating over each example in our training data, storing the current example in the variable x,8 and its corresponding label in the variable y_true. Next, we compute the perceptron decision function shown in Algorithm 1. Note that NumPy (as well as PyTorch) uses Python's @ operator to indicate vector or matrix multiplication, depending on its operand types. Here we use it to calculate the dot product of the example x and the weights w. To this we add the bias b to obtain the predicted score, whose sign is used to assign a positive or negative predicted label. If the prediction is correct, then no update is needed, and we can move on to the next training example. However, if the prediction is incorrect, then we need to adjust w and b, as described in Algorithm 2.
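A sketch of one way to write this loop (the chap4_perceptron notebook may differ in its exact bookkeeping; X_train and y_train are assumed from the preprocessing above):

import numpy as np
from tqdm.notebook import tqdm

n_epochs = 10                      # stop condition in case we never fully converge
w = np.zeros(X_train.shape[1])     # weight vector, initialized with zeros
b = 0.0                            # bias term, initialized with zero

indices = np.arange(X_train.shape[0])
for epoch in range(n_epochs):
    np.random.shuffle(indices)     # randomize the order of the training examples
    n_mistakes = 0
    for i in tqdm(indices, desc=f'epoch {epoch+1}'):
        x, y_true = X_train[i], y_train[i]
        score = x @ w + b          # perceptron decision function
        y_pred = 1 if score > 0 else 0
        if y_pred != y_true:
            n_mistakes += 1
            # mistake-driven update: move the decision boundary toward x
            if y_true == 1:
                w, b = w + x, b + 1
            else:
                w, b = w - x, b - 1
    if n_mistakes == 0:
        break                      # converged: every training example is classified correctly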
Sidebar 4.1 The tqdm function

This is our first exposure to the tqdm function. tqdm is a progress bar that promises to “make your loops show a smart progress meter.”9 The name tqdm comes from the Arabic word taqaddum, which can mean “progress.” Using tqdm is as simple as wrapping it around the collection to be traversed.

After training, we evaluate the model's performance on the held-out test partition. The test data is loaded similarly to the training partition, but with one notable difference: we use CountVectorizer's transform() method instead of the fit_transform() method so that the vocabulary is not adjusted for the test data. We won't show here the loading of the test partition since it is so similar to the code already shown, but it is available in the Jupyter notebook that accompanies this section.

7 As an extreme example, consider a dataset where all the positive examples appear first in the training partition. This would cause the perceptron to artificially inflate the weights of the features that occur in these examples, a situation from which the learning algorithm may struggle to recover.
8 We use typewriter font when we discuss variables in the code, to distinguish code from the theoretical discussion in the other chapters.
9 https://github.com/tqdm/tqdm

Using the model to assign labels to all the test data is easily done in one step: we simply multiply the entire test data document-term matrix by the previously learned weights and add the bias. Scores greater than zero indicate a positive review, and those less than zero are negative. At this point we can evaluate the classifier's performance, which we will do using precision, recall, and F1 scores for binary classification (described in Section 2.3). For this purpose, we implement a function called binary_classification_report that computes these metrics and returns them as a dictionary (its full definition appears in the accompanying notebook). We call this function to compare the predicted labels to the true labels, and obtain the evaluation scores. Our F1 score here is 86.8%, which is much higher than the baseline that assigns labels randomly, which yields an F1 score of about 50%. This is a good result, especially considering the simplicity of the perceptron! In the next sections and chapters, we will discuss a battery of strategies to considerably improve this performance.

4.1.4 Binary Logistic Regression from Scratch

Using the same task, dataset, and evaluation, we will now implement a logistic regression classifier, as described in Algorithm 5 from Chapter 3. To give the reader hands-on experience with the implementation of the gradient calculations for logistic regression, we start by implementing it from scratch using NumPy. All the code shown in this section is available in the chap4_logistic_regression_numpy notebook. In the perceptron implementation, we represented the weights and the bias as two different variables. Here, however, we will use a different approach that will allow us to unify them into a single vector variable. Specifically, we take advantage of the similarity between the derivative of the cost function with respect to the weights (Equation 3.14) and the derivative of the cost with respect to the bias (Equation 3.15):

$\frac{d}{dw_j} C_i(\mathbf{w}, b) = (\sigma_i - y_i)\, x_{ij}$   (3.14 revisited)

$\frac{d}{db} C_i(\mathbf{w}, b) = \sigma_i - y_i$   (3.15 revisited)

Note that the two derivative formulas are identical except that the former has a multiplication by $x_{ij}$, while the latter does not. However, since $\sigma_i - y_i = (\sigma_i - y_i) \cdot 1$, we can multiply the derivative of the cost with respect to the bias by one without changing the semantics. This gives an opportunity for combining the computations, doing them both in a single pass. The idea is that we can treat the bias as a weight corresponding to a feature that always has a value of one. As shown in the sketch below, we create a NumPy array of ones of the same length as the number of examples in our training set (i.e., the number of rows in the data matrix). Then we add this array as a new column to the data matrix, using NumPy's column_stack function. Next, we need to initialize our model. This time we will use a single NumPy array w of the same length as the number of columns in the data matrix. The weight vector w is initialized randomly with values between 0 and 1:
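The corresponding steps, following the accompanying notebook (X_train is the dense document-term matrix built earlier):

import numpy as np

# append a constant feature of 1 to every example so that the bias
# can be folded into the weight vector
ones = np.ones(X_train.shape[0])
X_train = np.column_stack((X_train, ones))

# initialize the model: one weight per column, drawn uniformly from [0, 1)
n_examples, n_features = X_train.shape
w = np.random.random(n_features)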
Before implementing the learning algorithm, we need an implementation of the logistic function. Recall that the logistic function is

$\sigma(x) = \frac{1}{1 + e^{-x}}$   (3.1 revisited)

This function can be easily implemented in NumPy (see the sigmoid function in the accompanying notebook). However, a naive implementation may produce an overflow warning during training. The term overflow indicates that the result of evaluating exp(-x) is a number so large that it can't be represented by a float (specifically, we're using float64 numbers). We will avoid this issue by not calling exp with values that will overflow. NumPy provides the function finfo that can be consulted to find the limits of floating point numbers. The log of the largest floating point number is the largest number for which exp() will not overflow, so we will use it as a threshold to filter out problematic values. We now have everything we need to implement Algorithm 4. The steps to follow for each example are: (1) use the model to make a prediction, (2) calculate the gradient of the loss function with respect to the model parameters, and (3) update the model parameters using the gradient. The size of the update is controlled by the learning rate. Once the model has been trained, we evaluate it on the test dataset using our binary_classification_report function from the previous section. Loading and preprocessing the test dataset follows the same steps as with the previous classifier. We omit the code for brevity. The resulting performance is comparable with that of the perceptron. The difference in F1 scores between the two classifiers (84.9% here vs. 86.8% for the perceptron) is not significant. Classifier parity is probably attributable to the signal distinguishing the two classes being easy enough to learn that the simpler perceptron training algorithm is sufficient in this case. Nevertheless, this task is useful in showing how to implement the logistic regression model from scratch, i.e., by implementing the gradient calculation and parameter updates manually. Next, we will implement the same model again using PyTorch, highlighting how this machine learning library simplifies the process.

4.1.5 Binary Logistic Regression Utilizing PyTorch

While it is fairly straightforward to compute the derivatives for logistic regression and implement them directly in NumPy, this will not scale well to arbitrary neural architectures. Fortunately, there are libraries that automate the computation of the derivatives of the cost function (assuming it is differentiable!) for any neural network, and use the resulting gradients to perform gradient descent or other more sophisticated optimization procedures. To this end, we will use the PyTorch deep learning library.10 The corresponding notebook for this section is chap4_logistic_regression_pytorch_bce. Our model for logistic regression corresponds to PyTorch's Linear layer. When we instantiate this layer, we specify the size of the inputs (the size of our vocabulary) and the size of the output, i.e., the number of output neurons (which is one because we're doing binary classification). The loss function we use is the binary cross-entropy loss (see Chapter 3), which is implemented as BCEWithLogitsLoss in PyTorch. In PyTorch, the gradients obtained from the loss function are applied to the model by an optimizer object, which implements and applies an optimization algorithm. Here we will use the vanilla stochastic gradient descent optimizer; we set its learning rate to 0.1. This is equivalent to the discussion in Section 3.2.
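A sketch of this setup (the chap4_logistic_regression_pytorch_bce notebook is not reproduced here, but the same pattern appears in the multiclass notebook later in this chapter):

import torch
from torch import nn, optim

n_features = X_train.shape[1]   # size of the CountVectorizer vocabulary
# a single output neuron: the model produces one score per review
model = nn.Linear(n_features, 1)
# binary cross-entropy loss that applies the sigmoid to the score internally
loss_func = nn.BCEWithLogitsLoss()
# vanilla stochastic gradient descent
optimizer = optim.SGD(model.parameters(), lr=0.1)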
Similarly to the manual implementation, the steps required to train the model for a given training example are: (1) ensure the gradients are set to zero, (2) apply the model to obtain a prediction, (3) calculate the loss, (4) compute the gradient of the loss by back-propagation, and (5) update the model parameters. Recall that in our previous implementation everything was hardcoded: applying the model, computing the gradients, and optimizing the model parameters. Here, however, the implementation of the logistic regression is expressed at a higher level of abstraction. This means that we are describing the logical steps without specifying a particular implementation. Instead, implementation details are the responsibility of the chosen model, loss function, and optimizer. Thus, we could even choose a different model, loss function, and/or optimizer, and use the same training steps with little or no modification. This decoupling of the training logic from the implementation details is one of the main advantages of libraries such as PyTorch. As shown in the code above, calling the model as a function, with the feature vectors as inputs, produces the predicted scores. Once again, a positive score corresponds to a positive label. When we evaluate this implementation on the test dataset, we obtain results that are in line with our previous models. Writing the perceptron and the logistic regression from scratch is a good exercise, as it exposes us to the fundamentals of implementing machine learning algorithms. However, this becomes cumbersome for more complex neural architectures. For this reason, from this point on, we will use PyTorch for all our coding examples.

10 https://pytorch.org/

4.2 Multiclass Classification

So far, in this chapter we have discussed implementing binary classifiers. Next, we will modify these binary classifiers to perform multiclass classification, following the discussion in Section 3.5.

4.2.1 AG News Dataset

Before explaining the actual training/testing code, we have to choose a new dataset that is suitable for multiclass classification. To this end, we will use the AG News Classification Dataset (Zhang et al., 2015), a subset of the larger AG corpus of news articles collected from thousands of different news sources.11 The classification dataset consists of four classes, and the data is equally balanced across all classes (30,000 articles per class for train, and 1,900 articles per class for testing). The goal of the task is to classify each article as one of the four classes: World, Sports, Business, or Sci/Tech.

11 http://groups.di.unipi.it/~gulli/AG_corpus_of_news_articles.html

4.2.2 Preparing the Dataset

The AG News Dataset is distributed as two CSV files (one for training and one for testing), each containing three columns: the class index, the title, and the description. The dataset also provides a text file that maps the above class indexes to more descriptive class labels. Because of the tabular nature of the dataset, pandas, a Python library
for tabular data analysis,12 is a natural choice for loading and transforming it. To this end, our Jupyter notebook (chap4_multiclass_logistic_regression) demonstrates the sequence of steps required to handle the data, as well
as model training and evaluation. First, we show how to load the CSV,
add column names, and inspect the result:
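Following the accompanying notebook:

import pandas as pd

train_df = pd.read_csv('data/ag_news_csv/train.csv', header=None)
train_df.columns = ['class index', 'title', 'description']
train_df  # in the notebook, this displays the first and last rows of the dataframe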
[Output: the training dataframe — 120,000 rows × 3 columns: class index, title, and description.]

Since the class labels themselves are in a separate file, we manually add them to the pandas data structure (called dataframe in pandas' terminology) to increase the interpretability of the data. We use the class index column as a starting point, and use its map method to create a new column with the corresponding labels (technically a new Series object) that is added to the dataframe using its insert method, which allows us to insert the column in a specific position. Note that the label indices are one-based, so we subtract one to align them with their labels.

12 https://pandas.pydata.org
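The corresponding notebook code:

labels = open('data/ag_news_csv/classes.txt').read().splitlines()
# class indices are one-based, so subtract one before looking up the label
classes = train_df['class index'].map(lambda i: labels[i - 1])
# insert the new column right after the class index column
train_df.insert(1, 'class', classes)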
[Output: the dataframe with the new class column — 120,000 rows × 4 columns.]

Next we will preprocess the text. First we lowercase the title and description, and then we concatenate them into a single string. Then we remove some spurious backslashes from the text. Once this is done, the preprocessed text is added to the dataframe as a new column. Note that pandas allows these steps to be applied to all rows simultaneously.
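As in the accompanying notebook:

title = train_df['title'].str.lower()
descr = train_df['description'].str.lower()
text = title + " " + descr
# the backslashes stand in for newlines in the original articles; replace them with spaces
train_df['text'] = text.str.replace('\\', ' ', regex=False)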
[Output: the dataframe with the new text column — 120,000 rows × 5 columns.]

At this point, the text is ready to be tokenized. For this purpose we will use NLTK's word_tokenize function. This function can be applied to the whole column at once using the pandas map function, which returns a new column which we add to the dataframe. However, here we actually use the progress_map function, which provides a visual progress bar. This visual feedback is especially helpful for tasks that take more time to complete.
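The notebook enables progress_map through tqdm's pandas integration:

from nltk.tokenize import word_tokenize
from tqdm.notebook import tqdm

tqdm.pandas()  # registers progress_map/progress_apply on pandas objects
train_df['tokens'] = train_df['text'].progress_map(word_tokenize)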
[Output: the dataframe with the new tokens column — 120,000 rows × 6 columns.]

From the tokens we just created, we then create a vocabulary for our corpus. Here, we only keep the words that occur at least 10 times, decreasing the memory needed and reducing the likelihood that our vocabulary contains noisy tokens. Note that each row in the tokens column contains a list of tokens. In order to create the vocabulary, we will need to convert the Series of lists of tokens into a Series of tokens using the explode() Pandas method. Then we will use the value_counts() method to create a Series object in which the index contains the tokens and the values are the number of times they appear in the corpus. The next step is removing the tokens with a count lower than our chosen threshold. Finally, we create a list with the remaining tokens, as well as a dictionary that maps tokens to token ids (i.e., the index of the token in the list). We include in the vocabulary a special token [UNK] that will be used as a placeholder for tokens that do not appear in our vocabulary after the frequency pruning.

Using this vocabulary, we construct a feature vector for each news article in the corpus. This feature vector will be encoded as a dictionary, with keys corresponding to token ids, and values corresponding to the number of times the token appears in the article. As above, the feature vectors will be stored as a new column in the dataframe.
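The notebook implements the feature vectors with a small helper (token_to_id is the vocabulary dictionary built above):

from collections import defaultdict

def make_feature_vector(tokens, unk_id=0):
    # map each token to its id (falling back to [UNK]) and count occurrences
    vector = defaultdict(int)
    for t in tokens:
        i = token_to_id.get(t, unk_id)
        vector[i] += 1
    return vector

train_df['features'] = train_df['tokens'].progress_map(make_feature_vector)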
[Output: the updated dataframe — 120,000 rows × 7 columns, now including the features column.]

The final preprocessing step is converting the features and the class indices into PyTorch tensors. Recall that we need to subtract one from the class indices to make them zero-based. At this point, the data is fully processed and we are ready to begin training.

4.2.3 Multiclass Logistic Regression Using PyTorch

The model itself is a single linear layer whose input size corresponds to the size of our vocabulary, and whose output size corresponds to the number of classes in our corpus. PyTorch's Linear layer includes a bias by default, so there is no need to handle it manually the way we did for our perceptron example. The code for training this model (which implements Algorithm 6) is almost identical to that of the binary logistic regression. However, since we have to calculate a score for each of the four classes, we need to replace the previous BCEWithLogitsLoss with CrossEntropyLoss, which applies a softmax over the scores to obtain probabilities for each class. For each example, the model predicts four scores, one for each label. The label with the highest score is selected using the argmax function. We evaluate the predictions of our model for each class using scikit-learn's classification_report, which handles the results of multiclass classification.

4.3 Summary

In this chapter, we used movie review and news article classification to illustrate the implementation of the previously described algorithms for the binary perceptron, binary logistic regression, and multiclass logistic regression. For the binary logistic regression, we made a direct comparison between the lower-level NumPy implementation and a higher-level version that made use of PyTorch. We hope that through this series of exercises the reader has noted several key takeaways. First, data preparation is important and should be done thoughtfully. Certain tasks (e.g., text normalization or sentence splitting) are going to be frequently needed if you continue with NLP, so using or creating generic functions can be very helpful. However, what works for one dataset and one language may not be suitable for another scenario. For example, in our case, we selected different tokenizers for each of our tasks to account for the different registers of English, as well as removing diacritics during normalization. Second, when it comes to implementing machine learning algorithms, it is often easier to use a higher-level library such as PyTorch instead of NumPy.
For example, with the former, the gradients are calculated by the library, whereas in NumPy we have to code them ourselves. This becomes cumbersome quickly. For example, even the derivative of the softmax is non-trivial. Third, PyTorch imposes a training structure that remains largely the same, regardless of what models are being trained. That is, at a high level, the same steps are always required: clearing the current gradients, predicting output scores for the provided inputs, calculating the loss, and optimizing. These features make PyTorch a very powerful and convenient deep learning library; we will continue to use it throughout the remainder of the book to implement more complex neural architectures.
#!/usr/bin/env python # coding: utf-8 # # Binary Text Classification with # # Logistic Regression Implemented from Scratch # In[1]: import random import numpy as np from tqdm.notebook import tqdm # set this variable to a number to be used as the random seed # or to None if you don't want to set a random seed seed = 1234 if seed is not None: random.seed(seed) np.random.seed(seed) # The dataset is divided in two directories called `train` and `test`. # These directories contain the training and testing splits of the dataset. # In[2]: get_ipython().system('ls -lh data/aclImdb/') # Both the `train` and `test` directories contain two directories called `pos` and `neg` that contain text files with the positive and negative reviews, respectively. # In[3]: get_ipython().system('ls -lh data/aclImdb/train/') # We will now read the filenames of the positive and negative examples. # In[4]: from glob import glob pos_files = glob('data/aclImdb/train/pos/*.txt') neg_files = glob('data/aclImdb/train/neg/*.txt') print('number of positive reviews:', len(pos_files)) print('number of negative reviews:', len(neg_files)) # Now, we will use a [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html) to read the text files, tokenize them, acquire a vocabulary from the training data, and encode it in a document-term matrix in which each row represents a review, and each column represents a term in the vocabulary. Each element $(i,j)$ in the matrix represents the number of times term $j$ appears in example $i$. # In[5]: from sklearn.feature_extraction.text import CountVectorizer # initialize CountVectorizer indicating that we will give it a list of filenames that have to be read cv = CountVectorizer(input='filename') # learn vocabulary and return sparse document-term matrix doc_term_matrix = cv.fit_transform(pos_files + neg_files) doc_term_matrix # Note in the message printed above that the matrix is of shape (25000, 74894). # In other words, it has 1,871,225,000 elements. # However, only 3,445,861 elements were stored. # This is because most of the elements in the matrix are zeros. # The reason is that the reviews are short and most words in the english language don't appear in each review. # A matrix that only stores non-zero values is called *sparse*. # # Now we will convert it to a dense numpy array: # In[6]: X_train = doc_term_matrix.toarray() X_train.shape # In[7]: # Append 1s to the xs; this will allow us to multiply by the weights and # the bias in a single pass. # Make an array with a one for each row/data point ones = np.ones(X_train.shape[0]) # Concatenate these ones to existing feature vectors X_train = np.column_stack((X_train, ones)) X_train.shape # We will also create a numpy array with the binary labels for the reviews. # One indicates a positive review and zero a negative review. # The label `y_train[i]` corresponds to the review encoded in row `i` of the `X_train` matrix. # In[8]: # training labels y_pos = np.ones(len(pos_files)) y_neg = np.zeros(len(neg_files)) y_train = np.concatenate([y_pos, y_neg]) y_train # Now we will initialize our model, in the form of an array of weights `w` of the same size as the number of features in our dataset (i.e., the number of words in the vocabulary acquired by [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html)), and a bias term `b`. # Both are initialized to zeros. 
# In[9]: # initialize model: the feature vector and bias term are populated with zeros n_examples, n_features = X_train.shape w = np.random.random(n_features) # Now we will use the logistic regression learning algorithm to learn the values of `w` and `b` from our training data. # In[10]: # from scipy.special import expit as sigmoid def sigmoid(z): if -z > np.log(np.finfo(float).max): return 0.0 return 1 / (1 + np.exp(-z)) # In[11]: lr = 1e-1 n_epochs = 10 indices = np.arange(n_examples) for epoch in range(10): # randomize the order in which training examples are seen in this epoch np.random.shuffle(indices) # traverse the training data for i in tqdm(indices, desc=f'epoch {epoch+1}'): x = X_train[i] y = y_train[i] # calculate the derivative of the cost function for this batch deriv_cost = (sigmoid(x @ w) - y) * x # update the weights w = w - lr * deriv_cost # The next step is evaluating the model on the test dataset. # Note that this time we use the [`transform()`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html#sklearn.feature_extraction.text.CountVectorizer.transform) method of the [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html), instead of the [`fit_transform()`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html#sklearn.feature_extraction.text.CountVectorizer.fit_transform) method that we used above. This is because we want to use the learned vocabulary in the test set, instead of learning a new one. # In[12]: pos_files = glob('data/aclImdb/test/pos/*.txt') neg_files = glob('data/aclImdb/test/neg/*.txt') doc_term_matrix = cv.transform(pos_files + neg_files) X_test = doc_term_matrix.toarray() X_test = np.column_stack((X_test, np.ones(X_test.shape[0]))) y_pos = np.ones(len(pos_files)) y_neg = np.zeros(len(neg_files)) y_test = np.concatenate([y_pos, y_neg]) # Using the model is easy: multiply the document-term matrix by the learned weights and add the bias. # We use Python's `@` operator to perform the matrix-vector multiplication. # In[13]: y_pred = X_test @ w > 0 # Now we print an evaluation of the prediction results using scikit-learn's [`classification_report()`](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.classification_report.html) function. # In[14]: def binary_classification_report(y_true, y_pred): # count true positives, false positives, true negatives, and false negatives tp = fp = tn = fn = 0 for gold, pred in zip(y_true, y_pred): if pred == True: if gold == True: tp += 1 else: fp += 1 else: if gold == False: tn += 1 else: fn += 1 # calculate precision and recall precision = tp / (tp + fp) recall = tp / (tp + fn) # calculate f1 score fscore = 2 * precision * recall / (precision + recall) # calculate accuracy accuracy = (tp + tn) / len(y_true) # number of positive labels in y_true support = sum(y_true) return { "precision": precision, "recall": recall, "f1-score": fscore, "support": support, "accuracy": accuracy, } # In[15]: binary_classification_report(y_test, y_pred)
4 Implementing Text Classification Using Perceptron and Logistic Regression In the previous chapters we have discussed the theory behind the perceptron and logistic regression, including mathematical explanations of how and why they are able to learn from examples. In this chapter we will transition from math to code. Specifically, we will discuss how to implement these models in the Python programming language. All the code that we will introduce throughout this book is available online as well: http://clulab.github.io/gentlenlp/. The reader who is not familiar with the Python programming language is encouraged to read first Appendix A, for a brief introduction to the language, and Appendix B, for a discussion on how computers encode and preprocess text. Once done, please return here. To get a better understanding of how these algorithms work under the hood, we will start by implementing them from scratch. However, as the book progresses, we will introduce some of the popular tools and libraries that make Python the language of choice for machine learning, e.g., PyTorch,1 and Hugging Face’s transformers.2 The code for all the examples in the book is provided in the form of Jupyter notebooks.3 Important fragments of these notebooks will be presented in the implementation chapters so that the reader has the whole picture just by reading the book. However, we strongly encourage you to download the notebooks and execute them yourself. We also encourage you to modify them to conduct your own experiments! 1 https://pytorch.org
2 https://huggingface.co 3 https://jupyter.org/ 55 56 Implementing Text Classification Using Perceptron and LR 4.1 Binary Classification We begin this chapter with binary classification. That is, we aim to train classifiers that assign one of two labels to a given text. As the example for this task, we will train a review classifier using the the Large Movie Review Dataset (Maas et al., 2011).4 We tackle this task by implementing first a binary perceptron classifier, followed by a binary logistic regression one. We will implement the latter both from scratch as well as using PyTorch, so the reader has a clearer understanding on how PyTorch works “under the hood.” 4.1.1 Large Movie Review Dataset This dataset contains movie reviews and their associated scores (between 1 and 10) as provided by IMDb.5 converted these scores to binary labels by assigning each review a positive or negative label if the review score was above 6 or below 5, respectively. Reviews with scores 5 and 6 were considered too neutral and thus excluded. We follow the same protocol in this chapter. The dataset is divided in two even partitions called train and test, each containing 25,000 reviews. The dataset also provides additional unlabeled reviews, but we will not use those here. Each partition contains two directories called pos and neg where the positive and negative examples are stored. Each review is stored in an independent text file, whose name is composed of an id unique to the partition and the score associated with the review, separated by an underscore. An example of a positive and a negative review is shown in Table 4.1. 4.1.2 Bag-of-words Model As discussed in Section 2.2, we will encode the text to classify as a bag of words. That is, we encode each review as a list of numbers, with each position in the list corresponding to a word in our vocabulary, and the value stored in that position corresponding to the number of times the word appears in the review. For example, say we want to encode the following two reviews: 4 https://ai.stanford.edu/~amaas/data/sentiment/ 5 https://www.imdb.com/ Maas et al. 4.1 Binary Classification 57 Table 4.1 Two examples of movie reviews from IMDb. The first is a positive review of the movie Puss in Boots (1988). The second is a negative review of the movie Valentine (2001). These reviews can be found at https://www.imdb.com/review/rw0606396/ and https://www.imdb.com/review/rw0721861/, respectively. Filename Score Binary Label train/pos/24_8.txt 8/10 Positive train/neg/141_3.txt 3/10 Negative Review Text Although this was obviously a low-budget production, the performances and the songs in this movie are worth seeing. One of Walken’s few musical roles to date. (he is a marvelous dancer and singer and he demonstrates his acrobatic skills as well - watch for the cartwheel!) Also starring Jason Connery. A great children’s story and very likable characters. This stalk and slash turkey manages to bring nothing new to an increasingly stale genre. A masked killer stalks young, pert girls and slaughters them in a variety of gruesome ways, none of which are particularly inventive. It’s not scary, it’s not clever, and it’s not funny. So what was the point of it? Review 1: Review 2: "I liked the movie. My friend liked it too. " "I hated it. Would not recommend. " First, we need to create a vocabulary that maps each word to an id that uniquely identifies it. 
Each of these numbers will be used as the index in a list, so they must start at zero and grow by one for each word in the vocabulary. For example, one possible vocabulary that encodes the previous reviews is: {'would': 0, 'hated': 1, 'my': 2, 'liked': 3, 'not': 4, 'it': 5, 'movie': 6, 'recommend': 7, 'the': 8, 'I': 9, 'too': 10, 'friend': 11} Using this mapping, we can encode the two reviews as follows: Review 1: [0,0,1,2,0,1,1,0,1,1,1,1] Review 2: [1,1,0,0,1,1,0,1,0,1,0,0] Note that the word liked (fourth position) in the first review has a value of two. This is because this word appears twice in that review. This is a small example with a vocabulary of only 12 terms. Of course, the same process needs to be implemented for our whole training dataset. For this purpose we will use scikit-learn's CountVectorizer class.6 Using the CountVectorizer class simplifies things, allowing us to get started quickly with a bag-of-words approach. However, note that it makes several simplifying assumptions (e.g., text is lowercased, and punctuation and single character tokens are removed). Some of these may not be adequate to other tasks. First, we need to obtain the filenames for the reviews in the training set: Once we have acquired the filenames for the training reviews, we need
to read them using the CountVectorizer. In order for the CountVectorizer to open and read the files for us, we make use of the input='filename' constructor parameter (otherwise it would expect the string content directly). The CountVectorizer provides three methods that will be useful for us: a method called fit() that is used to acquire the vocabulary,
a method transform() that converts the text into the bag-of-words representation, and a method fit_transform() that conveniently acquires the vocabulary and transforms the data in a single step. The resulting object is referred to as a document-term matrix, where each row corresponds to a document, and each column corresponds to a term in the vocabulary. 6 https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html As the output above indicates, the resulting matrix has 25,000 rows (one for each review), and 74,849 columns (one for each term). Also you may note that this matrix is sparse, with 3,445,861 stored elements. A regular matrix of shape 25,000×74,849 would have 1,871,225,000 elements. However, most of the elements in a document-term matrix are zeros because only a few words from the vocabulary appear in each document. A sparse matrix takes advantage of this fact by storing only the non-zero cells in order to reduce the memory required to store it. Thus, sparse matrices are convenient, especially when dealing with lots of data. Nevertheless, to simplify the downstream code in this example, we will convert it into a dense matrix, i.e., a regular two-dimensional NumPy array. Finally, we also need the labels of the reviews. We assign a label of one to positive reviews, and a label of zero to negative ones. Note that the first half of the reviews are positive and the second half are negative. The label at the ith position of the y_train array corresponds to the review encoded in the ith row of the X_train matrix. 4.1.3 Perceptron Now that we have defined our task and the data processing pipeline, we will implement a perceptron classifier that classifies the movie reviews as positive or negative. The entire code discussed in this section is available in the chap4_perceptron notebook. Recall from Section 2.4 that the perceptron is composed of a weight vector w and a bias term b. These will be represented as a NumPy array w of the same length as our document vectors, and a variable b for the bias term. Both will be initialized with zeros. The parameters w and b are learned through the following algorithm, which implements Algorithm 2 from Chapter 2: There are a couple of details to point out. Line 3 of Algorithm 2 indicates that we need to repeat the training loop until convergence. Theoretically, convergence is defined as predicting all training examples correctly. This is an ambitious requirement, which is not always possible in practice, so in this code we also include a stop condition if we reach a maximum number of epochs. Another crucial difference between our implementation here and the theoretical Algorithm 2, is that we randomize the order in which the training examples are seen at the beginning of each epoch. This simple (but highly recommended!) change is necessary to avoid the introduction of spurious biases due to the arbitrary order of the examples in the original training partition.7 We accomplish this by storing the indices corresponding to the X_train matrix rows in a NumPy array, and shuffling these indices at the beginning of each epoch. We shuffle the indices instead of the examples so that we can preserve the mapping between examples and labels. The training loop aligns closely with Algorithm 2.
We start by iterating over each example in our training data, storing the current example in the variable x,8 and its corresponding label in the variable y_true. Next, we compute the perceptron decision function shown in Algorithm 1. Note that NumPy (as well as PyTorch) uses Python’s @ operator to indicate vector or matrix multiplication, depending on its operand types. Here we use it to calculate the dot product of the example x and the weights w. To this we add the bias b to obtain the predicted score, whose sign is used to assign a positive or negative predicted label. If the prediction is correct, then no update is needed, and we can move on to the next training example. However, if the prediction is incorrect, then we need to adjust w and b, as described in Algorithm 2. Sidebar 4.1 The tqdm function This is our first exposure to the tqdm function. tqdm is a progress bar that “make your loops show a smart progress meter.”9 The name tqdm comes from the Arabic word taqaddum which can mean “progress.” Using tqdm is as simple as wrapping it around the collection to be traversed. After training, we evaluate the model’s performance on the heldout test partition. The test data is loaded similarly to the training partition, but with one notable difference; we use CountVectorizer’s transform() method instead of the fit_transform() method so that the vocabulary is not adjusted for the test data. We won’t show here the loading of the test partition since it is so similar to the code already shown, but it is available in the Jupyter notebook that accompanies this section. . 7   As an extreme example, consider a dataset where all the positive examples appear first in the training partition. This would cause the perceptron to artificially inflate the weights of the features that occur in these examples, a situation from which the learning algorithm may struggle to recover. 
 . 8  We use typewriter font when we discuss variables in the code, to distinguish code from the theoretical discussion in the other chapters. 
9 https://github.com/tqdm/tqdm Using the model to assign labels to all the test data is easily done in one step – we simply multiply the entire test data document-term matrix by the previously learned weights and add the bias. Scores greater than zero indicate a positive review, and those less than zero are negative. At this point we can evaluate the classifier's performance, which we will do using precision, recall, and F1 scores for binary classification (described in Section 2.3). For this purpose, we implement a function called binary_classification_report that computes these metrics and returns them as a dictionary: We call this function to compare the predicted labels to the true labels, and obtain the evaluation scores. Our F1 score here is 86.8%, which is much higher than the baseline that assigns labels randomly, which yields an F1 score of about 50%. This is a good result, especially considering the simplicity of the perceptron! In the next sections and chapters, we will discuss a battery of strategies to considerably improve this performance. 4.1.4 Binary Logistic Regression from Scratch Using the same task, dataset, and evaluation, we will now implement a logistic regression classifier, as described in Algorithm 5 from Chapter 3. To give the reader hands-on experience with the implementation of the gradient calculations for logistic regression, we start by implementing it from scratch using NumPy. All the code shown in this section is available in the chap4_logistic_regression_numpy notebook. In the perceptron implementation, we represented the weights and the bias as two different variables. Here, however, we will use a different approach that will allow us to unify them into a single vector variable. Specifically, we take advantage of the similarity between the derivative of the cost function with respect to the weights (Equation 3.14) and the derivative of the cost with respect to the bias (Equation 3.15): $\frac{d}{dw_j} C_i(\mathbf{w}, b) = (\sigma_i - y_i) x_{ij}$ (3.14 revisited) and $\frac{d}{db} C_i(\mathbf{w}, b) = \sigma_i - y_i$ (3.15 revisited). Note that the two derivative formulas are identical except that the former has a multiplication by $x_{ij}$, while the latter does not. However, since $\sigma_i - y_i = (\sigma_i - y_i) \cdot 1$ we can multiply the derivative of the cost with respect to the bias by one without changing the semantics. This gives an opportunity for combining the computations, doing them both in a single pass. The idea is that we can treat the bias as a weight corresponding to a feature that always has a value of one. As can be seen above, we created a NumPy array of ones of the same length as the number of examples in our training set (i.e., the number of rows in the data matrix). Then we add this array as a new column to the data matrix, using NumPy's column_stack function. Next, we need to initialize our model. This time we will use a single NumPy array w of the same length as the number of columns in the data matrix. The weight vector w is initialized randomly with values between 0 and 1: Before implementing the learning algorithm, we need an implementation of the logistic function.
Recall that the logistic function is $\sigma(x) = \frac{1}{1 + e^{-x}}$ (3.1 revisited). This function can be easily implemented in NumPy as follows: However, this naive implementation may produce the following warning during training: The term overflow indicates that the result of evaluating exp(-x) is a number so large that it can't be represented by a float (specifically, we're using float64 numbers). We will avoid this issue by not calling exp with values that will overflow. NumPy provides the function finfo that can be consulted to find the limits of floating point numbers: The log of the largest floating point number is the largest number for which exp() will not overflow, so we will use it as a threshold to filter out problematic values: We now have everything we need to implement Algorithm 4. The steps to follow for each example are: (1) use the model to make a prediction, (2) calculate the gradient of the loss function with respect to the model parameters, and (3) update the model parameters using the gradient. The size of the update is controlled by the learning rate. Once the model has been trained, we evaluate it on the test dataset using our binary_classification_report function from the previous section. Loading and preprocessing the test dataset follows the same steps as with the previous classifier. We omit the code for brevity. These are the results: The performance is comparable with that of the perceptron. The difference in F1 scores between the two classifiers (84.9% here vs. 86.8% for the perceptron) is not significant. This parity is probably attributable to the fact that the signal distinguishing the two classes is easy to learn, so the simpler perceptron training algorithm is sufficient in this case. Nevertheless, this task is useful in showing how to implement the logistic regression model from scratch, i.e., by implementing the gradient calculation and parameter updates manually. Next, we will implement the same model again using PyTorch, highlighting how this machine learning library simplifies the process. 4.1.5 Binary Logistic Regression Utilizing PyTorch While it is fairly straightforward to compute the derivatives for logistic regression and implement them directly in NumPy, this approach will not scale well to arbitrary neural architectures. Fortunately, there are libraries that automate the computation of the derivatives of the cost function (assuming it is differentiable!) for any neural network, and use the resulting gradients to perform gradient descent or other more sophisticated optimization procedures. To this end, we will use the PyTorch deep learning library.10 The corresponding notebook for this section is chap4_logistic_regression_pytorch_bce. Our model for logistic regression corresponds to PyTorch's Linear layer. When we instantiate this layer, we specify the size of the inputs (the size of our vocabulary) and the size of the output, i.e., the number of output neurons (which is one because we're doing binary classification). The loss function we use is the binary cross-entropy loss (see Chapter 3), which is implemented as BCEWithLogitsLoss in PyTorch. In PyTorch, the gradients obtained from the loss function are applied to the model by an optimizer object, which implements and applies an optimization algorithm. Here we will use the vanilla stochastic gradient descent optimizer; we set its learning rate to 0.1. This is equivalent to the discussion in Section 3.2. 10 https://pytorch.org/
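To make these pieces concrete, the following is a minimal sketch (not the notebook's exact code) of how the model, loss function, optimizer, and training loop could be wired together in PyTorch. The variable names X_train, y_train, and X_test, the tensor conversions, and the number of epochs are assumptions carried over from the preceding discussion.

import torch
from torch import nn, optim

# assumption: X_train is a (n_examples, n_features) array of counts and
# y_train a vector of 0/1 labels, as built earlier in this chapter
X = torch.tensor(X_train, dtype=torch.float32)
y = torch.tensor(y_train, dtype=torch.float32)
n_examples, n_features = X.shape

# one linear layer with a single output neuron implements logistic regression
model = nn.Linear(n_features, 1)
loss_func = nn.BCEWithLogitsLoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)

n_epochs = 10  # assumption: same number of epochs as the NumPy version
for epoch in range(n_epochs):
    # visit the training examples in a random order
    for i in torch.randperm(n_examples):
        x_i = X[i]
        y_i = y[i].unsqueeze(0)
        # (1) ensure the gradients are set to zero
        optimizer.zero_grad()
        # (2) apply the model to obtain a prediction (a raw score, or logit)
        y_pred = model(x_i)
        # (3) calculate the loss
        loss = loss_func(y_pred, y_i)
        # (4) compute the gradient of the loss by back-propagation
        loss.backward()
        # (5) update the model parameters
        optimizer.step()

# at test time, a positive score corresponds to a positive label
with torch.no_grad():
    y_test_pred = model(torch.tensor(X_test, dtype=torch.float32)).squeeze(1) > 0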
Similarly to the manual implementation, the steps required to train the model for a given training example are: (1) ensure the gradients are set to zero, (2) apply the model to obtain a prediction, (3) calculate the loss, (4) compute the gradient of the loss by back-propagation, and (5) update the model parameters. Recall that in our previous implementation everything was hardcoded: applying the model, computing the gradients, and optimizing the model parameters. Here, however, the implementation of the logistic regression is expressed at a higher level of abstraction. This means that we are describing the logical steps without specifying a particular implementation. Instead, implementation details are the responsibility of the chosen model, loss function, and optimizer. Thus, we could even choose a different model, loss function, and/or optimizer, and use the same training steps with little or no modification. This decoupling of the training logic from the implementation details is one of the main advantages of libraries such as PyTorch. As shown in the code above, calling the model as a function, with the feature vectors as inputs, produces the predicted scores. Once again, a positive score corresponds to a positive label. When we evaluate this implementation on the test dataset, we obtain results that are in line with our previous models: Writing the perceptron and the logistic regression from scratch is a good exercise, as it exposes us to the fundamentals of implementing machine learning algorithms. However, this becomes cumbersome for more complex neural architectures. For this reason, from this point on, we will use PyTorch for all our coding examples. 4.2 Multiclass Classification So far, in this chapter we have discussed implementing binary classifiers. Next, we will modify these binary classifiers to perform multiclass classification, following the discussion in Section 3.5. 4.2.1 AG News Dataset Before explaining the actual training/testing code, we have to choose a new dataset that is suitable for multiclass classification. To this end, we will use the AG News Classification Dataset (Zhang et al., 2015), a subset of the larger AG corpus of news articles collected from thousands of different news sources.11 The classification dataset consists of four classes, and the data is equally balanced across all classes (30,000 articles per class for train, and 1,900 articles per class for testing). The goal of the task is to classify each article as one of the four classes: World, Sports, Business, or Sci/Tech. 11 http://groups.di.unipi.it/~gulli/AG_corpus_of_news_articles.html 4.2.2 Preparing the Dataset The AG News Dataset is distributed as two CSV files (one for training and one for testing), each containing three columns: the class index, the title, and the description. The dataset also provides a text file that maps the above class indexes to more descriptive class labels. Because of the tabular nature of the dataset, pandas, a Python library
for tabular data analysis,12 is a natural choice for loading and transforming it. To this end, our Jupyter notebook (chap4_multiclass_logistic_regression) demonstrates the sequence of steps required to handle the data, as well
as model training and evaluation. First, we show how to load the CSV,
add column names, and inspect the result:
[dataframe preview: 120,000 rows × 3 columns (class index, title, description); e.g., row 0 has class index 3, title "Wall St. Bears Claw Back Into the Black (Reuters)", and description "Reuters - Short-sellers, Wall Street's dwindli..."]
Since the class labels themselves are in a separate file, we manually add them to the pandas data structure (called dataframe in pandas' terminology) to increase the interpretability of the data. We use the class index column as a starting point, and use its map method to create a new column with the corresponding labels (technically a new Series object) that is added to the dataframe using its insert method, which allows us to insert the column in a specific position. Note that the label indices are one-based, so we subtract one to align them with their labels. 12 https://pandas.pydata.org
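A minimal sketch of these two steps might look as follows. The file path and the label list are assumptions (they follow the standard AG News distribution), and the notebook that accompanies this chapter may differ in the details:

import pandas as pd

# assumption: location and label order follow the standard AG News distribution
labels = ['World', 'Sports', 'Business', 'Sci/Tech']

train_df = pd.read_csv(
    'data/ag_news_csv/train.csv',              # hypothetical path
    names=['class index', 'title', 'description'],
)

# map the one-based class index to its label and insert the new column
# right after the class index column
train_df.insert(1, 'class', train_df['class index'].map(lambda i: labels[i - 1]))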
[dataframe preview: 120,000 rows × 4 columns; the new class column holds the label for each article, e.g., class index 3 maps to Business, 1 to World, and 2 to Sports]
Next we will preprocess the text. First we lowercase the title and description, and then we concatenate them into a single string. Then we remove some spurious backslashes from the text. Once this is done, the preprocessed text is added to the dataframe as a new column. Note that pandas allows these steps to be applied to all rows simultaneously.
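As an illustration, this preprocessing can be expressed with pandas string operations applied to whole columns at once. This is a sketch under the assumption that the dataframe and column names match the ones used above:

# lowercase the title and the description
title = train_df['title'].str.lower()
descr = train_df['description'].str.lower()

# concatenate them into a single string per article
text = title + ' ' + descr

# replace the spurious backslashes with spaces (one simple option)
text = text.str.replace('\\', ' ', regex=False)

# store the preprocessed text as a new column
train_df['text'] = text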
[dataframe preview: 120,000 rows × 5 columns; the new text column holds the lowercased, concatenated title and description, e.g., "wall st. bears claw back into the black (reute..."]
At this point, the text is ready to be tokenized. For this purpose we will use NLTK's word_tokenize function. This function can be applied to the whole column at once using the pandas map function, which returns a new column which we add to the dataframe. However, here we actually use the progress_map function, which provides a visual progress bar. This visual feedback is especially helpful for tasks that take more time to complete.
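A sketch of this tokenization step is shown below. It assumes that NLTK and its punkt tokenizer models are installed; progress_map only becomes available after tqdm registers itself with pandas:

from nltk.tokenize import word_tokenize  # assumes nltk's 'punkt' data is downloaded
from tqdm.notebook import tqdm

# register tqdm with pandas so that progress_map() is available
tqdm.pandas()

# progress_map behaves like map, but displays a progress bar while it works
train_df['tokens'] = train_df['text'].progress_map(word_tokenize)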
[dataframe preview: 120,000 rows × 6 columns; the new tokens column holds the token list for each article, e.g., [wall, st., bears, claw, back, into, the, blac...]
From the tokens we just created, we then create a vocabulary for our corpus. Here, we only keep the words that occur at least 10 times, decreasing the memory needed and reducing the likelihood that our vocabulary contains noisy tokens. Note that each row in the tokens column contains a list of tokens. In order to create the vocabulary, we will need to convert the Series of lists of tokens into a Series of tokens using the explode() pandas method. Then we will use the value_counts() method to create a Series object in which the index are the tokens and the values are the number of times they appear in the corpus. The next step is removing the tokens with a count lower than our chosen threshold. Finally, we create a list with the remaining tokens, as well as a dictionary that maps tokens to token ids (i.e., the index of the token in the list). We include in the vocabulary a special token [UNK] that will be used as a placeholder for tokens that do not appear in our vocabulary after the frequency pruning. Using this vocabulary, we construct a feature vector for each news article in the corpus. This feature vector will be encoded as a dictionary, with keys corresponding to token ids, and values corresponding to the number of times the token appears in the article. As above, the feature vectors will be stored as a new column in the dataframe.
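The following sketch illustrates one way to build the vocabulary and the feature dictionaries with the pandas methods just mentioned. The frequency threshold value, the position of the [UNK] token in the vocabulary, and the use of plain map() instead of progress_map() are assumptions made for the purposes of illustration:

from collections import Counter

threshold = 10  # keep tokens that occur at least 10 times

# count how many times each token appears in the corpus
token_counts = train_df['tokens'].explode().value_counts()

# the vocabulary: the surviving tokens plus an [UNK] placeholder
# (placed first here; its position is an arbitrary choice)
vocabulary = ['[UNK]'] + token_counts[token_counts >= threshold].index.tolist()
token_to_id = {tok: i for i, tok in enumerate(vocabulary)}
unk_id = token_to_id['[UNK]']

def make_features(tokens):
    # dictionary from token id to the number of times it occurs in the article
    return dict(Counter(token_to_id.get(t, unk_id) for t in tokens))

train_df['features'] = train_df['tokens'].map(make_features)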
[dataframe preview: 120,000 rows × 7 columns; the new features column holds a dictionary from token id to count for each article, e.g., {427: 2, 563: 1, 1607: 1, 15062: 1, 120: 1, ...}]
The final preprocessing step is converting the features and the class indices into PyTorch tensors. Recall that we need to subtract one from the class indices to make them zero-based. At this point, the data is fully processed and we are ready to begin training. 4.2.3 Multiclass Logistic Regression Using PyTorch The model itself is a single linear layer whose input size corresponds to the size of our vocabulary, and its output size corresponds to the number of classes in our corpus. PyTorch's Linear layer includes a bias by default, so there is no need to handle that manually the way we did for our perceptron example. The code for training this model (which implements Algorithm 6) is almost identical to that of the binary logistic regression. However, since we have to calculate a score for each of the four different classes, we need to replace the previous BCEWithLogitsLoss with CrossEntropyLoss, which applies a softmax over the scores to obtain probabilities for each class. For each example, the model predicts 4 scores – one for each label. The label with the highest score is selected using the argmax function. We evaluate the predictions of our model for each class using scikit-learn's classification_report, which handles the results of multiclass classification. 4.3 Summary In this chapter, we used movie review and news article classification to illustrate the implementation of the previously described algorithms for the binary perceptron, binary logistic regression, and multiclass logistic regression. For the binary logistic regression, we made a direct comparison between the lower-level NumPy implementation and a higher-level version that made use of PyTorch. We hope that through this series of exercises the reader has noted several key takeaways. First, data preparation is important and should be done thoughtfully. Certain tasks (e.g., text normalization or sentence splitting) are going to be frequently needed if you continue with NLP, so using or creating generic functions can be very helpful. However, what works for one dataset and one language may not be suitable for another scenario. For example, in our case, we selected different tokenizers for each of our tasks to account for the different registers of English, as well as removing diacritics during normalization. Second, when it comes to implementing machine learning algorithms, it is often easier to use a higher-level library such as PyTorch instead of NumPy.
For example, with the former, the gradients are calculated by the library, whereas in NumPy we have to code them ourselves. This becomes cumbersome quickly. For example, even the derivative of the softmax is non-trivial. Third, PyTorch imposes a training structure that remains largely the same, regardless of what models are being trained. That is, at a high level, the same steps are always required: clearing the current gradients, predicting output scores for the provided inputs, calculating the loss, and optimizing. These features make PyTorch a very powerful and convenient deep learning library; we will continue to use it throughout the remainder of the book to implement more complex neural architectures.
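To complement the summary above, here is a minimal sketch of the multiclass training and prediction described in Section 4.2.3. It is illustrative only: the tensor names, the number of epochs, and the learning rate are assumptions, and the accompanying notebook should be treated as the reference implementation.

import torch
from torch import nn, optim

# assumptions: X_train is a (n_examples, vocabulary_size) float tensor of counts
# and y_train a tensor of zero-based class indices, as prepared above
vocabulary_size = X_train.shape[1]
n_classes = 4

model = nn.Linear(vocabulary_size, n_classes)
loss_func = nn.CrossEntropyLoss()          # applies a softmax over the class scores
optimizer = optim.SGD(model.parameters(), lr=0.1)

n_epochs = 5  # arbitrary choice for this sketch
for epoch in range(n_epochs):
    for i in torch.randperm(len(y_train)):
        optimizer.zero_grad()
        # the model produces one score per class
        scores = model(X_train[i]).unsqueeze(0)
        loss = loss_func(scores, y_train[i].unsqueeze(0))
        loss.backward()
        optimizer.step()

# prediction: pick the class with the highest score
with torch.no_grad():
    y_pred = torch.argmax(model(X_test), dim=1)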
9,407
9,479
#!/usr/bin/env python
# coding: utf-8

# # Binary Text Classification with Perceptron

# In[1]:

import random
import numpy as np
from tqdm.notebook import tqdm

# set this variable to a number to be used as the random seed
# or to None if you don't want to set a random seed
seed = 1234

if seed is not None:
    random.seed(seed)
    np.random.seed(seed)

# The dataset is divided in two directories called `train` and `test`.
# These directories contain the training and testing splits of the dataset.

# In[2]:

get_ipython().system('ls -lh data/aclImdb/')

# Both the `train` and `test` directories contain two directories called `pos` and `neg` that contain text files with the positive and negative reviews, respectively.

# In[3]:

get_ipython().system('ls -lh data/aclImdb/train/')

# We will now read the filenames of the positive and negative examples.

# In[4]:

from glob import glob

pos_files = glob('data/aclImdb/train/pos/*.txt')
neg_files = glob('data/aclImdb/train/neg/*.txt')
print('number of positive reviews:', len(pos_files))
print('number of negative reviews:', len(neg_files))

# Now, we will use a [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html) to read the text files, tokenize them, acquire a vocabulary from the training data, and encode it in a document-term matrix in which each row represents a review, and each column represents a term in the vocabulary. Each element $(i,j)$ in the matrix represents the number of times term $j$ appears in example $i$.

# In[5]:

from sklearn.feature_extraction.text import CountVectorizer

# initialize CountVectorizer indicating that we will give it a list of filenames that have to be read
cv = CountVectorizer(input='filename')

# learn vocabulary and return sparse document-term matrix
doc_term_matrix = cv.fit_transform(pos_files + neg_files)
doc_term_matrix

# Note in the message printed above that the matrix is of shape (25000, 74849).
# In other words, it has 1,871,225,000 elements.
# However, only 3,445,861 elements were stored.
# This is because most of the elements in the matrix are zeros.
# The reason is that the reviews are short and most words in the English language don't appear in each review.
# A matrix that only stores non-zero values is called *sparse*.
#
# Now we will convert it to a dense numpy array:

# In[6]:

X_train = doc_term_matrix.toarray()
X_train.shape

# We will also create a numpy array with the binary labels for the reviews.
# One indicates a positive review and zero a negative review.
# The label `y_train[i]` corresponds to the review encoded in row `i` of the `X_train` matrix.

# In[7]:

# training labels
y_pos = np.ones(len(pos_files))
y_neg = np.zeros(len(neg_files))
y_train = np.concatenate([y_pos, y_neg])
y_train

# Now we will initialize our model, in the form of an array of weights `w` of the same size as the number of features in our dataset (i.e., the number of words in the vocabulary acquired by [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html)), and a bias term `b`.
# Both are initialized to zeros.

# In[8]:

# initialize model: the weight vector and bias term are populated with zeros
n_examples, n_features = X_train.shape
w = np.zeros(n_features)
b = 0

# Now we will use the perceptron learning algorithm to learn the values of `w` and `b` from our training data.

# In[9]:

n_epochs = 10

indices = np.arange(n_examples)
for epoch in range(n_epochs):
    n_errors = 0
    # randomize the order in which training examples are seen in this epoch
    np.random.shuffle(indices)
    # traverse the training data
    for i in tqdm(indices, desc=f'epoch {epoch+1}'):
        x = X_train[i]
        y_true = y_train[i]
        # the perceptron decision based on the current model
        score = x @ w + b
        y_pred = 1 if score > 0 else 0
        # update the model if the prediction was incorrect
        if y_true == y_pred:
            continue
        elif y_true == 1 and y_pred == 0:
            w = w + x
            b = b + 1
            n_errors += 1
        elif y_true == 0 and y_pred == 1:
            w = w - x
            b = b - 1
            n_errors += 1
    # stop early if every training example was classified correctly (convergence)
    if n_errors == 0:
        break

# The next step is evaluating the model on the test dataset.
# Note that this time we use the [`transform()`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html#sklearn.feature_extraction.text.CountVectorizer.transform) method of the [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html), instead of the [`fit_transform()`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html#sklearn.feature_extraction.text.CountVectorizer.fit_transform) method that we used above. This is because we want to use the learned vocabulary in the test set, instead of learning a new one.

# In[10]:

pos_files = glob('data/aclImdb/test/pos/*.txt')
neg_files = glob('data/aclImdb/test/neg/*.txt')
doc_term_matrix = cv.transform(pos_files + neg_files)
X_test = doc_term_matrix.toarray()
y_pos = np.ones(len(pos_files))
y_neg = np.zeros(len(neg_files))
y_test = np.concatenate([y_pos, y_neg])

# Using the model is easy: multiply the document-term matrix by the learned weights and add the bias.
# We use Python's `@` operator to perform the matrix-vector multiplication.

# In[11]:

y_pred = (X_test @ w + b) > 0

# Now we evaluate the predictions with our own `binary_classification_report()` function, modeled after scikit-learn's [`classification_report()`](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.classification_report.html) function.

# In[12]:

def binary_classification_report(y_true, y_pred):
    # count true positives, false positives, true negatives, and false negatives
    tp = fp = tn = fn = 0
    for gold, pred in zip(y_true, y_pred):
        if pred == True:
            if gold == True:
                tp += 1
            else:
                fp += 1
        else:
            if gold == False:
                tn += 1
            else:
                fn += 1
    # calculate precision and recall
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    # calculate f1 score
    fscore = 2 * precision * recall / (precision + recall)
    # calculate accuracy
    accuracy = (tp + tn) / len(y_true)
    # number of positive labels in y_true
    support = sum(y_true)
    return {
        "precision": precision,
        "recall": recall,
        "f1-score": fscore,
        "support": support,
        "accuracy": accuracy,
    }

# In[13]:

binary_classification_report(y_test, y_pred)
3,534
3,558
4
chap04-5
chap04-5
4 Implementing Text Classification Using Perceptron and Logistic Regression In the previous chapters we have discussed the theory behind the perceptron and logistic regression, including mathematical explanations of how and why they are able to learn from examples. In this chapter we will transition from math to code. Specifically, we will discuss how to implement these models in the Python programming language. All the code that we will introduce throughout this book is available online as well: http://clulab.github.io/gentlenlp/. The reader who is not familiar with the Python programming language is encouraged to read first Appendix A, for a brief introduction to the language, and Appendix B, for a discussion on how computers encode and preprocess text. Once done, please return here. To get a better understanding of how these algorithms work under the hood, we will start by implementing them from scratch. However, as the book progresses, we will introduce some of the popular tools and libraries that make Python the language of choice for machine learning, e.g., PyTorch,1 and Hugging Face’s transformers.2 The code for all the examples in the book is provided in the form of Jupyter notebooks.3 Important fragments of these notebooks will be presented in the implementation chapters so that the reader has the whole picture just by reading the book. However, we strongly encourage you to download the notebooks and execute them yourself. We also encourage you to modify them to conduct your own experiments! 1 https://pytorch.org
2 https://huggingface.co 3 https://jupyter.org/ 55 56 Implementing Text Classification Using Perceptron and LR 4.1 Binary Classification We begin this chapter with binary classification. That is, we aim to train classifiers that assign one of two labels to a given text. As the example for this task, we will train a review classifier using the the Large Movie Review Dataset (Maas et al., 2011).4 We tackle this task by implementing first a binary perceptron classifier, followed by a binary logistic regression one. We will implement the latter both from scratch as well as using PyTorch, so the reader has a clearer understanding on how PyTorch works “under the hood.” 4.1.1 Large Movie Review Dataset This dataset contains movie reviews and their associated scores (between 1 and 10) as provided by IMDb.5 converted these scores to binary labels by assigning each review a positive or negative label if the review score was above 6 or below 5, respectively. Reviews with scores 5 and 6 were considered too neutral and thus excluded. We follow the same protocol in this chapter. The dataset is divided in two even partitions called train and test, each containing 25,000 reviews. The dataset also provides additional unlabeled reviews, but we will not use those here. Each partition contains two directories called pos and neg where the positive and negative examples are stored. Each review is stored in an independent text file, whose name is composed of an id unique to the partition and the score associated with the review, separated by an underscore. An example of a positive and a negative review is shown in Table 4.1. 4.1.2 Bag-of-words Model As discussed in Section 2.2, we will encode the text to classify as a bag of words. That is, we encode each review as a list of numbers, with each position in the list corresponding to a word in our vocabulary, and the value stored in that position corresponding to the number of times the word appears in the review. For example, say we want to encode the following two reviews: 4 https://ai.stanford.edu/~amaas/data/sentiment/ 5 https://www.imdb.com/ Maas et al. 4.1 Binary Classification 57 Table 4.1 Two examples of movie reviews from IMDb. The first is a positive review of the movie Puss in Boots (1988). The second is a negative review of the movie Valentine (2001). These reviews can be found at https://www.imdb.com/review/rw0606396/ and https://www.imdb.com/review/rw0721861/, respectively. Filename Score Binary Label train/pos/24_8.txt 8/10 Positive train/neg/141_3.txt 3/10 Negative Review Text Although this was obviously a low-budget production, the performances and the songs in this movie are worth seeing. One of Walken’s few musical roles to date. (he is a marvelous dancer and singer and he demonstrates his acrobatic skills as well - watch for the cartwheel!) Also starring Jason Connery. A great children’s story and very likable characters. This stalk and slash turkey manages to bring nothing new to an increasingly stale genre. A masked killer stalks young, pert girls and slaughters them in a variety of gruesome ways, none of which are particularly inventive. It’s not scary, it’s not clever, and it’s not funny. So what was the point of it? Review 1: Review 2: "I liked the movie. My friend liked it too. " "I hated it. Would not recommend. " First, we need to create a vocabulary that maps each word to an id that uniquely identifies it. 
Each of these numbers will be used as the index in a list, so they must start at zero and grow by one for each word in the vocabulary. For example, one possible vocabulary that encodes the previous reviews is: {'would': 0, 'hated': 1, 58 Implementing Text Classification Using Perceptron and LR 'my': 2, 'liked': 3, 'not': 4, 'it': 5, 'movie': 6, 'recommend': 7, 'the': 8, 'I': 9, 'too': 10, 'friend': 11} Using this mapping, we can encode the two reviews as follows: Review1: [0,0,1,2,0,1,1,0,1,1,1,1] Review2: [1,1,0,0,1,1,0,1,0,1,0,0] Note that the word liked (fourth position) in the first review has a value of two. This is because this word appears twice in that review. This is a small example with a vocabulary of only 12 terms. Of course, the same process needs to be implemented for our whole training dataset. For this purpose we will use scikit-learn’s CountVectorizer class.6 Using the CountVectorizer class simplifies things, allowing us to get started quickly with a bag-of-words approach. However, note that it makes several simplifying assumptions (e.g., text is lowercased, and punctuation and single character tokens are removed). Some of these may not be adequate to other tasks. First, we need to obtain the filenames for the reviews in the training set: Once we have acquired the filenames for the training reviews, we need
to read them using the CountVectorizer. In order for the CountVectorizer to open and read the files for us, we make use of the input='filename' constructor parameter (otherwise it would expect the string content directly). The CountVectorizer provides three methods that will be use-
ful for us: a method called fit() that is used to acquire the vocabulary,
a method transform() that converts the text into the bag-of-words representation, and a method fit_transform() that conveniently acquires the vocabulary and transforms the data in a single step. The resulting object is referred to as a document-term matrix, where each row corre- 6 https://scikitlearn.org/stable/modules/generated/sklearn.feature_ extraction.text.CountVectorizer.html 4.1 Binary Classification 59 sponds to a document, and each column corresponds to a term in the vocabulary. As the output above indicates, the resulting matrix has 25,000 rows (one for each review), and 74,849 columns (one for each term). Also you may note that this matrix is sparse, with 3,445,861 stored elements. A regular matrix of shape 25,000×74,849 would have 1,871,225,000 elements. However, most of the elements in a document-term matrix are zeros because only a few words from the vocabulary appear in each document. A sparse matrix takes advantage of this fact by storing only the non-zero cells in order to reduce the memory required to store it. Thus, sparse matrices are convenient, especially when dealing with lots of data. Nevertheless, to simplify the downstream code in this example, we will convert it into a dense matrix, i.e., a regular two-dimensional NumPy array. Finally, we also need the labels of the reviews. We assign a label of one to positive reviews, and a label of zero to negative ones. Note that the first half of the reviews are positive and the second half are negative. The label at the ith position of the y_train array corresponds to the review encoded in the ith row of the X_train matrix. 4.1.3 Perceptron Now that we have defined our task and the data processing pipeline, we will implement a perceptron classifier that classifies the movie reviews as positive or negative. The entire code discussed in this section is available in the chap4_perceptron notebook. Recall from Section 2.4 that the perceptron is composed of a weight vector w and a bias term b. These will be represented as a NumPy array w of the same length as our document vectors, and a variable b for the bias term. Both will be initialized with zeros. The parameters w and b are learned through the following algorithm, which implements Algorithm 2 from Chapter 2: There are a couple of details to point out. Line 3 of Algorithm 2 indicates that we need to repeat the training loop until convergence. Theoretically, convergence is defined as predicting all training examples correctly. This is an ambitious requirement, which is not always possible in practice, so in this code we also include a stop condition if we reach a maximum number of epochs. Another crucial difference between our implementation here and the theoretical Algorithm 2, is that we randomize the order in which the training examples are seen at the beginning of 60 Implementing Text Classification Using Perceptron and LR each epoch. This simple (but highly recommended!) change is necessary to avoid the introduction of spurious biases due to the arbitrary order of the examples in the original training partition.7 We accomplish this by storing the indices corresponding to the X_train matrix rows in a NumPy array, and shuffling these indices at the beginning of each epoch. We shuffle the indices instead of the examples so that we can preserve the mapping between examples and labels. The training loop aligns closely with Algorithm 2. 
We start by iterating over each example in our training data, storing the current example in the variable x,8 and its corresponding label in the variable y_true. Next, we compute the perceptron decision function shown in Algorithm 1. Note that NumPy (as well as PyTorch) uses Python’s @ operator to indicate vector or matrix multiplication, depending on its operand types. Here we use it to calculate the dot product of the example x and the weights w. To this we add the bias b to obtain the predicted score, whose sign is used to assign a positive or negative predicted label. If the prediction is correct, then no update is needed, and we can move on to the next training example. However, if the prediction is incorrect, then we need to adjust w and b, as described in Algorithm 2. Sidebar 4.1 The tqdm function This is our first exposure to the tqdm function. tqdm is a progress bar that “make your loops show a smart progress meter.”9 The name tqdm comes from the Arabic word taqaddum which can mean “progress.” Using tqdm is as simple as wrapping it around the collection to be traversed. After training, we evaluate the model’s performance on the heldout test partition. The test data is loaded similarly to the training partition, but with one notable difference; we use CountVectorizer’s transform() method instead of the fit_transform() method so that the vocabulary is not adjusted for the test data. We won’t show here the loading of the test partition since it is so similar to the code already shown, but it is available in the Jupyter notebook that accompanies this section. . 7   As an extreme example, consider a dataset where all the positive examples appear first in the training partition. This would cause the perceptron to artificially inflate the weights of the features that occur in these examples, a situation from which the learning algorithm may struggle to recover. 
 . 8  We use typewriter font when we discuss variables in the code, to distinguish code from the theoretical discussion in the other chapters. 
 9 https://github.com/tqdm/tqdm 4.1 Binary Classification 61 Using the model to assign labels to all the test data is easily done in one step – we simply multiply the entire test data document-term matrix by the previously learned weights and add the bias. Scores greater than zero indicate a positive review, and those less than zero are negative. At this point we can evaluate the classifier’s performance, which we will do using precision, recall, and F1 scores for binary classification (described in Section 2.3). For this purpose, we implement a function called binary_classification_report that computes these metrics and returns them as a dictionary: We call this function to compare the predicted labels to the true labels, and obtain the evaluation scores. Our F1 score here is 86.8%, which is much higher than the baseline that assigns labels randomly, which yields an F1 score of about 50%. This is a good result, especially considering the simplicity of the perceptron! In the next sections and chapters, we will discuss a battery of strategies to considerably improve this performance. 4.1.4 Binary Logistic Regression from Scratch Using the same task, dataset, and evaluation, we will now implement a logistic regression classifier, as described in Algorithm 5 from Chapter 3. To give the reader hands-on experience with the implementation of the gradient calculations for logistic regression, we start by implementing it from scratch using NumPy. All the code shown in this section is available in the chap4_logistic_regression_numpy notebook. In the perceptron implementation, we represented the weights and the bias as two different variables. Here, however, we will use a different approach that will allow us to unify them into a single vector variable. Specifically, we take advantage of the similarity between the derivative of the cost function with respect to the weights (Equation 3.14) and the derivative of the cost with respect to the bias (Equation 3.15). d Ci(w, b) = (σi − yi)xij (3.14 revisited) dwj d Ci(w, b) = σi − yi (3.15 revisited) db Note that the two derivative formulas are identical except that the former has a multiplication by xij, while the latter does not. However, 62 Implementing Text Classification Using Perceptron and LR since σi − yi = (σi − yi)1 we can multiply the derivative of the cost with respect to the bias by one without changing the semantics. This gives an opportunity for combining the computations, doing them both in a single pass. The idea is that we can treat the bias as a weight corresponding to a feature that always has a value of one. As can be seen above, we created a NumPy array of ones of the same length as the number of examples in our training set (i.e., the number of rows in the data matrix). Then we add this array as a new column to the data matrix, using NumPy’s column_stack function. Next, we need to initialize our model. This time we will use a single NumPy array w of the same length as the number of columns in the data matrix. The weight vector w is initialized randomly with values between 0 and 1: Before implementing the learning algorithm, we need an implementation of the logistic function. 
Before implementing the learning algorithm, we need an implementation of the logistic function. Recall that the logistic function is

σ(x) = 1 / (1 + e^(−x))    (3.1 revisited)

This function can be easily implemented in NumPy. However, a naive implementation may produce an overflow warning during training. The term overflow indicates that the result of evaluating exp(-x) is a number so large that it can't be represented by a float (specifically, we're using float64 numbers). We will avoid this issue by not calling exp with values that will overflow. NumPy provides the function finfo, which can be consulted to find the limits of floating point numbers. The log of the largest floating point number is the largest number for which exp() will not overflow, so we will use it as a threshold to filter out problematic values. We now have everything we need to implement Algorithm 4. The steps to follow for each example are: (1) use the model to make a prediction, (2) calculate the gradient of the loss function with respect to the model parameters, and (3) update the model parameters using the gradient. The size of the update is controlled by the learning rate.

Once the model has been trained, we evaluate it on the test dataset using our binary_classification_report function from the previous section. Loading and preprocessing the test dataset follows the same steps as with the previous classifier; we omit the code for brevity. The performance is comparable with that of the perceptron. The difference in F1 scores between the two classifiers (84.9% here vs. 86.8% for the perceptron) is not significant. This parity is probably attributable to the fact that the signal distinguishing the two classes is easy to learn, so the simpler perceptron training algorithm is sufficient in this case. Nevertheless, this task is useful in showing how to implement the logistic regression model from scratch, i.e., by implementing the gradient calculation and parameter updates manually. Next, we will implement the same model again using PyTorch, highlighting how this machine learning library simplifies the process.

4.1.5 Binary Logistic Regression Utilizing PyTorch
While it is fairly straightforward to compute the derivatives for logistic regression and implement them directly in NumPy, this approach will not scale well to arbitrary neural architectures. Fortunately, there are libraries that automate the computation of the derivatives of the cost function (assuming it is differentiable!) for any neural network, and use the resulting gradients to perform gradient descent or other more sophisticated optimization procedures. To this end, we will use the PyTorch deep learning library.10 The corresponding notebook for this section is chap4_logistic_regression_pytorch_bce. Our model for logistic regression corresponds to PyTorch's Linear layer. When we instantiate this layer, we specify the size of the inputs (the size of our vocabulary) and the size of the output, i.e., the number of output neurons (which is one, because we're doing binary classification). The loss function we use is the binary cross-entropy loss (see Chapter 3), which is implemented as BCEWithLogitsLoss in PyTorch. In PyTorch, the gradients obtained from the loss function are applied to the model by an optimizer object, which implements and applies an optimization algorithm. Here we will use the vanilla stochastic gradient descent optimizer, and we set its learning rate to 0.1. This is equivalent to the discussion in Section 3.2.
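As a rough sketch of this setup (assuming the vocabulary size is available in a variable such as n_features; the notebook's exact variable names may differ), the model, loss function, and optimizer could be created as follows:

import torch
from torch import nn, optim

# binary logistic regression: a single linear layer with one output neuron
model = nn.Linear(n_features, 1)

# binary cross-entropy loss that operates directly on raw scores (logits)
loss_func = nn.BCEWithLogitsLoss()

# vanilla stochastic gradient descent with a learning rate of 0.1
optimizer = optim.SGD(model.parameters(), lr=0.1)

The per-example training steps enumerated next then map onto optimizer.zero_grad(), a forward call to model, the loss function, loss.backward(), and optimizer.step().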
Similarly to the manual implementation, the steps required to train the model for a given training example are: (1) ensure the gradients are set to zeros, (2) apply the model to obtain a prediction, (3) calculate the loss, (4) compute the gradient of the loss by back-propagation, and (5) update the model parameters. Recall that in our previous implementation everything was hardcoded: applying the model, computing the gradients, and optimizing the model parameters. Here, however, the implementation of the logistic regression is expressed at a higher level of abstraction. This means that we are describing the logical steps without specifying a particular implementation. Instead, implementation details are the responsibility of the chosen model, loss function, and optimizer. Thus, we could even choose a different model, loss function, and/or optimizer, and use the same training steps with little or no modification. This decoupling of the training logic from the implementation details is one of the main advantages of libraries such as PyTorch. As shown in the code above, calling the model as a function, with the feature vectors as inputs, produces the predicted scores. Once again, a positive score corresponds to a positive label. When we evaluate this implementation on the test dataset, we obtain results that are in line with our previous models. Writing the perceptron and the logistic regression from scratch is a good exercise, as it exposes us to the fundamentals of implementing machine learning algorithms. However, this becomes cumbersome for more complex neural architectures. For this reason, from this point on, we will use PyTorch for all our coding examples.
10 https://pytorch.org/

4.2 Multiclass Classification
So far in this chapter we have discussed implementing binary classifiers. Next, we will modify these binary classifiers to perform multiclass classification, following the discussion in Section 3.5.

4.2.1 AG News Dataset
Before explaining the actual training/testing code, we have to choose a new dataset that is suitable for multiclass classification. To this end, we will use the AG News Classification Dataset (Zhang et al., 2015), a subset of the larger AG corpus of news articles collected from thousands of different news sources.11 The classification dataset consists of four classes, and the data is equally balanced across all classes (30,000 articles per class for train, and 1,900 articles per class for testing). The goal of the task is to classify each article as one of the four classes: World, Sports, Business, or Sci/Tech.
11 http://groups.di.unipi.it/~gulli/AG_corpus_of_news_articles.html

4.2.2 Preparing the Dataset
The AG News Dataset is distributed as two CSV files (one for training and one for testing), each containing three columns: the class index, the title, and the description. The dataset also provides a text file that maps the above class indexes to more descriptive class labels. Because of the tabular nature of the dataset, pandas, a Python library
for tabular data analysis,12 is a natural choice for loading and transforming it. To this end, our Jupyter notebook (chap4_multiclass_logistic_regression) demonstrates the sequence of steps required to handle the data, as well as model training and evaluation. First, we show how to load the CSV, add column names, and inspect the result:

        class index  title                                               description
0       3            Wall St. Bears Claw Back Into the Black (Reuters)  Reuters - Short-sellers, Wall Street's dwindli...
1       3            Carlyle Looks Toward Commercial Aerospace (Reu...   Reuters - Private investment firm Carlyle Grou...
...     ...          ...                                                 ...
119999  2            Nets get Carter from Raptors                        INDIANAPOLIS -- All-Star Vince Carter was trad...

[120000 rows × 3 columns]

Since the class labels themselves are in a separate file, we manually add them to the pandas data structure (called dataframe in pandas' terminology) to increase the interpretability of the data. We use the class index column as a starting point, and use its map method to create a new column with the corresponding labels (technically a new Series object) that is added to the dataframe using its insert method, which allows us to insert the column in a specific position. Note that the label indices are one-based, so we subtract one to align them with their labels.
12 https://pandas.pydata.org
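A minimal sketch of these two steps (the CSV file path and the label list are assumptions based on the description above; the notebook may differ in the details):

import pandas as pd

# load the training CSV and name its three columns
train_df = pd.read_csv('data/ag_news_csv/train.csv', header=None,
                       names=['class index', 'title', 'description'])

# map the one-based class indices to human-readable labels
labels = ['World', 'Sports', 'Business', 'Sci/Tech']
classes = train_df['class index'].map(lambda i: labels[i - 1])

# insert the new column right after the class index column
train_df.insert(1, 'class', classes)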
[dataframe output: 120000 rows × 4 columns – the class index, the new class column, the title, and the description; for example, row 0 (class index 3) is now labeled Business and row 119999 (class index 2) is labeled Sports]

Next we will preprocess the text. First we lowercase the title and description, and then we concatenate them into a single string. Then we remove some spurious backslashes from the text. Once this is done, the preprocessed text is added to the dataframe as a new column. Note that pandas allows these steps to be applied to all rows simultaneously.
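A sketch of this preprocessing step (using the column names introduced above; the exact cleanup in the notebook may differ slightly):

# lowercase the title and description and concatenate them into a single string
title = train_df['title'].str.lower()
description = train_df['description'].str.lower()
text = title + ' ' + description

# remove spurious backslashes and store the result as a new column
train_df['text'] = text.str.replace('\\', ' ', regex=False)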
[dataframe output: 120000 rows × 5 columns – the previous columns plus the new text column, e.g., "wall st. bears claw back into the black (reute..." for row 0]

At this point, the text is ready to be tokenized. For this purpose we will use NLTK's word_tokenize function. This function can be applied to the whole column at once using the pandas map function, which returns a new column that we add to the dataframe. However, here we actually use the progress_map function, which provides a visual progress bar. This visual feedback is especially helpful for tasks that take more time to complete.
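A sketch of the tokenization step (this assumes tqdm's pandas integration has been enabled, which is what provides progress_map):

from nltk.tokenize import word_tokenize
from tqdm.notebook import tqdm

# register progress_map/progress_apply with pandas
tqdm.pandas()

# tokenize every row of the text column, with a progress bar
# (assumes the NLTK 'punkt' tokenizer models have been downloaded)
train_df['tokens'] = train_df['text'].progress_map(word_tokenize)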
[dataframe output: 120000 rows × 6 columns – the previous columns plus the new tokens column, e.g., [wall, st., bears, claw, back, into, the, blac... for row 0]

From the tokens we just created, we then create a vocabulary for our corpus. Here, we only keep the words that occur at least 10 times, decreasing the memory needed and reducing the likelihood that our vocabulary contains noisy tokens. Note that each row in the tokens column contains a list of tokens. In order to create the vocabulary, we will need to convert the Series of lists of tokens into a Series of tokens using the explode() pandas method. Then we will use the value_counts() method to create a Series object in which the index contains the tokens and the values are the number of times they appear in the corpus. The next step is removing the tokens with a count lower than our chosen threshold. Finally, we create a list with the remaining tokens, as well as a dictionary that maps tokens to token ids (i.e., the index of the token in the list). We include in the vocabulary a special token [UNK] that will be used as a placeholder for tokens that do not appear in our vocabulary after the frequency pruning. Using this vocabulary, we construct a feature vector for each news article in the corpus. This feature vector will be encoded as a dictionary, with keys corresponding to token ids, and values corresponding to the number of times the token appears in the article. As above, the feature vectors will be stored as a new column in the dataframe.
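The following sketch follows the description above (the placement of the [UNK] token and the variable names are illustrative, not necessarily the notebook's):

# count token frequencies across the whole corpus
token_counts = train_df['tokens'].explode().value_counts()

# keep only tokens that occur at least 10 times, and add the unknown token
vocabulary = ['[UNK]'] + token_counts[token_counts >= 10].index.tolist()
token_to_id = {token: i for i, token in enumerate(vocabulary)}

def make_feature_vector(tokens):
    # bag-of-words feature vector as a dictionary: token id -> count
    unk_id = token_to_id['[UNK]']
    features = {}
    for token in tokens:
        token_id = token_to_id.get(token, unk_id)
        features[token_id] = features.get(token_id, 0) + 1
    return features

train_df['features'] = train_df['tokens'].progress_map(make_feature_vector)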
[dataframe output: 120000 rows × 7 columns – the previous columns plus the new features column, e.g., {427: 2, 563: 1, 1607: 1, ...} for row 0]

The final preprocessing step is converting the features and the class indices into PyTorch tensors. Recall that we need to subtract one from the class indices to make them zero-based. At this point, the data is fully processed and we are ready to begin training.

4.2.3 Multiclass Logistic Regression Using PyTorch
The model itself is a single linear layer whose input size corresponds to the size of our vocabulary, and whose output size corresponds to the number of classes in our corpus. PyTorch's Linear layer includes a bias by default, so there is no need to handle that manually the way we did for our perceptron example. The code for training this model (which implements Algorithm 6) is almost identical to that of the binary logistic regression. However, since we have to calculate a score for each of the four different classes, we need to replace the previous BCEWithLogitsLoss with CrossEntropyLoss, which applies a softmax over the scores to obtain probabilities for each class. For each example, the model predicts 4 scores – one for each label. The label with the highest score is selected using the argmax function. We evaluate the predictions of our model for each class using scikit-learn's classification_report, which handles the results of multiclass classification.

4.3 Summary
In this chapter, we used movie review and news article classification to illustrate the implementation of the previously described algorithms for the binary perceptron, binary logistic regression, and multiclass logistic regression. For the binary logistic regression, we made a direct comparison between the lower-level NumPy implementation and a higher-level version that made use of PyTorch. We hope that through this series of exercises the reader has noted several key takeaways. First, data preparation is important and should be done thoughtfully. Certain tasks (e.g., text normalization or sentence splitting) are going to be frequently needed if you continue with NLP, so using or creating generic functions can be very helpful. However, what works for one dataset and one language may not be suitable for another scenario. For example, in our case, we selected different tokenizers for each of our tasks to account for the different registers of English, and we removed diacritics during normalization. Second, when it comes to implementing machine learning algorithms, it is often easier to use a higher-level library such as PyTorch instead of NumPy.
For example, with the former, the gradients are calculated by the library, whereas in NumPy we have to code them ourselves. This becomes cumbersome quickly. For example, even the derivative of the softmax is non-trivial. Third, PyTorch imposes a training structure that remains largely the same, regardless of what models are being trained. That is, at a high level, the same steps are always required: clearing the current gradients, predicting output scores for the provided inputs, calculating the loss, and optimizing. These features make PyTorch a very powerful and convenient deep learning library; we will continue to use it throughout the remainder of the book to implement more complex neural architectures.
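For reference, the chap4_perceptron notebook discussed in Section 4.1.3, exported as a plain Python script, is reproduced below.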
#!/usr/bin/env python
# coding: utf-8

# # Binary Text Classification with Perceptron

# In[1]:

import random
import numpy as np
from tqdm.notebook import tqdm

# set this variable to a number to be used as the random seed
# or to None if you don't want to set a random seed
seed = 1234

if seed is not None:
    random.seed(seed)
    np.random.seed(seed)

# The dataset is divided in two directories called `train` and `test`.
# These directories contain the training and testing splits of the dataset.

# In[2]:

get_ipython().system('ls -lh data/aclImdb/')

# Both the `train` and `test` directories contain two directories called `pos` and `neg` that contain text files with the positive and negative reviews, respectively.

# In[3]:

get_ipython().system('ls -lh data/aclImdb/train/')

# We will now read the filenames of the positive and negative examples.

# In[4]:

from glob import glob

pos_files = glob('data/aclImdb/train/pos/*.txt')
neg_files = glob('data/aclImdb/train/neg/*.txt')
print('number of positive reviews:', len(pos_files))
print('number of negative reviews:', len(neg_files))

# Now, we will use a [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html) to read the text files, tokenize them, acquire a vocabulary from the training data, and encode it in a document-term matrix in which each row represents a review, and each column represents a term in the vocabulary. Each element $(i,j)$ in the matrix represents the number of times term $j$ appears in example $i$.

# In[5]:

from sklearn.feature_extraction.text import CountVectorizer

# initialize CountVectorizer indicating that we will give it a list of filenames that have to be read
cv = CountVectorizer(input='filename')
# learn vocabulary and return sparse document-term matrix
doc_term_matrix = cv.fit_transform(pos_files + neg_files)
doc_term_matrix

# Note in the message printed above that the matrix is of shape (25000, 74894).
# In other words, it has 1,871,225,000 elements.
# However, only 3,445,861 elements were stored.
# This is because most of the elements in the matrix are zeros.
# The reason is that the reviews are short and most words in the english language don't appear in each review.
# A matrix that only stores non-zero values is called *sparse*.
#
# Now we will convert it to a dense numpy array:

# In[6]:

X_train = doc_term_matrix.toarray()
X_train.shape

# We will also create a numpy array with the binary labels for the reviews.
# One indicates a positive review and zero a negative review.
# The label `y_train[i]` corresponds to the review encoded in row `i` of the `X_train` matrix.

# In[7]:

# training labels
y_pos = np.ones(len(pos_files))
y_neg = np.zeros(len(neg_files))
y_train = np.concatenate([y_pos, y_neg])
y_train

# Now we will initialize our model, in the form of an array of weights `w` of the same size as the number of features in our dataset (i.e., the number of words in the vocabulary acquired by [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html)), and a bias term `b`.
# Both are initialized to zeros.

# In[8]:

# initialize model: the feature vector and bias term are populated with zeros
n_examples, n_features = X_train.shape
w = np.zeros(n_features)
b = 0

# Now we will use the perceptron learning algorithm to learn the values of `w` and `b` from our training data.

# In[9]:

n_epochs = 10
indices = np.arange(n_examples)

for epoch in range(n_epochs):
    n_errors = 0
    # randomize the order in which training examples are seen in this epoch
    np.random.shuffle(indices)
    # traverse the training data
    for i in tqdm(indices, desc=f'epoch {epoch+1}'):
        x = X_train[i]
        y_true = y_train[i]
        # the perceptron decision based on the current model
        score = x @ w + b
        y_pred = 1 if score > 0 else 0
        # update the model if the prediction was incorrect
        if y_true == y_pred:
            continue
        elif y_true == 1 and y_pred == 0:
            w = w + x
            b = b + 1
            n_errors += 1
        elif y_true == 0 and y_pred == 1:
            w = w - x
            b = b - 1
            n_errors += 1
    if n_errors == 0:
        break

# The next step is evaluating the model on the test dataset.
# Note that this time we use the [`transform()`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html#sklearn.feature_extraction.text.CountVectorizer.transform) method of the [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html), instead of the [`fit_transform()`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html#sklearn.feature_extraction.text.CountVectorizer.fit_transform) method that we used above. This is because we want to use the learned vocabulary in the test set, instead of learning a new one.

# In[10]:

pos_files = glob('data/aclImdb/test/pos/*.txt')
neg_files = glob('data/aclImdb/test/neg/*.txt')
doc_term_matrix = cv.transform(pos_files + neg_files)
X_test = doc_term_matrix.toarray()
y_pos = np.ones(len(pos_files))
y_neg = np.zeros(len(neg_files))
y_test = np.concatenate([y_pos, y_neg])

# Using the model is easy: multiply the document-term matrix by the learned weights and add the bias.
# We use Python's `@` operator to perform the matrix-vector multiplication.

# In[11]:

y_pred = (X_test @ w + b) > 0

# Now we print an evaluation of the prediction results using a `binary_classification_report()` function that we implement below (mirroring the metrics of scikit-learn's [`classification_report()`](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.classification_report.html)).

# In[12]:

def binary_classification_report(y_true, y_pred):
    # count true positives, false positives, true negatives, and false negatives
    tp = fp = tn = fn = 0
    for gold, pred in zip(y_true, y_pred):
        if pred == True:
            if gold == True:
                tp += 1
            else:
                fp += 1
        else:
            if gold == False:
                tn += 1
            else:
                fn += 1
    # calculate precision and recall
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    # calculate f1 score
    fscore = 2 * precision * recall / (precision + recall)
    # calculate accuracy
    accuracy = (tp + tn) / len(y_true)
    # number of positive labels in y_true
    support = sum(y_true)
    return {
        "precision": precision,
        "recall": recall,
        "f1-score": fscore,
        "support": support,
        "accuracy": accuracy,
    }

# In[13]:

binary_classification_report(y_test, y_pred)
We start by iterating over each example in our training data, storing the current example in the variable x,8 and its corresponding label in the variable y_true. Next, we compute the perceptron decision function shown in Algorithm 1. Note that NumPy (as well as PyTorch) uses Python’s @ operator to indicate vector or matrix multiplication, depending on its operand types. Here we use it to calculate the dot product of the example x and the weights w. To this we add the bias b to obtain the predicted score, whose sign is used to assign a positive or negative predicted label. If the prediction is correct, then no update is needed, and we can move on to the next training example. However, if the prediction is incorrect, then we need to adjust w and b, as described in Algorithm 2. Sidebar 4.1 The tqdm function This is our first exposure to the tqdm function. tqdm is a progress bar that “make your loops show a smart progress meter.”9 The name tqdm comes from the Arabic word taqaddum which can mean “progress.” Using tqdm is as simple as wrapping it around the collection to be traversed. After training, we evaluate the model’s performance on the heldout test partition. The test data is loaded similarly to the training partition, but with one notable difference; we use CountVectorizer’s transform() method instead of the fit_transform() method so that the vocabulary is not adjusted for the test data. We won’t show here the loading of the test partition since it is so similar to the code already shown, but it is available in the Jupyter notebook that accompanies this section. . 7   As an extreme example, consider a dataset where all the positive examples appear first in the training partition. This would cause the perceptron to artificially inflate the weights of the features that occur in these examples, a situation from which the learning algorithm may struggle to recover. 
 . 8  We use typewriter font when we discuss variables in the code, to distinguish code from the theoretical discussion in the other chapters. 
 9 https://github.com/tqdm/tqdm 4.1 Binary Classification 61 Using the model to assign labels to all the test data is easily done in one step – we simply multiply the entire test data document-term matrix by the previously learned weights and add the bias. Scores greater than zero indicate a positive review, and those less than zero are negative. At this point we can evaluate the classifier’s performance, which we will do using precision, recall, and F1 scores for binary classification (described in Section 2.3). For this purpose, we implement a function called binary_classification_report that computes these metrics and returns them as a dictionary: We call this function to compare the predicted labels to the true labels, and obtain the evaluation scores. Our F1 score here is 86.8%, which is much higher than the baseline that assigns labels randomly, which yields an F1 score of about 50%. This is a good result, especially considering the simplicity of the perceptron! In the next sections and chapters, we will discuss a battery of strategies to considerably improve this performance. 4.1.4 Binary Logistic Regression from Scratch Using the same task, dataset, and evaluation, we will now implement a logistic regression classifier, as described in Algorithm 5 from Chapter 3. To give the reader hands-on experience with the implementation of the gradient calculations for logistic regression, we start by implementing it from scratch using NumPy. All the code shown in this section is available in the chap4_logistic_regression_numpy notebook. In the perceptron implementation, we represented the weights and the bias as two different variables. Here, however, we will use a different approach that will allow us to unify them into a single vector variable. Specifically, we take advantage of the similarity between the derivative of the cost function with respect to the weights (Equation 3.14) and the derivative of the cost with respect to the bias (Equation 3.15). d Ci(w, b) = (σi − yi)xij (3.14 revisited) dwj d Ci(w, b) = σi − yi (3.15 revisited) db Note that the two derivative formulas are identical except that the former has a multiplication by xij, while the latter does not. However, 62 Implementing Text Classification Using Perceptron and LR since σi − yi = (σi − yi)1 we can multiply the derivative of the cost with respect to the bias by one without changing the semantics. This gives an opportunity for combining the computations, doing them both in a single pass. The idea is that we can treat the bias as a weight corresponding to a feature that always has a value of one. As can be seen above, we created a NumPy array of ones of the same length as the number of examples in our training set (i.e., the number of rows in the data matrix). Then we add this array as a new column to the data matrix, using NumPy’s column_stack function. Next, we need to initialize our model. This time we will use a single NumPy array w of the same length as the number of columns in the data matrix. The weight vector w is initialized randomly with values between 0 and 1: Before implementing the learning algorithm, we need an implementation of the logistic function. 
Recall that the logistic function is σ(x) = 1 (3.1 revisited) 1+e−x This function can be easily implemented in NumPy as follows: However, this naive implementation may produce the following warning during training: The term overflow indicates that the result of evaluating exp(-x) is a number so large that it can’t be represented by a float (specifically, we’re using float64 numbers). We will avoid this issue by not calling exp with values that will overflow. NumPy provides the function finfo that can be consulted to find the limits of floating point numbers: The log of the largest floating point number is the largest number for which exp() will not overflow, so we will use it as a threshold to filter out problematic values: We now have everything we need to implement Algorithm 4. The steps to follow for each example are: (1) use the model to make a prediction, (2) calculate the gradient of the loss function with respect to the model parameters, and (3) update the model parameters using the gradient. The size of the update is controlled by the learning rate. Once the model has been trained, we evaluate it on the test dataset using our binary_classification_report function from the previous section. Loading and preprocessing the test dataset follows the same 4.1 Binary Classification 63 steps as with the previous classifier. We omit the code for brevity. These are the results: The performance is comparable with that of the perceptron. The difference in F1 scores between the two classifiers (84.9% here vs. 86.8% for the perceptron) is not significant. Classifier parity is probably attributable to the fact that the signal distinguishing the two classes being easy to learn and the simpler perceptron training algorithm being sufficient in this case. Nevertheless, this task is useful in showing how to implement the logistic regression model from scratch, i.e., by implementing the gradient calculation and parameter updates manually. Next, we will implement the same model again using PyTorch, highlighting how this machine learning library simplifies the process. 4.1.5 Binary Logistic Regression Utilizing PyTorch While it is fairly straightforward to compute the derivatives for logistic regression and implement then directly in NumPy, this will not scale well to arbitrary neural architectures. Fortunately, there are libraries that automate the computation of the derivatives of the cost function (assuming it is differentiable!) for any neural network, and use the resulting gradients to perform gradient descent or other more sophisticated optimization procedures. To this end, we will use the PyTorch deep learning library10. The corresponding notebook for this section is chap4_logistic_regression_pytorch_bce. Our model for logistic regression corresponds to PyTorch’s Linear layer. When we instantiate this layer, we specify the size of the inputs (the size of our vocabulary) and the size of the output, i.e., the number of output neurons (which is one because we’re doing binary classification). The loss function we use is the binary cross-entropy loss (see Chapter 3), which is implemented as BCEWithLogitsLoss in PyTorch. In PyTorch, the gradients obtained from the loss function are applied to the model by an optimizer object, which implements and applies an optimization algorithm. Here we will use the vanilla stochastic gradient descent optimizer; we set its learning rate to 0.1. This is equivalent to the discussion in Section 3.2. 
Similarly to the manual implementation, the steps required to train the model for a given training example are: (1) ensure the gradients are set to zeros, (2) apply the model to obtain a prediction, (3) calculate 10 https://pytorch.org/ 64 Implementing Text Classification Using Perceptron and LR the loss, (4) compute the gradient of the loss by back-propagation, and (5) update the model parameters. Recall that in our previous implementation everything was hardcoded: applying the model, computing the gradients, and optimizing the model parameters. Here, however, the implementation of the logistic regression is expressed at a higher level of abstraction. This means that we are describing the logical steps without specifying a particular implementation. Instead, implementation details are the responsability of the chosen model, loss function, and optimizer. Thus, we could even choose a different model, loss function, and/or optimizer, and use the same training steps with little or no modification. This decoupling of the training logic from the implementation details is one of the main advantages of libraries such as PyTorch. As shown in the code above, calling the model as a function, with the feature vectors as inputs, produces the predicted scores. Once again, a positive score corresponds to a positive label. When we evaluate this implementation on the test dataset, we obtain results that are in line with our previous models: Writing the perceptron and the logistic regression from scratch is a good exercise, as it exposes us to the fundamentals of implementing machine learning algorithms. However, this becomes cumbersome for more complex neural architectures. For this reason, from this point on, we will use PyTorch for all our coding examples. 4.2 Multiclass Classification So far, in this chapter we have discussed implementing binary classifiers. Next, we will modify these binary classifiers to perform multiclass classification, following the discussion in Section 3.5. 4.2.1 AG News Dataset Before explaining the actual training/testing code, we have to choose a new dataset that is suitable for multiclass classification. To this end, we will use the AG News Classification Dataset (Zhang et al., 2015), a subset of the larger AG corpus of news articles collected from thousands of different news sources.11 The classification dataset consists of four 11 http://groups.di.unipi.it/~gulli/AG_corpus_of_news_articles.html 4.2 Multiclass Classification 65 classes, and the data is equally balanced across all classes (30,000 articles per class for train, and 1,900 articles per class for testing). The goal of the task is to classify each article as one of the four classes: World, Sports, Business, or Sci/Tech. 4.2.2 Preparing the Dataset The AG News Dataset is distributed as two CSV files (one for training and one for testing), each containing three columns: the class index, the title, and the description. The dataset also provides a text file that maps the above class indexes to more descriptive class labels. Because of the tabular nature of the dataset, pandas, a Python library
for tabular data analysis,12 is a natural choice for loading and transform-
ing it. To this end, our Jupyter notebook (chap4_multiclass_logistic_regression) demonstrates the sequence of steps required to handle the data, as well
as model training and evaluation. First, we show how to load the CSV,
add column names, and inspect the result: class index . 0  3 
 . 1  3 
 . 2  3 
 . 3  3 
 . 4  3 
 ... ... . 119995  1 
 . 119996  2 
 . 119997  2 
 . 119998  2 
 . 119999  2 
 title Wall St. Bears Claw Back Into the Black (Reuters) Carlyle Looks Toward Commercial Aerospace (Reu... Oil and Economy Cloud Stocks' Outlook (Reuters) Iraq Halts Oil Exports from Main Southern Pipe... Oil prices soar to all-time record, posing new... ... Pakistan's Musharraf Says Won't Quit as Army C... Renteria signing a top-shelf deal Saban not going to Dolphins yet Today's NFL games Nets get Carter from Raptors description Reuters - Short-sellers, Wall Street's dwindli... Reuters - Private investment firm Carlyle Grou... Reuters - Soaring crude prices plus worries\ab... Reuters - Authorities have halted oil export\f... AFP - Tearaway world oil prices, toppling reco... ... KARACHI (Reuters) - Pakistani President Perve... Red Sox general manager Theo Epstein acknowled... The Miami Dolphins will put their courtship of... PITTSBURGH at NY GIANTS Time: 1:30 p.m. Line: ... INDIANAPOLIS -- All-Star Vince Carter was trad... 120000 rows × 3 columns Since the class labels themselves are in a separate file, we manually add them to the pandas data structure (called dataframe in pandas’ terminology) to increase the interpretability of the data. We use the class index column as a starting point, and use its map method to create a new column with the corresponding labels (technically a new Series object) that is added to the dataframe using its insert method, which allows us to insert the column in a specific position. Note that the label indices are one-based, so we subtract one to align them with their labels. 12 https://pandas.pydata.org 66 Implementing Text Classification Using Perceptron and LR class index . 0  3 
 . 1  3 
 . 2  3 
 . 3  3 
 . 4  3 
 ... ... . 119995  1 
 . 119996  2 
 . 119997  2 
 . 119998  2 
 . 119999  2 
 class Business Business Business Business Business ... World Sports Sports Sports Sports title Wall St. Bears Claw Back Into the Black (Reuters) Oil and Economy Cloud Stocks' Outlook (Reuters) Oil prices soar to all-time record, posing new... Pakistan's Musharraf Says Won't Quit as Army C... Saban not going to Dolphins yet Nets get Carter from Raptors description Reuters - Short-sellers, Wall Street's dwindli... Reuters - Soaring crude prices plus worries\ab... AFP - Tearaway world oil prices, toppling reco... KARACHI (Reuters) - Pakistani President Perve... The Miami Dolphins will put their courtship of... INDIANAPOLIS -- All-Star Vince Carter was trad... Iraq Halts Oil Exports from Main Southern Pipe... Reuters - Authorities have halted oil export\f... ... ... Renteria signing a top-shelf deal Red Sox general manager Theo Epstein acknowled... 120000 rows × 4 columns Carlyle Looks Toward Commercial Aerospace (Reu... Reuters - Private investment firm Carlyle Grou... Today's NFL games PITTSBURGH at NY GIANTS Time: 1:30 p.m. Line: ... Next we will preprocess the text. First we lowercase the title and description, and then we concatenate them into a single string. Then we remove some spurious backslashes from the text. Once this is done, the preprocessed text is added to the dataframe as a new column. Note that pandas allows these steps to be applied to all rows simultaneously. class index class title Wall St. Bears Claw Back Into the Black (Reuters) Oil and Economy Cloud Stocks' Outlook (Reuters) Oil prices soar to all-time record, posing new... ... Pakistan's Musharraf Says Won't Quit as Army C... Saban not going to Dolphins yet Nets get Carter from Raptors description Reuters - Short-sellers, Wall Street's dwindli... Reuters - Soaring crude prices plus worries\ab... AFP - Tearaway world oil prices, toppling reco... ... KARACHI (Reuters) - Pakistani President Perve... The Miami Dolphins will put their courtship of... INDIANAPOLIS -- All-Star Vince Carter was trad... text wall st. bears claw back into the black (reute... oil and economy cloud stocks' outlook (reuters... oil prices soar to all-time record, posing new... ... pakistan's musharraf says won't quit as army c... saban not going to dolphins yet the miami dolp... nets get carter from raptors indianapolis -- a... . 0  3 Business 
 . 1  3 Business 
 . 2  3 Business 
 . 3  3 Business 
 . 4  3 Business 
 ... ... ... . 119995  1 World 
 . 119996  2 Sports 
 . 119997  2 Sports 
 . 119998  2 Sports 
 . 119999  2 Sports 
 120000 rows × 5 columns Carlyle Looks Toward Commercial Reuters - Private investment firm Carlyle carlyle looks toward commercial Aerospace (Reu... Grou... aerospace (reu... Iraq Halts Oil Exports from Main Southern Pipe... Reuters - Authorities have halted oil export\f... iraq halts oil exports from main southern pipe... Renteria signing a top-shelf deal Red Sox general manager Theo Epstein renteria signing a top-shelf deal red sox acknowled... gene... Today's NFL games PITTSBURGH at NY GIANTS Time: 1:30 p.m. today's nfl games pittsburgh at ny giants Line: ... time... At this point, the text is ready to be tokenized. For this purpose we will use NLTK’s word_tokenize function. This function can be applied to the whole column at once using the pandas map function, which returns a new column which we add to the dataframe. However, here we actually use the progress_map function, which provides a visual progress bar. This visual feedback is especially helpful for tasks that take more time to complete. 4.2 Multiclass Classification 67 class index . 0  3 
 . 1  3 
 . 2  3 
 . 3  3 
 . 4  3 
 ... ... . 119995  1 
 . 119996  2 
 . 119997  2 
 . 119998  2 
 . 119999  2 
 class Business Business Business Business Business ... World Sports Sports Sports Sports title Wall St. Bears Claw Back Into the Black (Reuters) Oil and Economy Cloud Stocks' Outlook (Reuters) Oil prices soar to all-time record, posing new... ... Pakistan's Musharraf Says Won't Quit as Army C... Saban not going to Dolphins yet Nets get Carter from Raptors description Reuters - Short-sellers, Wall Street's dwindli... Reuters - Soaring crude prices plus worries\ab... AFP - Tearaway world oil prices, toppling reco... ... KARACHI (Reuters) - Pakistani President Perve... The Miami Dolphins will put their courtship of... INDIANAPOLIS -- All-Star Vince Carter was trad... text wall st. bears claw back into the black (reute... oil and economy cloud stocks' outlook (reuters... oil prices soar to all-time record, posing new... ... pakistan's musharraf says won't quit as army c... saban not going to dolphins yet the miami dolp... nets get carter from raptors indianapolis -- a... tokens [wall, st., bears, claw, back, into, the, blac... [oil, and, economy, cloud, stocks, ', outlook,... [oil, prices, soar, to, all-time, record, ,, p... ... [pakistan, 's, musharraf, says, wo, n't, quit,... [saban, not, going, to, dolphins, yet, the, mi... [nets, get, carter, from, raptors, indianapoli... 120000 rows × 6 columns Carlyle Looks Toward Commercial Reuters - Private investment firm carlyle looks toward commercial [carlyle, looks, toward, Aerospace (Reu... Carlyle Grou... aerospace (reu... commercial, aerospace... Iraq Halts Oil Exports from Main Reuters - Authorities have halted iraq halts oil exports from main [iraq, halts, oil, exports, from, Southern Pipe... oil export\f... southern pipe... main, southe... Renteria signing a top-shelf deal Red Sox general manager Theo renteria signing a top-shelf deal [renteria, signing, a, top-shelf, Epstein acknowled... red sox gene... deal, red, s... Today's NFL games PITTSBURGH at NY GIANTS today's nfl games pittsburgh at [today, 's, nfl, games, Time: 1:30 p.m. Line: ... ny giants time... pittsburgh, at, ny, gi... From the tokens we just created, we then create a vocabulary for our corpus. Here, we only keep the words that occur at least 10 times, decreasing the memory needed and reducing the likelihood that our vocabulary contains noisy tokens. Note that each row in the tokens column contains a list of tokens. In order to create the vocabulary, we will need to convert the Series of lists of tokens into a Series of tokens using the explode() Pandas method. Then we will use the value_counts() method to create a Series object in which the index are the tokens and the values are the number of times they appear in the corpus. The next step is removing the tokens with a count lower than our chosen threshold. Finally, we create a list with the remaining tokens, as well as a dictionary that maps tokens to token ids (i.e., the index of the token in the list). We include in the vocabulary a special token [UNK] that will be used as a placeholder for tokens that do not appear in our vocabulary after the frequency pruning. Using this vocabulary, we construct a feature vector for each news article in the corpus. This feature vector will be encoded as a dictionary, with keys corresponding to token ids, and values corresponding to the number of times the token appears in the article. As above, the feature vectors will be stored as a new column in the dataframe. 68 Implementing Text Classification Using Perceptron and LR class index class title Wall St. 
[Dataframe output: 120,000 rows × 7 columns. The new features column maps token ids to counts, e.g., {427: 2, 563: 1, 1607: 1, 15062: 1, 120: 1, ...} for the first article.]
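The features dictionaries are sparse. Before training, the notebook expands each of them into a dense count vector with a small helper (make_dense) and converts the result, together with the class indices, into PyTorch tensors:

import numpy as np
import torch

def make_dense(feats):
    # expand a {token id: count} dictionary into a dense count vector
    x = np.zeros(vocabulary_size)
    for k, v in feats.items():
        x[k] = v
    return x

X_train = np.stack(train_df['features'].progress_map(make_dense))
y_train = train_df['class index'].to_numpy() - 1  # class indices become zero-based
X_train = torch.tensor(X_train, dtype=torch.float32)
y_train = torch.tensor(y_train)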
The final preprocessing step is converting the features and the class indices into PyTorch tensors. Recall that we need to subtract one from the class indices to make them zero-based. At this point, the data is fully processed and we are ready to begin training.

4.2.3 Multiclass Logistic Regression Using PyTorch

The model itself is a single linear layer whose input size corresponds to the size of our vocabulary, and whose output size corresponds to the number of classes in our corpus. PyTorch's Linear layer includes a bias by default, so there is no need to handle that manually the way we did for our perceptron example. The code for training this model (which implements Algorithm 6) is almost identical to that of the binary logistic regression; the complete notebook for this classifier is reproduced at the end of this section. However, since we have to calculate a score for each of the four different classes, we need to replace the previous BCEWithLogitsLoss with CrossEntropyLoss, which applies a softmax over the scores to obtain probabilities for each class. For each example, the model predicts four scores, one for each label. The label with the highest score is selected using the argmax function. We evaluate the predictions of our model for each class using scikit-learn's classification_report, which handles the results of multiclass classification.

4.3 Summary

In this chapter, we used movie review and news article classification to illustrate the implementation of the previously described algorithms for the binary perceptron, binary logistic regression, and multiclass logistic regression. For the binary logistic regression, we made a direct comparison between the lower-level NumPy implementation and a higher-level version that made use of PyTorch. We hope that through this series of exercises the reader has noted several key takeaways. First, data preparation is important and should be done thoughtfully. Certain tasks (e.g., text normalization or sentence splitting) are going to be frequently needed if you continue with NLP, so using or creating generic functions can be very helpful. However, what works for one dataset and one language may not be suitable for another scenario. For example, in our case, we selected different tokenizers for each of our tasks to account for the different registers of English, as well as removing diacritics during normalization. Second, when it comes to implementing machine learning algorithms, it is often easier to use a higher-level library such as PyTorch instead of NumPy. With the former, the gradients are calculated by the library, whereas in NumPy we have to code them ourselves, which becomes cumbersome quickly: even the derivative of the softmax is non-trivial. Third, PyTorch imposes a training structure that remains largely the same, regardless of which model is being trained. That is, at a high level, the same steps are always required: clearing the current gradients, predicting output scores for the provided inputs, calculating the loss, and optimizing. These features make PyTorch a very powerful and convenient deep learning library; we will continue to use it throughout the remainder of the book to implement more complex neural architectures.
#!/usr/bin/env python
# coding: utf-8

# # Multiclass Text Classification with
# # Logistic Regression Implemented with PyTorch and CE Loss

# First, we will do some initialization.

# In[1]:

import random
import torch
import numpy as np
import pandas as pd
from tqdm.notebook import tqdm

# enable tqdm in pandas
tqdm.pandas()

# set to True to use the gpu (if there is one available)
use_gpu = True

# select device
device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu')
print(f'device: {device.type}')

# random seed
seed = 1234

# set random seed
if seed is not None:
    print(f'random seed: {seed}')
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)

# We will be using the AG's News Topic Classification Dataset.
# It is stored in two CSV files: `train.csv` and `test.csv`, as well as a `classes.txt` that stores the labels of the classes to predict.
#
# First, we will load the training dataset using [pandas](https://pandas.pydata.org/) and take a quick look at the data.

# In[2]:

train_df = pd.read_csv('data/ag_news_csv/train.csv', header=None)
train_df.columns = ['class index', 'title', 'description']
train_df

# The dataset consists of 120,000 examples, each consisting of a class index, a title, and a description.
# The class labels are distributed in a separate file. We will add the labels to the dataset so that we can interpret the data more easily. Note that the label indexes are one-based, so we need to subtract one to retrieve them from the list.

# In[3]:

labels = open('data/ag_news_csv/classes.txt').read().splitlines()
classes = train_df['class index'].map(lambda i: labels[i-1])
train_df.insert(1, 'class', classes)
train_df

# Let's inspect how balanced our examples are by using a bar plot.

# In[4]:

pd.value_counts(train_df['class']).plot.bar()

# The classes are evenly distributed. That's great!
#
# However, the text contains some spurious backslashes in some parts of the text.
# They are meant to represent newlines in the original text.
# An example can be seen below, between the words "dwindling" and "band".

# In[5]:

print(train_df.loc[0, 'description'])

# We will replace the backslashes with spaces on the whole column using pandas replace method.

# In[6]:

title = train_df['title'].str.lower()
descr = train_df['description'].str.lower()
text = title + " " + descr
train_df['text'] = text.str.replace('\\', ' ', regex=False)
train_df

# Now we will proceed to tokenize the text column (the lowercased title and description) using NLTK's word_tokenize().
# We will add a new column to our dataframe with the list of tokens.

# In[7]:

from nltk.tokenize import word_tokenize

train_df['tokens'] = train_df['text'].progress_map(word_tokenize)
train_df

# Now we will create a vocabulary from the training data. We will only keep the terms that repeat beyond some threshold established below.

# In[8]:

threshold = 10
tokens = train_df['tokens'].explode().value_counts()
tokens = tokens[tokens > threshold]
id_to_token = ['[UNK]'] + tokens.index.tolist()
token_to_id = {w: i for i, w in enumerate(id_to_token)}
vocabulary_size = len(id_to_token)
print(f'vocabulary size: {vocabulary_size:,}')

# In[9]:

from collections import defaultdict

def make_feature_vector(tokens, unk_id=0):
    vector = defaultdict(int)
    for t in tokens:
        i = token_to_id.get(t, unk_id)
        vector[i] += 1
    return vector

train_df['features'] = train_df['tokens'].progress_map(make_feature_vector)
train_df

# In[10]:

def make_dense(feats):
    x = np.zeros(vocabulary_size)
    for k, v in feats.items():
        x[k] = v
    return x

X_train = np.stack(train_df['features'].progress_map(make_dense))
y_train = train_df['class index'].to_numpy() - 1
X_train = torch.tensor(X_train, dtype=torch.float32)
y_train = torch.tensor(y_train)

# In[11]:

from torch import nn
from torch import optim

# hyperparameters
lr = 1.0
n_epochs = 5
n_examples = X_train.shape[0]
n_feats = X_train.shape[1]
n_classes = len(labels)

# initialize the model, loss function, optimizer, and data-loader
model = nn.Linear(n_feats, n_classes).to(device)
loss_func = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=lr)

# train the model
indices = np.arange(n_examples)
for epoch in range(n_epochs):
    np.random.shuffle(indices)
    for i in tqdm(indices, desc=f'epoch {epoch+1}'):
        # clear gradients
        model.zero_grad()
        # send datum to right device
        x = X_train[i].unsqueeze(0).to(device)
        y_true = y_train[i].unsqueeze(0).to(device)
        # predict label scores
        y_pred = model(x)
        # compute loss
        loss = loss_func(y_pred, y_true)
        # backpropagate
        loss.backward()
        # optimize model parameters
        optimizer.step()

# Next, we evaluate on the test dataset.

# In[12]:

# repeat all preprocessing done above, this time on the test set
test_df = pd.read_csv('data/ag_news_csv/test.csv', header=None)
test_df.columns = ['class index', 'title', 'description']
test_df['text'] = test_df['title'].str.lower() + " " + test_df['description'].str.lower()
test_df['text'] = test_df['text'].str.replace('\\', ' ', regex=False)
test_df['tokens'] = test_df['text'].progress_map(word_tokenize)
test_df['features'] = test_df['tokens'].progress_map(make_feature_vector)
X_test = np.stack(test_df['features'].progress_map(make_dense))
y_test = test_df['class index'].to_numpy() - 1
X_test = torch.tensor(X_test, dtype=torch.float32)
y_test = torch.tensor(y_test)

# In[13]:

from sklearn.metrics import classification_report

# set model to evaluation mode
model.eval()

# don't store gradients
with torch.no_grad():
    X_test = X_test.to(device)
    y_pred = torch.argmax(model(X_test), dim=1)
    y_pred = y_pred.cpu().numpy()

print(classification_report(y_test, y_pred, target_names=labels))
Each of these numbers will be used as the index in a list, so they must start at zero and grow by one for each word in the vocabulary. For example, one possible vocabulary that encodes the previous reviews is:

{'would': 0, 'hated': 1, 'my': 2, 'liked': 3, 'not': 4, 'it': 5, 'movie': 6, 'recommend': 7, 'the': 8, 'I': 9, 'too': 10, 'friend': 11}

Using this mapping, we can encode the two reviews as follows:

Review 1: [0, 0, 1, 2, 0, 1, 1, 0, 1, 1, 1, 1]
Review 2: [1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0]

Note that the word liked (fourth position) in the first review has a value of two, because this word appears twice in that review. This is a small example with a vocabulary of only 12 terms. Of course, the same process needs to be implemented for our whole training dataset. For this purpose we will use scikit-learn's CountVectorizer class.6 Using the CountVectorizer class simplifies things, allowing us to get started quickly with a bag-of-words approach. However, note that it makes several simplifying assumptions (e.g., text is lowercased, and punctuation and single-character tokens are removed). Some of these may not be adequate for other tasks. First, we need to obtain the filenames for the reviews in the training set:
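The filenames can be collected with glob, as in this fragment from the chap4_perceptron notebook; the data/aclImdb paths assume the dataset has been unpacked into a local data directory.

from glob import glob

# filenames of the positive and negative training reviews
pos_files = glob('data/aclImdb/train/pos/*.txt')
neg_files = glob('data/aclImdb/train/neg/*.txt')

print('number of positive reviews:', len(pos_files))
print('number of negative reviews:', len(neg_files))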
Once we have acquired the filenames for the training reviews, we need to read them using the CountVectorizer. In order for the CountVectorizer to open and read the files for us, we make use of the input='filename' constructor parameter (otherwise it would expect the string content directly). The CountVectorizer provides three methods that will be useful for us: a method called fit() that is used to acquire the vocabulary,
a method transform() that converts the text into the bag-of-words representation, and a method fit_transform() that conveniently acquires the vocabulary and transforms the data in a single step. The resulting object is referred to as a document-term matrix, where each row corresponds to a document, and each column corresponds to a term in the vocabulary.

6 https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html

As the output above indicates, the resulting matrix has 25,000 rows (one for each review) and 74,849 columns (one for each term). You may also note that this matrix is sparse, with 3,445,861 stored elements. A regular matrix of shape 25,000×74,849 would have 1,871,225,000 elements. However, most of the elements in a document-term matrix are zeros because only a few words from the vocabulary appear in each document. A sparse matrix takes advantage of this fact by storing only the non-zero cells in order to reduce the memory required to store it. Thus, sparse matrices are convenient, especially when dealing with lots of data. Nevertheless, to simplify the downstream code in this example, we will convert it into a dense matrix, i.e., a regular two-dimensional NumPy array. Finally, we also need the labels of the reviews. We assign a label of one to positive reviews, and a label of zero to negative ones. Note that the first half of the reviews are positive and the second half are negative. The label at the ith position of the y_train array corresponds to the review encoded in the ith row of the X_train matrix.

4.1.3 Perceptron

Now that we have defined our task and the data processing pipeline, we will implement a perceptron classifier that classifies the movie reviews as positive or negative. The entire code discussed in this section is available in the chap4_perceptron notebook. Recall from Section 2.4 that the perceptron is composed of a weight vector w and a bias term b. These will be represented as a NumPy array w of the same length as our document vectors, and a variable b for the bias term. Both will be initialized with zeros. The parameters w and b are learned through the following algorithm, which implements Algorithm 2 from Chapter 2 and is shown in the listing below. There are a couple of details to point out. Line 3 of Algorithm 2 indicates that we need to repeat the training loop until convergence. Theoretically, convergence is defined as predicting all training examples correctly. This is an ambitious requirement, which is not always possible in practice, so in this code we also include a stop condition if we reach a maximum number of epochs. Another crucial difference between our implementation here and the theoretical Algorithm 2 is that we randomize the order in which the training examples are seen at the beginning of each epoch. This simple (but highly recommended!) change is necessary to avoid the introduction of spurious biases due to the arbitrary order of the examples in the original training partition.7 We accomplish this by storing the indices corresponding to the X_train matrix rows in a NumPy array, and shuffling these indices at the beginning of each epoch. We shuffle the indices instead of the examples so that we can preserve the mapping between examples and labels. The training loop aligns closely with Algorithm 2.
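This is the training loop from the chap4_perceptron notebook (reproduced in full at the end of this section); it assumes that X_train, y_train, w, and b have been created as described above.

import numpy as np
from tqdm.notebook import tqdm

n_epochs = 10
indices = np.arange(X_train.shape[0])

for epoch in range(n_epochs):
    n_errors = 0
    # randomize the order in which training examples are seen in this epoch
    np.random.shuffle(indices)
    # traverse the training data
    for i in tqdm(indices, desc=f'epoch {epoch+1}'):
        x = X_train[i]
        y_true = y_train[i]
        # the perceptron decision based on the current model
        score = x @ w + b
        y_pred = 1 if score > 0 else 0
        # update the model only if the prediction was incorrect
        if y_true == y_pred:
            continue
        elif y_true == 1 and y_pred == 0:
            w = w + x
            b = b + 1
            n_errors += 1
        elif y_true == 0 and y_pred == 1:
            w = w - x
            b = b - 1
            n_errors += 1
    if n_errors == 0:
        break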
We start by iterating over each example in our training data, storing the current example in the variable x,8 and its corresponding label in the variable y_true. Next, we compute the perceptron decision function shown in Algorithm 1. Note that NumPy (as well as PyTorch) uses Python’s @ operator to indicate vector or matrix multiplication, depending on its operand types. Here we use it to calculate the dot product of the example x and the weights w. To this we add the bias b to obtain the predicted score, whose sign is used to assign a positive or negative predicted label. If the prediction is correct, then no update is needed, and we can move on to the next training example. However, if the prediction is incorrect, then we need to adjust w and b, as described in Algorithm 2. Sidebar 4.1 The tqdm function This is our first exposure to the tqdm function. tqdm is a progress bar that “make your loops show a smart progress meter.”9 The name tqdm comes from the Arabic word taqaddum which can mean “progress.” Using tqdm is as simple as wrapping it around the collection to be traversed. After training, we evaluate the model’s performance on the heldout test partition. The test data is loaded similarly to the training partition, but with one notable difference; we use CountVectorizer’s transform() method instead of the fit_transform() method so that the vocabulary is not adjusted for the test data. We won’t show here the loading of the test partition since it is so similar to the code already shown, but it is available in the Jupyter notebook that accompanies this section. . 7   As an extreme example, consider a dataset where all the positive examples appear first in the training partition. This would cause the perceptron to artificially inflate the weights of the features that occur in these examples, a situation from which the learning algorithm may struggle to recover. 
 . 8  We use typewriter font when we discuss variables in the code, to distinguish code from the theoretical discussion in the other chapters. 
9 https://github.com/tqdm/tqdm

Using the model to assign labels to all the test data is easily done in one step – we simply multiply the entire test data document-term matrix by the previously learned weights and add the bias. Scores greater than zero indicate a positive review, and those less than zero are negative. At this point we can evaluate the classifier's performance, which we will do using precision, recall, and F1 scores for binary classification (described in Section 2.3). For this purpose, we implement a function called binary_classification_report that computes these metrics and returns them as a dictionary. We call this function to compare the predicted labels to the true labels, and obtain the evaluation scores. Our F1 score here is 86.8%, which is much higher than the baseline that assigns labels randomly, which yields an F1 score of about 50%. This is a good result, especially considering the simplicity of the perceptron! In the next sections and chapters, we will discuss a battery of strategies to considerably improve this performance.

4.1.4 Binary Logistic Regression from Scratch

Using the same task, dataset, and evaluation, we will now implement a logistic regression classifier, as described in Algorithm 5 from Chapter 3. To give the reader hands-on experience with the implementation of the gradient calculations for logistic regression, we start by implementing it from scratch using NumPy. All the code shown in this section is available in the chap4_logistic_regression_numpy notebook. In the perceptron implementation, we represented the weights and the bias as two different variables. Here, however, we will use a different approach that will allow us to unify them into a single vector variable. Specifically, we take advantage of the similarity between the derivative of the cost function with respect to the weights (Equation 3.14) and the derivative of the cost with respect to the bias (Equation 3.15):

\frac{d\, C_i(\mathbf{w}, b)}{d w_j} = (\sigma_i - y_i) x_{ij}   (3.14 revisited)

\frac{d\, C_i(\mathbf{w}, b)}{d b} = \sigma_i - y_i   (3.15 revisited)

Note that the two derivative formulas are identical except that the former has a multiplication by x_{ij}, while the latter does not. However, since \sigma_i - y_i = (\sigma_i - y_i) \cdot 1, we can multiply the derivative of the cost with respect to the bias by one without changing the semantics. This gives an opportunity for combining the computations, doing them both in a single pass. The idea is that we can treat the bias as a weight corresponding to a feature that always has a value of one. To do this, we create a NumPy array of ones of the same length as the number of examples in our training set (i.e., the number of rows in the data matrix), and then add this array as a new column to the data matrix using NumPy's column_stack function. Next, we need to initialize our model. This time we will use a single NumPy array w of the same length as the number of columns in the data matrix. The weight vector w is initialized randomly with values between 0 and 1, as in the sketch below.
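A minimal sketch of these two steps, assuming X_train is the dense document-term matrix built earlier (the variable names are chosen to match the surrounding discussion, not copied from the notebook):

import numpy as np

# append a constant feature of 1s so that the bias can be treated as a regular weight
ones = np.ones(X_train.shape[0])
X_train = np.column_stack((X_train, ones))

# one weight per column, including the bias column; values drawn uniformly from [0, 1)
n_examples, n_features = X_train.shape
w = np.random.random(n_features)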
Before implementing the learning algorithm, we need an implementation of the logistic function. Recall that the logistic function is

\sigma(x) = \frac{1}{1 + e^{-x}}   (3.1 revisited)

This function can be easily implemented with NumPy's exp function. However, a naive implementation may produce an overflow warning during training. The term overflow indicates that the result of evaluating exp(-x) is a number so large that it can't be represented by a float (specifically, we're using float64 numbers). We will avoid this issue by not calling exp with values that will overflow. NumPy provides the function finfo that can be consulted to find the limits of floating point numbers. The log of the largest floating point number is the largest number for which exp() will not overflow, so we will use it as a threshold to filter out problematic values. We now have everything we need to implement Algorithm 4. The steps to follow for each example are: (1) use the model to make a prediction, (2) calculate the gradient of the loss function with respect to the model parameters, and (3) update the model parameters using the gradient. The size of the update is controlled by the learning rate. Once the model has been trained, we evaluate it on the test dataset using our binary_classification_report function from the previous section. Loading and preprocessing the test dataset follows the same steps as with the previous classifier; we omit the code for brevity. The resulting performance is comparable with that of the perceptron. The difference in F1 scores between the two classifiers (84.9% here vs. 86.8% for the perceptron) is not significant. This parity is probably due to the fact that the signal distinguishing the two classes is easy to learn, so the simpler perceptron training algorithm is sufficient in this case. Nevertheless, this task is useful in showing how to implement the logistic regression model from scratch, i.e., by implementing the gradient calculation and parameter updates manually. Next, we will implement the same model again using PyTorch, highlighting how this machine learning library simplifies the process.

4.1.5 Binary Logistic Regression Utilizing PyTorch

While it is fairly straightforward to compute the derivatives for logistic regression and implement them directly in NumPy, this approach will not scale well to arbitrary neural architectures. Fortunately, there are libraries that automate the computation of the derivatives of the cost function (assuming it is differentiable!) for any neural network, and use the resulting gradients to perform gradient descent or other more sophisticated optimization procedures. To this end, we will use the PyTorch deep learning library.10 The corresponding notebook for this section is chap4_logistic_regression_pytorch_bce. Our model for logistic regression corresponds to PyTorch's Linear layer. When we instantiate this layer, we specify the size of the inputs (the size of our vocabulary) and the size of the output, i.e., the number of output neurons (which is one because we're doing binary classification). The loss function we use is the binary cross-entropy loss (see Chapter 3), which is implemented as BCEWithLogitsLoss in PyTorch. In PyTorch, the gradients obtained from the loss function are applied to the model by an optimizer object, which implements and applies an optimization algorithm. Here we will use the vanilla stochastic gradient descent optimizer; we set its learning rate to 0.1. This is equivalent to the discussion in Section 3.2.

10 https://pytorch.org/
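A minimal sketch of this setup and of the resulting training loop, assuming X_train and y_train have already been converted to float tensors; the number of epochs and the example-by-example iteration are illustrative rather than copied from the notebook.

import torch
from torch import nn, optim

# X_train: (n_examples, vocabulary_size) float tensor of bag-of-words counts
# y_train: (n_examples,) float tensor of 0/1 labels
n_examples, n_features = X_train.shape

# binary logistic regression: a single linear layer with one output neuron
model = nn.Linear(n_features, 1)
loss_func = nn.BCEWithLogitsLoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)

for epoch in range(10):
    for i in torch.randperm(n_examples):
        x = X_train[i]
        y_true = y_train[i].unsqueeze(0)
        # (1) ensure the gradients are set to zero
        model.zero_grad()
        # (2) apply the model to obtain a prediction (a raw score, or logit)
        y_pred = model(x)
        # (3) calculate the loss
        loss = loss_func(y_pred, y_true)
        # (4) compute the gradient of the loss by back-propagation
        loss.backward()
        # (5) update the model parameters
        optimizer.step()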
Similarly to the manual implementation, the steps required to train the model for a given training example are: (1) ensure the gradients are set to zero, (2) apply the model to obtain a prediction, (3) calculate the loss, (4) compute the gradient of the loss by back-propagation, and (5) update the model parameters. Recall that in our previous implementation everything was hardcoded: applying the model, computing the gradients, and optimizing the model parameters. Here, however, the implementation of the logistic regression is expressed at a higher level of abstraction. This means that we are describing the logical steps without specifying a particular implementation. Instead, implementation details are the responsibility of the chosen model, loss function, and optimizer. Thus, we could even choose a different model, loss function, and/or optimizer, and use the same training steps with little or no modification. This decoupling of the training logic from the implementation details is one of the main advantages of libraries such as PyTorch. As shown in the code above, calling the model as a function, with the feature vectors as inputs, produces the predicted scores. Once again, a positive score corresponds to a positive label. When we evaluate this implementation on the test dataset, we obtain results that are in line with our previous models. Writing the perceptron and the logistic regression from scratch is a good exercise, as it exposes us to the fundamentals of implementing machine learning algorithms. However, this becomes cumbersome for more complex neural architectures. For this reason, from this point on, we will use PyTorch for all our coding examples.

4.2 Multiclass Classification

So far in this chapter we have discussed implementing binary classifiers. Next, we will modify these binary classifiers to perform multiclass classification, following the discussion in Section 3.5.

4.2.1 AG News Dataset

Before explaining the actual training/testing code, we have to choose a new dataset that is suitable for multiclass classification. To this end, we will use the AG News Classification Dataset (Zhang et al., 2015), a subset of the larger AG corpus of news articles collected from thousands of different news sources.11 The classification dataset consists of four classes, and the data is equally balanced across all classes (30,000 articles per class for training, and 1,900 articles per class for testing). The goal of the task is to classify each article as one of the four classes: World, Sports, Business, or Sci/Tech.

11 http://groups.di.unipi.it/~gulli/AG_corpus_of_news_articles.html

4.2.2 Preparing the Dataset

The AG News Dataset is distributed as two CSV files (one for training and one for testing), each containing three columns: the class index, the title, and the description. The dataset also provides a text file that maps the above class indexes to more descriptive class labels.
Because of the tabular nature of the dataset, pandas, a Python library for tabular data analysis,12 is a natural choice for loading and transforming it. To this end, our Jupyter notebook (chap4_multiclass_logistic_regression) demonstrates the sequence of steps required to handle the data, as well as model training and evaluation.
First, we show how to load the CSV, add column names, and inspect the result:

        class index  title                                                description
0                 3  Wall St. Bears Claw Back Into the Black (Reuters)   Reuters - Short-sellers, Wall Street's dwindli...
1                 3  Carlyle Looks Toward Commercial Aerospace (Reu...   Reuters - Private investment firm Carlyle Grou...
2                 3  Oil and Economy Cloud Stocks' Outlook (Reuters)     Reuters - Soaring crude prices plus worries\ab...
3                 3  Iraq Halts Oil Exports from Main Southern Pipe...    Reuters - Authorities have halted oil export\f...
4                 3  Oil prices soar to all-time record, posing new...    AFP - Tearaway world oil prices, toppling reco...
...             ...  ...                                                  ...
119995            1  Pakistan's Musharraf Says Won't Quit as Army C...    KARACHI (Reuters) - Pakistani President Perve...
119996            2  Renteria signing a top-shelf deal                    Red Sox general manager Theo Epstein acknowled...
119997            2  Saban not going to Dolphins yet                      The Miami Dolphins will put their courtship of...
119998            2  Today's NFL games                                    PITTSBURGH at NY GIANTS Time: 1:30 p.m. Line: ...
119999            2  Nets get Carter from Raptors                         INDIANAPOLIS -- All-Star Vince Carter was trad...

120000 rows × 3 columns

Since the class labels themselves are in a separate file, we manually add them to the pandas data structure (called a dataframe in pandas' terminology) to increase the interpretability of the data. We use the class index column as a starting point, and use its map method to create a new column with the corresponding labels (technically a new Series object) that is added to the dataframe using its insert method, which allows us to insert the column in a specific position. Note that the label indices are one-based, so we subtract one to align them with their labels.

12 https://pandas.pydata.org
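A sketch of the loading and label-mapping steps; the file path and the hard-coded labels list are illustrative (the notebook reads the label names from the text file distributed with the dataset):

import pandas as pd

# hypothetical local path; adjust to wherever the dataset was unpacked
train_df = pd.read_csv('data/ag_news_csv/train.csv', header=None,
                       names=['class index', 'title', 'description'])

# class labels corresponding to class indices 1-4
labels = ['World', 'Sports', 'Business', 'Sci/Tech']

# class indices are one-based, so subtract one before mapping them to labels,
# and insert the resulting Series as a new column right after the class index
train_df.insert(1, 'class', train_df['class index'].map(lambda i: labels[i - 1]))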
[dataframe preview omitted: the same 120,000 rows, now with 4 columns; the new class column maps class index 3 to Business, 1 to World, and 2 to Sports]

Next we will preprocess the text. First we lowercase the title and description, and then we concatenate them into a single string. Then we remove some spurious backslashes from the text. Once this is done, the preprocessed text is added to the dataframe as a new column. Note that pandas allows these steps to be applied to all rows simultaneously.
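A sketch of this preprocessing, continuing with the train_df dataframe from the previous sketch, and under the assumption that the spurious backslashes are simply replaced with spaces (the exact cleanup in the notebook may differ):

# lowercase the title and description and concatenate them into a single string,
# then remove spurious backslashes; pandas applies this to all rows at once
title = train_df['title'].str.lower()
description = train_df['description'].str.lower()
train_df['text'] = (title + ' ' + description).str.replace('\\', ' ', regex=False)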
[dataframe preview omitted: 120,000 rows × 5 columns; the new text column contains the lowercased, concatenated title and description, e.g., "wall st. bears claw back into the black (reute..."]

At this point, the text is ready to be tokenized. For this purpose we will use NLTK's word_tokenize function. This function can be applied to the whole column at once using the pandas map function, which returns a new column that we add to the dataframe. However, here we actually use the progress_map function, which provides a visual progress bar. This visual feedback is especially helpful for tasks that take more time to complete.
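A sketch of the tokenization step; progress_map becomes available on pandas objects after calling tqdm's pandas() helper, and word_tokenize requires NLTK's punkt models to be installed:

from nltk.tokenize import word_tokenize  # requires nltk.download('punkt')
from tqdm.notebook import tqdm

# register progress_map / progress_apply on pandas objects
tqdm.pandas()

# tokenize every row of the text column, showing a progress bar
train_df['tokens'] = train_df['text'].progress_map(word_tokenize)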
[dataframe preview omitted: 120,000 rows × 6 columns; the new tokens column contains the token lists, e.g., [wall, st., bears, claw, back, into, the, blac...]

From the tokens we just created, we then create a vocabulary for our corpus. Here, we only keep the words that occur at least 10 times, decreasing the memory needed and reducing the likelihood that our vocabulary contains noisy tokens. Note that each row in the tokens column contains a list of tokens. In order to create the vocabulary, we will need to convert the Series of lists of tokens into a Series of tokens using the explode() pandas method. Then we will use the value_counts() method to create a Series object in which the index contains the tokens and the values are the number of times they appear in the corpus. The next step is removing the tokens with a count lower than our chosen threshold. Finally, we create a list with the remaining tokens, as well as a dictionary that maps tokens to token ids (i.e., the index of the token in the list). We include in the vocabulary a special token [UNK] that will be used as a placeholder for tokens that do not appear in our vocabulary after the frequency pruning. Using this vocabulary, we construct a feature vector for each news article in the corpus. This feature vector will be encoded as a dictionary, with keys corresponding to token ids, and values corresponding to the number of times the token appears in the article. As above, the feature vectors will be stored as a new column in the dataframe.
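A sketch of the vocabulary and feature construction; the position of the [UNK] token in the vocabulary and the make_features helper are illustrative choices, not necessarily those of the notebook:

threshold = 10
unk_token = '[UNK]'

# count how often each token appears in the whole corpus
token_counts = train_df['tokens'].explode().value_counts()

# keep only tokens that appear at least `threshold` times, plus the [UNK] placeholder
vocabulary = [unk_token] + token_counts[token_counts >= threshold].index.tolist()
token_to_id = {token: token_id for token_id, token in enumerate(vocabulary)}

def make_features(tokens):
    # bag-of-words dictionary: token id -> number of occurrences in this article
    features = {}
    for token in tokens:
        token_id = token_to_id.get(token, token_to_id[unk_token])
        features[token_id] = features.get(token_id, 0) + 1
    return features

train_df['features'] = train_df['tokens'].map(make_features)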
[dataframe preview omitted: 120,000 rows × 7 columns; the new features column maps token ids to counts, e.g., {427: 2, 563: 1, 1607: 1, 15062: 1, 120: 1, 73...]

The final preprocessing step is converting the features and the class indices into PyTorch tensors. Recall that we need to subtract one from the class indices to make them zero-based. At this point, the data is fully processed and we are ready to begin training.

4.2.3 Multiclass Logistic Regression Using PyTorch

The model itself is a single linear layer whose input size corresponds to the size of our vocabulary, and whose output size corresponds to the number of classes in our corpus. PyTorch's Linear layer includes a bias by default, so there is no need to handle it manually the way we did for our perceptron example. The code for training this model (which implements Algorithm 6) is almost identical to that of the binary logistic regression. However, since we have to calculate a score for each of the four different classes, we need to replace the previous BCEWithLogitsLoss with CrossEntropyLoss, which applies a softmax over the scores to obtain probabilities for each class. For each example, the model predicts four scores – one for each label. The label with the highest score is selected using the argmax function. We evaluate the predictions of our model for each class using scikit-learn's classification_report, which handles the results of multiclass classification.

4.3 Summary

In this chapter, we used movie review and news article classification to illustrate the implementation of the previously described algorithms for the binary perceptron, binary logistic regression, and multiclass logistic regression. For the binary logistic regression, we made a direct comparison between the lower-level NumPy implementation and a higher-level version that made use of PyTorch. We hope that through this series of exercises the reader has noted several key takeaways. First, data preparation is important and should be done thoughtfully. Certain tasks (e.g., text normalization or sentence splitting) are going to be frequently needed if you continue with NLP, so using or creating generic functions can be very helpful. However, what works for one dataset and one language may not be suitable for another scenario. For example, in our case, we selected different tokenizers for each of our tasks to account for the different registers of English, as well as removing diacritics during normalization. Second, when it comes to implementing machine learning algorithms, it is often easier to use a higher-level library such as PyTorch instead of NumPy.
For example, with the former, the gradients are calculated by the library, whereas in NumPy we have to code them ourselves. This quickly becomes cumbersome: even the derivative of the softmax, for instance, is non-trivial. Third, PyTorch imposes a training structure that remains largely the same, regardless of what models are being trained. That is, at a high level, the same steps are always required: clearing the current gradients, predicting output scores for the provided inputs, calculating the loss, and optimizing. These features make PyTorch a very powerful and convenient deep learning library; we will continue to use it throughout the remainder of the book to implement more complex neural architectures.
#!/usr/bin/env python
# coding: utf-8

# # Binary Text Classification with Perceptron

# In[1]:

import random
import numpy as np
from tqdm.notebook import tqdm

# set this variable to a number to be used as the random seed
# or to None if you don't want to set a random seed
seed = 1234

if seed is not None:
    random.seed(seed)
    np.random.seed(seed)

# The dataset is divided in two directories called `train` and `test`.
# These directories contain the training and testing splits of the dataset.

# In[2]:

get_ipython().system('ls -lh data/aclImdb/')

# Both the `train` and `test` directories contain two directories called `pos` and `neg`
# that contain text files with the positive and negative reviews, respectively.

# In[3]:

get_ipython().system('ls -lh data/aclImdb/train/')

# We will now read the filenames of the positive and negative examples.

# In[4]:

from glob import glob

pos_files = glob('data/aclImdb/train/pos/*.txt')
neg_files = glob('data/aclImdb/train/neg/*.txt')

print('number of positive reviews:', len(pos_files))
print('number of negative reviews:', len(neg_files))

# Now, we will use a [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html)
# to read the text files, tokenize them, acquire a vocabulary from the training data,
# and encode it in a document-term matrix in which each row represents a review,
# and each column represents a term in the vocabulary.
# Each element $(i,j)$ in the matrix represents the number of times term $j$ appears in example $i$.

# In[5]:

from sklearn.feature_extraction.text import CountVectorizer

# initialize CountVectorizer indicating that we will give it a list of filenames that have to be read
cv = CountVectorizer(input='filename')

# learn vocabulary and return sparse document-term matrix
doc_term_matrix = cv.fit_transform(pos_files + neg_files)
doc_term_matrix

# Note in the message printed above that the matrix is of shape (25000, 74894).
# In other words, it has 1,871,225,000 elements.
# However, only 3,445,861 elements were stored.
# This is because most of the elements in the matrix are zeros.
# The reason is that the reviews are short and most words in the English language don't appear in each review.
# A matrix that only stores non-zero values is called *sparse*.
#
# Now we will convert it to a dense numpy array:

# In[6]:

X_train = doc_term_matrix.toarray()
X_train.shape

# We will also create a numpy array with the binary labels for the reviews.
# One indicates a positive review and zero a negative review.
# The label `y_train[i]` corresponds to the review encoded in row `i` of the `X_train` matrix.

# In[7]:

# training labels
y_pos = np.ones(len(pos_files))
y_neg = np.zeros(len(neg_files))
y_train = np.concatenate([y_pos, y_neg])
y_train

# Now we will initialize our model, in the form of an array of weights `w` of the same size
# as the number of features in our dataset (i.e., the number of words in the vocabulary
# acquired by the `CountVectorizer`), and a bias term `b`.
# Both are initialized to zeros.

# In[8]:

# initialize model: the feature vector and bias term are populated with zeros
n_examples, n_features = X_train.shape
w = np.zeros(n_features)
b = 0

# Now we will use the perceptron learning algorithm to learn the values of `w` and `b` from our training data.

# In[9]:

n_epochs = 10
indices = np.arange(n_examples)

for epoch in range(n_epochs):
    n_errors = 0
    # randomize the order in which training examples are seen in this epoch
    np.random.shuffle(indices)
    # traverse the training data
    for i in tqdm(indices, desc=f'epoch {epoch+1}'):
        x = X_train[i]
        y_true = y_train[i]
        # the perceptron decision based on the current model
        score = x @ w + b
        y_pred = 1 if score > 0 else 0
        # update the model if the prediction was incorrect
        if y_true == y_pred:
            continue
        elif y_true == 1 and y_pred == 0:
            w = w + x
            b = b + 1
            n_errors += 1
        elif y_true == 0 and y_pred == 1:
            w = w - x
            b = b - 1
            n_errors += 1
    if n_errors == 0:
        break

# The next step is evaluating the model on the test dataset.
# Note that this time we use the [`transform()`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html#sklearn.feature_extraction.text.CountVectorizer.transform)
# method of the `CountVectorizer`, instead of the
# [`fit_transform()`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html#sklearn.feature_extraction.text.CountVectorizer.fit_transform)
# method that we used above.
# This is because we want to use the learned vocabulary in the test set, instead of learning a new one.

# In[10]:

pos_files = glob('data/aclImdb/test/pos/*.txt')
neg_files = glob('data/aclImdb/test/neg/*.txt')

doc_term_matrix = cv.transform(pos_files + neg_files)
X_test = doc_term_matrix.toarray()
y_pos = np.ones(len(pos_files))
y_neg = np.zeros(len(neg_files))
y_test = np.concatenate([y_pos, y_neg])

# Using the model is easy: multiply the document-term matrix by the learned weights and add the bias.
# We use Python's `@` operator to perform the matrix-vector multiplication.

# In[11]:

y_pred = (X_test @ w + b) > 0

# Now we evaluate the prediction results using our own `binary_classification_report` function,
# which computes precision, recall, F1 score, support, and accuracy for binary classification.

# In[12]:

def binary_classification_report(y_true, y_pred):
    # count true positives, false positives, true negatives, and false negatives
    tp = fp = tn = fn = 0
    for gold, pred in zip(y_true, y_pred):
        if pred == True:
            if gold == True:
                tp += 1
            else:
                fp += 1
        else:
            if gold == False:
                tn += 1
            else:
                fn += 1
    # calculate precision and recall
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    # calculate f1 score
    fscore = 2 * precision * recall / (precision + recall)
    # calculate accuracy
    accuracy = (tp + tn) / len(y_true)
    # number of positive labels in y_true
    support = sum(y_true)
    return {
        "precision": precision,
        "recall": recall,
        "f1-score": fscore,
        "support": support,
        "accuracy": accuracy,
    }

# In[13]:

binary_classification_report(y_test, y_pred)
2,401
2,437
8
chap04-9
chap04-9
4 Implementing Text Classification Using Perceptron and Logistic Regression In the previous chapters we have discussed the theory behind the perceptron and logistic regression, including mathematical explanations of how and why they are able to learn from examples. In this chapter we will transition from math to code. Specifically, we will discuss how to implement these models in the Python programming language. All the code that we will introduce throughout this book is available online as well: http://clulab.github.io/gentlenlp/. The reader who is not familiar with the Python programming language is encouraged to read first Appendix A, for a brief introduction to the language, and Appendix B, for a discussion on how computers encode and preprocess text. Once done, please return here. To get a better understanding of how these algorithms work under the hood, we will start by implementing them from scratch. However, as the book progresses, we will introduce some of the popular tools and libraries that make Python the language of choice for machine learning, e.g., PyTorch,1 and Hugging Face’s transformers.2 The code for all the examples in the book is provided in the form of Jupyter notebooks.3 Important fragments of these notebooks will be presented in the implementation chapters so that the reader has the whole picture just by reading the book. However, we strongly encourage you to download the notebooks and execute them yourself. We also encourage you to modify them to conduct your own experiments! 1 https://pytorch.org
2 https://huggingface.co 3 https://jupyter.org/ 55 56 Implementing Text Classification Using Perceptron and LR 4.1 Binary Classification We begin this chapter with binary classification. That is, we aim to train classifiers that assign one of two labels to a given text. As the example for this task, we will train a review classifier using the the Large Movie Review Dataset (Maas et al., 2011).4 We tackle this task by implementing first a binary perceptron classifier, followed by a binary logistic regression one. We will implement the latter both from scratch as well as using PyTorch, so the reader has a clearer understanding on how PyTorch works “under the hood.” 4.1.1 Large Movie Review Dataset This dataset contains movie reviews and their associated scores (between 1 and 10) as provided by IMDb.5 converted these scores to binary labels by assigning each review a positive or negative label if the review score was above 6 or below 5, respectively. Reviews with scores 5 and 6 were considered too neutral and thus excluded. We follow the same protocol in this chapter. The dataset is divided in two even partitions called train and test, each containing 25,000 reviews. The dataset also provides additional unlabeled reviews, but we will not use those here. Each partition contains two directories called pos and neg where the positive and negative examples are stored. Each review is stored in an independent text file, whose name is composed of an id unique to the partition and the score associated with the review, separated by an underscore. An example of a positive and a negative review is shown in Table 4.1. 4.1.2 Bag-of-words Model As discussed in Section 2.2, we will encode the text to classify as a bag of words. That is, we encode each review as a list of numbers, with each position in the list corresponding to a word in our vocabulary, and the value stored in that position corresponding to the number of times the word appears in the review. For example, say we want to encode the following two reviews: 4 https://ai.stanford.edu/~amaas/data/sentiment/ 5 https://www.imdb.com/ Maas et al. 4.1 Binary Classification 57 Table 4.1 Two examples of movie reviews from IMDb. The first is a positive review of the movie Puss in Boots (1988). The second is a negative review of the movie Valentine (2001). These reviews can be found at https://www.imdb.com/review/rw0606396/ and https://www.imdb.com/review/rw0721861/, respectively. Filename Score Binary Label train/pos/24_8.txt 8/10 Positive train/neg/141_3.txt 3/10 Negative Review Text Although this was obviously a low-budget production, the performances and the songs in this movie are worth seeing. One of Walken’s few musical roles to date. (he is a marvelous dancer and singer and he demonstrates his acrobatic skills as well - watch for the cartwheel!) Also starring Jason Connery. A great children’s story and very likable characters. This stalk and slash turkey manages to bring nothing new to an increasingly stale genre. A masked killer stalks young, pert girls and slaughters them in a variety of gruesome ways, none of which are particularly inventive. It’s not scary, it’s not clever, and it’s not funny. So what was the point of it? Review 1: Review 2: "I liked the movie. My friend liked it too. " "I hated it. Would not recommend. " First, we need to create a vocabulary that maps each word to an id that uniquely identifies it. 
Each of these numbers will be used as the index in a list, so they must start at zero and grow by one for each word in the vocabulary. For example, one possible vocabulary that encodes the previous reviews is: {'would': 0, 'hated': 1, 58 Implementing Text Classification Using Perceptron and LR 'my': 2, 'liked': 3, 'not': 4, 'it': 5, 'movie': 6, 'recommend': 7, 'the': 8, 'I': 9, 'too': 10, 'friend': 11} Using this mapping, we can encode the two reviews as follows: Review1: [0,0,1,2,0,1,1,0,1,1,1,1] Review2: [1,1,0,0,1,1,0,1,0,1,0,0] Note that the word liked (fourth position) in the first review has a value of two. This is because this word appears twice in that review. This is a small example with a vocabulary of only 12 terms. Of course, the same process needs to be implemented for our whole training dataset. For this purpose we will use scikit-learn’s CountVectorizer class.6 Using the CountVectorizer class simplifies things, allowing us to get started quickly with a bag-of-words approach. However, note that it makes several simplifying assumptions (e.g., text is lowercased, and punctuation and single character tokens are removed). Some of these may not be adequate to other tasks. First, we need to obtain the filenames for the reviews in the training set: Once we have acquired the filenames for the training reviews, we need
to read them using the CountVectorizer. In order for the CountVectorizer to open and read the files for us, we make use of the input='filename' constructor parameter (otherwise it would expect the string content directly). The CountVectorizer provides three methods that will be use-
ful for us: a method called fit() that is used to acquire the vocabulary,
a method transform() that converts the text into the bag-of-words representation, and a method fit_transform() that conveniently acquires the vocabulary and transforms the data in a single step. The resulting object is referred to as a document-term matrix, where each row corre- 6 https://scikitlearn.org/stable/modules/generated/sklearn.feature_ extraction.text.CountVectorizer.html 4.1 Binary Classification 59 sponds to a document, and each column corresponds to a term in the vocabulary. As the output above indicates, the resulting matrix has 25,000 rows (one for each review), and 74,849 columns (one for each term). Also you may note that this matrix is sparse, with 3,445,861 stored elements. A regular matrix of shape 25,000×74,849 would have 1,871,225,000 elements. However, most of the elements in a document-term matrix are zeros because only a few words from the vocabulary appear in each document. A sparse matrix takes advantage of this fact by storing only the non-zero cells in order to reduce the memory required to store it. Thus, sparse matrices are convenient, especially when dealing with lots of data. Nevertheless, to simplify the downstream code in this example, we will convert it into a dense matrix, i.e., a regular two-dimensional NumPy array. Finally, we also need the labels of the reviews. We assign a label of one to positive reviews, and a label of zero to negative ones. Note that the first half of the reviews are positive and the second half are negative. The label at the ith position of the y_train array corresponds to the review encoded in the ith row of the X_train matrix. 4.1.3 Perceptron Now that we have defined our task and the data processing pipeline, we will implement a perceptron classifier that classifies the movie reviews as positive or negative. The entire code discussed in this section is available in the chap4_perceptron notebook. Recall from Section 2.4 that the perceptron is composed of a weight vector w and a bias term b. These will be represented as a NumPy array w of the same length as our document vectors, and a variable b for the bias term. Both will be initialized with zeros. The parameters w and b are learned through the following algorithm, which implements Algorithm 2 from Chapter 2: There are a couple of details to point out. Line 3 of Algorithm 2 indicates that we need to repeat the training loop until convergence. Theoretically, convergence is defined as predicting all training examples correctly. This is an ambitious requirement, which is not always possible in practice, so in this code we also include a stop condition if we reach a maximum number of epochs. Another crucial difference between our implementation here and the theoretical Algorithm 2, is that we randomize the order in which the training examples are seen at the beginning of 60 Implementing Text Classification Using Perceptron and LR each epoch. This simple (but highly recommended!) change is necessary to avoid the introduction of spurious biases due to the arbitrary order of the examples in the original training partition.7 We accomplish this by storing the indices corresponding to the X_train matrix rows in a NumPy array, and shuffling these indices at the beginning of each epoch. We shuffle the indices instead of the examples so that we can preserve the mapping between examples and labels. The training loop aligns closely with Algorithm 2. 
We start by iterating over each example in our training data, storing the current example in the variable x,8 and its corresponding label in the variable y_true. Next, we compute the perceptron decision function shown in Algorithm 1. Note that NumPy (as well as PyTorch) uses Python’s @ operator to indicate vector or matrix multiplication, depending on its operand types. Here we use it to calculate the dot product of the example x and the weights w. To this we add the bias b to obtain the predicted score, whose sign is used to assign a positive or negative predicted label. If the prediction is correct, then no update is needed, and we can move on to the next training example. However, if the prediction is incorrect, then we need to adjust w and b, as described in Algorithm 2. Sidebar 4.1 The tqdm function This is our first exposure to the tqdm function. tqdm is a progress bar that “make your loops show a smart progress meter.”9 The name tqdm comes from the Arabic word taqaddum which can mean “progress.” Using tqdm is as simple as wrapping it around the collection to be traversed. After training, we evaluate the model’s performance on the heldout test partition. The test data is loaded similarly to the training partition, but with one notable difference; we use CountVectorizer’s transform() method instead of the fit_transform() method so that the vocabulary is not adjusted for the test data. We won’t show here the loading of the test partition since it is so similar to the code already shown, but it is available in the Jupyter notebook that accompanies this section. . 7   As an extreme example, consider a dataset where all the positive examples appear first in the training partition. This would cause the perceptron to artificially inflate the weights of the features that occur in these examples, a situation from which the learning algorithm may struggle to recover. 
 . 8  We use typewriter font when we discuss variables in the code, to distinguish code from the theoretical discussion in the other chapters. 
 9 https://github.com/tqdm/tqdm 4.1 Binary Classification 61 Using the model to assign labels to all the test data is easily done in one step – we simply multiply the entire test data document-term matrix by the previously learned weights and add the bias. Scores greater than zero indicate a positive review, and those less than zero are negative. At this point we can evaluate the classifier’s performance, which we will do using precision, recall, and F1 scores for binary classification (described in Section 2.3). For this purpose, we implement a function called binary_classification_report that computes these metrics and returns them as a dictionary: We call this function to compare the predicted labels to the true labels, and obtain the evaluation scores. Our F1 score here is 86.8%, which is much higher than the baseline that assigns labels randomly, which yields an F1 score of about 50%. This is a good result, especially considering the simplicity of the perceptron! In the next sections and chapters, we will discuss a battery of strategies to considerably improve this performance. 4.1.4 Binary Logistic Regression from Scratch Using the same task, dataset, and evaluation, we will now implement a logistic regression classifier, as described in Algorithm 5 from Chapter 3. To give the reader hands-on experience with the implementation of the gradient calculations for logistic regression, we start by implementing it from scratch using NumPy. All the code shown in this section is available in the chap4_logistic_regression_numpy notebook. In the perceptron implementation, we represented the weights and the bias as two different variables. Here, however, we will use a different approach that will allow us to unify them into a single vector variable. Specifically, we take advantage of the similarity between the derivative of the cost function with respect to the weights (Equation 3.14) and the derivative of the cost with respect to the bias (Equation 3.15). d Ci(w, b) = (σi − yi)xij (3.14 revisited) dwj d Ci(w, b) = σi − yi (3.15 revisited) db Note that the two derivative formulas are identical except that the former has a multiplication by xij, while the latter does not. However, 62 Implementing Text Classification Using Perceptron and LR since σi − yi = (σi − yi)1 we can multiply the derivative of the cost with respect to the bias by one without changing the semantics. This gives an opportunity for combining the computations, doing them both in a single pass. The idea is that we can treat the bias as a weight corresponding to a feature that always has a value of one. As can be seen above, we created a NumPy array of ones of the same length as the number of examples in our training set (i.e., the number of rows in the data matrix). Then we add this array as a new column to the data matrix, using NumPy’s column_stack function. Next, we need to initialize our model. This time we will use a single NumPy array w of the same length as the number of columns in the data matrix. The weight vector w is initialized randomly with values between 0 and 1: Before implementing the learning algorithm, we need an implementation of the logistic function. 
Recall that the logistic function is σ(x) = 1 (3.1 revisited) 1+e−x This function can be easily implemented in NumPy as follows: However, this naive implementation may produce the following warning during training: The term overflow indicates that the result of evaluating exp(-x) is a number so large that it can’t be represented by a float (specifically, we’re using float64 numbers). We will avoid this issue by not calling exp with values that will overflow. NumPy provides the function finfo that can be consulted to find the limits of floating point numbers: The log of the largest floating point number is the largest number for which exp() will not overflow, so we will use it as a threshold to filter out problematic values: We now have everything we need to implement Algorithm 4. The steps to follow for each example are: (1) use the model to make a prediction, (2) calculate the gradient of the loss function with respect to the model parameters, and (3) update the model parameters using the gradient. The size of the update is controlled by the learning rate. Once the model has been trained, we evaluate it on the test dataset using our binary_classification_report function from the previous section. Loading and preprocessing the test dataset follows the same 4.1 Binary Classification 63 steps as with the previous classifier. We omit the code for brevity. These are the results: The performance is comparable with that of the perceptron. The difference in F1 scores between the two classifiers (84.9% here vs. 86.8% for the perceptron) is not significant. Classifier parity is probably attributable to the fact that the signal distinguishing the two classes being easy to learn and the simpler perceptron training algorithm being sufficient in this case. Nevertheless, this task is useful in showing how to implement the logistic regression model from scratch, i.e., by implementing the gradient calculation and parameter updates manually. Next, we will implement the same model again using PyTorch, highlighting how this machine learning library simplifies the process. 4.1.5 Binary Logistic Regression Utilizing PyTorch While it is fairly straightforward to compute the derivatives for logistic regression and implement then directly in NumPy, this will not scale well to arbitrary neural architectures. Fortunately, there are libraries that automate the computation of the derivatives of the cost function (assuming it is differentiable!) for any neural network, and use the resulting gradients to perform gradient descent or other more sophisticated optimization procedures. To this end, we will use the PyTorch deep learning library10. The corresponding notebook for this section is chap4_logistic_regression_pytorch_bce. Our model for logistic regression corresponds to PyTorch’s Linear layer. When we instantiate this layer, we specify the size of the inputs (the size of our vocabulary) and the size of the output, i.e., the number of output neurons (which is one because we’re doing binary classification). The loss function we use is the binary cross-entropy loss (see Chapter 3), which is implemented as BCEWithLogitsLoss in PyTorch. In PyTorch, the gradients obtained from the loss function are applied to the model by an optimizer object, which implements and applies an optimization algorithm. Here we will use the vanilla stochastic gradient descent optimizer; we set its learning rate to 0.1. This is equivalent to the discussion in Section 3.2. 
Similarly to the manual implementation, the steps required to train the model for a given training example are: (1) ensure the gradients are set to zero, (2) apply the model to obtain a prediction, (3) calculate the loss, (4) compute the gradient of the loss by back-propagation, and (5) update the model parameters.

Recall that in our previous implementation everything was hardcoded: applying the model, computing the gradients, and optimizing the model parameters. Here, however, the implementation of the logistic regression is expressed at a higher level of abstraction: we describe the logical steps without specifying a particular implementation, and the implementation details are the responsibility of the chosen model, loss function, and optimizer. Thus, we could even choose a different model, loss function, and/or optimizer, and reuse the same training steps with little or no modification. This decoupling of the training logic from the implementation details is one of the main advantages of libraries such as PyTorch.

As shown in the code above, calling the model as a function, with the feature vectors as inputs, produces the predicted scores. Once again, a positive score corresponds to a positive label. When we evaluate this implementation on the test dataset, we obtain results that are in line with our previous models.

Writing the perceptron and the logistic regression from scratch is a good exercise, as it exposes us to the fundamentals of implementing machine learning algorithms. However, this becomes cumbersome for more complex neural architectures. For this reason, from this point on, we will use PyTorch for all our coding examples.

4.2 Multiclass Classification

So far in this chapter we have discussed implementing binary classifiers. Next, we will modify these binary classifiers to perform multiclass classification, following the discussion in Section 3.5.

4.2.1 AG News Dataset

Before explaining the actual training/testing code, we have to choose a new dataset that is suitable for multiclass classification. To this end, we will use the AG News Classification Dataset (Zhang et al., 2015), a subset of the larger AG corpus of news articles collected from thousands of different news sources.11 The classification dataset consists of four classes, and the data is equally balanced across all classes (30,000 articles per class for training, and 1,900 articles per class for testing). The goal of the task is to classify each article into one of the four classes: World, Sports, Business, or Sci/Tech.

11 http://groups.di.unipi.it/~gulli/AG_corpus_of_news_articles.html

4.2.2 Preparing the Dataset

The AG News Dataset is distributed as two CSV files (one for training and one for testing), each containing three columns: the class index, the title, and the description. The dataset also provides a text file that maps the class indexes to more descriptive class labels. Because of the tabular nature of the dataset, pandas, a Python library for tabular data analysis,12 is a natural choice for loading and transforming it. To this end, our Jupyter notebook (chap4_multiclass_logistic_regression) demonstrates the sequence of steps required to handle the data, as well as model training and evaluation. First, we show how to load the CSV, add column names, and inspect the result:

[dataframe output: 120,000 rows × 3 columns (class index, title, description); the first rows shown are business articles (class index 3) such as "Wall St. Bears Claw Back Into the Black (Reuters)", and the last rows are world and sports articles (class indices 1 and 2) such as "Nets get Carter from Raptors".]

12 https://pandas.pydata.org

Since the class labels themselves are in a separate file, we manually add them to the pandas data structure (called a dataframe in pandas terminology) to increase the interpretability of the data. We use the class index column as a starting point, and use its map method to create a new column with the corresponding labels (technically a new Series object). This column is added to the dataframe with the insert method, which allows us to place the column at a specific position. Note that the label indices are one-based, so we subtract one to align them with their labels, as sketched below.
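The snippet below shows one way to do this; the file path and the dataframe variable name are our own assumptions, while the column names and the one-based class indices follow the description above.

import pandas as pd

# hypothetical path to the training CSV distributed with the AG News dataset
train_df = pd.read_csv(
    'data/ag_news_csv/train.csv',
    header=None,
    names=['class index', 'title', 'description'],
)

# map one-based class indices to the descriptive labels provided with the dataset
labels = ['World', 'Sports', 'Business', 'Sci/Tech']
class_column = train_df['class index'].map(lambda i: labels[i - 1])

# insert the new column right after the class index column (position 1)
train_df.insert(1, 'class', class_column)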
[dataframe output: 120,000 rows × 4 columns; the new class column shows the descriptive label for each class index, e.g., class index 3 corresponds to Business, 1 to World, and 2 to Sports.]

Next we will preprocess the text. First we lowercase the title and description, and then we concatenate them into a single string. Then we remove some spurious backslashes from the text. Once this is done, the preprocessed text is added to the dataframe as a new column. Note that pandas allows these steps to be applied to all rows simultaneously.
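A minimal sketch of these preprocessing steps, reusing the hypothetical train_df dataframe from the previous sketch (replacing each backslash with a space is our own guess at what "removing" means here):

# lowercase the title and description and concatenate them into a single string
text = train_df['title'].str.lower() + ' ' + train_df['description'].str.lower()

# remove spurious backslashes left over in the original text
text = text.str.replace('\\', ' ', regex=False)

# store the preprocessed text as a new column
train_df['text'] = text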
[dataframe output: 120,000 rows × 5 columns; the new text column contains the lowercased concatenation of each title and description, e.g., "wall st. bears claw back into the black (reute...".]

At this point, the text is ready to be tokenized. For this purpose we will use NLTK's word_tokenize function. This function can be applied to the whole column at once using the pandas map function, which returns a new column which we add to the dataframe. However, here we actually use the progress_map function, which provides a visual progress bar. This visual feedback is especially helpful for tasks that take more time to complete.
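A sketch of the tokenization step, again using the hypothetical train_df from the previous sketches; it assumes NLTK's tokenizer models have been downloaded (e.g., via nltk.download('punkt')).

from nltk.tokenize import word_tokenize
from tqdm.notebook import tqdm

# register progress_map / progress_apply on pandas objects
tqdm.pandas()

# tokenize every preprocessed text, showing a progress bar
train_df['tokens'] = train_df['text'].progress_map(word_tokenize)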
[dataframe output: 120,000 rows × 6 columns; the new tokens column contains the token list for each article, e.g., [wall, st., bears, claw, back, into, the, blac...].]

From the tokens we just created, we then create a vocabulary for our corpus. Here, we only keep the words that occur at least 10 times, decreasing the memory needed and reducing the likelihood that our vocabulary contains noisy tokens. Note that each row in the tokens column contains a list of tokens. In order to create the vocabulary, we will need to convert the Series of lists of tokens into a Series of tokens using the explode() pandas method. Then we will use the value_counts() method to create a Series object in which the index holds the tokens and the values are the number of times they appear in the corpus. The next step is removing the tokens with a count lower than our chosen threshold. Finally, we create a list with the remaining tokens, as well as a dictionary that maps tokens to token ids (i.e., the index of the token in the list). We include in the vocabulary a special token [UNK] that will be used as a placeholder for tokens that do not appear in our vocabulary after the frequency pruning.

Using this vocabulary, we construct a feature vector for each news article in the corpus. This feature vector will be encoded as a dictionary, with keys corresponding to token ids, and values corresponding to the number of times the token appears in the article. As above, the feature vectors will be stored as a new column in the dataframe.
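A sketch following that description (the frequency threshold of 10 and the [UNK] token come from the text above; placing [UNK] at the end of the vocabulary is our own arbitrary choice):

# count how many times each token appears in the whole training corpus
counts = train_df['tokens'].explode().value_counts()

# keep only the tokens that occur at least 10 times, plus a placeholder for unknown tokens
vocabulary = counts[counts >= 10].index.tolist() + ['[UNK]']
token_to_id = {token: i for i, token in enumerate(vocabulary)}
unk_id = token_to_id['[UNK]']

def make_features(tokens):
    # bag-of-words feature vector encoded as a dictionary: token id -> count
    features = {}
    for token in tokens:
        token_id = token_to_id.get(token, unk_id)
        features[token_id] = features.get(token_id, 0) + 1
    return features

train_df['features'] = train_df['tokens'].map(make_features)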
[dataframe output: 120,000 rows × 7 columns; the new features column contains dictionaries that map token ids to counts, e.g., {427: 2, 563: 1, 1607: 1, ...}.]

The final preprocessing step is converting the features and the class indices into PyTorch tensors. Recall that we need to subtract one from the class indices to make them zero-based. At this point, the data is fully processed and we are ready to begin training.

4.2.3 Multiclass Logistic Regression Using PyTorch

The model itself is a single linear layer whose input size corresponds to the size of our vocabulary, and whose output size corresponds to the number of classes in our corpus. PyTorch's Linear layer includes a bias by default, so there is no need to handle it manually the way we did for our perceptron example. The code for training this model (which implements Algorithm 6) is almost identical to that of the binary logistic regression. However, since we have to calculate a score for each of the four different classes, we need to replace the previous BCEWithLogitsLoss with CrossEntropyLoss, which applies a softmax over the scores to obtain probabilities for each class. For each example, the model predicts four scores – one for each label. The label with the highest score is selected using the argmax function. We evaluate the predictions of our model for each class using Scikit-learn's classification_report, which handles the results of multiclass classification.

4.3 Summary

In this chapter, we used movie review and news article classification to illustrate the implementation of the previously described algorithms for the binary perceptron, binary logistic regression, and multiclass logistic regression. For the binary logistic regression, we made a direct comparison between the lower-level NumPy implementation and a higher-level version that made use of PyTorch. We hope that through this series of exercises the reader has noted several key takeaways.

First, data preparation is important and should be done thoughtfully. Certain tasks (e.g., text normalization or sentence splitting) are going to be frequently needed if you continue with NLP, so using or creating generic functions can be very helpful. However, what works for one dataset and one language may not be suitable for another scenario. For example, in our case, we selected different tokenizers for each of our tasks to account for the different registers of English, as well as removing diacritics during normalization.

Second, when it comes to implementing machine learning algorithms, it is often easier to use a higher-level library such as PyTorch instead of NumPy.
For example, with the former the gradients are calculated by the library, whereas in NumPy we have to code them ourselves. This becomes cumbersome quickly; even the derivative of the softmax is non-trivial.

Third, PyTorch imposes a training structure that remains largely the same regardless of which model is being trained. That is, at a high level, the same steps are always required: clearing the current gradients, predicting output scores for the provided inputs, calculating the loss, back-propagating the gradients, and updating the parameters with the optimizer. These features make PyTorch a very powerful and convenient deep learning library; we will continue to use it throughout the remainder of the book to implement more complex neural architectures.
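To make the training structure from Section 4.2.3 concrete, the following is a minimal sketch of the multiclass setup, not a verbatim reproduction of the accompanying notebook. It assumes that the feature dictionaries were already converted into dense float32 tensors X_train and X_test (one row per article, one column per vocabulary entry), and that y_train and y_test hold the zero-based class indices as long tensors; the SGD optimizer, learning rate, and epoch count are our own choices that mirror the binary case.

import numpy as np
import torch
from torch import nn, optim
from sklearn.metrics import classification_report

n_classes = 4
model = nn.Linear(X_train.shape[1], n_classes)  # one output score per class
loss_func = nn.CrossEntropyLoss()               # applies a softmax over the class scores
optimizer = optim.SGD(model.parameters(), lr=0.1)

indices = np.arange(X_train.shape[0])
for epoch in range(5):  # the number of epochs here is arbitrary
    np.random.shuffle(indices)
    for i in indices:
        model.zero_grad()
        scores = model(X_train[i])  # four scores, one per class
        loss = loss_func(scores.unsqueeze(0), y_train[i].unsqueeze(0))
        loss.backward()
        optimizer.step()

# predict by picking the class with the highest score
with torch.no_grad():
    y_pred = model(X_test).argmax(dim=1)

print(classification_report(y_test, y_pred,
                            target_names=['World', 'Sports', 'Business', 'Sci/Tech']))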
#!/usr/bin/env python
# coding: utf-8

# # Binary Text Classification with
# # Logistic Regression Implemented with PyTorch and BCE Loss

# In[1]:
import random
import numpy as np
import torch
from tqdm.notebook import tqdm

# set this variable to a number to be used as the random seed
# or to None if you don't want to set a random seed
seed = 1234

if seed is not None:
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)

# The dataset is divided in two directories called `train` and `test`.
# These directories contain the training and testing splits of the dataset.

# In[2]:
get_ipython().system('ls -lh data/aclImdb/')

# Both the `train` and `test` directories contain two directories called `pos` and `neg`
# that contain text files with the positive and negative reviews, respectively.

# In[3]:
get_ipython().system('ls -lh data/aclImdb/train/')

# We will now read the filenames of the positive and negative examples.

# In[4]:
from glob import glob

pos_files = glob('data/aclImdb/train/pos/*.txt')
neg_files = glob('data/aclImdb/train/neg/*.txt')
print('number of positive reviews:', len(pos_files))
print('number of negative reviews:', len(neg_files))

# Now, we will use a `CountVectorizer` to read the text files, tokenize them, acquire
# a vocabulary from the training data, and encode it in a document-term matrix in which
# each row represents a review, and each column represents a term in the vocabulary.
# Each element (i, j) in the matrix represents the number of times term j appears in example i.
# https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html

# In[5]:
from sklearn.feature_extraction.text import CountVectorizer

# initialize CountVectorizer indicating that we will give it a list of filenames that have to be read
cv = CountVectorizer(input='filename')
# learn vocabulary and return sparse document-term matrix
doc_term_matrix = cv.fit_transform(pos_files + neg_files)
doc_term_matrix

# Note in the message printed above that the matrix is of shape (25000, 74894).
# In other words, it has 1,871,225,000 elements. However, only 3,445,861 elements were stored.
# This is because most of the elements in the matrix are zeros: the reviews are short and most
# words in the English language don't appear in each review.
# A matrix that only stores non-zero values is called *sparse*.
#
# Now we will convert it to a dense numpy array:

# In[6]:
X_train = doc_term_matrix.toarray()
X_train.shape

# We will also create a numpy array with the binary labels for the reviews.
# One indicates a positive review and zero a negative review.
# The label `y_train[i]` corresponds to the review encoded in row `i` of the `X_train` matrix.

# In[7]:
# training labels
y_pos = np.ones(len(pos_files))
y_neg = np.zeros(len(neg_files))
y_train = np.concatenate([y_pos, y_neg])
y_train

# Next we store the number of training examples and the number of features
# (i.e., the size of the vocabulary acquired by the `CountVectorizer`).

# In[8]:
n_examples, n_features = X_train.shape

# Now we will use the logistic regression learning algorithm to learn the model parameters
# from our training data.

# In[9]:
import torch
from torch import nn
from torch import optim

lr = 1e-1
n_epochs = 10

model = nn.Linear(n_features, 1)
loss_func = nn.BCEWithLogitsLoss()
optimizer = optim.SGD(model.parameters(), lr=lr)

X_train = torch.tensor(X_train, dtype=torch.float32)
y_train = torch.tensor(y_train, dtype=torch.float32)

indices = np.arange(n_examples)
for epoch in range(n_epochs):
    n_errors = 0
    # randomize training examples
    np.random.shuffle(indices)
    # for each training example
    for i in tqdm(indices, desc=f'epoch {epoch+1}'):
        x = X_train[i]
        y_true = y_train[i]
        # make predictions
        y_pred = model(x)
        # calculate loss
        loss = loss_func(y_pred[0], y_true)
        # calculate gradients through back-propagation
        loss.backward()
        # optimize model parameters
        optimizer.step()
        # ensure gradients are set to zero
        model.zero_grad()

# The next step is evaluating the model on the test dataset.
# Note that this time we use the `transform()` method of the `CountVectorizer`,
# instead of the `fit_transform()` method that we used above.
# This is because we want to use the learned vocabulary in the test set,
# instead of learning a new one.

# In[10]:
pos_files = glob('data/aclImdb/test/pos/*.txt')
neg_files = glob('data/aclImdb/test/neg/*.txt')
doc_term_matrix = cv.transform(pos_files + neg_files)
X_test = doc_term_matrix.toarray()
X_test = torch.tensor(X_test, dtype=torch.float32)
y_pos = np.ones(len(pos_files))
y_neg = np.zeros(len(neg_files))
y_test = np.concatenate([y_pos, y_neg])

# Using the model is easy: we call it as a function on the test feature vectors and keep
# the sign of the returned scores (a positive score corresponds to a positive label).

# In[11]:
y_pred = model(X_test) > 0

# Now we print an evaluation of the prediction results using our own
# `binary_classification_report()` function.

# In[12]:
def binary_classification_report(y_true, y_pred):
    # count true positives, false positives, true negatives, and false negatives
    tp = fp = tn = fn = 0
    for gold, pred in zip(y_true, y_pred):
        if pred == True:
            if gold == True:
                tp += 1
            else:
                fp += 1
        else:
            if gold == False:
                tn += 1
            else:
                fn += 1
    # calculate precision and recall
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    # calculate f1 score
    fscore = 2 * precision * recall / (precision + recall)
    # calculate accuracy
    accuracy = (tp + tn) / len(y_true)
    # number of positive labels in y_true
    support = sum(y_true)
    return {
        "precision": precision,
        "recall": recall,
        "f1-score": fscore,
        "support": support,
        "accuracy": accuracy,
    }

# In[13]:
binary_classification_report(y_test, y_pred)
Each of these numbers will be used as the index in a list, so they must start at zero and grow by one for each word in the vocabulary. For example, one possible vocabulary that encodes the previous reviews is:

{'would': 0, 'hated': 1, 'my': 2, 'liked': 3, 'not': 4, 'it': 5, 'movie': 6, 'recommend': 7, 'the': 8, 'I': 9, 'too': 10, 'friend': 11}

Using this mapping, we can encode the two reviews as follows:

Review 1: [0, 0, 1, 2, 0, 1, 1, 0, 1, 1, 1, 1]
Review 2: [1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0]

Note that the word liked (fourth position) in the first review has a value of two. This is because this word appears twice in that review. This is a small example with a vocabulary of only 12 terms. Of course, the same process needs to be implemented for our whole training dataset. For this purpose we will use scikit-learn's CountVectorizer class.6 Using the CountVectorizer class simplifies things, allowing us to get started quickly with a bag-of-words approach. However, note that it makes several simplifying assumptions (e.g., text is lowercased, and punctuation and single-character tokens are removed). Some of these may not be adequate for other tasks.

6 https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html

First, we need to obtain the filenames for the reviews in the training set. Once we have acquired the filenames for the training reviews, we need to read them using the CountVectorizer. In order for the CountVectorizer to open and read the files for us, we make use of the input='filename' constructor parameter (otherwise it would expect the string content directly). The CountVectorizer provides three methods that will be useful for us: a method called fit() that is used to acquire the vocabulary, a method transform() that converts the text into the bag-of-words representation, and a method fit_transform() that conveniently acquires the vocabulary and transforms the data in a single step. The resulting object is referred to as a document-term matrix, where each row corresponds to a document, and each column corresponds to a term in the vocabulary.

The resulting matrix has 25,000 rows (one for each review), and 74,849 columns (one for each term). You may also note that this matrix is sparse, with 3,445,861 stored elements. A regular matrix of shape 25,000 × 74,849 would have 1,871,225,000 elements. However, most of the elements in a document-term matrix are zeros because only a few words from the vocabulary appear in each document. A sparse matrix takes advantage of this fact by storing only the non-zero cells in order to reduce the memory required to store it. Thus, sparse matrices are convenient, especially when dealing with lots of data. Nevertheless, to simplify the downstream code in this example, we will convert it into a dense matrix, i.e., a regular two-dimensional NumPy array.

Finally, we also need the labels of the reviews. We assign a label of one to positive reviews, and a label of zero to negative ones. Note that the first half of the reviews are positive and the second half are negative. The label at the ith position of the y_train array corresponds to the review encoded in the ith row of the X_train matrix.

4.1.3 Perceptron

Now that we have defined our task and the data processing pipeline, we will implement a perceptron classifier that classifies the movie reviews as positive or negative. The entire code discussed in this section is available in the chap4_perceptron notebook. Recall from Section 2.4 that the perceptron is composed of a weight vector w and a bias term b. These will be represented as a NumPy array w of the same length as our document vectors, and a variable b for the bias term. Both will be initialized with zeros. The parameters w and b are learned through the training loop sketched below, which implements Algorithm 2 from Chapter 2.

There are a couple of details to point out. Line 3 of Algorithm 2 indicates that we need to repeat the training loop until convergence. Theoretically, convergence is defined as predicting all training examples correctly. This is an ambitious requirement, which is not always possible in practice, so in this code we also include a stop condition if we reach a maximum number of epochs. Another crucial difference between our implementation here and the theoretical Algorithm 2 is that we randomize the order in which the training examples are seen at the beginning of each epoch. This simple (but highly recommended!) change is necessary to avoid the introduction of spurious biases due to the arbitrary order of the examples in the original training partition.7 We accomplish this by storing the indices corresponding to the X_train matrix rows in a NumPy array, and shuffling these indices at the beginning of each epoch. We shuffle the indices instead of the examples so that we can preserve the mapping between examples and labels. The training loop aligns closely with Algorithm 2.
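The following is a minimal sketch of this pipeline and loop, not a verbatim reproduction of the chap4_perceptron notebook: the data loading mirrors the notebook code reproduced earlier in this document, while the update rule is the standard perceptron rule from Algorithm 2. The variable names (x, y_true, w, b) match the discussion that follows.

import numpy as np
from glob import glob
from sklearn.feature_extraction.text import CountVectorizer
from tqdm.notebook import tqdm

# read the training reviews and build the (dense) document-term matrix
pos_files = glob('data/aclImdb/train/pos/*.txt')
neg_files = glob('data/aclImdb/train/neg/*.txt')
cv = CountVectorizer(input='filename')
X_train = cv.fit_transform(pos_files + neg_files).toarray()
y_train = np.concatenate([np.ones(len(pos_files)), np.zeros(len(neg_files))])

# perceptron parameters, initialized to zeros
n_examples, n_features = X_train.shape
w = np.zeros(n_features)
b = 0.0

n_epochs = 10  # stop condition in case the perceptron never converges
indices = np.arange(n_examples)
for epoch in range(n_epochs):
    n_errors = 0
    # shuffle the indices (not the examples) to preserve the example-label mapping
    np.random.shuffle(indices)
    for i in tqdm(indices, desc=f'epoch {epoch+1}'):
        x = X_train[i]
        y_true = y_train[i]
        # perceptron decision function: dot product of x and w, plus the bias
        score = x @ w + b
        y_pred = 1.0 if score > 0 else 0.0
        if y_pred != y_true:
            # mistake: adjust w and b following Algorithm 2
            n_errors += 1
            if y_true == 1.0:
                w, b = w + x, b + 1
            else:
                w, b = w - x, b - 1
    if n_errors == 0:
        # convergence: all training examples were predicted correctly
        break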
We start by iterating over each example in our training data, storing the current example in the variable x,8 and its corresponding label in the variable y_true. Next, we compute the perceptron decision function shown in Algorithm 1. Note that NumPy (as well as PyTorch) uses Python's @ operator to indicate vector or matrix multiplication, depending on its operand types. Here we use it to calculate the dot product of the example x and the weights w. To this we add the bias b to obtain the predicted score, whose sign is used to assign a positive or negative predicted label. If the prediction is correct, then no update is needed, and we can move on to the next training example. However, if the prediction is incorrect, then we need to adjust w and b, as described in Algorithm 2.

Sidebar 4.1 The tqdm function
This is our first exposure to the tqdm function. tqdm is a progress bar that "make[s] your loops show a smart progress meter."9 The name tqdm comes from the Arabic word taqaddum, which can mean "progress." Using tqdm is as simple as wrapping it around the collection to be traversed.

After training, we evaluate the model's performance on the held-out test partition. The test data is loaded similarly to the training partition, but with one notable difference: we use CountVectorizer's transform() method instead of the fit_transform() method, so that the vocabulary is not adjusted for the test data. We won't show here the loading of the test partition since it is so similar to the code already shown, but it is available in the Jupyter notebook that accompanies this section.

7 As an extreme example, consider a dataset where all the positive examples appear first in the training partition. This would cause the perceptron to artificially inflate the weights of the features that occur in these examples, a situation from which the learning algorithm may struggle to recover.
8 We use typewriter font when we discuss variables in the code, to distinguish code from the theoretical discussion in the other chapters.
9 https://github.com/tqdm/tqdm
For example, with the former, the gradients are calculated by the library, whereas in NumPy we have to code them ourselves. This becomes cumbersome quickly. For example, even the derivative of the softmax is non-trivial. Third, PyTorch imposes a training structure that remains largely the same, regardless of what models are being trained. That is, at a high level, the same steps are always required: clearing the current gradients, predicting output scores for the provided inputs, calculating the loss, and optimizing. These features make PyTorch a very powerful and convenient deep learning library; we will continue to use it throughout the remainder of the book to implement more complex neural architectures.
15,479
15,530
#!/usr/bin/env python
# coding: utf-8

# # Binary Text Classification with
# # Logistic Regression Implemented from Scratch

# In[1]:

import random
import numpy as np
from tqdm.notebook import tqdm

# set this variable to a number to be used as the random seed
# or to None if you don't want to set a random seed
seed = 1234

if seed is not None:
    random.seed(seed)
    np.random.seed(seed)

# The dataset is divided in two directories called `train` and `test`.
# These directories contain the training and testing splits of the dataset.

# In[2]:

get_ipython().system('ls -lh data/aclImdb/')

# Both the `train` and `test` directories contain two directories called `pos` and `neg` that contain text files with the positive and negative reviews, respectively.

# In[3]:

get_ipython().system('ls -lh data/aclImdb/train/')

# We will now read the filenames of the positive and negative examples.

# In[4]:

from glob import glob

pos_files = glob('data/aclImdb/train/pos/*.txt')
neg_files = glob('data/aclImdb/train/neg/*.txt')

print('number of positive reviews:', len(pos_files))
print('number of negative reviews:', len(neg_files))

# Now, we will use a [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html) to read the text files, tokenize them, acquire a vocabulary from the training data, and encode it in a document-term matrix in which each row represents a review, and each column represents a term in the vocabulary. Each element $(i,j)$ in the matrix represents the number of times term $j$ appears in example $i$.

# In[5]:

from sklearn.feature_extraction.text import CountVectorizer

# initialize CountVectorizer indicating that we will give it a list of filenames that have to be read
cv = CountVectorizer(input='filename')

# learn vocabulary and return sparse document-term matrix
doc_term_matrix = cv.fit_transform(pos_files + neg_files)
doc_term_matrix

# Note in the message printed above that the matrix is of shape (25000, 74849).
# In other words, it has 1,871,225,000 elements.
# However, only 3,445,861 elements were stored.
# This is because most of the elements in the matrix are zeros.
# The reason is that the reviews are short and most words in the English language don't appear in each review.
# A matrix that only stores non-zero values is called *sparse*.
#
# Now we will convert it to a dense numpy array:

# In[6]:

X_train = doc_term_matrix.toarray()
X_train.shape

# In[7]:

# Append 1s to the xs; this will allow us to multiply by the weights and
# the bias in a single pass.
# Make an array with a one for each row/data point
ones = np.ones(X_train.shape[0])
# Concatenate these ones to existing feature vectors
X_train = np.column_stack((X_train, ones))
X_train.shape

# We will also create a numpy array with the binary labels for the reviews.
# One indicates a positive review and zero a negative review.
# The label `y_train[i]` corresponds to the review encoded in row `i` of the `X_train` matrix.

# In[8]:

# training labels
y_pos = np.ones(len(pos_files))
y_neg = np.zeros(len(neg_files))
y_train = np.concatenate([y_pos, y_neg])
y_train

# Now we will initialize our model, in the form of an array of weights `w` with one entry per column of the augmented `X_train` matrix (the vocabulary terms acquired by [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html), plus the extra ones column that plays the role of the bias).
# The weights are initialized with random values between 0 and 1.

# In[9]:

# initialize model: one random weight per feature; the last weight acts as the bias
n_examples, n_features = X_train.shape
w = np.random.random(n_features)

# Now we will use the logistic regression learning algorithm to learn the values of `w` from our training data.

# In[10]:

# from scipy.special import expit as sigmoid
def sigmoid(z):
    # avoid overflow in np.exp for very negative z
    if -z > np.log(np.finfo(float).max):
        return 0.0
    return 1 / (1 + np.exp(-z))

# In[11]:

lr = 1e-1
n_epochs = 10

indices = np.arange(n_examples)
for epoch in range(n_epochs):
    # randomize the order in which training examples are seen in this epoch
    np.random.shuffle(indices)
    # traverse the training data
    for i in tqdm(indices, desc=f'epoch {epoch+1}'):
        x = X_train[i]
        y = y_train[i]
        # calculate the derivative of the cost function for this example
        deriv_cost = (sigmoid(x @ w) - y) * x
        # update the weights
        w = w - lr * deriv_cost

# The next step is evaluating the model on the test dataset.
# Note that this time we use the [`transform()`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html#sklearn.feature_extraction.text.CountVectorizer.transform) method of the [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html), instead of the [`fit_transform()`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html#sklearn.feature_extraction.text.CountVectorizer.fit_transform) method that we used above. This is because we want to use the learned vocabulary in the test set, instead of learning a new one.

# In[12]:

pos_files = glob('data/aclImdb/test/pos/*.txt')
neg_files = glob('data/aclImdb/test/neg/*.txt')

doc_term_matrix = cv.transform(pos_files + neg_files)
X_test = doc_term_matrix.toarray()
X_test = np.column_stack((X_test, np.ones(X_test.shape[0])))

y_pos = np.ones(len(pos_files))
y_neg = np.zeros(len(neg_files))
y_test = np.concatenate([y_pos, y_neg])

# Using the model is easy: multiply the document-term matrix (with its extra ones column) by the learned weights; the ones column takes care of the bias.
# We use Python's `@` operator to perform the matrix-vector multiplication.

# In[13]:

y_pred = X_test @ w > 0

# Now we print an evaluation of the prediction results using a `binary_classification_report()` function modeled after scikit-learn's [`classification_report()`](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.classification_report.html).

# In[14]:

def binary_classification_report(y_true, y_pred):
    # count true positives, false positives, true negatives, and false negatives
    tp = fp = tn = fn = 0
    for gold, pred in zip(y_true, y_pred):
        if pred == True:
            if gold == True:
                tp += 1
            else:
                fp += 1
        else:
            if gold == False:
                tn += 1
            else:
                fn += 1
    # calculate precision and recall
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    # calculate f1 score
    fscore = 2 * precision * recall / (precision + recall)
    # calculate accuracy
    accuracy = (tp + tn) / len(y_true)
    # number of positive labels in y_true
    support = sum(y_true)
    return {
        "precision": precision,
        "recall": recall,
        "f1-score": fscore,
        "support": support,
        "accuracy": accuracy,
    }

# In[15]:

binary_classification_report(y_test, y_pred)
Each of these numbers will be used as the index in a list, so they must start at zero and grow by one for each word in the vocabulary. For example, one possible vocabulary that encodes the previous reviews is:

{'would': 0, 'hated': 1, 'my': 2, 'liked': 3, 'not': 4, 'it': 5, 'movie': 6, 'recommend': 7, 'the': 8, 'I': 9, 'too': 10, 'friend': 11}

Using this mapping, we can encode the two reviews as follows:

Review 1: [0, 0, 1, 2, 0, 1, 1, 0, 1, 1, 1, 1]
Review 2: [1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0]

Note that the word liked (fourth position) has a value of two in the first review. This is because this word appears twice in that review. This is a small example with a vocabulary of only 12 terms. Of course, the same process needs to be implemented for our whole training dataset. For this purpose we will use scikit-learn's CountVectorizer class.6 Using the CountVectorizer class simplifies things, allowing us to get started quickly with a bag-of-words approach. However, note that it makes several simplifying assumptions (e.g., text is lowercased, and punctuation and single-character tokens are removed). Some of these may not be adequate for other tasks.

First, we need to obtain the filenames for the reviews in the training set. Once we have acquired the filenames for the training reviews, we need to read them using the CountVectorizer. In order for the CountVectorizer to open and read the files for us, we make use of the input='filename' constructor parameter (otherwise it would expect the string content directly). The CountVectorizer provides three methods that will be useful for us: a method called fit() that is used to acquire the vocabulary, a method transform() that converts the text into the bag-of-words representation, and a method fit_transform() that conveniently acquires the vocabulary and transforms the data in a single step. The resulting object is referred to as a document-term matrix, where each row corresponds to a document, and each column corresponds to a term in the vocabulary.

6 https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html

The resulting matrix has 25,000 rows (one for each review) and 74,849 columns (one for each term). You may also note that this matrix is sparse, with 3,445,861 stored elements. A regular matrix of shape 25,000 × 74,849 would have 1,871,225,000 elements. However, most of the elements in a document-term matrix are zeros because only a few words from the vocabulary appear in each document. A sparse matrix takes advantage of this fact by storing only the non-zero cells in order to reduce the memory required to store it. Thus, sparse matrices are convenient, especially when dealing with lots of data. Nevertheless, to simplify the downstream code in this example, we will convert it into a dense matrix, i.e., a regular two-dimensional NumPy array.

Finally, we also need the labels of the reviews. We assign a label of one to positive reviews, and a label of zero to negative ones. Note that the first half of the reviews are positive and the second half are negative. The label at the ith position of the y_train array corresponds to the review encoded in the ith row of the X_train matrix.

4.1.3 Perceptron

Now that we have defined our task and the data processing pipeline, we will implement a perceptron classifier that classifies the movie reviews as positive or negative. The entire code discussed in this section is available in the chap4_perceptron notebook. Recall from Section 2.4 that the perceptron is composed of a weight vector w and a bias term b. These will be represented as a NumPy array w of the same length as our document vectors, and a variable b for the bias term. Both will be initialized with zeros.

The parameters w and b are learned through the training loop shown below, which implements Algorithm 2 from Chapter 2. There are a couple of details to point out. Line 3 of Algorithm 2 indicates that we need to repeat the training loop until convergence. Theoretically, convergence is defined as predicting all training examples correctly. This is an ambitious requirement, which is not always possible in practice, so in this code we also include a stop condition if we reach a maximum number of epochs. Another crucial difference between our implementation here and the theoretical Algorithm 2 is that we randomize the order in which the training examples are seen at the beginning of each epoch. This simple (but highly recommended!) change is necessary to avoid the introduction of spurious biases due to the arbitrary order of the examples in the original training partition.7 We accomplish this by storing the indices corresponding to the X_train matrix rows in a NumPy array, and shuffling these indices at the beginning of each epoch. We shuffle the indices instead of the examples so that we can preserve the mapping between examples and labels. The training loop aligns closely with Algorithm 2.
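The loop below is condensed from the chap4_perceptron notebook; it assumes that X_train, y_train, w, b, and n_examples have already been set up as described above:

n_epochs = 10
indices = np.arange(n_examples)
for epoch in range(n_epochs):
    n_errors = 0
    # randomize the order in which training examples are seen in this epoch
    np.random.shuffle(indices)
    for i in tqdm(indices, desc=f'epoch {epoch+1}'):
        x = X_train[i]
        y_true = y_train[i]
        # the perceptron decision based on the current model
        score = x @ w + b
        y_pred = 1 if score > 0 else 0
        # update the model only if the prediction was incorrect
        if y_true == 1 and y_pred == 0:
            w = w + x
            b = b + 1
            n_errors += 1
        elif y_true == 0 and y_pred == 1:
            w = w - x
            b = b - 1
            n_errors += 1
    # stop early if this epoch made no mistakes (convergence)
    if n_errors == 0:
        break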
We start by iterating over each example in our training data, storing the current example in the variable x,8 and its corresponding label in the variable y_true. Next, we compute the perceptron decision function shown in Algorithm 1. Note that NumPy (as well as PyTorch) uses Python's @ operator to indicate vector or matrix multiplication, depending on its operand types. Here we use it to calculate the dot product of the example x and the weights w. To this we add the bias b to obtain the predicted score, whose sign is used to assign a positive or negative predicted label. If the prediction is correct, then no update is needed, and we can move on to the next training example. However, if the prediction is incorrect, then we need to adjust w and b, as described in Algorithm 2.

Sidebar 4.1 The tqdm function
This is our first exposure to the tqdm function. tqdm is a progress bar that "make[s] your loops show a smart progress meter."9 The name tqdm comes from the Arabic word taqaddum, which can mean "progress." Using tqdm is as simple as wrapping it around the collection to be traversed.

After training, we evaluate the model's performance on the held-out test partition. The test data is loaded similarly to the training partition, but with one notable difference: we use CountVectorizer's transform() method instead of the fit_transform() method, so that the vocabulary is not adjusted for the test data. We won't show the loading of the test partition here since it is so similar to the code already shown, but it is available in the Jupyter notebook that accompanies this section.

7 As an extreme example, consider a dataset where all the positive examples appear first in the training partition. This would cause the perceptron to artificially inflate the weights of the features that occur in these examples, a situation from which the learning algorithm may struggle to recover.
8 We use typewriter font when we discuss variables in the code, to distinguish code from the theoretical discussion in the other chapters.
9 https://github.com/tqdm/tqdm

Using the model to assign labels to all the test data is easily done in one step: we simply multiply the entire test data document-term matrix by the previously learned weights and add the bias. Scores greater than zero indicate a positive review, and those less than zero are negative. At this point we can evaluate the classifier's performance, which we will do using precision, recall, and F1 scores for binary classification (described in Section 2.3). For this purpose, we implement a function called binary_classification_report that computes these metrics and returns them as a dictionary. We call this function to compare the predicted labels to the true labels, and obtain the evaluation scores. Our F1 score here is 86.8%, which is much higher than a baseline that assigns labels randomly, which yields an F1 score of about 50%. This is a good result, especially considering the simplicity of the perceptron! In the next sections and chapters, we will discuss a battery of strategies to considerably improve this performance.

4.1.4 Binary Logistic Regression from Scratch

Using the same task, dataset, and evaluation, we will now implement a logistic regression classifier, as described in Algorithm 5 from Chapter 3. To give the reader hands-on experience with the implementation of the gradient calculations for logistic regression, we start by implementing it from scratch using NumPy. All the code shown in this section is available in the chap4_logistic_regression_numpy notebook.

In the perceptron implementation, we represented the weights and the bias as two different variables. Here, however, we will use a different approach that will allow us to unify them into a single vector variable. Specifically, we take advantage of the similarity between the derivative of the cost function with respect to the weights (Equation 3.14) and the derivative of the cost with respect to the bias (Equation 3.15):

$\frac{d}{dw_j} C_i(\mathbf{w}, b) = (\sigma_i - y_i) x_{ij}$  (3.14 revisited)

$\frac{d}{db} C_i(\mathbf{w}, b) = \sigma_i - y_i$  (3.15 revisited)

Note that the two derivative formulas are identical except that the former has a multiplication by $x_{ij}$, while the latter does not. However, since $\sigma_i - y_i = (\sigma_i - y_i) \cdot 1$, we can multiply the derivative of the cost with respect to the bias by one without changing the semantics. This gives an opportunity for combining the computations, doing them both in a single pass. The idea is that we can treat the bias as a weight corresponding to a feature that always has a value of one. To this end, we create a NumPy array of ones of the same length as the number of examples in our training set (i.e., the number of rows in the data matrix), and add this array as a new column to the data matrix using NumPy's column_stack function. Next, we need to initialize our model. This time we will use a single NumPy array w of the same length as the number of columns in the data matrix. The weight vector w is initialized randomly with values between 0 and 1:
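The corresponding fragment from the chap4_logistic_regression_numpy notebook looks like this:

# append a column of ones so the bias can be learned as just another weight
ones = np.ones(X_train.shape[0])
X_train = np.column_stack((X_train, ones))

# initialize the model: one weight per column of the augmented matrix,
# with random values between 0 and 1
n_examples, n_features = X_train.shape
w = np.random.random(n_features)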
Before implementing the learning algorithm, we need an implementation of the logistic function. Recall that the logistic function is

$\sigma(x) = \frac{1}{1 + e^{-x}}$  (3.1 revisited)

This function can be easily implemented in NumPy with np.exp. However, a naive implementation may produce an overflow warning during training. The term overflow indicates that the result of evaluating exp(-x) is a number so large that it can't be represented by a float (specifically, we're using float64 numbers). We will avoid this issue by not calling exp with values that will overflow. NumPy provides the function finfo that can be consulted to find the limits of floating point numbers. The log of the largest floating point number is the largest number for which exp() will not overflow, so we use it as a threshold to filter out problematic values.

We now have everything we need to implement Algorithm 4. The steps to follow for each example are: (1) use the model to make a prediction, (2) calculate the gradient of the loss function with respect to the model parameters, and (3) update the model parameters using the gradient. The size of the update is controlled by the learning rate. Once the model has been trained, we evaluate it on the test dataset using our binary_classification_report function from the previous section. Loading and preprocessing the test dataset follows the same steps as with the previous classifier, so we omit the code for brevity. The performance is comparable with that of the perceptron: the difference in F1 scores between the two classifiers (84.9% here vs. 86.8% for the perceptron) is not significant. Classifier parity is probably attributable to the signal distinguishing the two classes being easy to learn, which makes the simpler perceptron training algorithm sufficient in this case. Nevertheless, this task is useful in showing how to implement the logistic regression model from scratch, i.e., by implementing the gradient calculation and parameter updates manually. Next, we will implement the same model again using PyTorch, highlighting how this machine learning library simplifies the process.

4.1.5 Binary Logistic Regression Utilizing PyTorch

While it is fairly straightforward to compute the derivatives for logistic regression and implement them directly in NumPy, this will not scale well to arbitrary neural architectures. Fortunately, there are libraries that automate the computation of the derivatives of the cost function (assuming it is differentiable!) for any neural network, and use the resulting gradients to perform gradient descent or other more sophisticated optimization procedures. To this end, we will use the PyTorch deep learning library.10 The corresponding notebook for this section is chap4_logistic_regression_pytorch_bce.

10 https://pytorch.org/

Our model for logistic regression corresponds to PyTorch's Linear layer. When we instantiate this layer, we specify the size of the inputs (the size of our vocabulary) and the size of the output, i.e., the number of output neurons (which is one, because we're doing binary classification). The loss function we use is the binary cross-entropy loss (see Chapter 3), which is implemented as BCEWithLogitsLoss in PyTorch. In PyTorch, the gradients obtained from the loss function are applied to the model by an optimizer object, which implements and applies an optimization algorithm. Here we will use the vanilla stochastic gradient descent optimizer, with its learning rate set to 0.1. This is equivalent to the discussion in Section 3.2.
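A minimal sketch of this setup, and of the per-example training step described next, is shown below. The variable names and tensor shapes are assumptions; the full version lives in the chap4_logistic_regression_pytorch_bce notebook.

import torch
from torch import nn, optim

vocab_size = len(cv.vocabulary_)           # number of terms acquired by CountVectorizer
model = nn.Linear(vocab_size, 1)           # one output neuron: binary classification
loss_func = nn.BCEWithLogitsLoss()         # binary cross-entropy over raw scores
optimizer = optim.SGD(model.parameters(), lr=0.1)

# one stochastic gradient descent step for a single example,
# where x is a float tensor with one row of the document-term matrix
# and y_true is a one-element float tensor holding the gold label (0.0 or 1.0)
optimizer.zero_grad()               # (1) clear the gradients from the previous step
y_score = model(x)                  # (2) predict a raw score (a logit)
loss = loss_func(y_score, y_true)   # (3) compute the loss
loss.backward()                     # (4) back-propagate to obtain the gradients
optimizer.step()                    # (5) update the model parameters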
Similarly to the manual implementation, the steps required to train the model for a given training example are: (1) ensure the gradients are set to zero, (2) apply the model to obtain a prediction, (3) calculate the loss, (4) compute the gradient of the loss by back-propagation, and (5) update the model parameters. Recall that in our previous implementation everything was hardcoded: applying the model, computing the gradients, and optimizing the model parameters. Here, however, the implementation of the logistic regression is expressed at a higher level of abstraction. This means that we are describing the logical steps without specifying a particular implementation. Instead, implementation details are the responsibility of the chosen model, loss function, and optimizer. Thus, we could even choose a different model, loss function, and/or optimizer, and use the same training steps with little or no modification. This decoupling of the training logic from the implementation details is one of the main advantages of libraries such as PyTorch. Calling the model as a function, with the feature vectors as inputs, produces the predicted scores. Once again, a positive score corresponds to a positive label. When we evaluate this implementation on the test dataset, we obtain results that are in line with our previous models.

Writing the perceptron and the logistic regression from scratch is a good exercise, as it exposes us to the fundamentals of implementing machine learning algorithms. However, this becomes cumbersome for more complex neural architectures. For this reason, from this point on, we will use PyTorch for all our coding examples.

4.2 Multiclass Classification

So far in this chapter we have discussed implementing binary classifiers. Next, we will modify these binary classifiers to perform multiclass classification, following the discussion in Section 3.5.

4.2.1 AG News Dataset

Before explaining the actual training/testing code, we have to choose a new dataset that is suitable for multiclass classification. To this end, we will use the AG News Classification Dataset (Zhang et al., 2015), a subset of the larger AG corpus of news articles collected from thousands of different news sources.11 The classification dataset consists of four classes, and the data is equally balanced across all classes (30,000 articles per class for training, and 1,900 articles per class for testing). The goal of the task is to classify each article as one of the four classes: World, Sports, Business, or Sci/Tech.

11 http://groups.di.unipi.it/~gulli/AG_corpus_of_news_articles.html

4.2.2 Preparing the Dataset

The AG News Dataset is distributed as two CSV files (one for training and one for testing), each containing three columns: the class index, the title, and the description. The dataset also provides a text file that maps the class indexes to more descriptive class labels. Because of the tabular nature of the dataset, pandas, a Python library for tabular data analysis,12 is a natural choice for loading and transforming it. To this end, our Jupyter notebook (chap4_multiclass_logistic_regression) demonstrates the sequence of steps required to handle the data, as well as model training and evaluation. First, we show how to load the CSV, add column names, and inspect the result:
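A minimal sketch of that loading step follows; the file path and the column names are assumptions about how the CSV is laid out.

import pandas as pd

# the AG News CSVs ship without a header row: class index, title, description
train_df = pd.read_csv(
    'data/ag_news_csv/train.csv',
    header=None,
    names=['class index', 'title', 'description'],
)
train_df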
[dataframe output: 120,000 rows × 3 columns (class index, title, description)]

Since the class labels themselves are in a separate file, we manually add them to the pandas data structure (called a dataframe in pandas' terminology) to increase the interpretability of the data. We use the class index column as a starting point, and use its map method to create a new column with the corresponding labels (technically a new Series object) that is added to the dataframe using its insert method, which allows us to insert the column in a specific position. Note that the label indices are one-based, so we subtract one to align them with their labels.

12 https://pandas.pydata.org
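A sketch of that step; the classes.txt filename and the assumption that it lists the four labels in index order are ours, not guaranteed by the source.

# read the label names; position i holds the label for class index i + 1
labels = open('data/ag_news_csv/classes.txt').read().splitlines()

# build the new column from the one-based class index and insert it at position 1
train_df.insert(1, 'class', train_df['class index'].map(lambda i: labels[i - 1]))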
[dataframe output: 120,000 rows × 4 columns (class index, class, title, description)]

Next we will preprocess the text. First we lowercase the title and description, and then we concatenate them into a single string. Then we remove some spurious backslashes from the text. Once this is done, the preprocessed text is added to the dataframe as a new column. Note that pandas allows these steps to be applied to all rows simultaneously.
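One possible implementation of this normalization; the exact backslash cleanup is an assumption.

title = train_df['title'].str.lower()
descr = train_df['description'].str.lower()

# concatenate the two fields and drop the spurious backslashes
train_df['text'] = (title + ' ' + descr).str.replace('\\', ' ', regex=False)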
[dataframe output: 120,000 rows × 5 columns (class index, class, title, description, text)]

At this point, the text is ready to be tokenized. For this purpose we will use NLTK's word_tokenize function. This function can be applied to the whole column at once using the pandas map function, which returns a new column that we then add to the dataframe. However, here we actually use the progress_map function, which provides a visual progress bar. This visual feedback is especially helpful for tasks that take more time to complete.
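A short sketch of the tokenization step; note that progress_map only becomes available on pandas objects after calling tqdm.pandas().

from nltk.tokenize import word_tokenize
from tqdm.notebook import tqdm

tqdm.pandas()  # registers progress_map on pandas Series and DataFrames
train_df['tokens'] = train_df['text'].progress_map(word_tokenize)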
[dataframe output: 120,000 rows × 6 columns (class index, class, title, description, text, tokens)]

From the tokens we just created, we then create a vocabulary for our corpus. Here, we only keep the words that occur at least 10 times, decreasing the memory needed and reducing the likelihood that our vocabulary contains noisy tokens. Note that each row in the tokens column contains a list of tokens. In order to create the vocabulary, we will need to convert the Series of lists of tokens into a Series of tokens using the explode() pandas method. Then we will use the value_counts() method to create a Series object in which the index contains the tokens and the values are the number of times they appear in the corpus. The next step is removing the tokens with a count lower than our chosen threshold. Finally, we create a list with the remaining tokens, as well as a dictionary that maps tokens to token ids (i.e., the index of the token in the list). We include in the vocabulary a special token [UNK] that will be used as a placeholder for tokens that do not appear in our vocabulary after the frequency pruning.

Using this vocabulary, we construct a feature vector for each news article in the corpus. This feature vector will be encoded as a dictionary, with keys corresponding to token ids, and values corresponding to the number of times the token appears in the article. As above, the feature vectors will be stored as a new column in the dataframe.
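A condensed sketch of these two steps; the placement of [UNK] and the helper names are assumptions.

from collections import Counter

threshold = 10
counts = train_df['tokens'].explode().value_counts()
tokens = counts[counts >= threshold].index.tolist()

vocabulary = tokens + ['[UNK]']
token_to_id = {tok: i for i, tok in enumerate(vocabulary)}
unk_id = token_to_id['[UNK]']

def make_feature_vector(article_tokens):
    # dictionary mapping token ids to counts for one article
    return dict(Counter(token_to_id.get(t, unk_id) for t in article_tokens))

train_df['features'] = train_df['tokens'].map(make_feature_vector)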
[dataframe output: 120,000 rows × 7 columns (class index, class, title, description, text, tokens, features)]

The final preprocessing step is converting the features and the class indices into PyTorch tensors. Recall that we need to subtract one from the class indices to make them zero-based. At this point, the data is fully processed and we are ready to begin training.

4.2.3 Multiclass Logistic Regression Using PyTorch

The model itself is a single linear layer whose input size corresponds to the size of our vocabulary, and whose output size corresponds to the number of classes in our corpus. PyTorch's Linear layer includes a bias by default, so there is no need to handle that manually the way we did for our perceptron example. The code for training this model (which implements Algorithm 6) is almost identical to that of the binary logistic regression. However, since we have to calculate a score for each of the four different classes, we need to replace the previous BCEWithLogitsLoss with CrossEntropyLoss, which applies a softmax over the scores to obtain probabilities for each class. For each example, the model predicts four scores, one for each label. The label with the highest score is selected using the argmax function. We evaluate the predictions of our model for each class using scikit-learn's classification_report, which handles the results of multiclass classification.
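A minimal sketch of this model and of one training step; the variable names and tensor handling are assumptions, and the full version is in the chap4_multiclass_logistic_regression notebook.

import torch
from torch import nn, optim

n_classes = 4
vocab_size = len(vocabulary)

model = nn.Linear(vocab_size, n_classes)
loss_func = nn.CrossEntropyLoss()   # softmax + negative log-likelihood over the four scores
optimizer = optim.SGD(model.parameters(), lr=0.1)

# one training step, where x is a float tensor of shape (vocab_size,) holding the
# token counts, and y is a long tensor holding the zero-based class index
optimizer.zero_grad()
scores = model(x)                                       # four scores, one per class
loss = loss_func(scores.unsqueeze(0), y.unsqueeze(0))   # CrossEntropyLoss expects a batch dimension
loss.backward()
optimizer.step()

# at prediction time, pick the label with the highest score
y_pred = torch.argmax(model(x)).item()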
4.3 Summary

In this chapter, we used movie review and news article classification to illustrate the implementation of the previously described algorithms for the binary perceptron, binary logistic regression, and multiclass logistic regression. For the binary logistic regression, we made a direct comparison between the lower-level NumPy implementation and a higher-level version that made use of PyTorch.

We hope that through this series of exercises the reader has noted several key takeaways. First, data preparation is important and should be done thoughtfully. Certain tasks (e.g., text normalization or sentence splitting) are going to be frequently needed if you continue with NLP, so using or creating generic functions can be very helpful. However, what works for one dataset and one language may not be suitable for another scenario. For example, in our case, we selected different tokenizers for each of our tasks to account for the different registers of English, as well as removing diacritics during normalization.

Second, when it comes to implementing machine learning algorithms, it is often easier to use a higher-level library such as PyTorch instead of NumPy. For example, with the former, the gradients are calculated by the library, whereas in NumPy we have to code them ourselves. This becomes cumbersome quickly; even the derivative of the softmax is non-trivial.

Third, PyTorch imposes a training structure that remains largely the same, regardless of what models are being trained. That is, at a high level, the same steps are always required: clearing the current gradients, predicting output scores for the provided inputs, calculating the loss, and optimizing. These features make PyTorch a very powerful and convenient deep learning library; we will continue to use it throughout the remainder of the book to implement more complex neural architectures.
#!/usr/bin/env python
# coding: utf-8

# # Binary Text Classification with Perceptron

# In[1]:

import random
import numpy as np
from tqdm.notebook import tqdm

# set this variable to a number to be used as the random seed
# or to None if you don't want to set a random seed
seed = 1234

if seed is not None:
    random.seed(seed)
    np.random.seed(seed)

# The dataset is divided in two directories called `train` and `test`.
# These directories contain the training and testing splits of the dataset.

# In[2]:

get_ipython().system('ls -lh data/aclImdb/')

# Both the `train` and `test` directories contain two directories called `pos` and `neg` that contain text files with the positive and negative reviews, respectively.

# In[3]:

get_ipython().system('ls -lh data/aclImdb/train/')

# We will now read the filenames of the positive and negative examples.

# In[4]:

from glob import glob

pos_files = glob('data/aclImdb/train/pos/*.txt')
neg_files = glob('data/aclImdb/train/neg/*.txt')

print('number of positive reviews:', len(pos_files))
print('number of negative reviews:', len(neg_files))

# Now, we will use a [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html) to read the text files, tokenize them, acquire a vocabulary from the training data, and encode it in a document-term matrix in which each row represents a review, and each column represents a term in the vocabulary. Each element $(i,j)$ in the matrix represents the number of times term $j$ appears in example $i$.

# In[5]:

from sklearn.feature_extraction.text import CountVectorizer

# initialize CountVectorizer indicating that we will give it a list of filenames that have to be read
cv = CountVectorizer(input='filename')

# learn vocabulary and return sparse document-term matrix
doc_term_matrix = cv.fit_transform(pos_files + neg_files)
doc_term_matrix

# Note in the message printed above that the matrix is of shape (25000, 74849).
# In other words, it has 1,871,225,000 elements.
# However, only 3,445,861 elements were stored.
# This is because most of the elements in the matrix are zeros.
# The reason is that the reviews are short and most words in the English language don't appear in each review.
# A matrix that only stores non-zero values is called *sparse*.
#
# Now we will convert it to a dense numpy array:

# In[6]:

X_train = doc_term_matrix.toarray()
X_train.shape

# We will also create a numpy array with the binary labels for the reviews.
# One indicates a positive review and zero a negative review.
# The label `y_train[i]` corresponds to the review encoded in row `i` of the `X_train` matrix.

# In[7]:

# training labels
y_pos = np.ones(len(pos_files))
y_neg = np.zeros(len(neg_files))
y_train = np.concatenate([y_pos, y_neg])
y_train

# Now we will initialize our model, in the form of an array of weights `w` of the same size as the number of features in our dataset (i.e., the number of words in the vocabulary acquired by [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html)), and a bias term `b`.
# Both are initialized to zeros.

# In[8]:

# initialize model: the weight vector and bias term are populated with zeros
n_examples, n_features = X_train.shape
w = np.zeros(n_features)
b = 0

# Now we will use the perceptron learning algorithm to learn the values of `w` and `b` from our training data.

# In[9]:

n_epochs = 10
indices = np.arange(n_examples)

for epoch in range(n_epochs):
    n_errors = 0
    # randomize the order in which training examples are seen in this epoch
    np.random.shuffle(indices)
    # traverse the training data
    for i in tqdm(indices, desc=f'epoch {epoch+1}'):
        x = X_train[i]
        y_true = y_train[i]
        # the perceptron decision based on the current model
        score = x @ w + b
        y_pred = 1 if score > 0 else 0
        # update the model if the prediction was incorrect
        if y_true == y_pred:
            continue
        elif y_true == 1 and y_pred == 0:
            w = w + x
            b = b + 1
            n_errors += 1
        elif y_true == 0 and y_pred == 1:
            w = w - x
            b = b - 1
            n_errors += 1
    # stop early if this epoch made no mistakes (convergence)
    if n_errors == 0:
        break

# The next step is evaluating the model on the test dataset.
# Note that this time we use the [`transform()`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html#sklearn.feature_extraction.text.CountVectorizer.transform) method of the [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html), instead of the [`fit_transform()`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html#sklearn.feature_extraction.text.CountVectorizer.fit_transform) method that we used above. This is because we want to use the learned vocabulary in the test set, instead of learning a new one.

# In[10]:

pos_files = glob('data/aclImdb/test/pos/*.txt')
neg_files = glob('data/aclImdb/test/neg/*.txt')

doc_term_matrix = cv.transform(pos_files + neg_files)
X_test = doc_term_matrix.toarray()

y_pos = np.ones(len(pos_files))
y_neg = np.zeros(len(neg_files))
y_test = np.concatenate([y_pos, y_neg])

# Using the model is easy: multiply the document-term matrix by the learned weights and add the bias.
# We use Python's `@` operator to perform the matrix-vector multiplication.

# In[11]:

y_pred = (X_test @ w + b) > 0

# Now we print an evaluation of the prediction results using a `binary_classification_report()` function modeled after scikit-learn's [`classification_report()`](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.classification_report.html).

# In[12]:

def binary_classification_report(y_true, y_pred):
    # count true positives, false positives, true negatives, and false negatives
    tp = fp = tn = fn = 0
    for gold, pred in zip(y_true, y_pred):
        if pred == True:
            if gold == True:
                tp += 1
            else:
                fp += 1
        else:
            if gold == False:
                tn += 1
            else:
                fn += 1
    # calculate precision and recall
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    # calculate f1 score
    fscore = 2 * precision * recall / (precision + recall)
    # calculate accuracy
    accuracy = (tp + tn) / len(y_true)
    # number of positive labels in y_true
    support = sum(y_true)
    return {
        "precision": precision,
        "recall": recall,
        "f1-score": fscore,
        "support": support,
        "accuracy": accuracy,
    }

# In[13]:

binary_classification_report(y_test, y_pred)
5,166
5,220
11
chap04-12
chap04-12
4 Implementing Text Classification Using Perceptron and Logistic Regression In the previous chapters we have discussed the theory behind the perceptron and logistic regression, including mathematical explanations of how and why they are able to learn from examples. In this chapter we will transition from math to code. Specifically, we will discuss how to implement these models in the Python programming language. All the code that we will introduce throughout this book is available online as well: http://clulab.github.io/gentlenlp/. The reader who is not familiar with the Python programming language is encouraged to read first Appendix A, for a brief introduction to the language, and Appendix B, for a discussion on how computers encode and preprocess text. Once done, please return here. To get a better understanding of how these algorithms work under the hood, we will start by implementing them from scratch. However, as the book progresses, we will introduce some of the popular tools and libraries that make Python the language of choice for machine learning, e.g., PyTorch,1 and Hugging Face’s transformers.2 The code for all the examples in the book is provided in the form of Jupyter notebooks.3 Important fragments of these notebooks will be presented in the implementation chapters so that the reader has the whole picture just by reading the book. However, we strongly encourage you to download the notebooks and execute them yourself. We also encourage you to modify them to conduct your own experiments! 1 https://pytorch.org
2 https://huggingface.co 3 https://jupyter.org/ 55 56 Implementing Text Classification Using Perceptron and LR 4.1 Binary Classification We begin this chapter with binary classification. That is, we aim to train classifiers that assign one of two labels to a given text. As the example for this task, we will train a review classifier using the the Large Movie Review Dataset (Maas et al., 2011).4 We tackle this task by implementing first a binary perceptron classifier, followed by a binary logistic regression one. We will implement the latter both from scratch as well as using PyTorch, so the reader has a clearer understanding on how PyTorch works “under the hood.” 4.1.1 Large Movie Review Dataset This dataset contains movie reviews and their associated scores (between 1 and 10) as provided by IMDb.5 converted these scores to binary labels by assigning each review a positive or negative label if the review score was above 6 or below 5, respectively. Reviews with scores 5 and 6 were considered too neutral and thus excluded. We follow the same protocol in this chapter. The dataset is divided in two even partitions called train and test, each containing 25,000 reviews. The dataset also provides additional unlabeled reviews, but we will not use those here. Each partition contains two directories called pos and neg where the positive and negative examples are stored. Each review is stored in an independent text file, whose name is composed of an id unique to the partition and the score associated with the review, separated by an underscore. An example of a positive and a negative review is shown in Table 4.1. 4.1.2 Bag-of-words Model As discussed in Section 2.2, we will encode the text to classify as a bag of words. That is, we encode each review as a list of numbers, with each position in the list corresponding to a word in our vocabulary, and the value stored in that position corresponding to the number of times the word appears in the review. For example, say we want to encode the following two reviews: 4 https://ai.stanford.edu/~amaas/data/sentiment/ 5 https://www.imdb.com/ Maas et al. 4.1 Binary Classification 57 Table 4.1 Two examples of movie reviews from IMDb. The first is a positive review of the movie Puss in Boots (1988). The second is a negative review of the movie Valentine (2001). These reviews can be found at https://www.imdb.com/review/rw0606396/ and https://www.imdb.com/review/rw0721861/, respectively. Filename Score Binary Label train/pos/24_8.txt 8/10 Positive train/neg/141_3.txt 3/10 Negative Review Text Although this was obviously a low-budget production, the performances and the songs in this movie are worth seeing. One of Walken’s few musical roles to date. (he is a marvelous dancer and singer and he demonstrates his acrobatic skills as well - watch for the cartwheel!) Also starring Jason Connery. A great children’s story and very likable characters. This stalk and slash turkey manages to bring nothing new to an increasingly stale genre. A masked killer stalks young, pert girls and slaughters them in a variety of gruesome ways, none of which are particularly inventive. It’s not scary, it’s not clever, and it’s not funny. So what was the point of it? Review 1: Review 2: "I liked the movie. My friend liked it too. " "I hated it. Would not recommend. " First, we need to create a vocabulary that maps each word to an id that uniquely identifies it. 
Each of these numbers will be used as an index in a list, so they must start at zero and grow by one for each word in the vocabulary. For example, one possible vocabulary that encodes the previous reviews is:

{'would': 0, 'hated': 1, 'my': 2, 'liked': 3, 'not': 4, 'it': 5, 'movie': 6, 'recommend': 7, 'the': 8, 'I': 9, 'too': 10, 'friend': 11}

Using this mapping, we can encode the two reviews as follows:

Review 1: [0, 0, 1, 2, 0, 1, 1, 0, 1, 1, 1, 1]
Review 2: [1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0]

Note that the word liked (fourth position) in the first review has a value of two. This is because this word appears twice in that review. This is a small example with a vocabulary of only 12 terms. Of course, the same process needs to be implemented for our whole training dataset. For this purpose we will use scikit-learn's CountVectorizer class.6 Using the CountVectorizer class simplifies things, allowing us to get started quickly with a bag-of-words approach. However, note that it makes several simplifying assumptions (e.g., text is lowercased, and punctuation and single-character tokens are removed). Some of these may not be adequate for other tasks. First, we need to obtain the filenames for the reviews in the training set:
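Following the accompanying notebook, which stores the dataset under data/aclImdb/, the filenames can be collected with glob:

from glob import glob

pos_files = glob('data/aclImdb/train/pos/*.txt')
neg_files = glob('data/aclImdb/train/neg/*.txt')

print('number of positive reviews:', len(pos_files))
print('number of negative reviews:', len(neg_files))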
Once we have acquired the filenames for the training reviews, we need to read them using the CountVectorizer. In order for the CountVectorizer to open and read the files for us, we make use of the input='filename' constructor parameter (otherwise it would expect the string content directly). The CountVectorizer provides three methods that will be useful for us: a method called fit() that is used to acquire the vocabulary, a method transform() that converts the text into the bag-of-words representation, and a method fit_transform() that conveniently acquires the vocabulary and transforms the data in a single step.
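The corresponding code, adapted from the accompanying notebook, also converts the result to a dense array and builds the training labels described below:

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

# tell CountVectorizer that it will receive a list of filenames to open and read
cv = CountVectorizer(input='filename')

# learn the vocabulary and return the sparse document-term matrix
doc_term_matrix = cv.fit_transform(pos_files + neg_files)
print(repr(doc_term_matrix))

# convert the sparse matrix into a regular two-dimensional NumPy array
X_train = doc_term_matrix.toarray()

# training labels: one for positive reviews, zero for negative reviews
y_pos = np.ones(len(pos_files))
y_neg = np.zeros(len(neg_files))
y_train = np.concatenate([y_pos, y_neg])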
The resulting object is referred to as a document-term matrix, where each row corresponds to a document, and each column corresponds to a term in the vocabulary. As the output above indicates, the resulting matrix has 25,000 rows (one for each review), and 74,849 columns (one for each term). You may also note that this matrix is sparse, with 3,445,861 stored elements. A regular matrix of shape 25,000×74,849 would have 1,871,225,000 elements. However, most of the elements in a document-term matrix are zeros because only a few words from the vocabulary appear in each document. A sparse matrix takes advantage of this fact by storing only the non-zero cells in order to reduce the memory required to store it. Thus, sparse matrices are convenient, especially when dealing with lots of data. Nevertheless, to simplify the downstream code in this example, we will convert it into a dense matrix, i.e., a regular two-dimensional NumPy array. Finally, we also need the labels of the reviews. We assign a label of one to positive reviews, and a label of zero to negative ones. Note that the first half of the reviews are positive and the second half are negative. The label at the ith position of the y_train array corresponds to the review encoded in the ith row of the X_train matrix.

6 https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html

4.1.3 Perceptron

Now that we have defined our task and the data processing pipeline, we will implement a perceptron classifier that classifies the movie reviews as positive or negative. The entire code discussed in this section is available in the chap4_perceptron notebook. Recall from Section 2.4 that the perceptron is composed of a weight vector w and a bias term b. These will be represented as a NumPy array w of the same length as our document vectors, and a variable b for the bias term. Both will be initialized with zeros. The parameters w and b are learned through the following algorithm, which implements Algorithm 2 from Chapter 2. There are a couple of details to point out. Line 3 of Algorithm 2 indicates that we need to repeat the training loop until convergence. Theoretically, convergence is defined as predicting all training examples correctly. This is an ambitious requirement, which is not always possible in practice, so in this code we also include a stop condition if we reach a maximum number of epochs. Another crucial difference between our implementation here and the theoretical Algorithm 2 is that we randomize the order in which the training examples are seen at the beginning of each epoch. This simple (but highly recommended!) change is necessary to avoid the introduction of spurious biases due to the arbitrary order of the examples in the original training partition.7 We accomplish this by storing the indices corresponding to the X_train matrix rows in a NumPy array, and shuffling these indices at the beginning of each epoch. We shuffle the indices instead of the examples so that we can preserve the mapping between examples and labels. The training loop aligns closely with Algorithm 2.
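The exact listing is in the chap4_perceptron notebook; a minimal sketch that is consistent with the description here (the maximum number of epochs below is an illustrative choice) looks like this:

import numpy as np
from tqdm.notebook import tqdm

n_epochs = 10
n_examples, n_features = X_train.shape

# weights and bias, both initialized with zeros
w = np.zeros(n_features)
b = 0

indices = np.arange(n_examples)
for epoch in range(n_epochs):
    n_errors = 0
    # randomize the order of the training examples
    np.random.shuffle(indices)
    for i in tqdm(indices, desc=f'epoch {epoch+1}'):
        x = X_train[i]
        y_true = y_train[i]
        # perceptron decision function: the sign of the score x @ w + b
        score = x @ w + b
        y_pred = 1 if score > 0 else 0
        # update the parameters only when the prediction is wrong
        if y_pred != y_true:
            n_errors += 1
            if y_true == 1:
                w = w + x
                b = b + 1
            else:
                w = w - x
                b = b - 1
    # stop early if we converged, i.e., no training example was misclassified
    if n_errors == 0:
        break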
We start by iterating over each example in our training data, storing the current example in the variable x,8 and its corresponding label in the variable y_true. Next, we compute the perceptron decision function shown in Algorithm 1. Note that NumPy (as well as PyTorch) uses Python's @ operator to indicate vector or matrix multiplication, depending on its operand types. Here we use it to calculate the dot product of the example x and the weights w. To this we add the bias b to obtain the predicted score, whose sign is used to assign a positive or negative predicted label. If the prediction is correct, then no update is needed, and we can move on to the next training example. However, if the prediction is incorrect, then we need to adjust w and b, as described in Algorithm 2.

Sidebar 4.1 The tqdm function
This is our first exposure to the tqdm function. tqdm is a progress bar library that lets you "make your loops show a smart progress meter."9 The name tqdm comes from the Arabic word taqaddum, which can mean "progress." Using tqdm is as simple as wrapping it around the collection to be traversed.

After training, we evaluate the model's performance on the held-out test partition. The test data is loaded similarly to the training partition, but with one notable difference: we use CountVectorizer's transform() method instead of the fit_transform() method so that the vocabulary is not adjusted for the test data. We won't show here the loading of the test partition since it is so similar to the code already shown, but it is available in the Jupyter notebook that accompanies this section.

7 As an extreme example, consider a dataset where all the positive examples appear first in the training partition. This would cause the perceptron to artificially inflate the weights of the features that occur in these examples, a situation from which the learning algorithm may struggle to recover.
8 We use typewriter font when we discuss variables in the code, to distinguish code from the theoretical discussion in the other chapters.
9 https://github.com/tqdm/tqdm

Using the model to assign labels to all the test data is easily done in one step: we simply multiply the entire test data document-term matrix by the previously learned weights and add the bias. Scores greater than zero indicate a positive review, and those less than zero are negative. At this point we can evaluate the classifier's performance, which we will do using precision, recall, and F1 scores for binary classification (described in Section 2.3). For this purpose, we implement a function called binary_classification_report that computes these metrics and returns them as a dictionary:
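The version below is taken from the notebooks that accompany this chapter:

def binary_classification_report(y_true, y_pred):
    # count true positives, false positives, true negatives, and false negatives
    tp = fp = tn = fn = 0
    for gold, pred in zip(y_true, y_pred):
        if pred == True:
            if gold == True:
                tp += 1
            else:
                fp += 1
        else:
            if gold == False:
                tn += 1
            else:
                fn += 1
    # calculate precision, recall, F1, and accuracy
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    fscore = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / len(y_true)
    # number of positive labels in y_true
    support = sum(y_true)
    return {
        "precision": precision,
        "recall": recall,
        "f1-score": fscore,
        "support": support,
        "accuracy": accuracy,
    }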
We call this function to compare the predicted labels to the true labels, and obtain the evaluation scores. Our F1 score here is 86.8%, which is much higher than a baseline that assigns labels randomly, which yields an F1 score of about 50%. This is a good result, especially considering the simplicity of the perceptron! In the next sections and chapters, we will discuss a battery of strategies to considerably improve this performance.

4.1.4 Binary Logistic Regression from Scratch

Using the same task, dataset, and evaluation, we will now implement a logistic regression classifier, as described in Algorithm 5 from Chapter 3. To give the reader hands-on experience with the implementation of the gradient calculations for logistic regression, we start by implementing it from scratch using NumPy. All the code shown in this section is available in the chap4_logistic_regression_numpy notebook. In the perceptron implementation, we represented the weights and the bias as two different variables. Here, however, we will use a different approach that will allow us to unify them into a single vector variable. Specifically, we take advantage of the similarity between the derivative of the cost function with respect to the weights (Equation 3.14) and the derivative of the cost with respect to the bias (Equation 3.15):

\frac{d}{d w_j} C_i(w, b) = (\sigma_i - y_i) x_{ij}    (3.14 revisited)

\frac{d}{d b} C_i(w, b) = \sigma_i - y_i    (3.15 revisited)

Note that the two derivative formulas are identical except that the former has a multiplication by x_{ij}, while the latter does not. However, since \sigma_i - y_i = (\sigma_i - y_i) \cdot 1, we can multiply the derivative of the cost with respect to the bias by one without changing the semantics. This gives an opportunity for combining the computations, doing them both in a single pass. The idea is that we can treat the bias as a weight corresponding to a feature that always has a value of one. To do this, we create a NumPy array of ones of the same length as the number of examples in our training set (i.e., the number of rows in the data matrix), and add this array as a new column to the data matrix using NumPy's column_stack function. Next, we need to initialize our model. This time we will use a single NumPy array w of the same length as the number of columns in the data matrix. The weight vector w is initialized randomly with values between 0 and 1. Before implementing the learning algorithm, we need an implementation of the logistic function. Recall that the logistic function is

\sigma(x) = \frac{1}{1 + e^{-x}}    (3.1 revisited)

This function can be easily implemented in NumPy as follows:
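For example (the accompanying notebook may use a slightly different name or layout):

import numpy as np

def logistic(x):
    return 1 / (1 + np.exp(-x))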
However, this naive implementation may produce an overflow warning during training. The term overflow indicates that the result of evaluating exp(-x) is a number so large that it can't be represented by a float (specifically, we're using float64 numbers). We will avoid this issue by not calling exp() with values that will overflow. NumPy provides the function finfo that can be consulted to find the limits of floating point numbers. The log of the largest floating point number is the largest number for which exp() will not overflow, so we will use it as a threshold to filter out problematic values. We now have everything we need to implement Algorithm 4. The steps to follow for each example are: (1) use the model to make a prediction, (2) calculate the gradient of the loss function with respect to the model parameters, and (3) update the model parameters using the gradient. The size of the update is controlled by the learning rate.
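A compact sketch of this training loop is shown below; the exact listing is in the chap4_logistic_regression_numpy notebook, and the learning rate, epoch count, and variable names here are illustrative choices:

import numpy as np

# the notebook appends the column of ones to the data matrix itself;
# here we keep the result in a separate variable to avoid clobbering X_train
X_train_ones = np.column_stack((X_train, np.ones(X_train.shape[0])))
n_examples, n_features = X_train_ones.shape

# weights (including the bias) initialized randomly with values between 0 and 1
w = np.random.random(n_features)

# largest value for which exp() does not overflow
max_exp = np.log(np.finfo(np.float64).max)

def logistic(x):
    # overflow-safe version of the logistic function
    if -x > max_exp:
        return 0.0
    return 1 / (1 + np.exp(-x))

lr = 1e-1
n_epochs = 10
indices = np.arange(n_examples)
for epoch in range(n_epochs):
    np.random.shuffle(indices)
    for i in indices:
        x = X_train_ones[i]
        y_true = y_train[i]
        # (1) prediction
        sigma = logistic(x @ w)
        # (2) gradient of the loss with respect to all weights (Equations 3.14 and 3.15)
        gradient = (sigma - y_true) * x
        # (3) parameter update, scaled by the learning rate
        w = w - lr * gradient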
Once the model has been trained, we evaluate it on the test dataset using our binary_classification_report function from the previous section. Loading and preprocessing the test dataset follows the same steps as with the previous classifier, so we omit the code for brevity. The performance is comparable with that of the perceptron: the difference in F1 scores between the two classifiers (84.9% here vs. 86.8% for the perceptron) is not significant. This parity is probably attributable to the fact that the signal distinguishing the two classes is easy to learn, so the simpler perceptron training algorithm is sufficient in this case. Nevertheless, this task is useful in showing how to implement the logistic regression model from scratch, i.e., by implementing the gradient calculation and parameter updates manually. Next, we will implement the same model again using PyTorch, highlighting how this machine learning library simplifies the process.

4.1.5 Binary Logistic Regression Utilizing PyTorch

While it is fairly straightforward to compute the derivatives for logistic regression and implement them directly in NumPy, this approach will not scale well to arbitrary neural architectures. Fortunately, there are libraries that automate the computation of the derivatives of the cost function (assuming it is differentiable!) for any neural network, and use the resulting gradients to perform gradient descent or other more sophisticated optimization procedures. To this end, we will use the PyTorch deep learning library.10 The corresponding notebook for this section is chap4_logistic_regression_pytorch_bce. Our model for logistic regression corresponds to PyTorch's Linear layer. When we instantiate this layer, we specify the size of the inputs (the size of our vocabulary) and the size of the output, i.e., the number of output neurons (which is one because we're doing binary classification). The loss function we use is the binary cross-entropy loss (see Chapter 3), which is implemented as BCEWithLogitsLoss in PyTorch. In PyTorch, the gradients obtained from the loss function are applied to the model by an optimizer object, which implements and applies an optimization algorithm. Here we will use the vanilla stochastic gradient descent optimizer; we set its learning rate to 0.1. This is equivalent to the discussion in Section 3.2.

10 https://pytorch.org/
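The corresponding fragment from the chap4_logistic_regression_pytorch_bce notebook, lightly condensed, looks like this:

import numpy as np
import torch
from torch import nn, optim
from tqdm.notebook import tqdm

lr = 1e-1
n_epochs = 10
n_examples, n_features = X_train.shape

model = nn.Linear(n_features, 1)
loss_func = nn.BCEWithLogitsLoss()
optimizer = optim.SGD(model.parameters(), lr=lr)

X_train = torch.tensor(X_train, dtype=torch.float32)
y_train = torch.tensor(y_train, dtype=torch.float32)

indices = np.arange(n_examples)
for epoch in range(n_epochs):
    # randomize the order of the training examples
    np.random.shuffle(indices)
    for i in tqdm(indices, desc=f'epoch {epoch+1}'):
        x = X_train[i]
        y_true = y_train[i]
        # make a prediction
        y_pred = model(x)
        # calculate the loss
        loss = loss_func(y_pred[0], y_true)
        # calculate gradients through back-propagation
        loss.backward()
        # optimize model parameters
        optimizer.step()
        # ensure gradients are set to zero for the next example
        model.zero_grad()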
Similarly to the manual implementation, the steps required to train the model for a given training example are: (1) ensure the gradients are set to zero, (2) apply the model to obtain a prediction, (3) calculate the loss, (4) compute the gradient of the loss by back-propagation, and (5) update the model parameters. Recall that in our previous implementation everything was hardcoded: applying the model, computing the gradients, and optimizing the model parameters. Here, however, the implementation of the logistic regression is expressed at a higher level of abstraction. This means that we are describing the logical steps without specifying a particular implementation. Instead, implementation details are the responsibility of the chosen model, loss function, and optimizer. Thus, we could even choose a different model, loss function, and/or optimizer, and use the same training steps with little or no modification. This decoupling of the training logic from the implementation details is one of the main advantages of libraries such as PyTorch. As shown in the code above, calling the model as a function, with the feature vectors as inputs, produces the predicted scores. Once again, a positive score corresponds to a positive label. When we evaluate this implementation on the test dataset, we obtain results that are in line with our previous models. Writing the perceptron and the logistic regression from scratch is a good exercise, as it exposes us to the fundamentals of implementing machine learning algorithms. However, this becomes cumbersome for more complex neural architectures. For this reason, from this point on, we will use PyTorch for all our coding examples.

4.2 Multiclass Classification

So far, in this chapter we have discussed implementing binary classifiers. Next, we will modify these binary classifiers to perform multiclass classification, following the discussion in Section 3.5.

4.2.1 AG News Dataset

Before explaining the actual training/testing code, we have to choose a new dataset that is suitable for multiclass classification. To this end, we will use the AG News Classification Dataset (Zhang et al., 2015), a subset of the larger AG corpus of news articles collected from thousands of different news sources.11 The classification dataset consists of four classes, and the data is equally balanced across all classes (30,000 articles per class for train, and 1,900 articles per class for testing). The goal of the task is to classify each article as one of the four classes: World, Sports, Business, or Sci/Tech.

11 http://groups.di.unipi.it/~gulli/AG_corpus_of_news_articles.html

4.2.2 Preparing the Dataset

The AG News Dataset is distributed as two CSV files (one for training and one for testing), each containing three columns: the class index, the title, and the description. The dataset also provides a text file that maps the above class indexes to more descriptive class labels. Because of the tabular nature of the dataset, pandas, a Python library for tabular data analysis,12 is a natural choice for loading and transforming it. To this end, our Jupyter notebook (chap4_multiclass_logistic_regression) demonstrates the sequence of steps required to handle the data, as well as model training and evaluation.

12 https://pandas.pydata.org
First, we show how to load the CSV, add column names, and inspect the result:
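A minimal sketch, assuming the training file is stored at data/ag_news_csv/train.csv and, as in the original distribution, has no header row (the exact path and column names used in the notebook may differ):

import pandas as pd

train_df = pd.read_csv(
    'data/ag_news_csv/train.csv',
    header=None,
    names=['class index', 'title', 'description'],
)
# inspect the result
print(train_df)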
[Output not reproduced here: a dataframe with 120,000 rows × 3 columns (class index, title, description).]

Since the class labels themselves are in a separate file, we manually add them to the pandas data structure (called a dataframe in pandas' terminology) to increase the interpretability of the data. We use the class index column as a starting point, and use its map method to create a new column with the corresponding labels (technically a new Series object) that is added to the dataframe using its insert method, which allows us to insert the column in a specific position. Note that the label indices are one-based, so we subtract one to align them with their labels.
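One way to do this, assuming the four labels have been read, in class-index order, from the provided mapping file into a list (the list literal below is spelled out for illustration):

# AG News class indices are one-based: 1=World, 2=Sports, 3=Business, 4=Sci/Tech
labels = ['World', 'Sports', 'Business', 'Sci/Tech']

# map each class index to its label and insert the new column right after 'class index'
train_df.insert(1, 'class', train_df['class index'].map(lambda i: labels[i - 1]))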
[Output not reproduced here: the dataframe now has 120,000 rows × 4 columns (class index, class, title, description).]

Next we will preprocess the text. First we lowercase the title and description, and then we concatenate them into a single string. Then we remove some spurious backslashes from the text. Once this is done, the preprocessed text is added to the dataframe as a new column. Note that pandas allows these steps to be applied to all rows simultaneously.
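A sketch of these steps (the exact cleanup performed in the notebook may differ slightly):

# lowercase the title and description and concatenate them into a single string
title = train_df['title'].str.lower()
descr = train_df['description'].str.lower()
text = title + ' ' + descr

# remove spurious backslashes and store the result as a new column
train_df['text'] = text.str.replace('\\', ' ', regex=False)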
[Output not reproduced here: the dataframe now has 120,000 rows × 5 columns, with the new text column holding the lowercased, concatenated title and description.]

At this point, the text is ready to be tokenized. For this purpose we will use NLTK's word_tokenize function. This function can be applied to the whole column at once using the pandas map function, which returns a new column that we add to the dataframe. However, here we actually use the progress_map function, which provides a visual progress bar. This visual feedback is especially helpful for tasks that take more time to complete.
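A sketch of this step; progress_map becomes available after enabling tqdm's pandas integration, and NLTK's tokenizer models may need to be downloaded once with nltk.download('punkt'):

from nltk.tokenize import word_tokenize
from tqdm.notebook import tqdm

# register progress_map() / progress_apply() on pandas objects
tqdm.pandas()

train_df['tokens'] = train_df['text'].progress_map(word_tokenize)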
[Output not reproduced here: the dataframe now has 120,000 rows × 6 columns, with the new tokens column holding the list of tokens for each article.]

From the tokens we just created, we then create a vocabulary for our corpus. Here, we only keep the words that occur at least 10 times, decreasing the memory needed and reducing the likelihood that our vocabulary contains noisy tokens. Note that each row in the tokens column contains a list of tokens. In order to create the vocabulary, we will need to convert the Series of lists of tokens into a Series of tokens using the explode() pandas method. Then we will use the value_counts() method to create a Series object in which the index contains the tokens and the values are the number of times they appear in the corpus. The next step is removing the tokens with a count lower than our chosen threshold. Finally, we create a list with the remaining tokens, as well as a dictionary that maps tokens to token ids (i.e., the index of the token in the list). We include in the vocabulary a special token [UNK] that will be used as a placeholder for tokens that do not appear in our vocabulary after the frequency pruning. Using this vocabulary, we construct a feature vector for each news article in the corpus. This feature vector will be encoded as a dictionary, with keys corresponding to token ids, and values corresponding to the number of times the token appears in the article. As above, the feature vectors will be stored as a new column in the dataframe.
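A sketch of these two steps; the position of [UNK] in the vocabulary and the helper function name are assumptions made here for illustration:

threshold = 10

# flatten the lists of tokens and count how often each token occurs
token_counts = train_df['tokens'].explode().value_counts()

# keep tokens above the threshold, plus a placeholder for everything else
vocabulary = ['[UNK]'] + token_counts[token_counts >= threshold].index.tolist()
token_to_id = {token: i for i, token in enumerate(vocabulary)}

def make_feature_vector(tokens):
    # dictionary of token id -> number of occurrences in this article
    features = {}
    for token in tokens:
        token_id = token_to_id.get(token, token_to_id['[UNK]'])
        features[token_id] = features.get(token_id, 0) + 1
    return features

train_df['features'] = train_df['tokens'].progress_map(make_feature_vector)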
[Output not reproduced here: the dataframe now has 120,000 rows × 7 columns, with the new features column holding a dictionary of token id counts for each article.]

The final preprocessing step is converting the features and the class indices into PyTorch tensors. Recall that we need to subtract one from the class indices to make them zero-based. At this point, the data is fully processed and we are ready to begin training.

4.2.3 Multiclass Logistic Regression Using PyTorch

The model itself is a single linear layer whose input size corresponds to the size of our vocabulary, and its output size corresponds to the number of classes in our corpus. PyTorch's Linear layer includes a bias by default, so there is no need to handle that manually the way we did for our perceptron example. The code for training this model (which implements Algorithm 6) is almost identical to that of the binary logistic regression. However, since we have to calculate a score for each of the four different classes, we need to replace the previous BCEWithLogitsLoss with CrossEntropyLoss, which applies a softmax over the scores to obtain probabilities for each class. For each example, the model predicts four scores, one for each label. The label with the highest score is selected using the argmax function. We evaluate the predictions of our model for each class using scikit-learn's classification_report, which handles the results of multiclass classification.
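A compact sketch of the model, loss, optimizer, and training loop, assuming X_train, y_train, X_test, and y_test are the tensors built in the previous step (dense float feature vectors and zero-based long class indices), and that vocabulary and labels are the lists created earlier; the learning rate and number of epochs are illustrative:

import torch
from torch import nn, optim
from sklearn.metrics import classification_report

n_classes = 4
model = nn.Linear(len(vocabulary), n_classes)
loss_func = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=1e-1)

for epoch in range(5):
    for i in torch.randperm(len(y_train)):
        # (1) ensure the gradients are set to zero
        model.zero_grad()
        # (2) predict four scores for this example (CrossEntropyLoss expects a batch dimension)
        output = model(X_train[i].unsqueeze(0))
        # (3) calculate the loss, (4) back-propagate, and (5) update the parameters
        loss = loss_func(output, y_train[i].unsqueeze(0))
        loss.backward()
        optimizer.step()

# pick the label with the highest score for each test example and evaluate
with torch.no_grad():
    y_pred = torch.argmax(model(X_test), dim=1)
print(classification_report(y_test.numpy(), y_pred.numpy(), target_names=labels))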
4.3 Summary

In this chapter, we used movie review and news article classification to illustrate the implementation of the previously described algorithms for the binary perceptron, binary logistic regression, and multiclass logistic regression. For the binary logistic regression, we made a direct comparison between the lower-level NumPy implementation and a higher-level version that made use of PyTorch. We hope that through this series of exercises the reader has noted several key takeaways. First, data preparation is important and should be done thoughtfully. Certain tasks (e.g., text normalization or sentence splitting) are going to be frequently needed if you continue with NLP, so using or creating generic functions can be very helpful. However, what works for one dataset and one language may not be suitable for another scenario. For example, in our case, we selected different tokenizers for each of our tasks to account for the different registers of English, and removed diacritics during normalization. Second, when it comes to implementing machine learning algorithms, it is often easier to use a higher-level library such as PyTorch than to code them in NumPy. For example, with the former, the gradients are calculated by the library, whereas in NumPy we have to code them ourselves. This becomes cumbersome quickly: even the derivative of the softmax is non-trivial. Third, PyTorch imposes a training structure that remains largely the same, regardless of what models are being trained. That is, at a high level, the same steps are always required: clearing the current gradients, predicting output scores for the provided inputs, calculating the loss, and optimizing. These features make PyTorch a very powerful and convenient deep learning library; we will continue to use it throughout the remainder of the book to implement more complex neural architectures.
#!/usr/bin/env python
# coding: utf-8

# # Binary Text Classification with
# # Logistic Regression Implemented with PyTorch and BCE Loss

# In[1]:

import random
import numpy as np
import torch
from tqdm.notebook import tqdm

# set this variable to a number to be used as the random seed
# or to None if you don't want to set a random seed
seed = 1234

if seed is not None:
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)

# The dataset is divided in two directories called `train` and `test`.
# These directories contain the training and testing splits of the dataset.

# In[2]:

get_ipython().system('ls -lh data/aclImdb/')

# Both the `train` and `test` directories contain two directories called `pos` and `neg` that contain text files with the positive and negative reviews, respectively.

# In[3]:

get_ipython().system('ls -lh data/aclImdb/train/')

# We will now read the filenames of the positive and negative examples.

# In[4]:

from glob import glob

pos_files = glob('data/aclImdb/train/pos/*.txt')
neg_files = glob('data/aclImdb/train/neg/*.txt')

print('number of positive reviews:', len(pos_files))
print('number of negative reviews:', len(neg_files))

# Now, we will use a [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html) to read the text files, tokenize them, acquire a vocabulary from the training data, and encode it in a document-term matrix in which each row represents a review, and each column represents a term in the vocabulary. Each element $(i,j)$ in the matrix represents the number of times term $j$ appears in example $i$.

# In[5]:

from sklearn.feature_extraction.text import CountVectorizer

# initialize CountVectorizer indicating that we will give it a list of filenames that have to be read
cv = CountVectorizer(input='filename')

# learn vocabulary and return sparse document-term matrix
doc_term_matrix = cv.fit_transform(pos_files + neg_files)
doc_term_matrix

# Note in the message printed above that the matrix is of shape (25000, 74894).
# In other words, it has 1,871,225,000 elements.
# However, only 3,445,861 elements were stored.
# This is because most of the elements in the matrix are zeros.
# The reason is that the reviews are short and most words in the english language don't appear in each review.
# A matrix that only stores non-zero values is called *sparse*.
#
# Now we will convert it to a dense numpy array:

# In[6]:

X_train = doc_term_matrix.toarray()
X_train.shape

# We will also create a numpy array with the binary labels for the reviews.
# One indicates a positive review and zero a negative review.
# The label `y_train[i]` corresponds to the review encoded in row `i` of the `X_train` matrix.

# In[7]:

# training labels
y_pos = np.ones(len(pos_files))
y_neg = np.zeros(len(neg_files))
y_train = np.concatenate([y_pos, y_neg])
y_train

# Now we will initialize our model, in the form of an array of weights `w` of the same size as the number of features in our dataset (i.e., the number of words in the vocabulary acquired by [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html)), and a bias term `b`.
# Both are initialized to zeros.

# In[8]:

n_examples, n_features = X_train.shape

# Now we will use the logistic regression learning algorithm to learn the values of `w` and `b` from our training data.

# In[9]:

import torch
from torch import nn
from torch import optim

lr = 1e-1
n_epochs = 10

model = nn.Linear(n_features, 1)
loss_func = nn.BCEWithLogitsLoss()
optimizer = optim.SGD(model.parameters(), lr=lr)

X_train = torch.tensor(X_train, dtype=torch.float32)
y_train = torch.tensor(y_train, dtype=torch.float32)

indices = np.arange(n_examples)
for epoch in range(10):
    n_errors = 0
    # randomize training examples
    np.random.shuffle(indices)
    # for each training example
    for i in tqdm(indices, desc=f'epoch {epoch+1}'):
        x = X_train[i]
        y_true = y_train[i]
        # make predictions
        y_pred = model(x)
        # calculate loss
        loss = loss_func(y_pred[0], y_true)
        # calculate gradients through back-propagation
        loss.backward()
        # optimize model parameters
        optimizer.step()
        # ensure gradients are set to zero
        model.zero_grad()

# The next step is evaluating the model on the test dataset.
# Note that this time we use the [`transform()`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html#sklearn.feature_extraction.text.CountVectorizer.transform) method of the [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html), instead of the [`fit_transform()`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html#sklearn.feature_extraction.text.CountVectorizer.fit_transform) method that we used above. This is because we want to use the learned vocabulary in the test set, instead of learning a new one.

# In[10]:

pos_files = glob('data/aclImdb/test/pos/*.txt')
neg_files = glob('data/aclImdb/test/neg/*.txt')
doc_term_matrix = cv.transform(pos_files + neg_files)
X_test = doc_term_matrix.toarray()
X_test = torch.tensor(X_test, dtype=torch.float32)
y_pos = np.ones(len(pos_files))
y_neg = np.zeros(len(neg_files))
y_test = np.concatenate([y_pos, y_neg])

# Using the model is easy: multiply the document-term matrix by the learned weights and add the bias.
# We use Python's `@` operator to perform the matrix-vector multiplication.

# In[11]:

y_pred = model(X_test) > 0

# Now we print an evaluation of the prediction results using scikit-learn's [`classification_report()`](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.classification_report.html) function.

# In[12]:

def binary_classification_report(y_true, y_pred):
    # count true positives, false positives, true negatives, and false negatives
    tp = fp = tn = fn = 0
    for gold, pred in zip(y_true, y_pred):
        if pred == True:
            if gold == True:
                tp += 1
            else:
                fp += 1
        else:
            if gold == False:
                tn += 1
            else:
                fn += 1
    # calculate precision and recall
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    # calculate f1 score
    fscore = 2 * precision * recall / (precision + recall)
    # calculate accuracy
    accuracy = (tp + tn) / len(y_true)
    # number of positive labels in y_true
    support = sum(y_true)
    return {
        "precision": precision,
        "recall": recall,
        "f1-score": fscore,
        "support": support,
        "accuracy": accuracy,
    }

# In[13]:

binary_classification_report(y_test, y_pred)
4 Implementing Text Classification Using Perceptron and Logistic Regression In the previous chapters we have discussed the theory behind the perceptron and logistic regression, including mathematical explanations of how and why they are able to learn from examples. In this chapter we will transition from math to code. Specifically, we will discuss how to implement these models in the Python programming language. All the code that we will introduce throughout this book is available online as well: http://clulab.github.io/gentlenlp/. The reader who is not familiar with the Python programming language is encouraged to read first Appendix A, for a brief introduction to the language, and Appendix B, for a discussion on how computers encode and preprocess text. Once done, please return here. To get a better understanding of how these algorithms work under the hood, we will start by implementing them from scratch. However, as the book progresses, we will introduce some of the popular tools and libraries that make Python the language of choice for machine learning, e.g., PyTorch,1 and Hugging Face’s transformers.2 The code for all the examples in the book is provided in the form of Jupyter notebooks.3 Important fragments of these notebooks will be presented in the implementation chapters so that the reader has the whole picture just by reading the book. However, we strongly encourage you to download the notebooks and execute them yourself. We also encourage you to modify them to conduct your own experiments! 1 https://pytorch.org
2 https://huggingface.co 3 https://jupyter.org/ 55 56 Implementing Text Classification Using Perceptron and LR 4.1 Binary Classification We begin this chapter with binary classification. That is, we aim to train classifiers that assign one of two labels to a given text. As the example for this task, we will train a review classifier using the the Large Movie Review Dataset (Maas et al., 2011).4 We tackle this task by implementing first a binary perceptron classifier, followed by a binary logistic regression one. We will implement the latter both from scratch as well as using PyTorch, so the reader has a clearer understanding on how PyTorch works “under the hood.” 4.1.1 Large Movie Review Dataset This dataset contains movie reviews and their associated scores (between 1 and 10) as provided by IMDb.5 converted these scores to binary labels by assigning each review a positive or negative label if the review score was above 6 or below 5, respectively. Reviews with scores 5 and 6 were considered too neutral and thus excluded. We follow the same protocol in this chapter. The dataset is divided in two even partitions called train and test, each containing 25,000 reviews. The dataset also provides additional unlabeled reviews, but we will not use those here. Each partition contains two directories called pos and neg where the positive and negative examples are stored. Each review is stored in an independent text file, whose name is composed of an id unique to the partition and the score associated with the review, separated by an underscore. An example of a positive and a negative review is shown in Table 4.1. 4.1.2 Bag-of-words Model As discussed in Section 2.2, we will encode the text to classify as a bag of words. That is, we encode each review as a list of numbers, with each position in the list corresponding to a word in our vocabulary, and the value stored in that position corresponding to the number of times the word appears in the review. For example, say we want to encode the following two reviews: 4 https://ai.stanford.edu/~amaas/data/sentiment/ 5 https://www.imdb.com/ Maas et al. 4.1 Binary Classification 57 Table 4.1 Two examples of movie reviews from IMDb. The first is a positive review of the movie Puss in Boots (1988). The second is a negative review of the movie Valentine (2001). These reviews can be found at https://www.imdb.com/review/rw0606396/ and https://www.imdb.com/review/rw0721861/, respectively. Filename Score Binary Label train/pos/24_8.txt 8/10 Positive train/neg/141_3.txt 3/10 Negative Review Text Although this was obviously a low-budget production, the performances and the songs in this movie are worth seeing. One of Walken’s few musical roles to date. (he is a marvelous dancer and singer and he demonstrates his acrobatic skills as well - watch for the cartwheel!) Also starring Jason Connery. A great children’s story and very likable characters. This stalk and slash turkey manages to bring nothing new to an increasingly stale genre. A masked killer stalks young, pert girls and slaughters them in a variety of gruesome ways, none of which are particularly inventive. It’s not scary, it’s not clever, and it’s not funny. So what was the point of it? Review 1: Review 2: "I liked the movie. My friend liked it too. " "I hated it. Would not recommend. " First, we need to create a vocabulary that maps each word to an id that uniquely identifies it. 
Each of these numbers will be used as the index in a list, so they must start at zero and grow by one for each word in the vocabulary. For example, one possible vocabulary that encodes the previous reviews is: {'would': 0, 'hated': 1, 58 Implementing Text Classification Using Perceptron and LR 'my': 2, 'liked': 3, 'not': 4, 'it': 5, 'movie': 6, 'recommend': 7, 'the': 8, 'I': 9, 'too': 10, 'friend': 11} Using this mapping, we can encode the two reviews as follows: Review1: [0,0,1,2,0,1,1,0,1,1,1,1] Review2: [1,1,0,0,1,1,0,1,0,1,0,0] Note that the word liked (fourth position) in the first review has a value of two. This is because this word appears twice in that review. This is a small example with a vocabulary of only 12 terms. Of course, the same process needs to be implemented for our whole training dataset. For this purpose we will use scikit-learn’s CountVectorizer class.6 Using the CountVectorizer class simplifies things, allowing us to get started quickly with a bag-of-words approach. However, note that it makes several simplifying assumptions (e.g., text is lowercased, and punctuation and single character tokens are removed). Some of these may not be adequate to other tasks. First, we need to obtain the filenames for the reviews in the training set: Once we have acquired the filenames for the training reviews, we need
to read them using the CountVectorizer. In order for the CountVectorizer to open and read the files for us, we make use of the input='filename' constructor parameter (otherwise it would expect the string content directly). The CountVectorizer provides three methods that will be use-
ful for us: a method called fit() that is used to acquire the vocabulary,
a method transform() that converts the text into the bag-of-words representation, and a method fit_transform() that conveniently acquires the vocabulary and transforms the data in a single step. The resulting object is referred to as a document-term matrix, where each row corre- 6 https://scikitlearn.org/stable/modules/generated/sklearn.feature_ extraction.text.CountVectorizer.html 4.1 Binary Classification 59 sponds to a document, and each column corresponds to a term in the vocabulary. As the output above indicates, the resulting matrix has 25,000 rows (one for each review), and 74,849 columns (one for each term). Also you may note that this matrix is sparse, with 3,445,861 stored elements. A regular matrix of shape 25,000×74,849 would have 1,871,225,000 elements. However, most of the elements in a document-term matrix are zeros because only a few words from the vocabulary appear in each document. A sparse matrix takes advantage of this fact by storing only the non-zero cells in order to reduce the memory required to store it. Thus, sparse matrices are convenient, especially when dealing with lots of data. Nevertheless, to simplify the downstream code in this example, we will convert it into a dense matrix, i.e., a regular two-dimensional NumPy array. Finally, we also need the labels of the reviews. We assign a label of one to positive reviews, and a label of zero to negative ones. Note that the first half of the reviews are positive and the second half are negative. The label at the ith position of the y_train array corresponds to the review encoded in the ith row of the X_train matrix. 4.1.3 Perceptron Now that we have defined our task and the data processing pipeline, we will implement a perceptron classifier that classifies the movie reviews as positive or negative. The entire code discussed in this section is available in the chap4_perceptron notebook. Recall from Section 2.4 that the perceptron is composed of a weight vector w and a bias term b. These will be represented as a NumPy array w of the same length as our document vectors, and a variable b for the bias term. Both will be initialized with zeros. The parameters w and b are learned through the following algorithm, which implements Algorithm 2 from Chapter 2: There are a couple of details to point out. Line 3 of Algorithm 2 indicates that we need to repeat the training loop until convergence. Theoretically, convergence is defined as predicting all training examples correctly. This is an ambitious requirement, which is not always possible in practice, so in this code we also include a stop condition if we reach a maximum number of epochs. Another crucial difference between our implementation here and the theoretical Algorithm 2, is that we randomize the order in which the training examples are seen at the beginning of 60 Implementing Text Classification Using Perceptron and LR each epoch. This simple (but highly recommended!) change is necessary to avoid the introduction of spurious biases due to the arbitrary order of the examples in the original training partition.7 We accomplish this by storing the indices corresponding to the X_train matrix rows in a NumPy array, and shuffling these indices at the beginning of each epoch. We shuffle the indices instead of the examples so that we can preserve the mapping between examples and labels. The training loop aligns closely with Algorithm 2. 
We start by iterating over each example in our training data, storing the current example in the variable x,8 and its corresponding label in the variable y_true. Next, we compute the perceptron decision function shown in Algorithm 1. Note that NumPy (as well as PyTorch) uses Python’s @ operator to indicate vector or matrix multiplication, depending on its operand types. Here we use it to calculate the dot product of the example x and the weights w. To this we add the bias b to obtain the predicted score, whose sign is used to assign a positive or negative predicted label. If the prediction is correct, then no update is needed, and we can move on to the next training example. However, if the prediction is incorrect, then we need to adjust w and b, as described in Algorithm 2. Sidebar 4.1 The tqdm function This is our first exposure to the tqdm function. tqdm is a progress bar that “make your loops show a smart progress meter.”9 The name tqdm comes from the Arabic word taqaddum which can mean “progress.” Using tqdm is as simple as wrapping it around the collection to be traversed. After training, we evaluate the model’s performance on the heldout test partition. The test data is loaded similarly to the training partition, but with one notable difference; we use CountVectorizer’s transform() method instead of the fit_transform() method so that the vocabulary is not adjusted for the test data. We won’t show here the loading of the test partition since it is so similar to the code already shown, but it is available in the Jupyter notebook that accompanies this section. . 7   As an extreme example, consider a dataset where all the positive examples appear first in the training partition. This would cause the perceptron to artificially inflate the weights of the features that occur in these examples, a situation from which the learning algorithm may struggle to recover. 
 . 8  We use typewriter font when we discuss variables in the code, to distinguish code from the theoretical discussion in the other chapters. 
 9 https://github.com/tqdm/tqdm 4.1 Binary Classification 61 Using the model to assign labels to all the test data is easily done in one step – we simply multiply the entire test data document-term matrix by the previously learned weights and add the bias. Scores greater than zero indicate a positive review, and those less than zero are negative. At this point we can evaluate the classifier’s performance, which we will do using precision, recall, and F1 scores for binary classification (described in Section 2.3). For this purpose, we implement a function called binary_classification_report that computes these metrics and returns them as a dictionary: We call this function to compare the predicted labels to the true labels, and obtain the evaluation scores. Our F1 score here is 86.8%, which is much higher than the baseline that assigns labels randomly, which yields an F1 score of about 50%. This is a good result, especially considering the simplicity of the perceptron! In the next sections and chapters, we will discuss a battery of strategies to considerably improve this performance. 4.1.4 Binary Logistic Regression from Scratch Using the same task, dataset, and evaluation, we will now implement a logistic regression classifier, as described in Algorithm 5 from Chapter 3. To give the reader hands-on experience with the implementation of the gradient calculations for logistic regression, we start by implementing it from scratch using NumPy. All the code shown in this section is available in the chap4_logistic_regression_numpy notebook. In the perceptron implementation, we represented the weights and the bias as two different variables. Here, however, we will use a different approach that will allow us to unify them into a single vector variable. Specifically, we take advantage of the similarity between the derivative of the cost function with respect to the weights (Equation 3.14) and the derivative of the cost with respect to the bias (Equation 3.15). d Ci(w, b) = (σi − yi)xij (3.14 revisited) dwj d Ci(w, b) = σi − yi (3.15 revisited) db Note that the two derivative formulas are identical except that the former has a multiplication by xij, while the latter does not. However, 62 Implementing Text Classification Using Perceptron and LR since σi − yi = (σi − yi)1 we can multiply the derivative of the cost with respect to the bias by one without changing the semantics. This gives an opportunity for combining the computations, doing them both in a single pass. The idea is that we can treat the bias as a weight corresponding to a feature that always has a value of one. As can be seen above, we created a NumPy array of ones of the same length as the number of examples in our training set (i.e., the number of rows in the data matrix). Then we add this array as a new column to the data matrix, using NumPy’s column_stack function. Next, we need to initialize our model. This time we will use a single NumPy array w of the same length as the number of columns in the data matrix. The weight vector w is initialized randomly with values between 0 and 1: Before implementing the learning algorithm, we need an implementation of the logistic function. 
Recall that the logistic function is σ(x) = 1 (3.1 revisited) 1+e−x This function can be easily implemented in NumPy as follows: However, this naive implementation may produce the following warning during training: The term overflow indicates that the result of evaluating exp(-x) is a number so large that it can’t be represented by a float (specifically, we’re using float64 numbers). We will avoid this issue by not calling exp with values that will overflow. NumPy provides the function finfo that can be consulted to find the limits of floating point numbers: The log of the largest floating point number is the largest number for which exp() will not overflow, so we will use it as a threshold to filter out problematic values: We now have everything we need to implement Algorithm 4. The steps to follow for each example are: (1) use the model to make a prediction, (2) calculate the gradient of the loss function with respect to the model parameters, and (3) update the model parameters using the gradient. The size of the update is controlled by the learning rate. Once the model has been trained, we evaluate it on the test dataset using our binary_classification_report function from the previous section. Loading and preprocessing the test dataset follows the same 4.1 Binary Classification 63 steps as with the previous classifier. We omit the code for brevity. These are the results: The performance is comparable with that of the perceptron. The difference in F1 scores between the two classifiers (84.9% here vs. 86.8% for the perceptron) is not significant. Classifier parity is probably attributable to the fact that the signal distinguishing the two classes being easy to learn and the simpler perceptron training algorithm being sufficient in this case. Nevertheless, this task is useful in showing how to implement the logistic regression model from scratch, i.e., by implementing the gradient calculation and parameter updates manually. Next, we will implement the same model again using PyTorch, highlighting how this machine learning library simplifies the process. 4.1.5 Binary Logistic Regression Utilizing PyTorch While it is fairly straightforward to compute the derivatives for logistic regression and implement then directly in NumPy, this will not scale well to arbitrary neural architectures. Fortunately, there are libraries that automate the computation of the derivatives of the cost function (assuming it is differentiable!) for any neural network, and use the resulting gradients to perform gradient descent or other more sophisticated optimization procedures. To this end, we will use the PyTorch deep learning library10. The corresponding notebook for this section is chap4_logistic_regression_pytorch_bce. Our model for logistic regression corresponds to PyTorch’s Linear layer. When we instantiate this layer, we specify the size of the inputs (the size of our vocabulary) and the size of the output, i.e., the number of output neurons (which is one because we’re doing binary classification). The loss function we use is the binary cross-entropy loss (see Chapter 3), which is implemented as BCEWithLogitsLoss in PyTorch. In PyTorch, the gradients obtained from the loss function are applied to the model by an optimizer object, which implements and applies an optimization algorithm. Here we will use the vanilla stochastic gradient descent optimizer; we set its learning rate to 0.1. This is equivalent to the discussion in Section 3.2. 
Before implementing the learning algorithm, we need an implementation of the logistic function. Recall that the logistic function is

\sigma(x) = \frac{1}{1 + e^{-x}}  (3.1 revisited)

This function can be easily implemented in NumPy; however, a naive implementation that evaluates 1/(1 + e^{-x}) directly may produce an overflow warning during training. The term overflow indicates that the result of evaluating exp(-x) is a number so large that it can’t be represented by a float (specifically, we’re using float64 numbers). We will avoid this issue by not calling exp with values that will overflow. NumPy provides the function finfo that can be consulted to find the limits of floating point numbers. The log of the largest floating point number is the largest number for which exp() will not overflow, so we will use it as a threshold to filter out problematic values:
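One way to guard against the overflow, assuming the scores passed to the function are scalars, is sketched below; the exact strategy used in the notebook may differ slightly.

import numpy as np

# largest value z for which np.exp(z) still fits in a float64
max_exp = np.log(np.finfo(np.float64).max)

def logistic(x):
    # when x is very negative, exp(-x) would overflow; the true value is ~0 there
    if -x > max_exp:
        return 0.0
    return 1 / (1 + np.exp(-x))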
We now have everything we need to implement Algorithm 4. The steps to follow for each example are: (1) use the model to make a prediction, (2) calculate the gradient of the loss function with respect to the model parameters, and (3) update the model parameters using the gradient. The size of the update is controlled by the learning rate.
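Putting these steps together, a stochastic gradient descent loop for this model could look like the sketch below; it reuses X_train, w, n_examples, and logistic from above, and the learning rate and number of epochs are assumptions rather than necessarily the values used in the notebook.

import numpy as np
from tqdm.notebook import tqdm

lr = 1e-1       # learning rate (assumed)
n_epochs = 10   # number of passes over the training data (assumed)

indices = np.arange(n_examples)
for epoch in range(n_epochs):
    # randomize the order of the training examples
    np.random.shuffle(indices)
    for i in tqdm(indices, desc=f'epoch {epoch+1}'):
        x = X_train[i]
        y_true = y_train[i]
        # (1) predict the probability of the positive class
        y_pred = logistic(x @ w)
        # (2) gradient of the loss for this example (Equations 3.14 and 3.15)
        gradient = (y_pred - y_true) * x
        # (3) update the parameters; the bias is the last element of w
        w = w - lr * gradient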
Once the model has been trained, we evaluate it on the test dataset using our binary_classification_report function from the previous section. Loading and preprocessing the test dataset follows the same steps as with the previous classifier, so we omit that code for brevity. The performance is comparable with that of the perceptron. The difference in F1 scores between the two classifiers (84.9% here vs. 86.8% for the perceptron) is not significant. This parity is probably attributable to the fact that the signal distinguishing the two classes is easy to learn, which makes the simpler perceptron training algorithm sufficient in this case. Nevertheless, this task is useful in showing how to implement the logistic regression model from scratch, i.e., by implementing the gradient calculation and parameter updates manually. Next, we will implement the same model again using PyTorch, highlighting how this machine learning library simplifies the process.

4.1.5 Binary Logistic Regression Utilizing PyTorch

While it is fairly straightforward to compute the derivatives for logistic regression and implement them directly in NumPy, this approach will not scale well to arbitrary neural architectures. Fortunately, there are libraries that automate the computation of the derivatives of the cost function (assuming it is differentiable!) for any neural network, and use the resulting gradients to perform gradient descent or other more sophisticated optimization procedures. To this end, we will use the PyTorch deep learning library.10 The corresponding notebook for this section is chap4_logistic_regression_pytorch_bce.

10 https://pytorch.org/

Our model for logistic regression corresponds to PyTorch’s Linear layer. When we instantiate this layer, we specify the size of the inputs (the size of our vocabulary) and the size of the output, i.e., the number of output neurons (which is one, because we’re doing binary classification). The loss function we use is the binary cross-entropy loss (see Chapter 3), which is implemented as BCEWithLogitsLoss in PyTorch. In PyTorch, the gradients obtained from the loss function are applied to the model by an optimizer object, which implements and applies an optimization algorithm. Here we will use the vanilla stochastic gradient descent optimizer, and set its learning rate to 0.1. This is equivalent to the discussion in Section 3.2.
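Following the accompanying notebook, the model, loss function, and optimizer can be instantiated along these lines, where n_features is the size of the vocabulary acquired by the CountVectorizer:

import torch
from torch import nn, optim

lr = 1e-1

# a single output neuron that produces the score (logit) of the positive class
model = nn.Linear(n_features, 1)
# binary cross-entropy loss that operates directly on logits
loss_func = nn.BCEWithLogitsLoss()
# vanilla stochastic gradient descent
optimizer = optim.SGD(model.parameters(), lr=lr)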
Similarly to the manual implementation, the steps required to train the model for a given training example are: (1) ensure the gradients are set to zeros, (2) apply the model to obtain a prediction, (3) calculate the loss, (4) compute the gradient of the loss by back-propagation, and (5) update the model parameters.
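Along the lines of the accompanying notebook, the training loop can be written as the sketch below. Here X_train and y_train are assumed to be the original document-term matrix and labels (without the extra bias column used in the NumPy version, since Linear already includes its own bias), converted to tensors.

import numpy as np
from tqdm.notebook import tqdm

n_epochs = 10   # number of passes over the training data (assumed)

X_train = torch.tensor(X_train, dtype=torch.float32)
y_train = torch.tensor(y_train, dtype=torch.float32)

indices = np.arange(len(X_train))
for epoch in range(n_epochs):
    np.random.shuffle(indices)
    for i in tqdm(indices, desc=f'epoch {epoch+1}'):
        x = X_train[i]
        y_true = y_train[i]
        # (1) ensure the gradients are set to zeros
        model.zero_grad()
        # (2) apply the model to obtain a prediction
        y_pred = model(x)
        # (3) calculate the loss
        loss = loss_func(y_pred[0], y_true)
        # (4) compute the gradient of the loss by back-propagation
        loss.backward()
        # (5) update the model parameters
        optimizer.step()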
Recall that in our previous implementation everything was hardcoded: applying the model, computing the gradients, and optimizing the model parameters. Here, however, the implementation of the logistic regression is expressed at a higher level of abstraction. This means that we are describing the logical steps without specifying a particular implementation. Instead, implementation details are the responsibility of the chosen model, loss function, and optimizer. Thus, we could even choose a different model, loss function, and/or optimizer, and use the same training steps with little or no modification. This decoupling of the training logic from the implementation details is one of the main advantages of libraries such as PyTorch.

As shown in the code above, calling the model as a function, with the feature vectors as inputs, produces the predicted scores. Once again, a positive score corresponds to a positive label. When we evaluate this implementation on the test dataset, we obtain results that are in line with our previous models.
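In the notebook, this evaluation amounts to something along these lines, where X_test and y_test are built the same way as for the perceptron (using the CountVectorizer’s transform() method rather than fit_transform()):

X_test = torch.tensor(X_test, dtype=torch.float32)
# positive scores correspond to the positive class
y_pred = model(X_test) > 0
binary_classification_report(y_test, y_pred)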
Writing the perceptron and the logistic regression from scratch is a good exercise, as it exposes us to the fundamentals of implementing machine learning algorithms. However, this becomes cumbersome for more complex neural architectures. For this reason, from this point on, we will use PyTorch for all our coding examples.

4.2 Multiclass Classification

So far in this chapter we have discussed implementing binary classifiers. Next, we will modify these binary classifiers to perform multiclass classification, following the discussion in Section 3.5.

4.2.1 AG News Dataset

Before explaining the actual training/testing code, we have to choose a new dataset that is suitable for multiclass classification. To this end, we will use the AG News Classification Dataset (Zhang et al., 2015), a subset of the larger AG corpus of news articles collected from thousands of different news sources.11 The classification dataset consists of four classes, and the data is equally balanced across all classes (30,000 articles per class for training, and 1,900 articles per class for testing). The goal of the task is to classify each article as one of the four classes: World, Sports, Business, or Sci/Tech.

11 http://groups.di.unipi.it/~gulli/AG_corpus_of_news_articles.html
4.2.2 Preparing the Dataset

The AG News Dataset is distributed as two CSV files (one for training and one for testing), each containing three columns: the class index, the title, and the description. The dataset also provides a text file that maps the above class indexes to more descriptive class labels. Because of the tabular nature of the dataset, pandas, a Python library for tabular data analysis,12 is a natural choice for loading and transforming it. To this end, our Jupyter notebook (chap4_multiclass_logistic_regression) demonstrates the sequence of steps required to handle the data, as well as model training and evaluation. First, we show how to load the CSV, add column names, and inspect the result:

12 https://pandas.pydata.org
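A sketch of this loading step is shown below; the file path is an assumption about where the CSV was unpacked, not necessarily the path used in the notebook.

import pandas as pd

# the CSV files have no header row, so we supply the column names ourselves
train_df = pd.read_csv(
    'data/ag_news_csv/train.csv',
    header=None,
    names=['class index', 'title', 'description'],
)
train_df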
[Dataframe output: 120,000 rows × 3 columns (class index, title, description), showing the first and last few articles, e.g., Wall St. Bears Claw Back Into the Black (Reuters).]
Since the class labels themselves are in a separate file, we manually add them to the pandas data structure (called dataframe in pandas’ terminology) to increase the interpretability of the data. We use the class index column as a starting point, and use its map method to create a new column with the corresponding labels (technically a new Series object) that is added to the dataframe using its insert method, which allows us to insert the column in a specific position. Note that the label indices are one-based, so we subtract one to align them with their labels.
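One way to implement this step is sketched below; the label list follows the order given in the dataset’s label file.

# labels corresponding to class indices 1-4
labels = ['World', 'Sports', 'Business', 'Sci/Tech']

# class indices are one-based, so subtract one before looking up the label
classes = train_df['class index'].map(lambda i: labels[i - 1])

# insert the new column right after the class index column
train_df.insert(1, 'class', classes)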
[Dataframe output: 120,000 rows × 4 columns; the new class column contains labels such as Business, World, and Sports.]
Next we will preprocess the text. First we lowercase the title and description, and then we concatenate them into a single string. Then we remove some spurious backslashes from the text. Once this is done, the preprocessed text is added to the dataframe as a new column. Note that pandas allows these steps to be applied to all rows simultaneously.
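A sketch of this preprocessing is shown below, assuming that replacing the backslashes with spaces is an acceptable way to remove them:

# lowercase and concatenate title and description, then drop spurious backslashes
text = train_df['title'].str.lower() + ' ' + train_df['description'].str.lower()
train_df['text'] = text.str.replace('\\', ' ', regex=False)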
[Dataframe output: 120,000 rows × 5 columns; the new text column contains the lowercased, concatenated title and description.]
At this point, the text is ready to be tokenized. For this purpose we will use NLTK’s word_tokenize function. This function can be applied to the whole column at once using the pandas map function, which returns a new column that we add to the dataframe. However, here we actually use the progress_map function, which provides a visual progress bar. This visual feedback is especially helpful for tasks that take more time to complete.
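This step might look as follows; note that progress_map becomes available only after calling tqdm.pandas(), and that NLTK’s punkt tokenizer models must have been downloaded beforehand.

from nltk.tokenize import word_tokenize
from tqdm.notebook import tqdm

# register progress_map (and progress_apply) on pandas objects
tqdm.pandas()

# tokenize every row, with a progress bar
train_df['tokens'] = train_df['text'].progress_map(word_tokenize)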
[Dataframe output: 120,000 rows × 6 columns; the new tokens column contains the list of tokens for each article, e.g., [wall, st., bears, claw, back, ...].]
From the tokens we just created, we then create a vocabulary for our corpus. Here, we only keep the words that occur at least 10 times, decreasing the memory needed and reducing the likelihood that our vocabulary contains noisy tokens. Note that each row in the tokens column contains a list of tokens. In order to create the vocabulary, we will need to convert this Series of lists of tokens into a Series of tokens using the explode() pandas method. Then we will use the value_counts() method to create a Series object in which the index contains the tokens and the values are the number of times they appear in the corpus. The next step is removing the tokens with a count lower than our chosen threshold. Finally, we create a list with the remaining tokens, as well as a dictionary that maps tokens to token ids (i.e., the index of the token in the list). We include in the vocabulary a special token [UNK] that will be used as a placeholder for tokens that do not appear in our vocabulary after the frequency pruning.
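A sketch of this vocabulary construction is shown below; placing [UNK] at position zero is our choice, not necessarily the notebook’s.

threshold = 10

# flatten the lists of tokens into one long Series and count the occurrences
counts = train_df['tokens'].explode().value_counts()
# drop the tokens below the frequency threshold
counts = counts[counts >= threshold]

# the vocabulary is the list of surviving tokens, plus the [UNK] placeholder
vocabulary = ['[UNK]'] + counts.index.tolist()
token_to_id = {token: i for i, token in enumerate(vocabulary)}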
Using this vocabulary, we construct a feature vector for each news article in the corpus. This feature vector will be encoded as a dictionary, with keys corresponding to token ids, and values corresponding to the number of times the token appears in the article. As above, the feature vectors will be stored as a new column in the dataframe.
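For example, a helper along these lines could build the dictionaries; make_feature_vector is a hypothetical name, not necessarily the one used in the notebook.

from collections import Counter

unk_id = token_to_id['[UNK]']

def make_feature_vector(tokens):
    # map each token to its id (or to [UNK]) and count how often each id occurs
    ids = [token_to_id.get(token, unk_id) for token in tokens]
    return dict(Counter(ids))

train_df['features'] = train_df['tokens'].progress_map(make_feature_vector)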
[Dataframe output: 120,000 rows × 7 columns; the new features column maps token ids to counts, e.g., {427: 2, 563: 1, 1607: 1, ...}.]
The final preprocessing step is converting the features and the class indices into PyTorch tensors. Recall that we need to subtract one from the class indices to make them zero-based. At this point, the data is fully processed and we are ready to begin training.
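The notebook’s exact tensor representation may differ; one simple possibility is to keep the labels as a tensor and densify each feature dictionary into a vector on demand, for instance:

import torch

vocabulary_size = len(vocabulary)

def densify(features):
    # turn a {token_id: count} dictionary into a dense float vector
    x = torch.zeros(vocabulary_size)
    for token_id, count in features.items():
        x[token_id] = count
    return x

# class indices in the CSV are one-based; make them zero-based
y_train = torch.tensor(train_df['class index'].values - 1, dtype=torch.long)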
4.2.3 Multiclass Logistic Regression Using PyTorch

The model itself is a single linear layer whose input size corresponds to the size of our vocabulary, and whose output size corresponds to the number of classes in our corpus. PyTorch’s Linear layer includes a bias by default, so there is no need to handle that manually the way we did for our perceptron example. The code for training this model (which implements Algorithm 6) is almost identical to that of the binary logistic regression. However, since we have to calculate a score for each of the four different classes, we need to replace the previous BCEWithLogitsLoss with CrossEntropyLoss, which applies a softmax over the scores to obtain probabilities for each class. For each example, the model predicts four scores – one for each label. The label with the highest score is selected using the argmax function. We evaluate the predictions of our model for each class using scikit-learn’s classification_report, which handles the results of multiclass classification.
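A sketch of the multiclass setup and of a single training step is shown below; the learning rate is an assumption, densify is the hypothetical helper from above, and the loop around the training step mirrors the binary case.

import torch
from torch import nn, optim

n_classes = 4
model = nn.Linear(vocabulary_size, n_classes)
loss_func = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)

# one training step for the first example
x = densify(train_df['features'].iloc[0])
y_true = y_train[0]
model.zero_grad()
scores = model(x)                                  # four scores, one per class
loss = loss_func(scores.unsqueeze(0), y_true.unsqueeze(0))
loss.backward()
optimizer.step()

# at prediction time, the label with the highest score wins
y_pred = torch.argmax(scores)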
4.3 Summary

In this chapter, we used movie review and news article classification to illustrate the implementation of the previously described algorithms for the binary perceptron, binary logistic regression, and multiclass logistic regression. For the binary logistic regression, we made a direct comparison between the lower-level NumPy implementation and a higher-level version that made use of PyTorch.

We hope that through this series of exercises the reader has noted several key takeaways. First, data preparation is important and should be done thoughtfully. Certain tasks (e.g., text normalization or sentence splitting) are going to be frequently needed if you continue with NLP, so using or creating generic functions can be very helpful. However, what works for one dataset and one language may not be suitable for another scenario. For example, in our case, we selected different tokenizers for each of our tasks to account for the different registers of English, as well as removing diacritics during normalization. Second, when it comes to implementing machine learning algorithms, it is often easier to use a higher-level library such as PyTorch instead of NumPy.
For example, with the former, the gradients are calculated by the library, whereas in NumPy we have to code them ourselves, which quickly becomes cumbersome; even the derivative of the softmax, for example, is non-trivial. Third, PyTorch imposes a training structure that remains largely the same, regardless of what models are being trained. That is, at a high level, the same steps are always required: clearing the current gradients, predicting output scores for the provided inputs, calculating the loss, and optimizing. These features make PyTorch a very powerful and convenient deep learning library; we will continue to use it throughout the remainder of the book to implement more complex neural architectures.
Recall that the logistic function is σ(x) = 1 (3.1 revisited) 1+e−x This function can be easily implemented in NumPy as follows: However, this naive implementation may produce the following warning during training: The term overflow indicates that the result of evaluating exp(-x) is a number so large that it can’t be represented by a float (specifically, we’re using float64 numbers). We will avoid this issue by not calling exp with values that will overflow. NumPy provides the function finfo that can be consulted to find the limits of floating point numbers: The log of the largest floating point number is the largest number for which exp() will not overflow, so we will use it as a threshold to filter out problematic values: We now have everything we need to implement Algorithm 4. The steps to follow for each example are: (1) use the model to make a prediction, (2) calculate the gradient of the loss function with respect to the model parameters, and (3) update the model parameters using the gradient. The size of the update is controlled by the learning rate. Once the model has been trained, we evaluate it on the test dataset using our binary_classification_report function from the previous section. Loading and preprocessing the test dataset follows the same 4.1 Binary Classification 63 steps as with the previous classifier. We omit the code for brevity. These are the results: The performance is comparable with that of the perceptron. The difference in F1 scores between the two classifiers (84.9% here vs. 86.8% for the perceptron) is not significant. Classifier parity is probably attributable to the fact that the signal distinguishing the two classes being easy to learn and the simpler perceptron training algorithm being sufficient in this case. Nevertheless, this task is useful in showing how to implement the logistic regression model from scratch, i.e., by implementing the gradient calculation and parameter updates manually. Next, we will implement the same model again using PyTorch, highlighting how this machine learning library simplifies the process. 4.1.5 Binary Logistic Regression Utilizing PyTorch While it is fairly straightforward to compute the derivatives for logistic regression and implement then directly in NumPy, this will not scale well to arbitrary neural architectures. Fortunately, there are libraries that automate the computation of the derivatives of the cost function (assuming it is differentiable!) for any neural network, and use the resulting gradients to perform gradient descent or other more sophisticated optimization procedures. To this end, we will use the PyTorch deep learning library10. The corresponding notebook for this section is chap4_logistic_regression_pytorch_bce. Our model for logistic regression corresponds to PyTorch’s Linear layer. When we instantiate this layer, we specify the size of the inputs (the size of our vocabulary) and the size of the output, i.e., the number of output neurons (which is one because we’re doing binary classification). The loss function we use is the binary cross-entropy loss (see Chapter 3), which is implemented as BCEWithLogitsLoss in PyTorch. In PyTorch, the gradients obtained from the loss function are applied to the model by an optimizer object, which implements and applies an optimization algorithm. Here we will use the vanilla stochastic gradient descent optimizer; we set its learning rate to 0.1. This is equivalent to the discussion in Section 3.2. 
Similarly to the manual implementation, the steps required to train the model for a given training example are: (1) ensure the gradients are set to zeros, (2) apply the model to obtain a prediction, (3) calculate 10 https://pytorch.org/ 64 Implementing Text Classification Using Perceptron and LR the loss, (4) compute the gradient of the loss by back-propagation, and (5) update the model parameters. Recall that in our previous implementation everything was hardcoded: applying the model, computing the gradients, and optimizing the model parameters. Here, however, the implementation of the logistic regression is expressed at a higher level of abstraction. This means that we are describing the logical steps without specifying a particular implementation. Instead, implementation details are the responsability of the chosen model, loss function, and optimizer. Thus, we could even choose a different model, loss function, and/or optimizer, and use the same training steps with little or no modification. This decoupling of the training logic from the implementation details is one of the main advantages of libraries such as PyTorch. As shown in the code above, calling the model as a function, with the feature vectors as inputs, produces the predicted scores. Once again, a positive score corresponds to a positive label. When we evaluate this implementation on the test dataset, we obtain results that are in line with our previous models: Writing the perceptron and the logistic regression from scratch is a good exercise, as it exposes us to the fundamentals of implementing machine learning algorithms. However, this becomes cumbersome for more complex neural architectures. For this reason, from this point on, we will use PyTorch for all our coding examples. 4.2 Multiclass Classification So far, in this chapter we have discussed implementing binary classifiers. Next, we will modify these binary classifiers to perform multiclass classification, following the discussion in Section 3.5. 4.2.1 AG News Dataset Before explaining the actual training/testing code, we have to choose a new dataset that is suitable for multiclass classification. To this end, we will use the AG News Classification Dataset (Zhang et al., 2015), a subset of the larger AG corpus of news articles collected from thousands of different news sources.11 The classification dataset consists of four 11 http://groups.di.unipi.it/~gulli/AG_corpus_of_news_articles.html 4.2 Multiclass Classification 65 classes, and the data is equally balanced across all classes (30,000 articles per class for train, and 1,900 articles per class for testing). The goal of the task is to classify each article as one of the four classes: World, Sports, Business, or Sci/Tech. 4.2.2 Preparing the Dataset The AG News Dataset is distributed as two CSV files (one for training and one for testing), each containing three columns: the class index, the title, and the description. The dataset also provides a text file that maps the above class indexes to more descriptive class labels. Because of the tabular nature of the dataset, pandas, a Python library
for tabular data analysis,12 is a natural choice for loading and transform-
ing it. To this end, our Jupyter notebook (chap4_multiclass_logistic_regression) demonstrates the sequence of steps required to handle the data, as well
as model training and evaluation. First, we show how to load the CSV,
add column names, and inspect the result: class index . 0  3 
 . 1  3 
 . 2  3 
 . 3  3 
 . 4  3 
 ... ... . 119995  1 
 . 119996  2 
 . 119997  2 
 . 119998  2 
 . 119999  2 
 title Wall St. Bears Claw Back Into the Black (Reuters) Carlyle Looks Toward Commercial Aerospace (Reu... Oil and Economy Cloud Stocks' Outlook (Reuters) Iraq Halts Oil Exports from Main Southern Pipe... Oil prices soar to all-time record, posing new... ... Pakistan's Musharraf Says Won't Quit as Army C... Renteria signing a top-shelf deal Saban not going to Dolphins yet Today's NFL games Nets get Carter from Raptors description Reuters - Short-sellers, Wall Street's dwindli... Reuters - Private investment firm Carlyle Grou... Reuters - Soaring crude prices plus worries\ab... Reuters - Authorities have halted oil export\f... AFP - Tearaway world oil prices, toppling reco... ... KARACHI (Reuters) - Pakistani President Perve... Red Sox general manager Theo Epstein acknowled... The Miami Dolphins will put their courtship of... PITTSBURGH at NY GIANTS Time: 1:30 p.m. Line: ... INDIANAPOLIS -- All-Star Vince Carter was trad... 120000 rows × 3 columns Since the class labels themselves are in a separate file, we manually add them to the pandas data structure (called dataframe in pandas’ terminology) to increase the interpretability of the data. We use the class index column as a starting point, and use its map method to create a new column with the corresponding labels (technically a new Series object) that is added to the dataframe using its insert method, which allows us to insert the column in a specific position. Note that the label indices are one-based, so we subtract one to align them with their labels. 12 https://pandas.pydata.org 66 Implementing Text Classification Using Perceptron and LR class index . 0  3 
 . 1  3 
 . 2  3 
 . 3  3 
 . 4  3 
 ... ... . 119995  1 
 . 119996  2 
 . 119997  2 
 . 119998  2 
 . 119999  2 
 class Business Business Business Business Business ... World Sports Sports Sports Sports title Wall St. Bears Claw Back Into the Black (Reuters) Oil and Economy Cloud Stocks' Outlook (Reuters) Oil prices soar to all-time record, posing new... Pakistan's Musharraf Says Won't Quit as Army C... Saban not going to Dolphins yet Nets get Carter from Raptors description Reuters - Short-sellers, Wall Street's dwindli... Reuters - Soaring crude prices plus worries\ab... AFP - Tearaway world oil prices, toppling reco... KARACHI (Reuters) - Pakistani President Perve... The Miami Dolphins will put their courtship of... INDIANAPOLIS -- All-Star Vince Carter was trad... Iraq Halts Oil Exports from Main Southern Pipe... Reuters - Authorities have halted oil export\f... ... ... Renteria signing a top-shelf deal Red Sox general manager Theo Epstein acknowled... 120000 rows × 4 columns Carlyle Looks Toward Commercial Aerospace (Reu... Reuters - Private investment firm Carlyle Grou... Today's NFL games PITTSBURGH at NY GIANTS Time: 1:30 p.m. Line: ... Next we will preprocess the text. First we lowercase the title and description, and then we concatenate them into a single string. Then we remove some spurious backslashes from the text. Once this is done, the preprocessed text is added to the dataframe as a new column. Note that pandas allows these steps to be applied to all rows simultaneously. class index class title Wall St. Bears Claw Back Into the Black (Reuters) Oil and Economy Cloud Stocks' Outlook (Reuters) Oil prices soar to all-time record, posing new... ... Pakistan's Musharraf Says Won't Quit as Army C... Saban not going to Dolphins yet Nets get Carter from Raptors description Reuters - Short-sellers, Wall Street's dwindli... Reuters - Soaring crude prices plus worries\ab... AFP - Tearaway world oil prices, toppling reco... ... KARACHI (Reuters) - Pakistani President Perve... The Miami Dolphins will put their courtship of... INDIANAPOLIS -- All-Star Vince Carter was trad... text wall st. bears claw back into the black (reute... oil and economy cloud stocks' outlook (reuters... oil prices soar to all-time record, posing new... ... pakistan's musharraf says won't quit as army c... saban not going to dolphins yet the miami dolp... nets get carter from raptors indianapolis -- a... . 0  3 Business 
 . 1  3 Business 
 . 2  3 Business 
 . 3  3 Business 
 . 4  3 Business 
 ... ... ... . 119995  1 World 
 . 119996  2 Sports 
 . 119997  2 Sports 
 . 119998  2 Sports 
 . 119999  2 Sports 
 120000 rows × 5 columns Carlyle Looks Toward Commercial Reuters - Private investment firm Carlyle carlyle looks toward commercial Aerospace (Reu... Grou... aerospace (reu... Iraq Halts Oil Exports from Main Southern Pipe... Reuters - Authorities have halted oil export\f... iraq halts oil exports from main southern pipe... Renteria signing a top-shelf deal Red Sox general manager Theo Epstein renteria signing a top-shelf deal red sox acknowled... gene... Today's NFL games PITTSBURGH at NY GIANTS Time: 1:30 p.m. today's nfl games pittsburgh at ny giants Line: ... time... At this point, the text is ready to be tokenized. For this purpose we will use NLTK’s word_tokenize function. This function can be applied to the whole column at once using the pandas map function, which returns a new column which we add to the dataframe. However, here we actually use the progress_map function, which provides a visual progress bar. This visual feedback is especially helpful for tasks that take more time to complete. 4.2 Multiclass Classification 67 class index . 0  3 
 . 1  3 
 . 2  3 
 . 3  3 
 . 4  3 
 ... ... . 119995  1 
 . 119996  2 
 . 119997  2 
 . 119998  2 
 . 119999  2 
 class Business Business Business Business Business ... World Sports Sports Sports Sports title Wall St. Bears Claw Back Into the Black (Reuters) Oil and Economy Cloud Stocks' Outlook (Reuters) Oil prices soar to all-time record, posing new... ... Pakistan's Musharraf Says Won't Quit as Army C... Saban not going to Dolphins yet Nets get Carter from Raptors description Reuters - Short-sellers, Wall Street's dwindli... Reuters - Soaring crude prices plus worries\ab... AFP - Tearaway world oil prices, toppling reco... ... KARACHI (Reuters) - Pakistani President Perve... The Miami Dolphins will put their courtship of... INDIANAPOLIS -- All-Star Vince Carter was trad... text wall st. bears claw back into the black (reute... oil and economy cloud stocks' outlook (reuters... oil prices soar to all-time record, posing new... ... pakistan's musharraf says won't quit as army c... saban not going to dolphins yet the miami dolp... nets get carter from raptors indianapolis -- a... tokens [wall, st., bears, claw, back, into, the, blac... [oil, and, economy, cloud, stocks, ', outlook,... [oil, prices, soar, to, all-time, record, ,, p... ... [pakistan, 's, musharraf, says, wo, n't, quit,... [saban, not, going, to, dolphins, yet, the, mi... [nets, get, carter, from, raptors, indianapoli... 120000 rows × 6 columns Carlyle Looks Toward Commercial Reuters - Private investment firm carlyle looks toward commercial [carlyle, looks, toward, Aerospace (Reu... Carlyle Grou... aerospace (reu... commercial, aerospace... Iraq Halts Oil Exports from Main Reuters - Authorities have halted iraq halts oil exports from main [iraq, halts, oil, exports, from, Southern Pipe... oil export\f... southern pipe... main, southe... Renteria signing a top-shelf deal Red Sox general manager Theo renteria signing a top-shelf deal [renteria, signing, a, top-shelf, Epstein acknowled... red sox gene... deal, red, s... Today's NFL games PITTSBURGH at NY GIANTS today's nfl games pittsburgh at [today, 's, nfl, games, Time: 1:30 p.m. Line: ... ny giants time... pittsburgh, at, ny, gi... From the tokens we just created, we then create a vocabulary for our corpus. Here, we only keep the words that occur at least 10 times, decreasing the memory needed and reducing the likelihood that our vocabulary contains noisy tokens. Note that each row in the tokens column contains a list of tokens. In order to create the vocabulary, we will need to convert the Series of lists of tokens into a Series of tokens using the explode() Pandas method. Then we will use the value_counts() method to create a Series object in which the index are the tokens and the values are the number of times they appear in the corpus. The next step is removing the tokens with a count lower than our chosen threshold. Finally, we create a list with the remaining tokens, as well as a dictionary that maps tokens to token ids (i.e., the index of the token in the list). We include in the vocabulary a special token [UNK] that will be used as a placeholder for tokens that do not appear in our vocabulary after the frequency pruning. Using this vocabulary, we construct a feature vector for each news article in the corpus. This feature vector will be encoded as a dictionary, with keys corresponding to token ids, and values corresponding to the number of times the token appears in the article. As above, the feature vectors will be stored as a new column in the dataframe. 68 Implementing Text Classification Using Perceptron and LR class index class title Wall St. 
[Dataframe display omitted: 120,000 rows × 7 columns (class index, class, title, description, text, tokens, features); the features column holds dictionaries mapping token ids to counts.]
The final preprocessing step is converting the features and the class indices into PyTorch tensors. Recall that we need to subtract one from the class indices to make them zero-based. At this point, the data is fully processed and we are ready to begin training.
4.2.3 Multiclass Logistic Regression Using PyTorch
The model itself is a single linear layer whose input size corresponds to the size of our vocabulary, and its output size corresponds to the number of classes in our corpus. PyTorch's Linear layer includes a bias by default, so there is no need to handle that manually the way we did for our perceptron example. The code for training this model (which implements Algorithm 6) is almost identical to that of the binary logistic regression. However, since we have to calculate a score for each of the four different classes, we need to replace the previous BCEWithLogitsLoss with CrossEntropyLoss, which applies a softmax over the scores to obtain probabilities for each class. For each example, the model predicts four scores, one for each label. The label with the highest score is selected using the argmax function. We evaluate the predictions of our model for each class using scikit-learn's classification_report, which handles the results of multiclass classification.
4.3 Summary
In this chapter, we used movie review and news article classification to illustrate the implementation of the previously described algorithms for the binary perceptron, binary logistic regression, and multiclass logistic regression. For the binary logistic regression, we made a direct comparison between the lower-level NumPy implementation and a higher-level version that made use of PyTorch. We hope that through this series of exercises the reader has noted several key takeaways. First, data preparation is important and should be done thoughtfully. Certain tasks (e.g., text normalization or sentence splitting) are going to be frequently needed if you continue with NLP, so using or creating generic functions can be very helpful. However, what works for one dataset and one language may not be suitable for another scenario. For example, in our case, we selected different tokenizers for each of our tasks to account for the different registers of English, as well as removing diacritics during normalization. Second, when it comes to implementing machine learning algorithms, it is often easier to use a higher-level library such as PyTorch instead of NumPy.
For example, with the former, the gradients are calculated by the library, whereas in NumPy we have to code them ourselves. This becomes cumbersome quickly. For example, even the derivative of the softmax is non-trivial. Third, PyTorch imposes a training structure that remains largely the same, regardless of what models are being trained. That is, at a high level, the same steps are always required: clearing the current gradients, predicting output scores for the provided inputs, calculating the loss, and optimizing. These features make PyTorch a very powerful and convenient deep learning library; we will continue to use it throughout the remainder of the book to implement more complex neural architectures.
18,886
18,986
#!/usr/bin/env python
# coding: utf-8

# # Binary Text Classification with
# # Logistic Regression Implemented with PyTorch and BCE Loss

# In[1]:

import random
import numpy as np
import torch
from tqdm.notebook import tqdm

# set this variable to a number to be used as the random seed
# or to None if you don't want to set a random seed
seed = 1234

if seed is not None:
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)

# The dataset is divided in two directories called `train` and `test`.
# These directories contain the training and testing splits of the dataset.

# In[2]:

get_ipython().system('ls -lh data/aclImdb/')

# Both the `train` and `test` directories contain two directories called `pos` and `neg` that contain text files with the positive and negative reviews, respectively.

# In[3]:

get_ipython().system('ls -lh data/aclImdb/train/')

# We will now read the filenames of the positive and negative examples.

# In[4]:

from glob import glob

pos_files = glob('data/aclImdb/train/pos/*.txt')
neg_files = glob('data/aclImdb/train/neg/*.txt')
print('number of positive reviews:', len(pos_files))
print('number of negative reviews:', len(neg_files))

# Now, we will use a [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html) to read the text files, tokenize them, acquire a vocabulary from the training data, and encode it in a document-term matrix in which each row represents a review, and each column represents a term in the vocabulary. Each element $(i,j)$ in the matrix represents the number of times term $j$ appears in example $i$.

# In[5]:

from sklearn.feature_extraction.text import CountVectorizer

# initialize CountVectorizer indicating that we will give it a list of filenames that have to be read
cv = CountVectorizer(input='filename')
# learn vocabulary and return sparse document-term matrix
doc_term_matrix = cv.fit_transform(pos_files + neg_files)
doc_term_matrix

# Note in the message printed above that the matrix is of shape (25000, 74849).
# In other words, it has 1,871,225,000 elements.
# However, only 3,445,861 elements were stored.
# This is because most of the elements in the matrix are zeros.
# The reason is that the reviews are short and most words in the English language don't appear in each review.
# A matrix that only stores non-zero values is called *sparse*.
#
# Now we will convert it to a dense numpy array:

# In[6]:

X_train = doc_term_matrix.toarray()
X_train.shape

# We will also create a numpy array with the binary labels for the reviews.
# One indicates a positive review and zero a negative review.
# The label `y_train[i]` corresponds to the review encoded in row `i` of the `X_train` matrix.

# In[7]:

# training labels
y_pos = np.ones(len(pos_files))
y_neg = np.zeros(len(neg_files))
y_train = np.concatenate([y_pos, y_neg])
y_train

# Now we will initialize our model, in the form of an array of weights `w` of the same size as the number of features in our dataset (i.e., the number of words in the vocabulary acquired by [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html)), and a bias term `b`.
# Both are initialized to zeros.

# In[8]:

n_examples, n_features = X_train.shape

# Now we will use the logistic regression learning algorithm to learn the values of `w` and `b` from our training data.

# In[9]:

import torch
from torch import nn
from torch import optim

lr = 1e-1
n_epochs = 10

model = nn.Linear(n_features, 1)
loss_func = nn.BCEWithLogitsLoss()
optimizer = optim.SGD(model.parameters(), lr=lr)

X_train = torch.tensor(X_train, dtype=torch.float32)
y_train = torch.tensor(y_train, dtype=torch.float32)

indices = np.arange(n_examples)
for epoch in range(10):
    n_errors = 0
    # randomize training examples
    np.random.shuffle(indices)
    # for each training example
    for i in tqdm(indices, desc=f'epoch {epoch+1}'):
        x = X_train[i]
        y_true = y_train[i]
        # make predictions
        y_pred = model(x)
        # calculate loss
        loss = loss_func(y_pred[0], y_true)
        # calculate gradients through back-propagation
        loss.backward()
        # optimize model parameters
        optimizer.step()
        # ensure gradients are set to zero
        model.zero_grad()

# The next step is evaluating the model on the test dataset.
# Note that this time we use the [`transform()`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html#sklearn.feature_extraction.text.CountVectorizer.transform) method of the [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html), instead of the [`fit_transform()`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html#sklearn.feature_extraction.text.CountVectorizer.fit_transform) method that we used above. This is because we want to use the learned vocabulary in the test set, instead of learning a new one.

# In[10]:

pos_files = glob('data/aclImdb/test/pos/*.txt')
neg_files = glob('data/aclImdb/test/neg/*.txt')
doc_term_matrix = cv.transform(pos_files + neg_files)
X_test = doc_term_matrix.toarray()
X_test = torch.tensor(X_test, dtype=torch.float32)
y_pos = np.ones(len(pos_files))
y_neg = np.zeros(len(neg_files))
y_test = np.concatenate([y_pos, y_neg])

# Using the model is easy: applying it to the test examples produces one score per review.
# Scores greater than zero are classified as positive reviews.

# In[11]:

y_pred = model(X_test) > 0

# Now we print an evaluation of the prediction results using scikit-learn's [`classification_report()`](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.classification_report.html) function.

# In[12]:

def binary_classification_report(y_true, y_pred):
    # count true positives, false positives, true negatives, and false negatives
    tp = fp = tn = fn = 0
    for gold, pred in zip(y_true, y_pred):
        if pred == True:
            if gold == True:
                tp += 1
            else:
                fp += 1
        else:
            if gold == False:
                tn += 1
            else:
                fn += 1
    # calculate precision and recall
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    # calculate f1 score
    fscore = 2 * precision * recall / (precision + recall)
    # calculate accuracy
    accuracy = (tp + tn) / len(y_true)
    # number of positive labels in y_true
    support = sum(y_true)
    return {
        "precision": precision,
        "recall": recall,
        "f1-score": fscore,
        "support": support,
        "accuracy": accuracy,
    }

# In[13]:

binary_classification_report(y_test, y_pred)
3,539
3,549
14
chap04-15
chap04-15
Each of these numbers will be used as the index in a list, so they must start at zero and grow by one for each word in the vocabulary. For example, one possible vocabulary that encodes the previous reviews is:
{'would': 0, 'hated': 1, 'my': 2, 'liked': 3, 'not': 4, 'it': 5, 'movie': 6, 'recommend': 7, 'the': 8, 'I': 9, 'too': 10, 'friend': 11}
Using this mapping, we can encode the two reviews as follows:
Review 1: [0, 0, 1, 2, 0, 1, 1, 0, 1, 1, 1, 1]
Review 2: [1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0]
Note that the word liked (fourth position) in the first review has a value of two. This is because this word appears twice in that review. This is a small example with a vocabulary of only 12 terms. Of course, the same process needs to be implemented for our whole training dataset. For this purpose we will use scikit-learn's CountVectorizer class.6 Using the CountVectorizer class simplifies things, allowing us to get started quickly with a bag-of-words approach. However, note that it makes several simplifying assumptions (e.g., text is lowercased, and punctuation and single-character tokens are removed). Some of these may not be adequate for other tasks. First, we need to obtain the filenames for the reviews in the training set. Once we have acquired the filenames for the training reviews, we need to read them using the CountVectorizer. In order for the CountVectorizer to open and read the files for us, we make use of the input='filename' constructor parameter (otherwise it would expect the string content directly). The CountVectorizer provides three methods that will be useful for us: a method called fit() that is used to acquire the vocabulary, a method transform() that converts the text into the bag-of-words representation, and a method fit_transform() that conveniently acquires the vocabulary and transforms the data in a single step. The resulting object is referred to as a document-term matrix, where each row corresponds to a document, and each column corresponds to a term in the vocabulary.
6 https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html
As the output above indicates, the resulting matrix has 25,000 rows (one for each review), and 74,849 columns (one for each term). Also you may note that this matrix is sparse, with 3,445,861 stored elements. A regular matrix of shape 25,000×74,849 would have 1,871,225,000 elements. However, most of the elements in a document-term matrix are zeros because only a few words from the vocabulary appear in each document. A sparse matrix takes advantage of this fact by storing only the non-zero cells in order to reduce the memory required to store it. Thus, sparse matrices are convenient, especially when dealing with lots of data. Nevertheless, to simplify the downstream code in this example, we will convert it into a dense matrix, i.e., a regular two-dimensional NumPy array. Finally, we also need the labels of the reviews. We assign a label of one to positive reviews, and a label of zero to negative ones. Note that the first half of the reviews are positive and the second half are negative. The label at the ith position of the y_train array corresponds to the review encoded in the ith row of the X_train matrix.
4.1.3 Perceptron
Now that we have defined our task and the data processing pipeline, we will implement a perceptron classifier that classifies the movie reviews as positive or negative. The entire code discussed in this section is available in the chap4_perceptron notebook. Recall from Section 2.4 that the perceptron is composed of a weight vector w and a bias term b. These will be represented as a NumPy array w of the same length as our document vectors, and a variable b for the bias term. Both will be initialized with zeros. The parameters w and b are learned through a training algorithm that implements Algorithm 2 from Chapter 2; a sketch of it is shown at the end of this discussion. There are a couple of details to point out. Line 3 of Algorithm 2 indicates that we need to repeat the training loop until convergence. Theoretically, convergence is defined as predicting all training examples correctly. This is an ambitious requirement, which is not always possible in practice, so in this code we also include a stop condition if we reach a maximum number of epochs. Another crucial difference between our implementation here and the theoretical Algorithm 2 is that we randomize the order in which the training examples are seen at the beginning of each epoch. This simple (but highly recommended!) change is necessary to avoid the introduction of spurious biases due to the arbitrary order of the examples in the original training partition.7 We accomplish this by storing the indices corresponding to the X_train matrix rows in a NumPy array, and shuffling these indices at the beginning of each epoch. We shuffle the indices instead of the examples so that we can preserve the mapping between examples and labels. The training loop aligns closely with Algorithm 2 and is sketched below.
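The following is a minimal sketch of that training loop in NumPy, reconstructed from the description in this section; the exact listing in the chap4_perceptron notebook may differ in minor details, and the variable names here are illustrative:

n_examples, n_features = X_train.shape
w = np.zeros(n_features)   # weight vector, one weight per vocabulary term
b = 0                      # bias term
n_epochs = 10              # stop condition in case training never fully converges

indices = np.arange(n_examples)
for epoch in range(n_epochs):
    n_errors = 0
    # randomize the order in which the training examples are seen
    np.random.shuffle(indices)
    for i in indices:
        x, y_true = X_train[i], y_train[i]
        # perceptron decision function: the sign of the score
        score = x @ w + b
        y_pred = 1 if score > 0 else 0
        # adjust w and b only when the prediction is incorrect
        if y_pred != y_true:
            n_errors += 1
            if y_true == 1:
                w, b = w + x, b + 1
            else:
                w, b = w - x, b - 1
    # convergence: a whole epoch without mistakes
    if n_errors == 0:
        break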
We start by iterating over each example in our training data, storing the current example in the variable x,8 and its corresponding label in the variable y_true. Next, we compute the perceptron decision function shown in Algorithm 1. Note that NumPy (as well as PyTorch) uses Python's @ operator to indicate vector or matrix multiplication, depending on its operand types. Here we use it to calculate the dot product of the example x and the weights w. To this we add the bias b to obtain the predicted score, whose sign is used to assign a positive or negative predicted label. If the prediction is correct, then no update is needed, and we can move on to the next training example. However, if the prediction is incorrect, then we need to adjust w and b, as described in Algorithm 2.
Sidebar 4.1 The tqdm function
This is our first exposure to the tqdm function. tqdm is a progress bar that can "make your loops show a smart progress meter."9 The name tqdm comes from the Arabic word taqaddum, which can mean "progress." Using tqdm is as simple as wrapping it around the collection to be traversed.
After training, we evaluate the model's performance on the held-out test partition. The test data is loaded similarly to the training partition, but with one notable difference: we use CountVectorizer's transform() method instead of the fit_transform() method so that the vocabulary is not adjusted for the test data. We won't show here the loading of the test partition since it is so similar to the code already shown, but it is available in the Jupyter notebook that accompanies this section.
7 As an extreme example, consider a dataset where all the positive examples appear first in the training partition. This would cause the perceptron to artificially inflate the weights of the features that occur in these examples, a situation from which the learning algorithm may struggle to recover.
8 We use typewriter font when we discuss variables in the code, to distinguish code from the theoretical discussion in the other chapters.
9 https://github.com/tqdm/tqdm
Using the model to assign labels to all the test data is easily done in one step: we simply multiply the entire test data document-term matrix by the previously learned weights and add the bias. Scores greater than zero indicate a positive review, and those less than zero are negative. At this point we can evaluate the classifier's performance, which we will do using precision, recall, and F1 scores for binary classification (described in Section 2.3). For this purpose, we implement a function called binary_classification_report that computes these metrics and returns them as a dictionary (its implementation is included in the accompanying notebook). We call this function to compare the predicted labels to the true labels, and obtain the evaluation scores. Our F1 score here is 86.8%, which is much higher than the baseline that assigns labels randomly, which yields an F1 score of about 50%. This is a good result, especially considering the simplicity of the perceptron! In the next sections and chapters, we will discuss a battery of strategies to considerably improve this performance.
4.1.4 Binary Logistic Regression from Scratch
Using the same task, dataset, and evaluation, we will now implement a logistic regression classifier, as described in Algorithm 5 from Chapter 3. To give the reader hands-on experience with the implementation of the gradient calculations for logistic regression, we start by implementing it from scratch using NumPy. All the code shown in this section is available in the chap4_logistic_regression_numpy notebook. In the perceptron implementation, we represented the weights and the bias as two different variables. Here, however, we will use a different approach that will allow us to unify them into a single vector variable. Specifically, we take advantage of the similarity between the derivative of the cost function with respect to the weights (Equation 3.14) and the derivative of the cost with respect to the bias (Equation 3.15):
dC_i(w, b)/dw_j = (σ_i − y_i) x_ij   (3.14 revisited)
dC_i(w, b)/db = σ_i − y_i   (3.15 revisited)
Note that the two derivative formulas are identical except that the former has a multiplication by x_ij, while the latter does not. However, since σ_i − y_i = (σ_i − y_i) · 1, we can multiply the derivative of the cost with respect to the bias by one without changing the semantics. This gives an opportunity for combining the computations, doing them both in a single pass. The idea is that we can treat the bias as a weight corresponding to a feature that always has a value of one. Concretely, we create a NumPy array of ones of the same length as the number of examples in our training set (i.e., the number of rows in the data matrix), and add this array as a new column to the data matrix using NumPy's column_stack function. Next, we need to initialize our model. This time we will use a single NumPy array w of the same length as the number of columns in the data matrix. The weight vector w is initialized randomly with values between 0 and 1, as in the sketch below.
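A minimal sketch of this setup, assuming X_train and y_train are the NumPy arrays built earlier; the chap4_logistic_regression_numpy notebook may differ in details, and the variable names are illustrative:

# treat the bias as an extra feature that always has the value 1:
# append a column of ones to the document-term matrix
n_examples = X_train.shape[0]
X_train = np.column_stack((X_train, np.ones(n_examples)))

# a single parameter vector now holds the weights and the bias;
# initialize it randomly with values between 0 and 1
w = np.random.rand(X_train.shape[1])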
Before implementing the learning algorithm, we need an implementation of the logistic function. Recall that the logistic function is
σ(x) = 1 / (1 + e^(−x))   (3.1 revisited)
This function can be implemented easily in NumPy using its exp function. However, a naive implementation may produce an overflow warning during training. The term overflow indicates that the result of evaluating exp(-x) is a number so large that it can't be represented by a float (specifically, we're using float64 numbers). We will avoid this issue by not calling exp with values that will overflow. NumPy provides the function finfo that can be consulted to find the limits of floating point numbers. The log of the largest floating point number is the largest number for which exp() will not overflow, so we will use it as a threshold to filter out problematic values. We now have everything we need to implement Algorithm 4. The steps to follow for each example are: (1) use the model to make a prediction, (2) calculate the gradient of the loss function with respect to the model parameters, and (3) update the model parameters using the gradient. The size of the update is controlled by the learning rate. Once the model has been trained, we evaluate it on the test dataset using our binary_classification_report function from the previous section. Loading and preprocessing the test dataset follows the same steps as with the previous classifier, so we omit the code for brevity. The resulting performance is comparable with that of the perceptron: the difference in F1 scores between the two classifiers (84.9% here vs. 86.8% for the perceptron) is not significant. Classifier parity is probably attributable to the fact that the signal distinguishing the two classes is easy to learn, and that the simpler perceptron training algorithm is sufficient in this case. Nevertheless, this task is useful in showing how to implement the logistic regression model from scratch, i.e., by implementing the gradient calculation and parameter updates manually. Next, we will implement the same model again using PyTorch, highlighting how this machine learning library simplifies the process.
4.1.5 Binary Logistic Regression Utilizing PyTorch
While it is fairly straightforward to compute the derivatives for logistic regression and implement them directly in NumPy, this will not scale well to arbitrary neural architectures. Fortunately, there are libraries that automate the computation of the derivatives of the cost function (assuming it is differentiable!) for any neural network, and use the resulting gradients to perform gradient descent or other more sophisticated optimization procedures. To this end, we will use the PyTorch deep learning library.10 The corresponding notebook for this section is chap4_logistic_regression_pytorch_bce. Our model for logistic regression corresponds to PyTorch's Linear layer. When we instantiate this layer, we specify the size of the inputs (the size of our vocabulary) and the size of the output, i.e., the number of output neurons (which is one because we're doing binary classification). The loss function we use is the binary cross-entropy loss (see Chapter 3), which is implemented as BCEWithLogitsLoss in PyTorch. In PyTorch, the gradients obtained from the loss function are applied to the model by an optimizer object, which implements and applies an optimization algorithm. Here we will use the vanilla stochastic gradient descent optimizer; we set its learning rate to 0.1. This is equivalent to the discussion in Section 3.2.
10 https://pytorch.org/
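For reference, here is a condensed version of this setup and of the training loop, following the chap4_logistic_regression_pytorch_bce notebook; X_train and y_train are assumed to have been converted to float32 tensors beforehand:

import numpy as np
import torch
from torch import nn, optim

lr = 1e-1
n_epochs = 10

model = nn.Linear(n_features, 1)          # one output neuron for binary classification
loss_func = nn.BCEWithLogitsLoss()        # binary cross-entropy over raw scores
optimizer = optim.SGD(model.parameters(), lr=lr)

indices = np.arange(n_examples)
for epoch in range(n_epochs):
    np.random.shuffle(indices)
    for i in indices:
        x, y_true = X_train[i], y_train[i]
        y_pred = model(x)                     # predict a score for this example
        loss = loss_func(y_pred[0], y_true)   # compare against the gold label
        loss.backward()                       # back-propagate the gradients
        optimizer.step()                      # update the model parameters
        model.zero_grad()                     # reset gradients before the next example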
Similarly to the manual implementation, the steps required to train the model for a given training example are: (1) ensure the gradients are set to zeros, (2) apply the model to obtain a prediction, (3) calculate the loss, (4) compute the gradient of the loss by back-propagation, and (5) update the model parameters. Recall that in our previous implementation everything was hardcoded: applying the model, computing the gradients, and optimizing the model parameters. Here, however, the implementation of the logistic regression is expressed at a higher level of abstraction. This means that we are describing the logical steps without specifying a particular implementation. Instead, implementation details are the responsibility of the chosen model, loss function, and optimizer. Thus, we could even choose a different model, loss function, and/or optimizer, and use the same training steps with little or no modification. This decoupling of the training logic from the implementation details is one of the main advantages of libraries such as PyTorch. As shown in the code above, calling the model as a function, with the feature vectors as inputs, produces the predicted scores. Once again, a positive score corresponds to a positive label. When we evaluate this implementation on the test dataset, we obtain results that are in line with our previous models. Writing the perceptron and the logistic regression from scratch is a good exercise, as it exposes us to the fundamentals of implementing machine learning algorithms. However, this becomes cumbersome for more complex neural architectures. For this reason, from this point on, we will use PyTorch for all our coding examples.
4.2 Multiclass Classification
So far, in this chapter we have discussed implementing binary classifiers. Next, we will modify these binary classifiers to perform multiclass classification, following the discussion in Section 3.5.
4.2.1 AG News Dataset
Before explaining the actual training/testing code, we have to choose a new dataset that is suitable for multiclass classification. To this end, we will use the AG News Classification Dataset (Zhang et al., 2015), a subset of the larger AG corpus of news articles collected from thousands of different news sources.11 The classification dataset consists of four classes, and the data is equally balanced across all classes (30,000 articles per class for train, and 1,900 articles per class for testing). The goal of the task is to classify each article as one of the four classes: World, Sports, Business, or Sci/Tech.
11 http://groups.di.unipi.it/~gulli/AG_corpus_of_news_articles.html
4.2.2 Preparing the Dataset
The AG News Dataset is distributed as two CSV files (one for training and one for testing), each containing three columns: the class index, the title, and the description. The dataset also provides a text file that maps the above class indexes to more descriptive class labels. Because of the tabular nature of the dataset, pandas, a Python library
for tabular data analysis,12 is a natural choice for loading and transforming it. To this end, our Jupyter notebook (chap4_multiclass_logistic_regression) demonstrates the sequence of steps required to handle the data, as well as model training and evaluation. First, we show how to load the CSV, add column names, and inspect the result:
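A condensed sketch of this step, following the accompanying notebook:

import pandas as pd

# the CSV files have no header row, so we supply the column names ourselves
train_df = pd.read_csv('data/ag_news_csv/train.csv', header=None)
train_df.columns = ['class index', 'title', 'description']
train_df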
[Dataframe display omitted: 120,000 rows × 3 columns (class index, title, description).]
Since the class labels themselves are in a separate file, we manually add them to the pandas data structure (called dataframe in pandas' terminology) to increase the interpretability of the data. We use the class index column as a starting point, and use its map method to create a new column with the corresponding labels (technically a new Series object) that is added to the dataframe using its insert method, which allows us to insert the column in a specific position. Note that the label indices are one-based, so we subtract one to align them with their labels.
12 https://pandas.pydata.org
[Dataframe display omitted: 120,000 rows × 4 columns (class index, class, title, description).]
Next we will preprocess the text. First we lowercase the title and description, and then we concatenate them into a single string. Then we remove some spurious backslashes from the text. Once this is done, the preprocessed text is added to the dataframe as a new column. Note that pandas allows these steps to be applied to all rows simultaneously.
[Dataframe display omitted: 120,000 rows × 5 columns (class index, class, title, description, text).]
At this point, the text is ready to be tokenized. For this purpose we will use NLTK's word_tokenize function. This function can be applied to the whole column at once using the pandas map function, which returns a new column which we add to the dataframe. However, here we actually use the progress_map function, which provides a visual progress bar. This visual feedback is especially helpful for tasks that take more time to complete.
[Dataframe display omitted: 120,000 rows × 6 columns (class index, class, title, description, text, tokens).]
From the tokens we just created, we then create a vocabulary for our corpus. Here, we only keep the words that occur at least 10 times, decreasing the memory needed and reducing the likelihood that our vocabulary contains noisy tokens. Note that each row in the tokens column contains a list of tokens. In order to create the vocabulary, we will need to convert the Series of lists of tokens into a Series of tokens using the explode() pandas method. Then we will use the value_counts() method to create a Series object in which the index contains the tokens and the values are the number of times they appear in the corpus. The next step is removing the tokens with a count lower than our chosen threshold. Finally, we create a list with the remaining tokens, as well as a dictionary that maps tokens to token ids (i.e., the index of the token in the list). We include in the vocabulary a special token [UNK] that will be used as a placeholder for tokens that do not appear in our vocabulary after the frequency pruning.
Using this vocabulary, we construct a feature vector for each news article in the corpus. This feature vector will be encoded as a dictionary, with keys corresponding to token ids, and values corresponding to the number of times the token appears in the article. As above, the feature vectors will be stored as a new column in the dataframe.
[Dataframe display omitted: 120,000 rows × 7 columns (class index, class, title, description, text, tokens, features); the features column holds dictionaries mapping token ids to counts.]
The final preprocessing step is converting the features and the class indices into PyTorch tensors. Recall that we need to subtract one from the class indices to make them zero-based. At this point, the data is fully processed and we are ready to begin training.
4.2.3 Multiclass Logistic Regression Using PyTorch
The model itself is a single linear layer whose input size corresponds to the size of our vocabulary, and its output size corresponds to the number of classes in our corpus. PyTorch's Linear layer includes a bias by default, so there is no need to handle that manually the way we did for our perceptron example. The code for training this model (which implements Algorithm 6) is almost identical to that of the binary logistic regression. However, since we have to calculate a score for each of the four different classes, we need to replace the previous BCEWithLogitsLoss with CrossEntropyLoss, which applies a softmax over the scores to obtain probabilities for each class. For each example, the model predicts four scores, one for each label. The label with the highest score is selected using the argmax function. We evaluate the predictions of our model for each class using scikit-learn's classification_report, which handles the results of multiclass classification.
4.3 Summary
In this chapter, we used movie review and news article classification to illustrate the implementation of the previously described algorithms for the binary perceptron, binary logistic regression, and multiclass logistic regression. For the binary logistic regression, we made a direct comparison between the lower-level NumPy implementation and a higher-level version that made use of PyTorch. We hope that through this series of exercises the reader has noted several key takeaways. First, data preparation is important and should be done thoughtfully. Certain tasks (e.g., text normalization or sentence splitting) are going to be frequently needed if you continue with NLP, so using or creating generic functions can be very helpful. However, what works for one dataset and one language may not be suitable for another scenario. For example, in our case, we selected different tokenizers for each of our tasks to account for the different registers of English, as well as removing diacritics during normalization. Second, when it comes to implementing machine learning algorithms, it is often easier to use a higher-level library such as PyTorch instead of NumPy.
For example, with the former, the gradients are calculated by the library, whereas in NumPy we have to code them ourselves. This becomes cumbersome quickly. For example, even the derivative of the softmax is non-trivial. Third, PyTorch imposes a training structure that remains largely the same, regardless of what models are being trained. That is, at a high level, the same steps are always required: clearing the current gradients, predicting output scores for the provided inputs, calculating the loss, and optimizing. These features make PyTorch a very powerful and convenient deep learning library; we will continue to use it throughout the remainder of the book to implement more complex neural architectures.
30,574
30,675
#!/usr/bin/env python
# coding: utf-8

# # Multiclass Text Classification with
# # Logistic Regression Implemented with PyTorch and CE Loss

# First, we will do some initialization.

# In[1]:

import random
import torch
import numpy as np
import pandas as pd
from tqdm.notebook import tqdm

# enable tqdm in pandas
tqdm.pandas()

# set to True to use the gpu (if there is one available)
use_gpu = True

# select device
device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu')
print(f'device: {device.type}')

# random seed
seed = 1234

# set random seed
if seed is not None:
    print(f'random seed: {seed}')
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)

# We will be using the AG's News Topic Classification Dataset.
# It is stored in two CSV files: `train.csv` and `test.csv`, as well as a `classes.txt` that stores the labels of the classes to predict.
#
# First, we will load the training dataset using [pandas](https://pandas.pydata.org/) and take a quick look at the data.

# In[2]:

train_df = pd.read_csv('data/ag_news_csv/train.csv', header=None)
train_df.columns = ['class index', 'title', 'description']
train_df

# The dataset consists of 120,000 examples, each consisting of a class index, a title, and a description.
# The class labels are distributed in a separate file. We will add the labels to the dataset so that we can interpret the data more easily. Note that the label indexes are one-based, so we need to subtract one to retrieve them from the list.

# In[3]:

labels = open('data/ag_news_csv/classes.txt').read().splitlines()
classes = train_df['class index'].map(lambda i: labels[i-1])
train_df.insert(1, 'class', classes)
train_df

# Let's inspect how balanced our examples are by using a bar plot.

# In[4]:

pd.value_counts(train_df['class']).plot.bar()

# The classes are evenly distributed. That's great!
#
# However, the text contains some spurious backslashes in some parts of the text.
# They are meant to represent newlines in the original text.
# An example can be seen below, between the words "dwindling" and "band".

# In[5]:

print(train_df.loc[0, 'description'])

# We will replace the backslashes with spaces on the whole column using pandas replace method.

# In[6]:

title = train_df['title'].str.lower()
descr = train_df['description'].str.lower()
text = title + " " + descr
train_df['text'] = text.str.replace('\\', ' ', regex=False)
train_df

# Now we will proceed to tokenize the title and description columns using NLTK's word_tokenize().
# We will add a new column to our dataframe with the list of tokens.

# In[7]:

from nltk.tokenize import word_tokenize

train_df['tokens'] = train_df['text'].progress_map(word_tokenize)
train_df

# Now we will create a vocabulary from the training data. We will only keep the terms that repeat beyond some threshold established below.

# In[8]:

threshold = 10
tokens = train_df['tokens'].explode().value_counts()
tokens = tokens[tokens > threshold]
id_to_token = ['[UNK]'] + tokens.index.tolist()
token_to_id = {w: i for i, w in enumerate(id_to_token)}
vocabulary_size = len(id_to_token)
print(f'vocabulary size: {vocabulary_size:,}')

# In[9]:

from collections import defaultdict

def make_feature_vector(tokens, unk_id=0):
    vector = defaultdict(int)
    for t in tokens:
        i = token_to_id.get(t, unk_id)
        vector[i] += 1
    return vector

train_df['features'] = train_df['tokens'].progress_map(make_feature_vector)
train_df

# In[10]:

def make_dense(feats):
    x = np.zeros(vocabulary_size)
    for k, v in feats.items():
        x[k] = v
    return x

X_train = np.stack(train_df['features'].progress_map(make_dense))
y_train = train_df['class index'].to_numpy() - 1
X_train = torch.tensor(X_train, dtype=torch.float32)
y_train = torch.tensor(y_train)

# In[11]:

from torch import nn
from torch import optim

# hyperparameters
lr = 1.0
n_epochs = 5
n_examples = X_train.shape[0]
n_feats = X_train.shape[1]
n_classes = len(labels)

# initialize the model, loss function, and optimizer
model = nn.Linear(n_feats, n_classes).to(device)
loss_func = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=lr)

# train the model
indices = np.arange(n_examples)
for epoch in range(n_epochs):
    np.random.shuffle(indices)
    for i in tqdm(indices, desc=f'epoch {epoch+1}'):
        # clear gradients
        model.zero_grad()
        # send datum to right device
        x = X_train[i].unsqueeze(0).to(device)
        y_true = y_train[i].unsqueeze(0).to(device)
        # predict label scores
        y_pred = model(x)
        # compute loss
        loss = loss_func(y_pred, y_true)
        # backpropagate
        loss.backward()
        # optimize model parameters
        optimizer.step()

# Next, we evaluate on the test dataset.

# In[12]:

# repeat all preprocessing done above, this time on the test set
test_df = pd.read_csv('data/ag_news_csv/test.csv', header=None)
test_df.columns = ['class index', 'title', 'description']
test_df['text'] = test_df['title'].str.lower() + " " + test_df['description'].str.lower()
test_df['text'] = test_df['text'].str.replace('\\', ' ', regex=False)
test_df['tokens'] = test_df['text'].progress_map(word_tokenize)
test_df['features'] = test_df['tokens'].progress_map(make_feature_vector)
X_test = np.stack(test_df['features'].progress_map(make_dense))
y_test = test_df['class index'].to_numpy() - 1
X_test = torch.tensor(X_test, dtype=torch.float32)
y_test = torch.tensor(y_test)

# In[13]:

from sklearn.metrics import classification_report

# set model to evaluation mode
model.eval()
# don't store gradients
with torch.no_grad():
    X_test = X_test.to(device)
    y_pred = torch.argmax(model(X_test), dim=1)
    y_pred = y_pred.cpu().numpy()
print(classification_report(y_test, y_pred, target_names=labels))
2,912
3,016
15
chap04-16
chap04-16
Each of these numbers will be used as the index in a list, so they must start at zero and grow by one for each word in the vocabulary. For example, one possible vocabulary that encodes the previous reviews is: {'would': 0, 'hated': 1, 'my': 2, 'liked': 3, 'not': 4, 'it': 5, 'movie': 6, 'recommend': 7, 'the': 8, 'I': 9, 'too': 10, 'friend': 11} Using this mapping, we can encode the two reviews as follows: Review1: [0,0,1,2,0,1,1,0,1,1,1,1] Review2: [1,1,0,0,1,1,0,1,0,1,0,0] Note that the word liked (fourth position) in the first review has a value of two. This is because this word appears twice in that review. This is a small example with a vocabulary of only 12 terms. Of course, the same process needs to be implemented for our whole training dataset. For this purpose we will use scikit-learn's CountVectorizer class.6 Using the CountVectorizer class simplifies things, allowing us to get started quickly with a bag-of-words approach. However, note that it makes several simplifying assumptions (e.g., text is lowercased, and punctuation and single-character tokens are removed). Some of these may not be adequate for other tasks. First, we need to obtain the filenames for the reviews in the training set:
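A minimal sketch of this step, assuming the dataset has been unpacked under a local data/aclImdb/ directory (the path and variable names are assumptions, not the notebook's exact code):

from glob import glob

# collect the positive and negative training reviews (path is an assumption)
pos_files = glob('data/aclImdb/train/pos/*.txt')
neg_files = glob('data/aclImdb/train/neg/*.txt')
train_files = pos_files + neg_files
print(len(train_files))  # 25,000 filenames are expected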
Once we have acquired the filenames for the training reviews, we need to read them using the CountVectorizer. In order for the CountVectorizer to open and read the files for us, we make use of the input='filename' constructor parameter (otherwise it would expect the string content directly). The CountVectorizer provides three methods that will be useful for us: a method called fit() that is used to acquire the vocabulary,
a method transform() that converts the text into the bag-of-words representation, and a method fit_transform() that conveniently acquires the vocabulary and transforms the data in a single step. The resulting object is referred to as a document-term matrix, where each row corre- 6 https://scikitlearn.org/stable/modules/generated/sklearn.feature_ extraction.text.CountVectorizer.html 4.1 Binary Classification 59 sponds to a document, and each column corresponds to a term in the vocabulary. As the output above indicates, the resulting matrix has 25,000 rows (one for each review), and 74,849 columns (one for each term). Also you may note that this matrix is sparse, with 3,445,861 stored elements. A regular matrix of shape 25,000×74,849 would have 1,871,225,000 elements. However, most of the elements in a document-term matrix are zeros because only a few words from the vocabulary appear in each document. A sparse matrix takes advantage of this fact by storing only the non-zero cells in order to reduce the memory required to store it. Thus, sparse matrices are convenient, especially when dealing with lots of data. Nevertheless, to simplify the downstream code in this example, we will convert it into a dense matrix, i.e., a regular two-dimensional NumPy array. Finally, we also need the labels of the reviews. We assign a label of one to positive reviews, and a label of zero to negative ones. Note that the first half of the reviews are positive and the second half are negative. The label at the ith position of the y_train array corresponds to the review encoded in the ith row of the X_train matrix. 4.1.3 Perceptron Now that we have defined our task and the data processing pipeline, we will implement a perceptron classifier that classifies the movie reviews as positive or negative. The entire code discussed in this section is available in the chap4_perceptron notebook. Recall from Section 2.4 that the perceptron is composed of a weight vector w and a bias term b. These will be represented as a NumPy array w of the same length as our document vectors, and a variable b for the bias term. Both will be initialized with zeros. The parameters w and b are learned through the following algorithm, which implements Algorithm 2 from Chapter 2: There are a couple of details to point out. Line 3 of Algorithm 2 indicates that we need to repeat the training loop until convergence. Theoretically, convergence is defined as predicting all training examples correctly. This is an ambitious requirement, which is not always possible in practice, so in this code we also include a stop condition if we reach a maximum number of epochs. Another crucial difference between our implementation here and the theoretical Algorithm 2, is that we randomize the order in which the training examples are seen at the beginning of 60 Implementing Text Classification Using Perceptron and LR each epoch. This simple (but highly recommended!) change is necessary to avoid the introduction of spurious biases due to the arbitrary order of the examples in the original training partition.7 We accomplish this by storing the indices corresponding to the X_train matrix rows in a NumPy array, and shuffling these indices at the beginning of each epoch. We shuffle the indices instead of the examples so that we can preserve the mapping between examples and labels. The training loop aligns closely with Algorithm 2. 
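A compact sketch of the vectorization and of the training loop just described, assuming the file lists from the earlier sketch; the epoch limit and variable names are assumptions, and the chap4_perceptron notebook remains the authoritative version:

import numpy as np
from tqdm import tqdm
from sklearn.feature_extraction.text import CountVectorizer

# build the document-term matrix and the labels (first half positive, second half negative)
cv = CountVectorizer(input='filename')
X_train = cv.fit_transform(train_files).toarray()
y_train = np.concatenate([np.ones(len(pos_files)), np.zeros(len(neg_files))])

# perceptron parameters, initialized with zeros
n_examples, n_features = X_train.shape
w = np.zeros(n_features)
b = 0

n_epochs = 10  # stop condition in case the data is not separable (assumption)
indices = np.arange(n_examples)
for epoch in range(n_epochs):
    n_errors = 0
    np.random.shuffle(indices)  # randomize the order of the examples
    for i in tqdm(indices, desc=f'epoch {epoch+1}'):
        x, y_true = X_train[i], y_train[i]
        score = x @ w + b               # perceptron decision function
        y_pred = 1 if score > 0 else 0
        if y_pred != y_true:            # update only on mistakes
            n_errors += 1
            if y_true == 1:
                w, b = w + x, b + 1
            else:
                w, b = w - x, b - 1
    if n_errors == 0:
        break  # converged: a full epoch with no mistakes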
We start by iterating over each example in our training data, storing the current example in the variable x,8 and its corresponding label in the variable y_true. Next, we compute the perceptron decision function shown in Algorithm 1. Note that NumPy (as well as PyTorch) uses Python’s @ operator to indicate vector or matrix multiplication, depending on its operand types. Here we use it to calculate the dot product of the example x and the weights w. To this we add the bias b to obtain the predicted score, whose sign is used to assign a positive or negative predicted label. If the prediction is correct, then no update is needed, and we can move on to the next training example. However, if the prediction is incorrect, then we need to adjust w and b, as described in Algorithm 2. Sidebar 4.1 The tqdm function This is our first exposure to the tqdm function. tqdm is a progress bar that “make your loops show a smart progress meter.”9 The name tqdm comes from the Arabic word taqaddum which can mean “progress.” Using tqdm is as simple as wrapping it around the collection to be traversed. After training, we evaluate the model’s performance on the heldout test partition. The test data is loaded similarly to the training partition, but with one notable difference; we use CountVectorizer’s transform() method instead of the fit_transform() method so that the vocabulary is not adjusted for the test data. We won’t show here the loading of the test partition since it is so similar to the code already shown, but it is available in the Jupyter notebook that accompanies this section. . 7   As an extreme example, consider a dataset where all the positive examples appear first in the training partition. This would cause the perceptron to artificially inflate the weights of the features that occur in these examples, a situation from which the learning algorithm may struggle to recover. 
 . 8  We use typewriter font when we discuss variables in the code, to distinguish code from the theoretical discussion in the other chapters. 
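The evaluation described in the next paragraph might look like the following sketch; binary_classification_report below is a minimal version of the helper implemented in the notebook, and X_test, y_test, w, and b are assumed to exist, with the test matrix built using the vectorizer's transform() method as just discussed:

def binary_classification_report(y_true, y_pred):
    # counts for the positive class
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return {'precision': precision, 'recall': recall, 'f1-score': f1}

# score the whole test set with a single matrix-vector product
y_pred = (X_test @ w + b > 0).astype(int)
binary_classification_report(y_test, y_pred)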
9 https://github.com/tqdm/tqdm
Using the model to assign labels to all the test data is easily done in one step – we simply multiply the entire test data document-term matrix by the previously learned weights and add the bias. Scores greater than zero indicate a positive review, and those less than zero are negative. At this point we can evaluate the classifier's performance, which we will do using precision, recall, and F1 scores for binary classification (described in Section 2.3). For this purpose, we implement a function called binary_classification_report that computes these metrics and returns them as a dictionary. We call this function to compare the predicted labels to the true labels, and obtain the evaluation scores. Our F1 score here is 86.8%, which is much higher than the baseline that assigns labels randomly, which yields an F1 score of about 50%. This is a good result, especially considering the simplicity of the perceptron! In the next sections and chapters, we will discuss a battery of strategies to considerably improve this performance. 4.1.4 Binary Logistic Regression from Scratch Using the same task, dataset, and evaluation, we will now implement a logistic regression classifier, as described in Algorithm 5 from Chapter 3. To give the reader hands-on experience with the implementation of the gradient calculations for logistic regression, we start by implementing it from scratch using NumPy. All the code shown in this section is available in the chap4_logistic_regression_numpy notebook. In the perceptron implementation, we represented the weights and the bias as two different variables. Here, however, we will use a different approach that will allow us to unify them into a single vector variable. Specifically, we take advantage of the similarity between the derivative of the cost function with respect to the weights (Equation 3.14) and the derivative of the cost with respect to the bias (Equation 3.15):
d C_i(w, b) / d w_j = (σ_i − y_i) x_ij (3.14 revisited)
d C_i(w, b) / d b = σ_i − y_i (3.15 revisited)
Note that the two derivative formulas are identical except that the former has a multiplication by x_ij, while the latter does not. However, since σ_i − y_i = (σ_i − y_i) · 1, we can multiply the derivative of the cost with respect to the bias by one without changing the semantics. This gives an opportunity for combining the computations, doing them both in a single pass. The idea is that we can treat the bias as a weight corresponding to a feature that always has a value of one. To implement this idea, we create a NumPy array of ones of the same length as the number of examples in our training set (i.e., the number of rows in the data matrix), and add it as a new column to the data matrix using NumPy's column_stack function. Next, we need to initialize our model. This time we will use a single NumPy array w of the same length as the number of columns in the data matrix. The weight vector w is initialized randomly with values between 0 and 1:
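A minimal sketch of these two steps, reusing the X_train document-term matrix from above (the exact code lives in the chap4_logistic_regression_numpy notebook):

# append a column of ones so the bias can be learned as a regular weight
X_train = np.column_stack((X_train, np.ones(X_train.shape[0])))

# one weight per column (including the bias column), initialized in [0, 1)
n_examples, n_features = X_train.shape
w = np.random.random(n_features)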
Before implementing the learning algorithm, we need an implementation of the logistic function. Recall that the logistic function is
σ(x) = 1 / (1 + e^(−x)) (3.1 revisited)
This function can be easily implemented in NumPy. However, a naive implementation may produce an overflow warning during training: the term overflow indicates that the result of evaluating exp(-x) is a number so large that it can't be represented by a float (specifically, we're using float64 numbers). We will avoid this issue by not calling exp with values that will overflow. NumPy provides the function finfo that can be consulted to find the limits of floating point numbers. The log of the largest floating point number is the largest number for which exp() will not overflow, so we will use it as a threshold to filter out problematic values. We now have everything we need to implement Algorithm 4. The steps to follow for each example are: (1) use the model to make a prediction, (2) calculate the gradient of the loss function with respect to the model parameters, and (3) update the model parameters using the gradient. The size of the update is controlled by the learning rate.
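The following sketch puts these pieces together: a logistic function guarded against overflow, and the stochastic gradient descent loop of Algorithm 4. It reuses X_train, w, and n_examples from the previous sketch; the learning rate and number of epochs are assumptions:

# largest value for which exp() still fits in a float64
max_exponent = np.log(np.finfo(np.float64).max)

def sigmoid(z):
    # avoid calling exp() with a value that would overflow
    if -z > max_exponent:
        return 0.0
    return 1 / (1 + np.exp(-z))

lr = 0.1        # learning rate (assumption)
n_epochs = 10   # assumption
indices = np.arange(n_examples)
for epoch in range(n_epochs):
    np.random.shuffle(indices)
    for i in tqdm(indices, desc=f'epoch {epoch+1}'):
        x, y_true = X_train[i], y_train[i]
        sigma = sigmoid(x @ w)           # (1) prediction
        gradient = (sigma - y_true) * x  # (2) gradient of the loss
        w = w - lr * gradient            # (3) parameter update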
Once the model has been trained, we evaluate it on the test dataset using our binary_classification_report function from the previous section. Loading and preprocessing the test dataset follows the same steps as with the previous classifier. We omit the code for brevity. These are the results: the performance is comparable with that of the perceptron. The difference in F1 scores between the two classifiers (84.9% here vs. 86.8% for the perceptron) is not significant. Classifier parity is probably attributable to the fact that the signal distinguishing the two classes is easy to learn, so the simpler perceptron training algorithm is sufficient in this case. Nevertheless, this task is useful in showing how to implement the logistic regression model from scratch, i.e., by implementing the gradient calculation and parameter updates manually. Next, we will implement the same model again using PyTorch, highlighting how this machine learning library simplifies the process. 4.1.5 Binary Logistic Regression Utilizing PyTorch While it is fairly straightforward to compute the derivatives for logistic regression and implement them directly in NumPy, this will not scale well to arbitrary neural architectures. Fortunately, there are libraries that automate the computation of the derivatives of the cost function (assuming it is differentiable!) for any neural network, and use the resulting gradients to perform gradient descent or other more sophisticated optimization procedures. To this end, we will use the PyTorch deep learning library.10 The corresponding notebook for this section is chap4_logistic_regression_pytorch_bce. Our model for logistic regression corresponds to PyTorch's Linear layer. When we instantiate this layer, we specify the size of the inputs (the size of our vocabulary) and the size of the output, i.e., the number of output neurons (which is one because we're doing binary classification). The loss function we use is the binary cross-entropy loss (see Chapter 3), which is implemented as BCEWithLogitsLoss in PyTorch. In PyTorch, the gradients obtained from the loss function are applied to the model by an optimizer object, which implements and applies an optimization algorithm. Here we will use the vanilla stochastic gradient descent optimizer; we set its learning rate to 0.1. This is equivalent to the discussion in Section 3.2. Similarly to the manual implementation, the steps required to train the model for a given training example are: (1) ensure the gradients are set to zeros, (2) apply the model to obtain a prediction, (3) calculate the loss, (4) compute the gradient of the loss by back-propagation, and (5) update the model parameters. 10 https://pytorch.org/ Recall that in our previous implementation everything was hardcoded: applying the model, computing the gradients, and optimizing the model parameters. Here, however, the implementation of the logistic regression is expressed at a higher level of abstraction. This means that we are describing the logical steps without specifying a particular implementation. Instead, implementation details are the responsibility of the chosen model, loss function, and optimizer. Thus, we could even choose a different model, loss function, and/or optimizer, and use the same training steps with little or no modification. This decoupling of the training logic from the implementation details is one of the main advantages of libraries such as PyTorch. Calling the model as a function, with the feature vectors as inputs, produces the predicted scores. Once again, a positive score corresponds to a positive label. When we evaluate this implementation on the test dataset, we obtain results that are in line with our previous models. Writing the perceptron and the logistic regression from scratch is a good exercise, as it exposes us to the fundamentals of implementing machine learning algorithms. However, this becomes cumbersome for more complex neural architectures. For this reason, from this point on, we will use PyTorch for all our coding examples. 4.2 Multiclass Classification So far, in this chapter we have discussed implementing binary classifiers. Next, we will modify these binary classifiers to perform multiclass classification, following the discussion in Section 3.5. 4.2.1 AG News Dataset Before explaining the actual training/testing code, we have to choose a new dataset that is suitable for multiclass classification. To this end, we will use the AG News Classification Dataset (Zhang et al., 2015), a subset of the larger AG corpus of news articles collected from thousands of different news sources.11 The classification dataset consists of four classes, and the data is equally balanced across all classes (30,000 articles per class for train, and 1,900 articles per class for testing). The goal of the task is to classify each article as one of the four classes: World, Sports, Business, or Sci/Tech. 11 http://groups.di.unipi.it/~gulli/AG_corpus_of_news_articles.html 4.2.2 Preparing the Dataset The AG News Dataset is distributed as two CSV files (one for training and one for testing), each containing three columns: the class index, the title, and the description. The dataset also provides a text file that maps the above class indexes to more descriptive class labels.
for tabular data analysis,12 is a natural choice for loading and transform-
ing it. To this end, our Jupyter notebook (chap4_multiclass_logistic_regression) demonstrates the sequence of steps required to handle the data, as well
as model training and evaluation. First, we show how to load the CSV,
add column names, and inspect the result:
[dataframe preview: 120,000 rows × 3 columns (class index, title, description)]
Since the class labels themselves are in a separate file, we manually add them to the pandas data structure (called dataframe in pandas' terminology) to increase the interpretability of the data. We use the class index column as a starting point, and use its map method to create a new column with the corresponding labels (technically a new Series object) that is added to the dataframe using its insert method, which allows us to insert the column in a specific position. Note that the label indices are one-based, so we subtract one to align them with their labels. 12 https://pandas.pydata.org
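The corresponding fragment from the chap4_multiclass_logistic_regression notebook (the data paths are the ones used there):

import pandas as pd

# load the training CSV and name its columns
train_df = pd.read_csv('data/ag_news_csv/train.csv', header=None)
train_df.columns = ['class index', 'title', 'description']

# read the label names and add a human-readable 'class' column
labels = open('data/ag_news_csv/classes.txt').read().splitlines()
classes = train_df['class index'].map(lambda i: labels[i-1])  # indices are one-based
train_df.insert(1, 'class', classes)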
[dataframe preview: 120,000 rows × 4 columns (class index, class, title, description); e.g., class index 3 corresponds to class Business]
Next we will preprocess the text. First we lowercase the title and description, and then we concatenate them into a single string. Then we remove some spurious backslashes from the text. Once this is done, the preprocessed text is added to the dataframe as a new column. Note that pandas allows these steps to be applied to all rows simultaneously.
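These preprocessing steps, as they appear in the notebook:

# lowercase and concatenate the title and description, then drop the backslashes
title = train_df['title'].str.lower()
descr = train_df['description'].str.lower()
text = title + " " + descr
train_df['text'] = text.str.replace('\\', ' ', regex=False)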
[dataframe preview: 120,000 rows × 5 columns (class index, class, title, description, text)]
At this point, the text is ready to be tokenized. For this purpose we will use NLTK's word_tokenize function. This function can be applied to the whole column at once using the pandas map function, which returns a new column which we add to the dataframe. However, here we actually use the progress_map function, which provides a visual progress bar. This visual feedback is especially helpful for tasks that take more time to complete.
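In the notebook this looks as follows (progress_map becomes available after calling tqdm.pandas()):

from nltk.tokenize import word_tokenize

# tokenize every row; progress_map shows a tqdm progress bar
train_df['tokens'] = train_df['text'].progress_map(word_tokenize)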
[dataframe preview: 120,000 rows × 6 columns (class index, class, title, description, text, tokens)]
From the tokens we just created, we then create a vocabulary for our corpus. Here, we only keep the words that occur at least 10 times, decreasing the memory needed and reducing the likelihood that our vocabulary contains noisy tokens. Note that each row in the tokens column contains a list of tokens. In order to create the vocabulary, we will need to convert the Series of lists of tokens into a Series of tokens using the explode() Pandas method. Then we will use the value_counts() method to create a Series object in which the index contains the tokens and the values are the number of times they appear in the corpus. The next step is removing the tokens with a count lower than our chosen threshold. Finally, we create a list with the remaining tokens, as well as a dictionary that maps tokens to token ids (i.e., the index of the token in the list). We include in the vocabulary a special token [UNK] that will be used as a placeholder for tokens that do not appear in our vocabulary after the frequency pruning. Using this vocabulary, we construct a feature vector for each news article in the corpus. This feature vector will be encoded as a dictionary, with keys corresponding to token ids, and values corresponding to the number of times the token appears in the article. As above, the feature vectors will be stored as a new column in the dataframe.
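The vocabulary and feature-vector construction, condensed from the notebook:

from collections import defaultdict

# keep tokens whose corpus frequency exceeds the threshold
threshold = 10
tokens = train_df['tokens'].explode().value_counts()
tokens = tokens[tokens > threshold]

# token list and token-to-id mapping, with [UNK] reserved for unknown words
id_to_token = ['[UNK]'] + tokens.index.tolist()
token_to_id = {w: i for i, w in enumerate(id_to_token)}
vocabulary_size = len(id_to_token)

def make_feature_vector(tokens, unk_id=0):
    # bag of words as a dictionary from token id to count
    vector = defaultdict(int)
    for t in tokens:
        vector[token_to_id.get(t, unk_id)] += 1
    return vector

train_df['features'] = train_df['tokens'].progress_map(make_feature_vector)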
[dataframe preview: 120,000 rows × 7 columns (class index, class, title, description, text, tokens, features)]
The final preprocessing step is converting the features and the class indices into PyTorch tensors. Recall that we need to subtract one from the class indices to make them zero-based. At this point, the data is fully processed and we are ready to begin training. 4.2.3 Multiclass Logistic Regression Using PyTorch The model itself is a single linear layer whose input size corresponds to the size of our vocabulary, and its output size corresponds to the number of classes in our corpus. PyTorch's Linear layer includes a bias by default, so there is no need to handle that manually the way we did for our perceptron example. The code for training this model (which implements Algorithm 6) is almost identical to that of the binary logistic regression. However, since we have to calculate a score for each of the four different classes, we need to replace the previous BCEWithLogitsLoss with CrossEntropyLoss, which applies a softmax over the scores to obtain probabilities for each class. For each example, the model predicts 4 scores – one for each label. The label with the highest score is selected using the argmax function. We evaluate the predictions of our model for each class using scikit-learn's classification_report, which handles the results of multiclass classification.
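The tensor conversion and the training loop, condensed from the notebook (device placement and random seeding are omitted here for brevity):

import numpy as np
import torch
from torch import nn, optim
from tqdm import tqdm

# turn the sparse feature dictionaries into dense vectors, then into tensors
def make_dense(feats):
    x = np.zeros(vocabulary_size)
    for k, v in feats.items():
        x[k] = v
    return x

X_train = torch.tensor(np.stack(train_df['features'].map(make_dense)), dtype=torch.float32)
y_train = torch.tensor(train_df['class index'].to_numpy() - 1)  # zero-based labels

# model, loss function, and optimizer
lr, n_epochs = 1.0, 5
model = nn.Linear(X_train.shape[1], len(labels))
loss_func = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=lr)

indices = np.arange(X_train.shape[0])
for epoch in range(n_epochs):
    np.random.shuffle(indices)
    for i in tqdm(indices, desc=f'epoch {epoch+1}'):
        model.zero_grad()                                  # clear gradients
        y_pred = model(X_train[i].unsqueeze(0))            # predict label scores
        loss = loss_func(y_pred, y_train[i].unsqueeze(0))  # compute loss
        loss.backward()                                    # backpropagate
        optimizer.step()                                   # update parameters

# at test time, the predicted label is the index of the highest score
# (X_test is assumed to have been preprocessed exactly like X_train)
with torch.no_grad():
    y_pred = torch.argmax(model(X_test), dim=1)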
4.3 Summary In this chapter, we used movie review and news article classification to illustrate the implementation of the previously described algorithms for the binary perceptron, binary logistic regression, and multiclass logistic regression. For the binary logistic regression, we made a direct comparison between the lower-level NumPy implementation and a higher-level version that made use of PyTorch. We hope that through this series of exercises the reader has noted several key takeaways. First, data preparation is important and should be done thoughtfully. Certain tasks (e.g., text normalization or sentence splitting) are going to be frequently needed if you continue with NLP, so using or creating generic functions can be very helpful. However, what works for one dataset and one language may not be suitable for another scenario. For example, in our case, we selected different tokenizers for each of our tasks to account for the different registers of English, as well as removing diacritics during normalization. Second, when it comes to implementing machine learning algorithms, it is often easier to use a higher-level library such as PyTorch instead of NumPy. For example, with the former, the gradients are calculated by the library, whereas in NumPy we have to code them ourselves. This becomes cumbersome quickly. For example, even the derivative of the softmax is non-trivial. Third, PyTorch imposes a training structure that remains largely the same, regardless of what models are being trained. That is, at a high level, the same steps are always required: clearing the current gradients, predicting output scores for the provided inputs, calculating the loss, and optimizing. These features make PyTorch a very powerful and convenient deep learning library; we will continue to use it throughout the remainder of the book to implement more complex neural architectures.
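For reference, the full listing of the chap4_multiclass_logistic_regression notebook, from which the fragments above are drawn, follows.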
#!/usr/bin/env python
# coding: utf-8

# # Multiclass Text Classification with
# # Logistic Regression Implemented with PyTorch and CE Loss

# First, we will do some initialization.

# In[1]:
import random
import torch
import numpy as np
import pandas as pd
from tqdm.notebook import tqdm

# enable tqdm in pandas
tqdm.pandas()

# set to True to use the gpu (if there is one available)
use_gpu = True

# select device
device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu')
print(f'device: {device.type}')

# random seed
seed = 1234

# set random seed
if seed is not None:
    print(f'random seed: {seed}')
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)

# We will be using the AG's News Topic Classification Dataset.
# It is stored in two CSV files: `train.csv` and `test.csv`, as well as a `classes.txt` that stores the labels of the classes to predict.
#
# First, we will load the training dataset using [pandas](https://pandas.pydata.org/) and take a quick look at the data.

# In[2]:
train_df = pd.read_csv('data/ag_news_csv/train.csv', header=None)
train_df.columns = ['class index', 'title', 'description']
train_df

# The dataset consists of 120,000 examples, each consisting of a class index, a title, and a description.
# The class labels are distributed in a separate file. We will add the labels to the dataset so that we can interpret the data more easily. Note that the label indexes are one-based, so we need to subtract one to retrieve them from the list.

# In[3]:
labels = open('data/ag_news_csv/classes.txt').read().splitlines()
classes = train_df['class index'].map(lambda i: labels[i-1])
train_df.insert(1, 'class', classes)
train_df

# Let's inspect how balanced our examples are by using a bar plot.

# In[4]:
pd.value_counts(train_df['class']).plot.bar()

# The classes are evenly distributed. That's great!
#
# However, the text contains some spurious backslashes in some parts of the text.
# They are meant to represent newlines in the original text.
# An example can be seen below, between the words "dwindling" and "band".

# In[5]:
print(train_df.loc[0, 'description'])

# We will replace the backslashes with spaces on the whole column using pandas replace method.

# In[6]:
title = train_df['title'].str.lower()
descr = train_df['description'].str.lower()
text = title + " " + descr
train_df['text'] = text.str.replace('\\', ' ', regex=False)
train_df

# Now we will proceed to tokenize the title and description columns using NLTK's word_tokenize().
# We will add a new column to our dataframe with the list of tokens.

# In[7]:
from nltk.tokenize import word_tokenize
train_df['tokens'] = train_df['text'].progress_map(word_tokenize)
train_df

# Now we will create a vocabulary from the training data. We will only keep the terms that repeat beyond some threshold established below.

# In[8]:
threshold = 10
tokens = train_df['tokens'].explode().value_counts()
tokens = tokens[tokens > threshold]
id_to_token = ['[UNK]'] + tokens.index.tolist()
token_to_id = {w: i for i, w in enumerate(id_to_token)}
vocabulary_size = len(id_to_token)
print(f'vocabulary size: {vocabulary_size:,}')

# In[9]:
from collections import defaultdict

def make_feature_vector(tokens, unk_id=0):
    vector = defaultdict(int)
    for t in tokens:
        i = token_to_id.get(t, unk_id)
        vector[i] += 1
    return vector

train_df['features'] = train_df['tokens'].progress_map(make_feature_vector)
train_df

# In[10]:
def make_dense(feats):
    x = np.zeros(vocabulary_size)
    for k, v in feats.items():
        x[k] = v
    return x

X_train = np.stack(train_df['features'].progress_map(make_dense))
y_train = train_df['class index'].to_numpy() - 1
X_train = torch.tensor(X_train, dtype=torch.float32)
y_train = torch.tensor(y_train)

# In[11]:
from torch import nn
from torch import optim

# hyperparameters
lr = 1.0
n_epochs = 5
n_examples = X_train.shape[0]
n_feats = X_train.shape[1]
n_classes = len(labels)

# initialize the model, loss function, and optimizer
model = nn.Linear(n_feats, n_classes).to(device)
loss_func = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=lr)

# train the model
indices = np.arange(n_examples)
for epoch in range(n_epochs):
    np.random.shuffle(indices)
    for i in tqdm(indices, desc=f'epoch {epoch+1}'):
        # clear gradients
        model.zero_grad()
        # send datum to right device
        x = X_train[i].unsqueeze(0).to(device)
        y_true = y_train[i].unsqueeze(0).to(device)
        # predict label scores
        y_pred = model(x)
        # compute loss
        loss = loss_func(y_pred, y_true)
        # backpropagate
        loss.backward()
        # optimize model parameters
        optimizer.step()

# Next, we evaluate on the test dataset

# In[12]:
# repeat all preprocessing done above, this time on the test set
test_df = pd.read_csv('data/ag_news_csv/test.csv', header=None)
test_df.columns = ['class index', 'title', 'description']
test_df['text'] = test_df['title'].str.lower() + " " + test_df['description'].str.lower()
test_df['text'] = test_df['text'].str.replace('\\', ' ', regex=False)
test_df['tokens'] = test_df['text'].progress_map(word_tokenize)
test_df['features'] = test_df['tokens'].progress_map(make_feature_vector)
X_test = np.stack(test_df['features'].progress_map(make_dense))
y_test = test_df['class index'].to_numpy() - 1
X_test = torch.tensor(X_test, dtype=torch.float32)
y_test = torch.tensor(y_test)

# In[13]:
from sklearn.metrics import classification_report

# set model to evaluation mode
model.eval()

# don't store gradients
with torch.no_grad():
    X_test = X_test.to(device)
    y_pred = torch.argmax(model(X_test), dim=1)
    y_pred = y_pred.cpu().numpy()

print(classification_report(y_test, y_pred, target_names=labels))
4 Implementing Text Classification Using Perceptron and Logistic Regression In the previous chapters we have discussed the theory behind the perceptron and logistic regression, including mathematical explanations of how and why they are able to learn from examples. In this chapter we will transition from math to code. Specifically, we will discuss how to implement these models in the Python programming language. All the code that we will introduce throughout this book is available online as well: http://clulab.github.io/gentlenlp/. The reader who is not familiar with the Python programming language is encouraged to read first Appendix A, for a brief introduction to the language, and Appendix B, for a discussion on how computers encode and preprocess text. Once done, please return here. To get a better understanding of how these algorithms work under the hood, we will start by implementing them from scratch. However, as the book progresses, we will introduce some of the popular tools and libraries that make Python the language of choice for machine learning, e.g., PyTorch,1 and Hugging Face’s transformers.2 The code for all the examples in the book is provided in the form of Jupyter notebooks.3 Important fragments of these notebooks will be presented in the implementation chapters so that the reader has the whole picture just by reading the book. However, we strongly encourage you to download the notebooks and execute them yourself. We also encourage you to modify them to conduct your own experiments! 1 https://pytorch.org
2 https://huggingface.co 3 https://jupyter.org/ 55 56 Implementing Text Classification Using Perceptron and LR 4.1 Binary Classification We begin this chapter with binary classification. That is, we aim to train classifiers that assign one of two labels to a given text. As the example for this task, we will train a review classifier using the the Large Movie Review Dataset (Maas et al., 2011).4 We tackle this task by implementing first a binary perceptron classifier, followed by a binary logistic regression one. We will implement the latter both from scratch as well as using PyTorch, so the reader has a clearer understanding on how PyTorch works “under the hood.” 4.1.1 Large Movie Review Dataset This dataset contains movie reviews and their associated scores (between 1 and 10) as provided by IMDb.5 converted these scores to binary labels by assigning each review a positive or negative label if the review score was above 6 or below 5, respectively. Reviews with scores 5 and 6 were considered too neutral and thus excluded. We follow the same protocol in this chapter. The dataset is divided in two even partitions called train and test, each containing 25,000 reviews. The dataset also provides additional unlabeled reviews, but we will not use those here. Each partition contains two directories called pos and neg where the positive and negative examples are stored. Each review is stored in an independent text file, whose name is composed of an id unique to the partition and the score associated with the review, separated by an underscore. An example of a positive and a negative review is shown in Table 4.1. 4.1.2 Bag-of-words Model As discussed in Section 2.2, we will encode the text to classify as a bag of words. That is, we encode each review as a list of numbers, with each position in the list corresponding to a word in our vocabulary, and the value stored in that position corresponding to the number of times the word appears in the review. For example, say we want to encode the following two reviews: 4 https://ai.stanford.edu/~amaas/data/sentiment/ 5 https://www.imdb.com/ Maas et al. 4.1 Binary Classification 57 Table 4.1 Two examples of movie reviews from IMDb. The first is a positive review of the movie Puss in Boots (1988). The second is a negative review of the movie Valentine (2001). These reviews can be found at https://www.imdb.com/review/rw0606396/ and https://www.imdb.com/review/rw0721861/, respectively. Filename Score Binary Label train/pos/24_8.txt 8/10 Positive train/neg/141_3.txt 3/10 Negative Review Text Although this was obviously a low-budget production, the performances and the songs in this movie are worth seeing. One of Walken’s few musical roles to date. (he is a marvelous dancer and singer and he demonstrates his acrobatic skills as well - watch for the cartwheel!) Also starring Jason Connery. A great children’s story and very likable characters. This stalk and slash turkey manages to bring nothing new to an increasingly stale genre. A masked killer stalks young, pert girls and slaughters them in a variety of gruesome ways, none of which are particularly inventive. It’s not scary, it’s not clever, and it’s not funny. So what was the point of it? Review 1: Review 2: "I liked the movie. My friend liked it too. " "I hated it. Would not recommend. " First, we need to create a vocabulary that maps each word to an id that uniquely identifies it. 
Each of these numbers will be used as the index in a list, so they must start at zero and grow by one for each word in the vocabulary. For example, one possible vocabulary that encodes the previous reviews is: {'would': 0, 'hated': 1, 58 Implementing Text Classification Using Perceptron and LR 'my': 2, 'liked': 3, 'not': 4, 'it': 5, 'movie': 6, 'recommend': 7, 'the': 8, 'I': 9, 'too': 10, 'friend': 11} Using this mapping, we can encode the two reviews as follows: Review1: [0,0,1,2,0,1,1,0,1,1,1,1] Review2: [1,1,0,0,1,1,0,1,0,1,0,0] Note that the word liked (fourth position) in the first review has a value of two. This is because this word appears twice in that review. This is a small example with a vocabulary of only 12 terms. Of course, the same process needs to be implemented for our whole training dataset. For this purpose we will use scikit-learn’s CountVectorizer class.6 Using the CountVectorizer class simplifies things, allowing us to get started quickly with a bag-of-words approach. However, note that it makes several simplifying assumptions (e.g., text is lowercased, and punctuation and single character tokens are removed). Some of these may not be adequate to other tasks. First, we need to obtain the filenames for the reviews in the training set: Once we have acquired the filenames for the training reviews, we need
to read them using the CountVectorizer. In order for the CountVectorizer to open and read the files for us, we make use of the input='filename' constructor parameter (otherwise it would expect the string content directly). The CountVectorizer provides three methods that will be use-
ful for us: a method called fit() that is used to acquire the vocabulary,
a method transform() that converts the text into the bag-of-words representation, and a method fit_transform() that conveniently acquires the vocabulary and transforms the data in a single step. The resulting object is referred to as a document-term matrix, where each row corre- 6 https://scikitlearn.org/stable/modules/generated/sklearn.feature_ extraction.text.CountVectorizer.html 4.1 Binary Classification 59 sponds to a document, and each column corresponds to a term in the vocabulary. As the output above indicates, the resulting matrix has 25,000 rows (one for each review), and 74,849 columns (one for each term). Also you may note that this matrix is sparse, with 3,445,861 stored elements. A regular matrix of shape 25,000×74,849 would have 1,871,225,000 elements. However, most of the elements in a document-term matrix are zeros because only a few words from the vocabulary appear in each document. A sparse matrix takes advantage of this fact by storing only the non-zero cells in order to reduce the memory required to store it. Thus, sparse matrices are convenient, especially when dealing with lots of data. Nevertheless, to simplify the downstream code in this example, we will convert it into a dense matrix, i.e., a regular two-dimensional NumPy array. Finally, we also need the labels of the reviews. We assign a label of one to positive reviews, and a label of zero to negative ones. Note that the first half of the reviews are positive and the second half are negative. The label at the ith position of the y_train array corresponds to the review encoded in the ith row of the X_train matrix. 4.1.3 Perceptron Now that we have defined our task and the data processing pipeline, we will implement a perceptron classifier that classifies the movie reviews as positive or negative. The entire code discussed in this section is available in the chap4_perceptron notebook. Recall from Section 2.4 that the perceptron is composed of a weight vector w and a bias term b. These will be represented as a NumPy array w of the same length as our document vectors, and a variable b for the bias term. Both will be initialized with zeros. The parameters w and b are learned through the following algorithm, which implements Algorithm 2 from Chapter 2: There are a couple of details to point out. Line 3 of Algorithm 2 indicates that we need to repeat the training loop until convergence. Theoretically, convergence is defined as predicting all training examples correctly. This is an ambitious requirement, which is not always possible in practice, so in this code we also include a stop condition if we reach a maximum number of epochs. Another crucial difference between our implementation here and the theoretical Algorithm 2, is that we randomize the order in which the training examples are seen at the beginning of 60 Implementing Text Classification Using Perceptron and LR each epoch. This simple (but highly recommended!) change is necessary to avoid the introduction of spurious biases due to the arbitrary order of the examples in the original training partition.7 We accomplish this by storing the indices corresponding to the X_train matrix rows in a NumPy array, and shuffling these indices at the beginning of each epoch. We shuffle the indices instead of the examples so that we can preserve the mapping between examples and labels. The training loop aligns closely with Algorithm 2. 
We start by iterating over each example in our training data, storing the current example in the variable x,8 and its corresponding label in the variable y_true. Next, we compute the perceptron decision function shown in Algorithm 1. Note that NumPy (as well as PyTorch) uses Python’s @ operator to indicate vector or matrix multiplication, depending on its operand types. Here we use it to calculate the dot product of the example x and the weights w. To this we add the bias b to obtain the predicted score, whose sign is used to assign a positive or negative predicted label. If the prediction is correct, then no update is needed, and we can move on to the next training example. However, if the prediction is incorrect, then we need to adjust w and b, as described in Algorithm 2. Sidebar 4.1 The tqdm function This is our first exposure to the tqdm function. tqdm is a progress bar that “make your loops show a smart progress meter.”9 The name tqdm comes from the Arabic word taqaddum which can mean “progress.” Using tqdm is as simple as wrapping it around the collection to be traversed. After training, we evaluate the model’s performance on the heldout test partition. The test data is loaded similarly to the training partition, but with one notable difference; we use CountVectorizer’s transform() method instead of the fit_transform() method so that the vocabulary is not adjusted for the test data. We won’t show here the loading of the test partition since it is so similar to the code already shown, but it is available in the Jupyter notebook that accompanies this section. . 7   As an extreme example, consider a dataset where all the positive examples appear first in the training partition. This would cause the perceptron to artificially inflate the weights of the features that occur in these examples, a situation from which the learning algorithm may struggle to recover. 
 . 8  We use typewriter font when we discuss variables in the code, to distinguish code from the theoretical discussion in the other chapters. 
 9 https://github.com/tqdm/tqdm 4.1 Binary Classification 61 Using the model to assign labels to all the test data is easily done in one step – we simply multiply the entire test data document-term matrix by the previously learned weights and add the bias. Scores greater than zero indicate a positive review, and those less than zero are negative. At this point we can evaluate the classifier’s performance, which we will do using precision, recall, and F1 scores for binary classification (described in Section 2.3). For this purpose, we implement a function called binary_classification_report that computes these metrics and returns them as a dictionary: We call this function to compare the predicted labels to the true labels, and obtain the evaluation scores. Our F1 score here is 86.8%, which is much higher than the baseline that assigns labels randomly, which yields an F1 score of about 50%. This is a good result, especially considering the simplicity of the perceptron! In the next sections and chapters, we will discuss a battery of strategies to considerably improve this performance. 4.1.4 Binary Logistic Regression from Scratch Using the same task, dataset, and evaluation, we will now implement a logistic regression classifier, as described in Algorithm 5 from Chapter 3. To give the reader hands-on experience with the implementation of the gradient calculations for logistic regression, we start by implementing it from scratch using NumPy. All the code shown in this section is available in the chap4_logistic_regression_numpy notebook. In the perceptron implementation, we represented the weights and the bias as two different variables. Here, however, we will use a different approach that will allow us to unify them into a single vector variable. Specifically, we take advantage of the similarity between the derivative of the cost function with respect to the weights (Equation 3.14) and the derivative of the cost with respect to the bias (Equation 3.15). d Ci(w, b) = (σi − yi)xij (3.14 revisited) dwj d Ci(w, b) = σi − yi (3.15 revisited) db Note that the two derivative formulas are identical except that the former has a multiplication by xij, while the latter does not. However, 62 Implementing Text Classification Using Perceptron and LR since σi − yi = (σi − yi)1 we can multiply the derivative of the cost with respect to the bias by one without changing the semantics. This gives an opportunity for combining the computations, doing them both in a single pass. The idea is that we can treat the bias as a weight corresponding to a feature that always has a value of one. As can be seen above, we created a NumPy array of ones of the same length as the number of examples in our training set (i.e., the number of rows in the data matrix). Then we add this array as a new column to the data matrix, using NumPy’s column_stack function. Next, we need to initialize our model. This time we will use a single NumPy array w of the same length as the number of columns in the data matrix. The weight vector w is initialized randomly with values between 0 and 1: Before implementing the learning algorithm, we need an implementation of the logistic function. 
Recall that the logistic function is σ(x) = 1 (3.1 revisited) 1+e−x This function can be easily implemented in NumPy as follows: However, this naive implementation may produce the following warning during training: The term overflow indicates that the result of evaluating exp(-x) is a number so large that it can’t be represented by a float (specifically, we’re using float64 numbers). We will avoid this issue by not calling exp with values that will overflow. NumPy provides the function finfo that can be consulted to find the limits of floating point numbers: The log of the largest floating point number is the largest number for which exp() will not overflow, so we will use it as a threshold to filter out problematic values: We now have everything we need to implement Algorithm 4. The steps to follow for each example are: (1) use the model to make a prediction, (2) calculate the gradient of the loss function with respect to the model parameters, and (3) update the model parameters using the gradient. The size of the update is controlled by the learning rate. Once the model has been trained, we evaluate it on the test dataset using our binary_classification_report function from the previous section. Loading and preprocessing the test dataset follows the same 4.1 Binary Classification 63 steps as with the previous classifier. We omit the code for brevity. These are the results: The performance is comparable with that of the perceptron. The difference in F1 scores between the two classifiers (84.9% here vs. 86.8% for the perceptron) is not significant. Classifier parity is probably attributable to the fact that the signal distinguishing the two classes being easy to learn and the simpler perceptron training algorithm being sufficient in this case. Nevertheless, this task is useful in showing how to implement the logistic regression model from scratch, i.e., by implementing the gradient calculation and parameter updates manually. Next, we will implement the same model again using PyTorch, highlighting how this machine learning library simplifies the process. 4.1.5 Binary Logistic Regression Utilizing PyTorch While it is fairly straightforward to compute the derivatives for logistic regression and implement then directly in NumPy, this will not scale well to arbitrary neural architectures. Fortunately, there are libraries that automate the computation of the derivatives of the cost function (assuming it is differentiable!) for any neural network, and use the resulting gradients to perform gradient descent or other more sophisticated optimization procedures. To this end, we will use the PyTorch deep learning library10. The corresponding notebook for this section is chap4_logistic_regression_pytorch_bce. Our model for logistic regression corresponds to PyTorch’s Linear layer. When we instantiate this layer, we specify the size of the inputs (the size of our vocabulary) and the size of the output, i.e., the number of output neurons (which is one because we’re doing binary classification). The loss function we use is the binary cross-entropy loss (see Chapter 3), which is implemented as BCEWithLogitsLoss in PyTorch. In PyTorch, the gradients obtained from the loss function are applied to the model by an optimizer object, which implements and applies an optimization algorithm. Here we will use the vanilla stochastic gradient descent optimizer; we set its learning rate to 0.1. This is equivalent to the discussion in Section 3.2. 
Similarly to the manual implementation, the steps required to train the model for a given training example are: (1) ensure the gradients are set to zero, (2) apply the model to obtain a prediction, (3) calculate the loss, (4) compute the gradient of the loss by back-propagation, and (5) update the model parameters.

Recall that in our previous implementation everything was hardcoded: applying the model, computing the gradients, and optimizing the model parameters. Here, however, the implementation of the logistic regression is expressed at a higher level of abstraction. This means that we are describing the logical steps without specifying a particular implementation. Instead, implementation details are the responsibility of the chosen model, loss function, and optimizer. Thus, we could even choose a different model, loss function, and/or optimizer, and use the same training steps with little or no modification. This decoupling of the training logic from the implementation details is one of the main advantages of libraries such as PyTorch.

As shown in the code above, calling the model as a function, with the feature vectors as inputs, produces the predicted scores. Once again, a positive score corresponds to a positive label. When we evaluate this implementation on the test dataset, we obtain results that are in line with our previous models.

Writing the perceptron and the logistic regression from scratch is a good exercise, as it exposes us to the fundamentals of implementing machine learning algorithms. However, this becomes cumbersome for more complex neural architectures. For this reason, from this point on, we will use PyTorch for all our coding examples.

4.2 Multiclass Classification

So far in this chapter we have discussed implementing binary classifiers. Next, we will modify these binary classifiers to perform multiclass classification, following the discussion in Section 3.5.

4.2.1 AG News Dataset

Before explaining the actual training/testing code, we have to choose a new dataset that is suitable for multiclass classification. To this end, we will use the AG News Classification Dataset (Zhang et al., 2015), a subset of the larger AG corpus of news articles collected from thousands of different news sources.11 The classification dataset consists of four classes, and the data is equally balanced across all classes (30,000 articles per class for train, and 1,900 articles per class for testing). The goal of the task is to classify each article as one of the four classes: World, Sports, Business, or Sci/Tech.

11 http://groups.di.unipi.it/~gulli/AG_corpus_of_news_articles.html

4.2.2 Preparing the Dataset

The AG News Dataset is distributed as two CSV files (one for training and one for testing), each containing three columns: the class index, the title, and the description. The dataset also provides a text file that maps the above class indexes to more descriptive class labels. Because of the tabular nature of the dataset, pandas, a Python library for tabular data analysis,12 is a natural choice for loading and transforming it. To this end, our Jupyter notebook (chap4_multiclass_logistic_regression) demonstrates the sequence of steps required to handle the data, as well as model training and evaluation. First, we show how to load the CSV, add column names, and inspect the result:
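Before looking at the result, here is a minimal sketch of this loading step; the file path is an assumption for illustration, while the column names mirror the ones used in the text.

import pandas as pd

# the AG News CSV files have no header row, so we supply the column names ourselves
train_df = pd.read_csv(
    'data/ag_news_csv/train.csv',  # hypothetical location of the training split
    names=['class index', 'title', 'description'],
)
train_df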
       | class index | title                                              | description
0      | 3           | Wall St. Bears Claw Back Into the Black (Reuters)  | Reuters - Short-sellers, Wall Street's dwindli...
1      | 3           | Carlyle Looks Toward Commercial Aerospace (Reu...  | Reuters - Private investment firm Carlyle Grou...
2      | 3           | Oil and Economy Cloud Stocks' Outlook (Reuters)    | Reuters - Soaring crude prices plus worries\ab...
3      | 3           | Iraq Halts Oil Exports from Main Southern Pipe...  | Reuters - Authorities have halted oil export\f...
4      | 3           | Oil prices soar to all-time record, posing new...  | AFP - Tearaway world oil prices, toppling reco...
...    | ...         | ...                                                | ...
119995 | 1           | Pakistan's Musharraf Says Won't Quit as Army C...  | KARACHI (Reuters) - Pakistani President Perve...
119996 | 2           | Renteria signing a top-shelf deal                  | Red Sox general manager Theo Epstein acknowled...
119997 | 2           | Saban not going to Dolphins yet                    | The Miami Dolphins will put their courtship of...
119998 | 2           | Today's NFL games                                  | PITTSBURGH at NY GIANTS Time: 1:30 p.m. Line: ...
119999 | 2           | Nets get Carter from Raptors                       | INDIANAPOLIS -- All-Star Vince Carter was trad...

[120000 rows × 3 columns]

Since the class labels themselves are in a separate file, we manually add them to the pandas data structure (called a dataframe in pandas' terminology) to increase the interpretability of the data. We use the class index column as a starting point, and use its map method to create a new column with the corresponding labels (technically a new Series object) that is added to the dataframe using its insert method, which allows us to insert the column in a specific position. Note that the label indices are one-based, so we subtract one to align them with their labels.

12 https://pandas.pydata.org
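In code, this step could look roughly like the following sketch; the list of label names follows the order of the one-based class indices, and the exact position chosen for the new column is an assumption.

# descriptive labels, in the order of the one-based class indices (1-4)
labels = ['World', 'Sports', 'Business', 'Sci/Tech']

# map each class index to its label and insert the result as a new
# 'class' column right after the 'class index' column
classes = train_df['class index'].map(lambda i: labels[i - 1])
train_df.insert(1, 'class', classes)
train_df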
       | class index | class    | title                                              | description
0      | 3           | Business | Wall St. Bears Claw Back Into the Black (Reuters)  | Reuters - Short-sellers, Wall Street's dwindli...
...    | ...         | ...      | ...                                                | ...
119999 | 2           | Sports   | Nets get Carter from Raptors                       | INDIANAPOLIS -- All-Star Vince Carter was trad...

[120000 rows × 4 columns]

Next we will preprocess the text. First we lowercase the title and description, and then we concatenate them into a single string. Then we remove some spurious backslashes from the text. Once this is done, the preprocessed text is added to the dataframe as a new column. Note that pandas allows these steps to be applied to all rows simultaneously.
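The description above might translate into pandas code along these lines; note that replacing each backslash with a space, rather than deleting it outright, is an assumption.

# lowercase the title and description, and concatenate them into one string
text = train_df['title'].str.lower() + ' ' + train_df['description'].str.lower()

# remove the spurious backslashes and store the result as a new column
train_df['text'] = text.str.replace('\\', ' ', regex=False)
train_df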
       | class index | class    | title                                              | ... | text
0      | 3           | Business | Wall St. Bears Claw Back Into the Black (Reuters)  | ... | wall st. bears claw back into the black (reute...
...    | ...         | ...      | ...                                                | ... | ...
119999 | 2           | Sports   | Nets get Carter from Raptors                       | ... | nets get carter from raptors indianapolis -- a...

[120000 rows × 5 columns]

At this point, the text is ready to be tokenized. For this purpose we will use NLTK's word_tokenize function. This function can be applied to the whole column at once using the pandas map function, which returns a new column which we add to the dataframe. However, here we actually use the progress_map function, which provides a visual progress bar. This visual feedback is especially helpful for tasks that take more time to complete.
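One way to wire this up is sketched below; it assumes the required NLTK tokenizer models (the punkt resource) have already been downloaded.

from nltk.tokenize import word_tokenize
from tqdm.notebook import tqdm

# import nltk; nltk.download('punkt')  # needed once, before word_tokenize can run

# register progress_map()/progress_apply() on pandas objects
tqdm.pandas()

# tokenize every preprocessed article, with a progress bar
train_df['tokens'] = train_df['text'].progress_map(word_tokenize)
train_df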
       | class index | class    | ... | text                                               | tokens
0      | 3           | Business | ... | wall st. bears claw back into the black (reute...  | [wall, st., bears, claw, back, into, the, blac...
...    | ...         | ...      | ... | ...                                                | ...
119999 | 2           | Sports   | ... | nets get carter from raptors indianapolis -- a...  | [nets, get, carter, from, raptors, indianapoli...

[120000 rows × 6 columns]

From the tokens we just created, we then create a vocabulary for our corpus. Here, we only keep the words that occur at least 10 times, decreasing the memory needed and reducing the likelihood that our vocabulary contains noisy tokens. Note that each row in the tokens column contains a list of tokens. In order to create the vocabulary, we will need to convert the Series of lists of tokens into a Series of tokens using the explode() Pandas method. Then we will use the value_counts() method to create a Series object in which the index are the tokens and the values are the number of times they appear in the corpus. The next step is removing the tokens with a count lower than our chosen threshold. Finally, we create a list with the remaining tokens, as well as a dictionary that maps tokens to token ids (i.e., the index of the token in the list). We include in the vocabulary a special token [UNK] that will be used as a placeholder for tokens that do not appear in our vocabulary after the frequency pruning.

Using this vocabulary, we construct a feature vector for each news article in the corpus. This feature vector will be encoded as a dictionary, with keys corresponding to token ids, and values corresponding to the number of times the token appears in the article. As above, the feature vectors will be stored as a new column in the dataframe.
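The vocabulary construction and the feature dictionaries described above might be implemented roughly as follows; the threshold of 10 comes from the text, while placing [UNK] at the front of the vocabulary is an assumption (the token ids produced here need not match the ones shown in the dataframe below).

threshold = 10
unk_token = '[UNK]'

# flatten the lists of tokens into one long Series and count each token
token_counts = train_df['tokens'].explode().value_counts()

# keep only the frequent tokens, and reserve an id for the [UNK] placeholder
vocabulary = [unk_token] + token_counts[token_counts >= threshold].index.tolist()
token_to_id = {token: i for i, token in enumerate(vocabulary)}
unk_id = token_to_id[unk_token]

def make_feature_vector(article_tokens):
    # bag of words as a dictionary: token id -> count in this article
    features = {}
    for token in article_tokens:
        token_id = token_to_id.get(token, unk_id)
        features[token_id] = features.get(token_id, 0) + 1
    return features

train_df['features'] = train_df['tokens'].progress_map(make_feature_vector)
train_df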
       | class index | class    | ... | tokens                                             | features
0      | 3           | Business | ... | [wall, st., bears, claw, back, into, the, blac...  | {427: 2, 563: 1, 1607: 1, 15062: 1, 120: 1, 73...
...    | ...         | ...      | ... | ...                                                | ...
119999 | 2           | Sports   | ... | [nets, get, carter, from, raptors, indianapoli...  | {2170: 2, 226: 1, 2402: 2, 32: 1, 2995: 2, 219...

[120000 rows × 7 columns]

The final preprocessing step is converting the features and the class indices into PyTorch tensors. Recall that we need to subtract one from the class indices to make them zero-based. At this point, the data is fully processed and we are ready to begin training.

4.2.3 Multiclass Logistic Regression Using PyTorch

The model itself is a single linear layer whose input size corresponds to the size of our vocabulary, and whose output size corresponds to the number of classes in our corpus. PyTorch's Linear layer includes a bias by default, so there is no need to handle that manually the way we did for our perceptron example. The code for training this model (which implements Algorithm 6) is almost identical to that of the binary logistic regression. However, since we have to calculate a score for each of the four different classes, we need to replace the previous BCEWithLogitsLoss with CrossEntropyLoss, which applies a softmax over the scores to obtain probabilities for each class. For each example, the model predicts four scores – one for each label. The label with the highest score is selected using the argmax function. We evaluate the predictions of our model for each class using scikit-learn's classification_report, which handles the results of multiclass classification.
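Putting the pieces together, the tensor conversion and the training loop described above could look roughly like the sketch below, reusing the hypothetical names from the previous sketches (train_df, vocabulary, labels). The number of epochs, the learning rate, and the dense-tensor conversion are assumptions rather than the notebook's exact choices, and the test split would be processed in exactly the same way before evaluation.

import torch
from torch import nn, optim
from sklearn.metrics import classification_report

n_features = len(vocabulary)
n_classes = len(labels)

def to_dense(features):
    # turn a {token id: count} dictionary into a dense feature tensor
    x = torch.zeros(n_features)
    for token_id, count in features.items():
        x[token_id] = count
    return x

# building one dense tensor for the whole corpus is the simplest option,
# although it can be memory-hungry for large vocabularies
X_train = torch.stack([to_dense(f) for f in train_df['features']])
y_train = torch.tensor(train_df['class index'].to_numpy()) - 1  # zero-based labels

model = nn.Linear(n_features, n_classes)
loss_func = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)

n_epochs = 5  # assumption
for epoch in range(n_epochs):
    for i in torch.randperm(len(y_train)):
        optimizer.zero_grad()
        scores = model(X_train[i]).unsqueeze(0)            # shape: (1, n_classes)
        loss = loss_func(scores, y_train[i].unsqueeze(0))
        loss.backward()
        optimizer.step()

def predict(X):
    # the label with the highest score wins
    with torch.no_grad():
        return model(X).argmax(dim=1)

# once the test dataframe has been processed the same way:
# print(classification_report(y_test, predict(X_test), target_names=labels))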
4.3 Summary

In this chapter, we used movie review and news article classification to illustrate the implementation of the previously described algorithms for the binary perceptron, binary logistic regression, and multiclass logistic regression. For the binary logistic regression, we made a direct comparison between the lower-level NumPy implementation and a higher-level version that made use of PyTorch.

We hope that through this series of exercises the reader has noted several key takeaways. First, data preparation is important and should be done thoughtfully. Certain tasks (e.g., text normalization or sentence splitting) are going to be frequently needed if you continue with NLP, so using or creating generic functions can be very helpful. However, what works for one dataset and one language may not be suitable for another scenario. For example, in our case, we selected different tokenizers for each of our tasks to account for the different registers of English, as well as removing diacritics during normalization. Second, when it comes to implementing machine learning algorithms, it is often easier to use a higher-level library such as PyTorch instead of NumPy.
For example, with the former, the gradients are calculated by the library, whereas in NumPy we have to code them ourselves. This becomes cumbersome quickly. For example, even the derivative of the softmax is non-trivial. Third, PyTorch imposes a training structure that remains largely the same, regardless of what models are being trained. That is, at a high level, the same steps are always required: clearing the current gradients, predicting output scores for the provided inputs, calculating the loss, and optimizing. These features make PyTorch a very powerful and convenient deep learning library; we will continue to use it throughout the remainder of the book to implement more complex neural architectures.
#!/usr/bin/env python
# coding: utf-8

# # Binary Text Classification with Perceptron

# In[1]:

import random
import numpy as np
from tqdm.notebook import tqdm

# set this variable to a number to be used as the random seed
# or to None if you don't want to set a random seed
seed = 1234

if seed is not None:
    random.seed(seed)
    np.random.seed(seed)

# The dataset is divided in two directories called `train` and `test`.
# These directories contain the training and testing splits of the dataset.

# In[2]:

get_ipython().system('ls -lh data/aclImdb/')

# Both the `train` and `test` directories contain two directories called `pos` and `neg` that contain text files with the positive and negative reviews, respectively.

# In[3]:

get_ipython().system('ls -lh data/aclImdb/train/')

# We will now read the filenames of the positive and negative examples.

# In[4]:

from glob import glob

pos_files = glob('data/aclImdb/train/pos/*.txt')
neg_files = glob('data/aclImdb/train/neg/*.txt')
print('number of positive reviews:', len(pos_files))
print('number of negative reviews:', len(neg_files))

# Now, we will use a [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html) to read the text files, tokenize them, acquire a vocabulary from the training data, and encode it in a document-term matrix in which each row represents a review, and each column represents a term in the vocabulary. Each element $(i,j)$ in the matrix represents the number of times term $j$ appears in example $i$.

# In[5]:

from sklearn.feature_extraction.text import CountVectorizer

# initialize CountVectorizer indicating that we will give it a list of filenames that have to be read
cv = CountVectorizer(input='filename')

# learn vocabulary and return sparse document-term matrix
doc_term_matrix = cv.fit_transform(pos_files + neg_files)
doc_term_matrix

# Note in the message printed above that the matrix is of shape (25000, 74849).
# In other words, it has 1,871,225,000 elements.
# However, only 3,445,861 elements were stored.
# This is because most of the elements in the matrix are zeros.
# The reason is that the reviews are short and most words in the English language don't appear in each review.
# A matrix that only stores non-zero values is called *sparse*.
#
# Now we will convert it to a dense numpy array:

# In[6]:

X_train = doc_term_matrix.toarray()
X_train.shape

# We will also create a numpy array with the binary labels for the reviews.
# One indicates a positive review and zero a negative review.
# The label `y_train[i]` corresponds to the review encoded in row `i` of the `X_train` matrix.

# In[7]:

# training labels
y_pos = np.ones(len(pos_files))
y_neg = np.zeros(len(neg_files))
y_train = np.concatenate([y_pos, y_neg])
y_train

# Now we will initialize our model, in the form of an array of weights `w` of the same size as the number of features in our dataset (i.e., the number of words in the vocabulary acquired by [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html)), and a bias term `b`.
# Both are initialized to zeros.

# In[8]:

# initialize model: the feature vector and bias term are populated with zeros
n_examples, n_features = X_train.shape
w = np.zeros(n_features)
b = 0

# Now we will use the perceptron learning algorithm to learn the values of `w` and `b` from our training data.
# In[9]:

n_epochs = 10
indices = np.arange(n_examples)

for epoch in range(n_epochs):
    n_errors = 0
    # randomize the order in which training examples are seen in this epoch
    np.random.shuffle(indices)
    # traverse the training data
    for i in tqdm(indices, desc=f'epoch {epoch+1}'):
        x = X_train[i]
        y_true = y_train[i]
        # the perceptron decision based on the current model
        score = x @ w + b
        y_pred = 1 if score > 0 else 0
        # update the model if the prediction was incorrect
        if y_true == y_pred:
            continue
        elif y_true == 1 and y_pred == 0:
            w = w + x
            b = b + 1
            n_errors += 1
        elif y_true == 0 and y_pred == 1:
            w = w - x
            b = b - 1
            n_errors += 1
    if n_errors == 0:
        break

# The next step is evaluating the model on the test dataset.
# Note that this time we use the [`transform()`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html#sklearn.feature_extraction.text.CountVectorizer.transform) method of the [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html), instead of the [`fit_transform()`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html#sklearn.feature_extraction.text.CountVectorizer.fit_transform) method that we used above. This is because we want to use the learned vocabulary in the test set, instead of learning a new one.

# In[10]:

pos_files = glob('data/aclImdb/test/pos/*.txt')
neg_files = glob('data/aclImdb/test/neg/*.txt')
doc_term_matrix = cv.transform(pos_files + neg_files)
X_test = doc_term_matrix.toarray()
y_pos = np.ones(len(pos_files))
y_neg = np.zeros(len(neg_files))
y_test = np.concatenate([y_pos, y_neg])

# Using the model is easy: multiply the document-term matrix by the learned weights and add the bias.
# We use Python's `@` operator to perform the matrix-vector multiplication.

# In[11]:

y_pred = (X_test @ w + b) > 0

# Now we print an evaluation of the prediction results using a `binary_classification_report()` function modeled after scikit-learn's [`classification_report()`](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.classification_report.html) function.

# In[12]:

def binary_classification_report(y_true, y_pred):
    # count true positives, false positives, true negatives, and false negatives
    tp = fp = tn = fn = 0
    for gold, pred in zip(y_true, y_pred):
        if pred == True:
            if gold == True:
                tp += 1
            else:
                fp += 1
        else:
            if gold == False:
                tn += 1
            else:
                fn += 1
    # calculate precision and recall
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    # calculate f1 score
    fscore = 2 * precision * recall / (precision + recall)
    # calculate accuracy
    accuracy = (tp + tn) / len(y_true)
    # number of positive labels in y_true
    support = sum(y_true)
    return {
        "precision": precision,
        "recall": recall,
        "f1-score": fscore,
        "support": support,
        "accuracy": accuracy,
    }

# In[13]:

binary_classification_report(y_test, y_pred)
Each of these numbers will be used as the index in a list, so they must start at zero and grow by one for each word in the vocabulary. For example, one possible vocabulary that encodes the previous reviews is:

{'would': 0, 'hated': 1, 'my': 2, 'liked': 3, 'not': 4, 'it': 5, 'movie': 6, 'recommend': 7, 'the': 8, 'I': 9, 'too': 10, 'friend': 11}

Using this mapping, we can encode the two reviews as follows:

Review 1: [0, 0, 1, 2, 0, 1, 1, 0, 1, 1, 1, 1]
Review 2: [1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0]

Note that the word liked (fourth position) in the first review has a value of two. This is because this word appears twice in that review. This is a small example with a vocabulary of only 12 terms. Of course, the same process needs to be implemented for our whole training dataset. For this purpose we will use scikit-learn's CountVectorizer class.6 Using the CountVectorizer class simplifies things, allowing us to get started quickly with a bag-of-words approach. However, note that it makes several simplifying assumptions (e.g., text is lowercased, and punctuation and single-character tokens are removed). Some of these may not be adequate for other tasks.

6 https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html

First, we need to obtain the filenames for the reviews in the training set. Once we have acquired the filenames for the training reviews, we need to read them using the CountVectorizer. In order for the CountVectorizer to open and read the files for us, we make use of the input='filename' constructor parameter (otherwise it would expect the string content directly). The CountVectorizer provides three methods that will be useful for us: a method called fit() that is used to acquire the vocabulary, a method transform() that converts the text into the bag-of-words representation, and a method fit_transform() that conveniently acquires the vocabulary and transforms the data in a single step. The resulting object is referred to as a document-term matrix, where each row corresponds to a document, and each column corresponds to a term in the vocabulary.

As the output above indicates, the resulting matrix has 25,000 rows (one for each review), and 74,849 columns (one for each term). You may also note that this matrix is sparse, with 3,445,861 stored elements. A regular matrix of shape 25,000 × 74,849 would have 1,871,225,000 elements. However, most of the elements in a document-term matrix are zeros because only a few words from the vocabulary appear in each document. A sparse matrix takes advantage of this fact by storing only the non-zero cells in order to reduce the memory required to store it. Thus, sparse matrices are convenient, especially when dealing with lots of data. Nevertheless, to simplify the downstream code in this example, we will convert it into a dense matrix, i.e., a regular two-dimensional NumPy array.

Finally, we also need the labels of the reviews. We assign a label of one to positive reviews, and a label of zero to negative ones. Note that the first half of the reviews are positive and the second half are negative. The label at the ith position of the y_train array corresponds to the review encoded in the ith row of the X_train matrix.

4.1.3 Perceptron

Now that we have defined our task and the data processing pipeline, we will implement a perceptron classifier that classifies the movie reviews as positive or negative. The entire code discussed in this section is available in the chap4_perceptron notebook. Recall from Section 2.4 that the perceptron is composed of a weight vector w and a bias term b. These will be represented as a NumPy array w of the same length as our document vectors, and a variable b for the bias term. Both will be initialized with zeros. The parameters w and b are learned through the following algorithm, which implements Algorithm 2 from Chapter 2.

There are a couple of details to point out. Line 3 of Algorithm 2 indicates that we need to repeat the training loop until convergence. Theoretically, convergence is defined as predicting all training examples correctly. This is an ambitious requirement, which is not always possible in practice, so in this code we also include a stop condition if we reach a maximum number of epochs. Another crucial difference between our implementation here and the theoretical Algorithm 2 is that we randomize the order in which the training examples are seen at the beginning of each epoch. This simple (but highly recommended!) change is necessary to avoid the introduction of spurious biases due to the arbitrary order of the examples in the original training partition.7 We accomplish this by storing the indices corresponding to the X_train matrix rows in a NumPy array, and shuffling these indices at the beginning of each epoch. We shuffle the indices instead of the examples so that we can preserve the mapping between examples and labels. The training loop aligns closely with Algorithm 2.

7 As an extreme example, consider a dataset where all the positive examples appear first in the training partition. This would cause the perceptron to artificially inflate the weights of the features that occur in these examples, a situation from which the learning algorithm may struggle to recover.
We start by iterating over each example in our training data, storing the current example in the variable x,8 and its corresponding label in the variable y_true. Next, we compute the perceptron decision function shown in Algorithm 1. Note that NumPy (as well as PyTorch) uses Python's @ operator to indicate vector or matrix multiplication, depending on its operand types. Here we use it to calculate the dot product of the example x and the weights w. To this we add the bias b to obtain the predicted score, whose sign is used to assign a positive or negative predicted label. If the prediction is correct, then no update is needed, and we can move on to the next training example. However, if the prediction is incorrect, then we need to adjust w and b, as described in Algorithm 2.

8 We use typewriter font when we discuss variables in the code, to distinguish code from the theoretical discussion in the other chapters.

Sidebar 4.1 The tqdm function

This is our first exposure to the tqdm function. tqdm is a progress bar that lets you "make your loops show a smart progress meter."9 The name tqdm comes from the Arabic word taqaddum, which can mean "progress." Using tqdm is as simple as wrapping it around the collection to be traversed.

9 https://github.com/tqdm/tqdm

After training, we evaluate the model's performance on the held-out test partition. The test data is loaded similarly to the training partition, but with one notable difference: we use CountVectorizer's transform() method instead of the fit_transform() method so that the vocabulary is not adjusted for the test data. We won't show here the loading of the test partition since it is so similar to the code already shown, but it is available in the Jupyter notebook that accompanies this section.

Using the model to assign labels to all the test data is easily done in one step – we simply multiply the entire test data document-term matrix by the previously learned weights and add the bias. Scores greater than zero indicate a positive review, and those less than zero are negative. At this point we can evaluate the classifier's performance, which we will do using precision, recall, and F1 scores for binary classification (described in Section 2.3). For this purpose, we implement a function called binary_classification_report that computes these metrics and returns them as a dictionary. We call this function to compare the predicted labels to the true labels, and obtain the evaluation scores. Our F1 score here is 86.8%, which is much higher than the baseline that assigns labels randomly, which yields an F1 score of about 50%. This is a good result, especially considering the simplicity of the perceptron! In the next sections and chapters, we will discuss a battery of strategies to considerably improve this performance.

4.1.4 Binary Logistic Regression from Scratch

Using the same task, dataset, and evaluation, we will now implement a logistic regression classifier, as described in Algorithm 5 from Chapter 3. To give the reader hands-on experience with the implementation of the gradient calculations for logistic regression, we start by implementing it from scratch using NumPy. All the code shown in this section is available in the chap4_logistic_regression_numpy notebook.

In the perceptron implementation, we represented the weights and the bias as two different variables. Here, however, we will use a different approach that will allow us to unify them into a single vector variable. Specifically, we take advantage of the similarity between the derivative of the cost function with respect to the weights (Equation 3.14) and the derivative of the cost with respect to the bias (Equation 3.15):

dCi(w, b) / dwj = (σi − yi) xij    (3.14 revisited)
dCi(w, b) / db = σi − yi    (3.15 revisited)

Note that the two derivative formulas are identical except that the former has a multiplication by xij, while the latter does not. However, since σi − yi = (σi − yi) · 1, we can multiply the derivative of the cost with respect to the bias by one without changing the semantics. This gives an opportunity for combining the computations, doing them both in a single pass. The idea is that we can treat the bias as a weight corresponding to a feature that always has a value of one. To this end, we create a NumPy array of ones of the same length as the number of examples in our training set (i.e., the number of rows in the data matrix), and add this array as a new column to the data matrix, using NumPy's column_stack function. Next, we need to initialize our model. This time we will use a single NumPy array w of the same length as the number of columns in the data matrix. The weight vector w is initialized randomly with values between 0 and 1. Before implementing the learning algorithm, we need an implementation of the logistic function.
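A sketch of such a guarded implementation, for a scalar input, is shown below; the notebook's actual version may differ in details such as vectorization.

import numpy as np

# the log of the largest float64 is the largest argument that exp() can
# take without overflowing
max_exp = np.log(np.finfo(np.float64).max)

def logistic(x):
    # when -x exceeds the threshold, exp(-x) would overflow; sigma(x) is
    # vanishingly small there, so returning 0 is a safe approximation
    if -x > max_exp:
        return 0.0
    return 1 / (1 + np.exp(-x))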
#!/usr/bin/env python
# coding: utf-8

# # Binary Text Classification with
# # Logistic Regression Implemented with PyTorch and BCE Loss

# In[1]:

import random
import numpy as np
import torch
from tqdm.notebook import tqdm

# set this variable to a number to be used as the random seed
# or to None if you don't want to set a random seed
seed = 1234

if seed is not None:
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)

# The dataset is divided in two directories called `train` and `test`.
# These directories contain the training and testing splits of the dataset.

# In[2]:

get_ipython().system('ls -lh data/aclImdb/')

# Both the `train` and `test` directories contain two directories called `pos` and `neg`
# that contain text files with the positive and negative reviews, respectively.

# In[3]:

get_ipython().system('ls -lh data/aclImdb/train/')

# We will now read the filenames of the positive and negative examples.

# In[4]:

from glob import glob

pos_files = glob('data/aclImdb/train/pos/*.txt')
neg_files = glob('data/aclImdb/train/neg/*.txt')
print('number of positive reviews:', len(pos_files))
print('number of negative reviews:', len(neg_files))

# Now, we will use a [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html)
# to read the text files, tokenize them, acquire a vocabulary from the training data,
# and encode it in a document-term matrix in which each row represents a review,
# and each column represents a term in the vocabulary.
# Each element $(i,j)$ in the matrix represents the number of times term $j$ appears in example $i$.

# In[5]:

from sklearn.feature_extraction.text import CountVectorizer

# initialize CountVectorizer indicating that we will give it a list of filenames that have to be read
cv = CountVectorizer(input='filename')
# learn vocabulary and return sparse document-term matrix
doc_term_matrix = cv.fit_transform(pos_files + neg_files)
doc_term_matrix

# Note in the message printed above that the matrix is of shape (25000, 74849).
# In other words, it has 1,871,225,000 elements.
# However, only 3,445,861 elements were stored.
# This is because most of the elements in the matrix are zeros.
# The reason is that the reviews are short and most words in the English language don't appear in each review.
# A matrix that only stores non-zero values is called *sparse*.
#
# Now we will convert it to a dense numpy array:

# In[6]:

X_train = doc_term_matrix.toarray()
X_train.shape

# We will also create a numpy array with the binary labels for the reviews.
# One indicates a positive review and zero a negative review.
# The label `y_train[i]` corresponds to the review encoded in row `i` of the `X_train` matrix.

# In[7]:

# training labels
y_pos = np.ones(len(pos_files))
y_neg = np.zeros(len(neg_files))
y_train = np.concatenate([y_pos, y_neg])
y_train

# Now we record the number of training examples and the number of features
# (i.e., the size of the vocabulary acquired by `CountVectorizer`);
# these are used to define the model below.

# In[8]:

n_examples, n_features = X_train.shape

# Now we will use the logistic regression learning algorithm to learn the model parameters from our training data.
# In[9]:

import torch
from torch import nn
from torch import optim

lr = 1e-1
n_epochs = 10

model = nn.Linear(n_features, 1)
loss_func = nn.BCEWithLogitsLoss()
optimizer = optim.SGD(model.parameters(), lr=lr)

X_train = torch.tensor(X_train, dtype=torch.float32)
y_train = torch.tensor(y_train, dtype=torch.float32)

indices = np.arange(n_examples)
for epoch in range(n_epochs):
    # randomize training examples
    np.random.shuffle(indices)
    # for each training example
    for i in tqdm(indices, desc=f'epoch {epoch+1}'):
        x = X_train[i]
        y_true = y_train[i]
        # make predictions
        y_pred = model(x)
        # calculate loss
        loss = loss_func(y_pred[0], y_true)
        # calculate gradients through back-propagation
        loss.backward()
        # optimize model parameters
        optimizer.step()
        # ensure gradients are set to zero
        model.zero_grad()

# The next step is evaluating the model on the test dataset.
# Note that this time we use the [`transform()`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html#sklearn.feature_extraction.text.CountVectorizer.transform) method of the [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html),
# instead of the [`fit_transform()`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html#sklearn.feature_extraction.text.CountVectorizer.fit_transform) method that we used above.
# This is because we want to reuse the vocabulary learned from the training data, instead of learning a new one.

# In[10]:

pos_files = glob('data/aclImdb/test/pos/*.txt')
neg_files = glob('data/aclImdb/test/neg/*.txt')
doc_term_matrix = cv.transform(pos_files + neg_files)
X_test = doc_term_matrix.toarray()
X_test = torch.tensor(X_test, dtype=torch.float32)
y_pos = np.ones(len(pos_files))
y_neg = np.zeros(len(neg_files))
y_test = np.concatenate([y_pos, y_neg])

# Using the model is easy: we call it as a function on the test document-term matrix
# and label as positive the examples whose predicted scores are greater than zero.

# In[11]:

y_pred = model(X_test) > 0

# Now we print an evaluation of the prediction results using the `binary_classification_report()`
# function defined below, which computes precision, recall, F1, and accuracy for binary classification.

# In[12]:

def binary_classification_report(y_true, y_pred):
    # count true positives, false positives, true negatives, and false negatives
    tp = fp = tn = fn = 0
    for gold, pred in zip(y_true, y_pred):
        if pred == True:
            if gold == True:
                tp += 1
            else:
                fp += 1
        else:
            if gold == False:
                tn += 1
            else:
                fn += 1
    # calculate precision and recall
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    # calculate f1 score
    fscore = 2 * precision * recall / (precision + recall)
    # calculate accuracy
    accuracy = (tp + tn) / len(y_true)
    # number of positive labels in y_true
    support = sum(y_true)
    return {
        "precision": precision,
        "recall": recall,
        "f1-score": fscore,
        "support": support,
        "accuracy": accuracy,
    }

# In[13]:

binary_classification_report(y_test, y_pred)
Each of these numbers will be used as the index in a list, so they must start at zero and grow by one for each word in the vocabulary. For example, one possible vocabulary that encodes the previous reviews is:

{'would': 0, 'hated': 1, 'my': 2, 'liked': 3, 'not': 4, 'it': 5, 'movie': 6, 'recommend': 7, 'the': 8, 'I': 9, 'too': 10, 'friend': 11}

Using this mapping, we can encode the two reviews as follows:

Review 1: [0,0,1,2,0,1,1,0,1,1,1,1]
Review 2: [1,1,0,0,1,1,0,1,0,1,0,0]

Note that the word liked (fourth position) in the first review has a value of two. This is because this word appears twice in that review. This is a small example with a vocabulary of only 12 terms. Of course, the same process needs to be implemented for our whole training dataset. For this purpose we will use scikit-learn's CountVectorizer class.6 Using the CountVectorizer class simplifies things, allowing us to get started quickly with a bag-of-words approach. However, note that it makes several simplifying assumptions (e.g., text is lowercased, and punctuation and single-character tokens are removed). Some of these may not be adequate for other tasks. First, we need to obtain the filenames for the reviews in the training set. Once we have acquired the filenames for the training reviews, we need to read them using the CountVectorizer. In order for the CountVectorizer to open and read the files for us, we make use of the input='filename' constructor parameter (otherwise it would expect the string content directly). The CountVectorizer provides three methods that will be useful for us: a method called fit() that is used to acquire the vocabulary, a method transform() that converts the text into the bag-of-words representation, and a method fit_transform() that conveniently acquires the vocabulary and transforms the data in a single step. The resulting object is referred to as a document-term matrix, where each row corresponds to a document, and each column corresponds to a term in the vocabulary. As its printed representation indicates, the resulting matrix has 25,000 rows (one for each review), and 74,849 columns (one for each term). Also you may note that this matrix is sparse, with 3,445,861 stored elements. A regular matrix of shape 25,000×74,849 would have 1,871,225,000 elements. However, most of the elements in a document-term matrix are zeros because only a few words from the vocabulary appear in each document. A sparse matrix takes advantage of this fact by storing only the non-zero cells in order to reduce the memory required to store it. Thus, sparse matrices are convenient, especially when dealing with lots of data. Nevertheless, to simplify the downstream code in this example, we will convert it into a dense matrix, i.e., a regular two-dimensional NumPy array. Finally, we also need the labels of the reviews. We assign a label of one to positive reviews, and a label of zero to negative ones. Note that the first half of the reviews are positive and the second half are negative. The label at the ith position of the y_train array corresponds to the review encoded in the ith row of the X_train matrix.
6 https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html
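Condensed from the accompanying notebooks, these data-preparation steps look as follows (the directory layout under data/aclImdb/ follows the notebooks):

from glob import glob
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

# filenames of the positive and negative training reviews
pos_files = glob('data/aclImdb/train/pos/*.txt')
neg_files = glob('data/aclImdb/train/neg/*.txt')

# read the files, acquire the vocabulary, and build the sparse document-term matrix
cv = CountVectorizer(input='filename')
doc_term_matrix = cv.fit_transform(pos_files + neg_files)
# convert to a dense two-dimensional NumPy array
X_train = doc_term_matrix.toarray()

# labels: one for positive reviews, zero for negative ones
y_train = np.concatenate([np.ones(len(pos_files)), np.zeros(len(neg_files))])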
4.1.3 Perceptron
Now that we have defined our task and the data processing pipeline, we will implement a perceptron classifier that classifies the movie reviews as positive or negative. The entire code discussed in this section is available in the chap4_perceptron notebook. Recall from Section 2.4 that the perceptron is composed of a weight vector w and a bias term b. These will be represented as a NumPy array w of the same length as our document vectors, and a variable b for the bias term. Both will be initialized with zeros. The parameters w and b are learned through the following algorithm, which implements Algorithm 2 from Chapter 2:
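(The listing below is condensed from the chap4_perceptron notebook; the tqdm progress bar used there is omitted.)

n_examples, n_features = X_train.shape
w = np.zeros(n_features)
b = 0
n_epochs = 10

indices = np.arange(n_examples)
for epoch in range(n_epochs):
    n_errors = 0
    # randomize the order in which training examples are seen in this epoch
    np.random.shuffle(indices)
    for i in indices:
        x = X_train[i]
        y_true = y_train[i]
        # the perceptron decision based on the current model
        score = x @ w + b
        y_pred = 1 if score > 0 else 0
        # update the model only if the prediction was incorrect
        if y_true == 1 and y_pred == 0:
            w, b, n_errors = w + x, b + 1, n_errors + 1
        elif y_true == 0 and y_pred == 1:
            w, b, n_errors = w - x, b - 1, n_errors + 1
    # convergence: every training example in this epoch was predicted correctly
    if n_errors == 0:
        break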
There are a couple of details to point out. Line 3 of Algorithm 2 indicates that we need to repeat the training loop until convergence. Theoretically, convergence is defined as predicting all training examples correctly. This is an ambitious requirement, which is not always possible in practice, so in this code we also include a stop condition if we reach a maximum number of epochs. Another crucial difference between our implementation here and the theoretical Algorithm 2 is that we randomize the order in which the training examples are seen at the beginning of each epoch. This simple (but highly recommended!) change is necessary to avoid the introduction of spurious biases due to the arbitrary order of the examples in the original training partition.7 We accomplish this by storing the indices corresponding to the X_train matrix rows in a NumPy array, and shuffling these indices at the beginning of each epoch. We shuffle the indices instead of the examples so that we can preserve the mapping between examples and labels. The training loop aligns closely with Algorithm 2. We start by iterating over each example in our training data, storing the current example in the variable x,8 and its corresponding label in the variable y_true. Next, we compute the perceptron decision function shown in Algorithm 1. Note that NumPy (as well as PyTorch) uses Python's @ operator to indicate vector or matrix multiplication, depending on its operand types. Here we use it to calculate the dot product of the example x and the weights w. To this we add the bias b to obtain the predicted score, whose sign is used to assign a positive or negative predicted label. If the prediction is correct, then no update is needed, and we can move on to the next training example. However, if the prediction is incorrect, then we need to adjust w and b, as described in Algorithm 2.
Sidebar 4.1 The tqdm function
This is our first exposure to the tqdm function. tqdm is a progress bar that “make[s] your loops show a smart progress meter.”9 The name tqdm comes from the Arabic word taqaddum, which can mean “progress.” Using tqdm is as simple as wrapping it around the collection to be traversed.
After training, we evaluate the model's performance on the held-out test partition. The test data is loaded similarly to the training partition, but with one notable difference: we use CountVectorizer's transform() method instead of the fit_transform() method, so that the vocabulary is not adjusted for the test data. We won't show here the loading of the test partition since it is so similar to the code already shown, but it is available in the Jupyter notebook that accompanies this section.
7 As an extreme example, consider a dataset where all the positive examples appear first in the training partition. This would cause the perceptron to artificially inflate the weights of the features that occur in these examples, a situation from which the learning algorithm may struggle to recover.
8 We use typewriter font when we discuss variables in the code, to distinguish code from the theoretical discussion in the other chapters.
9 https://github.com/tqdm/tqdm
Using the model to assign labels to all the test data is easily done in one step – we simply multiply the entire test data document-term matrix by the previously learned weights and add the bias. Scores greater than zero indicate a positive review, and those less than zero are negative. At this point we can evaluate the classifier's performance, which we will do using precision, recall, and F1 scores for binary classification (described in Section 2.3). For this purpose, we implement a function called binary_classification_report that computes these metrics and returns them as a dictionary. We call this function to compare the predicted labels to the true labels, and obtain the evaluation scores. Our F1 score here is 86.8%, which is much higher than the baseline that assigns labels randomly, which yields an F1 score of about 50%. This is a good result, especially considering the simplicity of the perceptron! In the next sections and chapters, we will discuss a battery of strategies to considerably improve this performance.
4.1.4 Binary Logistic Regression from Scratch
Using the same task, dataset, and evaluation, we will now implement a logistic regression classifier, as described in Algorithm 5 from Chapter 3. To give the reader hands-on experience with the implementation of the gradient calculations for logistic regression, we start by implementing it from scratch using NumPy. All the code shown in this section is available in the chap4_logistic_regression_numpy notebook. In the perceptron implementation, we represented the weights and the bias as two different variables. Here, however, we will use a different approach that will allow us to unify them into a single vector variable. Specifically, we take advantage of the similarity between the derivative of the cost function with respect to the weights (Equation 3.14) and the derivative of the cost with respect to the bias (Equation 3.15):

$\frac{d}{dw_j} C_i(\mathbf{w}, b) = (\sigma_i - y_i) x_{ij}$   (3.14 revisited)

$\frac{d}{db} C_i(\mathbf{w}, b) = \sigma_i - y_i$   (3.15 revisited)

Note that the two derivative formulas are identical except that the former has a multiplication by $x_{ij}$, while the latter does not. However, since $\sigma_i - y_i = (\sigma_i - y_i) \cdot 1$, we can multiply the derivative of the cost with respect to the bias by one without changing the semantics. This gives an opportunity for combining the computations, doing them both in a single pass. The idea is that we can treat the bias as a weight corresponding to a feature that always has a value of one. To do this, we create a NumPy array of ones of the same length as the number of examples in our training set (i.e., the number of rows in the data matrix), and add this array as a new column to the data matrix, using NumPy's column_stack function. Next, we need to initialize our model. This time we will use a single NumPy array w of the same length as the number of columns in the data matrix. The weight vector w is initialized randomly with values between 0 and 1:
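A short sketch of these two steps, following the description above (the notebook may organize them slightly differently):

# append a column of ones so that the bias behaves like one more weight
X_train = np.column_stack((X_train, np.ones(n_examples)))
# one weight per column, including the bias column, initialized in [0, 1)
w = np.random.random(n_features + 1)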
Before implementing the learning algorithm, we need an implementation of the logistic function. Recall that the logistic function is

$\sigma(x) = \frac{1}{1 + e^{-x}}$   (3.1 revisited)

This function can be easily implemented in NumPy, but a naive implementation may produce an overflow warning during training. The term overflow indicates that the result of evaluating exp(-x) is a number so large that it can't be represented by a float (specifically, we're using float64 numbers). We will avoid this issue by not calling exp with values that will overflow. NumPy provides the function finfo that can be consulted to find the limits of floating point numbers. The log of the largest floating point number is the largest number for which exp() will not overflow, so we will use it as a threshold to filter out problematic values:
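A minimal sketch of such a guarded implementation (the exact code in the chap4_logistic_regression_numpy notebook may differ in its details):

# largest argument for which np.exp() does not overflow float64
max_exp_arg = np.log(np.finfo(np.float64).max)

def sigmoid(z):
    # for very negative z, exp(-z) would overflow, but sigmoid(z) is effectively 0
    if -z > max_exp_arg:
        return 0.0
    return 1 / (1 + np.exp(-z))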
We now have everything we need to implement Algorithm 4. The steps to follow for each example are: (1) use the model to make a prediction, (2) calculate the gradient of the loss function with respect to the model parameters, and (3) update the model parameters using the gradient. The size of the update is controlled by the learning rate.
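Put together, one epoch of this procedure can be sketched as follows, reusing the sigmoid function and the bias-augmented X_train from above (the learning rate value is illustrative):

lr = 1e-1
indices = np.arange(n_examples)
for epoch in range(10):
    np.random.shuffle(indices)
    for i in indices:
        x = X_train[i]          # includes the appended bias feature
        y_true = y_train[i]
        # (1) prediction
        y_pred = sigmoid(x @ w)
        # (2) gradient of the loss for this example (Equations 3.14 and 3.15)
        gradient = (y_pred - y_true) * x
        # (3) parameter update, scaled by the learning rate
        w = w - lr * gradient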
Once the model has been trained, we evaluate it on the test dataset using our binary_classification_report function from the previous section. Loading and preprocessing the test dataset follows the same steps as with the previous classifier, so we omit the code for brevity. The performance is comparable with that of the perceptron. The difference in F1 scores between the two classifiers (84.9% here vs. 86.8% for the perceptron) is not significant. Classifier parity is probably attributable to the fact that the signal distinguishing the two classes is easy to learn, and that the simpler perceptron training algorithm is sufficient in this case. Nevertheless, this task is useful in showing how to implement the logistic regression model from scratch, i.e., by implementing the gradient calculation and parameter updates manually. Next, we will implement the same model again using PyTorch, highlighting how this machine learning library simplifies the process.
4.1.5 Binary Logistic Regression Utilizing PyTorch
While it is fairly straightforward to compute the derivatives for logistic regression and implement them directly in NumPy, this will not scale well to arbitrary neural architectures. Fortunately, there are libraries that automate the computation of the derivatives of the cost function (assuming it is differentiable!) for any neural network, and use the resulting gradients to perform gradient descent or other more sophisticated optimization procedures. To this end, we will use the PyTorch deep learning library.10 The corresponding notebook for this section is chap4_logistic_regression_pytorch_bce. Our model for logistic regression corresponds to PyTorch's Linear layer. When we instantiate this layer, we specify the size of the inputs (the size of our vocabulary) and the size of the output, i.e., the number of output neurons (which is one because we're doing binary classification). The loss function we use is the binary cross-entropy loss (see Chapter 3), which is implemented as BCEWithLogitsLoss in PyTorch. In PyTorch, the gradients obtained from the loss function are applied to the model by an optimizer object, which implements and applies an optimization algorithm. Here we will use the vanilla stochastic gradient descent optimizer; we set its learning rate to 0.1. This is equivalent to the discussion in Section 3.2.
10 https://pytorch.org/
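In the notebook, this setup amounts to three lines:

from torch import nn, optim

model = nn.Linear(n_features, 1)
loss_func = nn.BCEWithLogitsLoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)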
Similarly to the manual implementation, the steps required to train the model for a given training example are: (1) ensure the gradients are set to zero, (2) apply the model to obtain a prediction, (3) calculate the loss, (4) compute the gradient of the loss by back-propagation, and (5) update the model parameters.
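In code, these five steps map almost one-to-one onto PyTorch calls (inner loop only, condensed from the accompanying notebook, which assumes X_train and y_train have been converted to PyTorch tensors):

for i in indices:
    x = X_train[i]
    y_true = y_train[i]
    # (1) clear any previously accumulated gradients
    model.zero_grad()
    # (2) predict a score for this example
    y_pred = model(x)
    # (3) compute the loss
    loss = loss_func(y_pred[0], y_true)
    # (4) back-propagate to obtain the gradients
    loss.backward()
    # (5) update the model parameters
    optimizer.step()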
Recall that in our previous implementation everything was hardcoded: applying the model, computing the gradients, and optimizing the model parameters. Here, however, the implementation of the logistic regression is expressed at a higher level of abstraction. This means that we are describing the logical steps without specifying a particular implementation. Instead, implementation details are the responsibility of the chosen model, loss function, and optimizer. Thus, we could even choose a different model, loss function, and/or optimizer, and use the same training steps with little or no modification. This decoupling of the training logic from the implementation details is one of the main advantages of libraries such as PyTorch. As shown in the code above, calling the model as a function, with the feature vectors as inputs, produces the predicted scores. Once again, a positive score corresponds to a positive label. When we evaluate this implementation on the test dataset, we obtain results that are in line with our previous models. Writing the perceptron and the logistic regression from scratch is a good exercise, as it exposes us to the fundamentals of implementing machine learning algorithms. However, this becomes cumbersome for more complex neural architectures. For this reason, from this point on, we will use PyTorch for all our coding examples.
4.2 Multiclass Classification
So far in this chapter, we have discussed implementing binary classifiers. Next, we will modify these binary classifiers to perform multiclass classification, following the discussion in Section 3.5.
4.2.1 AG News Dataset
Before explaining the actual training/testing code, we have to choose a new dataset that is suitable for multiclass classification. To this end, we will use the AG News Classification Dataset (Zhang et al., 2015), a subset of the larger AG corpus of news articles collected from thousands of different news sources.11 The classification dataset consists of four classes, and the data is equally balanced across all classes (30,000 articles per class for train, and 1,900 articles per class for testing). The goal of the task is to classify each article as one of the four classes: World, Sports, Business, or Sci/Tech.
11 http://groups.di.unipi.it/~gulli/AG_corpus_of_news_articles.html
4.2.2 Preparing the Dataset
The AG News Dataset is distributed as two CSV files (one for training and one for testing), each containing three columns: the class index, the title, and the description. The dataset also provides a text file that maps the above class indexes to more descriptive class labels. Because of the tabular nature of the dataset, pandas, a Python library for tabular data analysis,12 is a natural choice for loading and transforming it.
12 https://pandas.pydata.org
To this end, our Jupyter notebook (chap4_multiclass_logistic_regression) demonstrates the sequence of steps required to handle the data, as well as model training and evaluation. First, we show how to load the CSV, add column names, and inspect the result:
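A sketch of this step (the file path and column names here are assumptions for illustration; the chap4_multiclass_logistic_regression notebook may use different ones):

import pandas as pd

# hypothetical location of the AG News training split
train_df = pd.read_csv(
    'data/ag_news_csv/train.csv',
    header=None,
    names=['class index', 'title', 'description'],
)
train_df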
[dataframe output: 120,000 rows × 3 columns (class index, title, description)]
Since the class labels themselves are in a separate file, we manually add them to the pandas data structure (called dataframe in pandas' terminology) to increase the interpretability of the data. We use the class index column as a starting point, and use its map method to create a new column with the corresponding labels (technically a new Series object) that is added to the dataframe using its insert method, which allows us to insert the column in a specific position. Note that the label indices are one-based, so we subtract one to align them with their labels.
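One way to implement this, assuming the label file lists the four class names one per line (the file name and the lambda are illustrative):

# e.g., ['World', 'Sports', 'Business', 'Sci/Tech']
labels = open('data/ag_news_csv/classes.txt').read().splitlines()

# class indices are one-based, so subtract one before looking up the label;
# insert() places the new column right after the class index column
train_df.insert(1, 'class', train_df['class index'].map(lambda i: labels[i - 1]))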
[dataframe output: 120,000 rows × 4 columns, with the class labels now in a new class column]
Next we will preprocess the text. First we lowercase the title and description, and then we concatenate them into a single string. Then we remove some spurious backslashes from the text. Once this is done, the preprocessed text is added to the dataframe as a new column. Note that pandas allows these steps to be applied to all rows simultaneously.
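These steps could look roughly like this (how the spurious backslashes are removed is an assumption here, not necessarily what the notebook does):

# lowercase the title and description and concatenate them into a single string
text = train_df['title'].str.lower() + ' ' + train_df['description'].str.lower()
# remove spurious backslashes
text = text.str.replace('\\', ' ', regex=False)
# add the preprocessed text to the dataframe as a new column
train_df['text'] = text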
[dataframe output: 120,000 rows × 5 columns, with the preprocessed text now in a new text column]
At this point, the text is ready to be tokenized. For this purpose we will use NLTK's word_tokenize function. This function can be applied to the whole column at once using the pandas map function, which returns a new column that we then add to the dataframe. However, here we actually use the progress_map function, which provides a visual progress bar. This visual feedback is especially helpful for tasks that take more time to complete.
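A sketch using NLTK together with tqdm's pandas integration, which is what registers progress_map on pandas objects:

from nltk.tokenize import word_tokenize
from tqdm.notebook import tqdm

tqdm.pandas()  # adds progress_map() / progress_apply() to pandas objects
train_df['tokens'] = train_df['text'].progress_map(word_tokenize)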
[dataframe output: 120,000 rows × 6 columns, with the token lists now in a new tokens column]
From the tokens we just created, we then create a vocabulary for our corpus. Here, we only keep the words that occur at least 10 times, decreasing the memory needed and reducing the likelihood that our vocabulary contains noisy tokens. Note that each row in the tokens column contains a list of tokens. In order to create the vocabulary, we will need to convert the Series of lists of tokens into a Series of tokens using the explode() Pandas method. Then we will use the value_counts() method to create a Series object in which the index contains the tokens and the values are the number of times they appear in the corpus. The next step is removing the tokens with a count lower than our chosen threshold. Finally, we create a list with the remaining tokens, as well as a dictionary that maps tokens to token ids (i.e., the index of the token in the list). We include in the vocabulary a special token [UNK] that will be used as a placeholder for tokens that do not appear in our vocabulary after the frequency pruning. Using this vocabulary, we construct a feature vector for each news article in the corpus. This feature vector will be encoded as a dictionary, with keys corresponding to token ids, and values corresponding to the number of times the token appears in the article. As above, the feature vectors will be stored as a new column in the dataframe.
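A sketch of these steps; where [UNK] is placed in the vocabulary and the helper name make_feature_vector are choices made here for illustration:

# count how often each token appears in the corpus
counts = train_df['tokens'].explode().value_counts()
# keep tokens that occur at least 10 times, plus a placeholder for unknown tokens
vocabulary = ['[UNK]'] + counts[counts >= 10].index.tolist()
token_to_id = {token: i for i, token in enumerate(vocabulary)}

def make_feature_vector(tokens):
    # map an article to a dictionary of {token id: number of occurrences}
    unk_id = token_to_id['[UNK]']
    features = {}
    for token in tokens:
        token_id = token_to_id.get(token, unk_id)
        features[token_id] = features.get(token_id, 0) + 1
    return features

train_df['features'] = train_df['tokens'].progress_map(make_feature_vector)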
[dataframe output: 120,000 rows × 7 columns, with the feature dictionaries now in a new features column]
The final preprocessing step is converting the features and the class indices into PyTorch tensors. Recall that we need to subtract one from the class indices to make them zero-based. At this point, the data is fully processed and we are ready to begin training.
4.2.3 Multiclass Logistic Regression Using PyTorch
The model itself is a single linear layer whose input size corresponds to the size of our vocabulary, and its output size corresponds to the number of classes in our corpus. PyTorch's Linear layer includes a bias by default, so there is no need to handle that manually the way we did for our perceptron example. The code for training this model (which implements Algorithm 6) is almost identical to that of the binary logistic regression. However, since we have to calculate a score for each of the four different classes, we need to replace the previous BCEWithLogitsLoss with CrossEntropyLoss, which applies a softmax over the scores to obtain probabilities for each class. For each example, the model predicts four scores – one for each label. The label with the highest score is selected using the argmax function. We evaluate the predictions of our model for each class using scikit-learn's classification_report, which handles the results of multiclass classification.
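A sketch of the tensor conversion and of the multiclass model; the dense-conversion helper and the learning rate are illustrative choices, not necessarily the notebook's:

import torch
from torch import nn, optim

def to_dense(features, size):
    # turn a {token id: count} dictionary into a dense feature vector
    x = torch.zeros(size)
    for token_id, count in features.items():
        x[token_id] = count
    return x

X_train = torch.stack([to_dense(f, len(vocabulary)) for f in train_df['features']])
y_train = torch.tensor(train_df['class index'].values) - 1  # zero-based class ids

model = nn.Linear(len(vocabulary), 4)      # one output score per class
loss_func = nn.CrossEntropyLoss()          # softmax + cross-entropy over the four scores
optimizer = optim.SGD(model.parameters(), lr=0.1)

# at prediction time, the label with the highest score wins:
# y_pred = model(X_test).argmax(dim=1)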
4.3 Summary
In this chapter, we used movie review and news article classification to illustrate the implementation of the previously described algorithms for the binary perceptron, binary logistic regression, and multiclass logistic regression. For the binary logistic regression, we made a direct comparison between the lower-level NumPy implementation and a higher-level version that made use of PyTorch. We hope that through this series of exercises the reader has noted several key takeaways. First, data preparation is important and should be done thoughtfully. Certain tasks (e.g., text normalization or sentence splitting) are going to be frequently needed if you continue with NLP, so using or creating generic functions can be very helpful. However, what works for one dataset and one language may not be suitable for another scenario. For example, in our case, we selected different tokenizers for each of our tasks to account for the different registers of English, as well as removing diacritics during normalization. Second, when it comes to implementing machine learning algorithms, it is often easier to use a higher-level library such as PyTorch instead of NumPy. For example, with the former, the gradients are calculated by the library, whereas in NumPy we have to code them ourselves. This becomes cumbersome quickly. For example, even the derivative of the softmax is non-trivial. Third, PyTorch imposes a training structure that remains largely the same, regardless of what models are being trained. That is, at a high level, the same steps are always required: clearing the current gradients, predicting output scores for the provided inputs, calculating the loss, and optimizing. These features make PyTorch a very powerful and convenient deep learning library; we will continue to use it throughout the remainder of the book to implement more complex neural architectures.
#!/usr/bin/env python
# coding: utf-8

# # Binary Text Classification with Perceptron

# In[1]:

import random
import numpy as np
from tqdm.notebook import tqdm

# set this variable to a number to be used as the random seed
# or to None if you don't want to set a random seed
seed = 1234

if seed is not None:
    random.seed(seed)
    np.random.seed(seed)

# The dataset is divided in two directories called `train` and `test`.
# These directories contain the training and testing splits of the dataset.

# In[2]:

get_ipython().system('ls -lh data/aclImdb/')

# Both the `train` and `test` directories contain two directories called `pos` and `neg`
# that contain text files with the positive and negative reviews, respectively.

# In[3]:

get_ipython().system('ls -lh data/aclImdb/train/')

# We will now read the filenames of the positive and negative examples.

# In[4]:

from glob import glob

pos_files = glob('data/aclImdb/train/pos/*.txt')
neg_files = glob('data/aclImdb/train/neg/*.txt')
print('number of positive reviews:', len(pos_files))
print('number of negative reviews:', len(neg_files))

# Now, we will use a [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html)
# to read the text files, tokenize them, acquire a vocabulary from the training data,
# and encode it in a document-term matrix in which each row represents a review,
# and each column represents a term in the vocabulary.
# Each element $(i,j)$ in the matrix represents the number of times term $j$ appears in example $i$.

# In[5]:

from sklearn.feature_extraction.text import CountVectorizer

# initialize CountVectorizer indicating that we will give it a list of filenames that have to be read
cv = CountVectorizer(input='filename')
# learn vocabulary and return sparse document-term matrix
doc_term_matrix = cv.fit_transform(pos_files + neg_files)
doc_term_matrix

# Note in the message printed above that the matrix is of shape (25000, 74849).
# In other words, it has 1,871,225,000 elements.
# However, only 3,445,861 elements were stored.
# This is because most of the elements in the matrix are zeros.
# The reason is that the reviews are short and most words in the English language don't appear in each review.
# A matrix that only stores non-zero values is called *sparse*.
#
# Now we will convert it to a dense numpy array:

# In[6]:

X_train = doc_term_matrix.toarray()
X_train.shape

# We will also create a numpy array with the binary labels for the reviews.
# One indicates a positive review and zero a negative review.
# The label `y_train[i]` corresponds to the review encoded in row `i` of the `X_train` matrix.

# In[7]:

# training labels
y_pos = np.ones(len(pos_files))
y_neg = np.zeros(len(neg_files))
y_train = np.concatenate([y_pos, y_neg])
y_train

# Now we will initialize our model, in the form of an array of weights `w` of the same size
# as the number of features in our dataset (i.e., the number of words in the vocabulary
# acquired by [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html)),
# and a bias term `b`.
# Both are initialized to zeros.

# In[8]:

# initialize model: the feature vector and bias term are populated with zeros
n_examples, n_features = X_train.shape
w = np.zeros(n_features)
b = 0

# Now we will use the perceptron learning algorithm to learn the values of `w` and `b` from our training data.
# In[9]:

n_epochs = 10

indices = np.arange(n_examples)
for epoch in range(n_epochs):
    n_errors = 0
    # randomize the order in which training examples are seen in this epoch
    np.random.shuffle(indices)
    # traverse the training data
    for i in tqdm(indices, desc=f'epoch {epoch+1}'):
        x = X_train[i]
        y_true = y_train[i]
        # the perceptron decision based on the current model
        score = x @ w + b
        y_pred = 1 if score > 0 else 0
        # update the model if the prediction was incorrect
        if y_true == y_pred:
            continue
        elif y_true == 1 and y_pred == 0:
            w = w + x
            b = b + 1
            n_errors += 1
        elif y_true == 0 and y_pred == 1:
            w = w - x
            b = b - 1
            n_errors += 1
    # stop early if every training example in this epoch was predicted correctly
    if n_errors == 0:
        break

# The next step is evaluating the model on the test dataset.
# Note that this time we use the [`transform()`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html#sklearn.feature_extraction.text.CountVectorizer.transform) method of the [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html),
# instead of the [`fit_transform()`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html#sklearn.feature_extraction.text.CountVectorizer.fit_transform) method that we used above.
# This is because we want to reuse the vocabulary learned from the training data, instead of learning a new one.

# In[10]:

pos_files = glob('data/aclImdb/test/pos/*.txt')
neg_files = glob('data/aclImdb/test/neg/*.txt')
doc_term_matrix = cv.transform(pos_files + neg_files)
X_test = doc_term_matrix.toarray()
y_pos = np.ones(len(pos_files))
y_neg = np.zeros(len(neg_files))
y_test = np.concatenate([y_pos, y_neg])

# Using the model is easy: multiply the document-term matrix by the learned weights and add the bias.
# We use Python's `@` operator to perform the matrix-vector multiplication.

# In[11]:

y_pred = (X_test @ w + b) > 0

# Now we print an evaluation of the prediction results using the `binary_classification_report()`
# function defined below, which computes precision, recall, F1, and accuracy for binary classification.

# In[12]:

def binary_classification_report(y_true, y_pred):
    # count true positives, false positives, true negatives, and false negatives
    tp = fp = tn = fn = 0
    for gold, pred in zip(y_true, y_pred):
        if pred == True:
            if gold == True:
                tp += 1
            else:
                fp += 1
        else:
            if gold == False:
                tn += 1
            else:
                fn += 1
    # calculate precision and recall
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    # calculate f1 score
    fscore = 2 * precision * recall / (precision + recall)
    # calculate accuracy
    accuracy = (tp + tn) / len(y_true)
    # number of positive labels in y_true
    support = sum(y_true)
    return {
        "precision": precision,
        "recall": recall,
        "f1-score": fscore,
        "support": support,
        "accuracy": accuracy,
    }

# In[13]:

binary_classification_report(y_test, y_pred)
4 Implementing Text Classification Using Perceptron and Logistic Regression In the previous chapters we have discussed the theory behind the perceptron and logistic regression, including mathematical explanations of how and why they are able to learn from examples. In this chapter we will transition from math to code. Specifically, we will discuss how to implement these models in the Python programming language. All the code that we will introduce throughout this book is available online as well: http://clulab.github.io/gentlenlp/. The reader who is not familiar with the Python programming language is encouraged to read first Appendix A, for a brief introduction to the language, and Appendix B, for a discussion on how computers encode and preprocess text. Once done, please return here. To get a better understanding of how these algorithms work under the hood, we will start by implementing them from scratch. However, as the book progresses, we will introduce some of the popular tools and libraries that make Python the language of choice for machine learning, e.g., PyTorch,1 and Hugging Face’s transformers.2 The code for all the examples in the book is provided in the form of Jupyter notebooks.3 Important fragments of these notebooks will be presented in the implementation chapters so that the reader has the whole picture just by reading the book. However, we strongly encourage you to download the notebooks and execute them yourself. We also encourage you to modify them to conduct your own experiments! 1 https://pytorch.org
2 https://huggingface.co 3 https://jupyter.org/ 55 56 Implementing Text Classification Using Perceptron and LR 4.1 Binary Classification We begin this chapter with binary classification. That is, we aim to train classifiers that assign one of two labels to a given text. As the example for this task, we will train a review classifier using the the Large Movie Review Dataset (Maas et al., 2011).4 We tackle this task by implementing first a binary perceptron classifier, followed by a binary logistic regression one. We will implement the latter both from scratch as well as using PyTorch, so the reader has a clearer understanding on how PyTorch works “under the hood.” 4.1.1 Large Movie Review Dataset This dataset contains movie reviews and their associated scores (between 1 and 10) as provided by IMDb.5 converted these scores to binary labels by assigning each review a positive or negative label if the review score was above 6 or below 5, respectively. Reviews with scores 5 and 6 were considered too neutral and thus excluded. We follow the same protocol in this chapter. The dataset is divided in two even partitions called train and test, each containing 25,000 reviews. The dataset also provides additional unlabeled reviews, but we will not use those here. Each partition contains two directories called pos and neg where the positive and negative examples are stored. Each review is stored in an independent text file, whose name is composed of an id unique to the partition and the score associated with the review, separated by an underscore. An example of a positive and a negative review is shown in Table 4.1. 4.1.2 Bag-of-words Model As discussed in Section 2.2, we will encode the text to classify as a bag of words. That is, we encode each review as a list of numbers, with each position in the list corresponding to a word in our vocabulary, and the value stored in that position corresponding to the number of times the word appears in the review. For example, say we want to encode the following two reviews: 4 https://ai.stanford.edu/~amaas/data/sentiment/ 5 https://www.imdb.com/ Maas et al. 4.1 Binary Classification 57 Table 4.1 Two examples of movie reviews from IMDb. The first is a positive review of the movie Puss in Boots (1988). The second is a negative review of the movie Valentine (2001). These reviews can be found at https://www.imdb.com/review/rw0606396/ and https://www.imdb.com/review/rw0721861/, respectively. Filename Score Binary Label train/pos/24_8.txt 8/10 Positive train/neg/141_3.txt 3/10 Negative Review Text Although this was obviously a low-budget production, the performances and the songs in this movie are worth seeing. One of Walken’s few musical roles to date. (he is a marvelous dancer and singer and he demonstrates his acrobatic skills as well - watch for the cartwheel!) Also starring Jason Connery. A great children’s story and very likable characters. This stalk and slash turkey manages to bring nothing new to an increasingly stale genre. A masked killer stalks young, pert girls and slaughters them in a variety of gruesome ways, none of which are particularly inventive. It’s not scary, it’s not clever, and it’s not funny. So what was the point of it? Review 1: Review 2: "I liked the movie. My friend liked it too. " "I hated it. Would not recommend. " First, we need to create a vocabulary that maps each word to an id that uniquely identifies it. 
Each of these numbers will be used as the index in a list, so they must start at zero and grow by one for each word in the vocabulary. For example, one possible vocabulary that encodes the previous reviews is:

{'would': 0, 'hated': 1, 'my': 2, 'liked': 3, 'not': 4, 'it': 5, 'movie': 6, 'recommend': 7, 'the': 8, 'I': 9, 'too': 10, 'friend': 11}

Using this mapping, we can encode the two reviews as follows:

Review 1: [0, 0, 1, 2, 0, 1, 1, 0, 1, 1, 1, 1]
Review 2: [1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0]

Note that the word liked (fourth position) in the first review has a value of two. This is because this word appears twice in that review. This is a small example with a vocabulary of only 12 terms. Of course, the same process needs to be implemented for our whole training dataset. For this purpose we will use scikit-learn's CountVectorizer class (https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html). Using the CountVectorizer class simplifies things, allowing us to get started quickly with a bag-of-words approach. However, note that it makes several simplifying assumptions (e.g., text is lowercased, and punctuation and single-character tokens are removed). Some of these may not be adequate for other tasks. First, we need to obtain the filenames for the reviews in the training set. Once we have acquired the filenames for the training reviews, we need to read them using the CountVectorizer. In order for the CountVectorizer to open and read the files for us, we make use of the input='filename' constructor parameter (otherwise it would expect the string content directly). The CountVectorizer provides three methods that will be useful for us: a method called fit() that is used to acquire the vocabulary, a method transform() that converts the text into the bag-of-words representation, and a method fit_transform() that conveniently acquires the vocabulary and transforms the data in a single step. The resulting object is referred to as a document-term matrix, where each row corresponds to a document, and each column corresponds to a term in the vocabulary. The resulting matrix has 25,000 rows (one for each review) and 74,849 columns (one for each term). You may also note that this matrix is sparse, with 3,445,861 stored elements. A regular matrix of shape 25,000×74,849 would have 1,871,225,000 elements. However, most of the elements in a document-term matrix are zeros because only a few words from the vocabulary appear in each document. A sparse matrix takes advantage of this fact by storing only the non-zero cells in order to reduce the memory required to store it. Thus, sparse matrices are convenient, especially when dealing with lots of data. Nevertheless, to simplify the downstream code in this example, we will convert it into a dense matrix, i.e., a regular two-dimensional NumPy array. Finally, we also need the labels of the reviews. We assign a label of one to positive reviews, and a label of zero to negative ones. Note that the first half of the reviews are positive and the second half are negative. The label at the ith position of the y_train array corresponds to the review encoded in the ith row of the X_train matrix.

4.1.3 Perceptron

Now that we have defined our task and the data processing pipeline, we will implement a perceptron classifier that classifies the movie reviews as positive or negative. The entire code discussed in this section is available in the chap4_perceptron notebook. Recall from Section 2.4 that the perceptron is composed of a weight vector w and a bias term b. These will be represented as a NumPy array w of the same length as our document vectors, and a variable b for the bias term. Both will be initialized with zeros. The parameters w and b are learned through the following algorithm, which implements Algorithm 2 from Chapter 2. There are a couple of details to point out. Line 3 of Algorithm 2 indicates that we need to repeat the training loop until convergence. Theoretically, convergence is defined as predicting all training examples correctly. This is an ambitious requirement, which is not always possible in practice, so in this code we also include a stop condition if we reach a maximum number of epochs. Another crucial difference between our implementation here and the theoretical Algorithm 2 is that we randomize the order in which the training examples are seen at the beginning of each epoch. This simple (but highly recommended!) change is necessary to avoid the introduction of spurious biases due to the arbitrary order of the examples in the original training partition. (As an extreme example, consider a dataset where all the positive examples appear first in the training partition. This would cause the perceptron to artificially inflate the weights of the features that occur in these examples, a situation from which the learning algorithm may struggle to recover.) We accomplish this by storing the indices corresponding to the X_train matrix rows in a NumPy array, and shuffling these indices at the beginning of each epoch. We shuffle the indices instead of the examples so that we can preserve the mapping between examples and labels. The training loop aligns closely with Algorithm 2.
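The listing itself lives in the chap4_perceptron notebook; a condensed sketch of the pipeline just described might look as follows (the aclImdb/ directory layout, the number of epochs, and the variable names are assumptions, not necessarily the notebook's exact code):

from glob import glob
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from tqdm.notebook import tqdm

# assumption: the dataset was unpacked into aclImdb/
pos_files = glob('aclImdb/train/pos/*.txt')
neg_files = glob('aclImdb/train/neg/*.txt')

# CountVectorizer reads the files itself thanks to input='filename'
cv = CountVectorizer(input='filename')
doc_term_matrix = cv.fit_transform(pos_files + neg_files)
X_train = doc_term_matrix.toarray()          # dense matrix, as in the text
y_train = np.concatenate([np.ones(len(pos_files)), np.zeros(len(neg_files))])

# perceptron parameters, initialized with zeros
n_examples, n_features = X_train.shape
w = np.zeros(n_features)
b = 0.0

n_epochs = 10                                # assumed maximum number of epochs
indices = np.arange(n_examples)
for epoch in range(n_epochs):
    np.random.shuffle(indices)               # avoid order-related biases
    errors = 0
    for i in tqdm(indices, desc=f'epoch {epoch+1}'):
        x, y_true = X_train[i], y_train[i]
        score = x @ w + b                    # perceptron decision function
        y_pred = 1 if score > 0 else 0
        if y_pred != y_true:                 # update only on mistakes
            if y_true == 1:
                w, b = w + x, b + 1
            else:
                w, b = w - x, b - 1
            errors += 1
    if errors == 0:                          # convergence: a full pass with no mistakes
        break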
We start by iterating over each example in our training data, storing the current example in the variable x and its corresponding label in the variable y_true. (We use typewriter font when we discuss variables in the code, to distinguish code from the theoretical discussion in the other chapters.) Next, we compute the perceptron decision function shown in Algorithm 1. Note that NumPy (as well as PyTorch) uses Python's @ operator to indicate vector or matrix multiplication, depending on its operand types. Here we use it to calculate the dot product of the example x and the weights w. To this we add the bias b to obtain the predicted score, whose sign is used to assign a positive or negative predicted label. If the prediction is correct, then no update is needed, and we can move on to the next training example. However, if the prediction is incorrect, then we need to adjust w and b, as described in Algorithm 2.

Sidebar 4.1 The tqdm function
This is our first exposure to the tqdm function. tqdm is a progress bar that aims to "make your loops show a smart progress meter" (https://github.com/tqdm/tqdm). The name tqdm comes from the Arabic word taqaddum, which can mean "progress." Using tqdm is as simple as wrapping it around the collection to be traversed.

After training, we evaluate the model's performance on the held-out test partition. The test data is loaded similarly to the training partition, but with one notable difference: we use CountVectorizer's transform() method instead of the fit_transform() method so that the vocabulary is not adjusted for the test data. We won't show here the loading of the test partition since it is so similar to the code already shown, but it is available in the Jupyter notebook that accompanies this section. Using the model to assign labels to all the test data is easily done in one step – we simply multiply the entire test data document-term matrix by the previously learned weights and add the bias. Scores greater than zero indicate a positive review, and those less than zero are negative. At this point we can evaluate the classifier's performance, which we will do using precision, recall, and F1 scores for binary classification (described in Section 2.3). For this purpose, we implement a function called binary_classification_report that computes these metrics and returns them as a dictionary:
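A minimal sketch of this function and of the one-step scoring of the test set follows (it assumes X_test and y_test were built with cv.transform() as described above, and that w and b are the trained parameters; the notebook's exact implementation may differ):

import numpy as np

def binary_classification_report(y_true, y_pred):
    # counts of true positives, false positives, and false negatives
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return {'precision': precision, 'recall': recall, 'f1-score': f1}

# score the whole test set in a single matrix-vector product
y_pred = (X_test @ w + b > 0).astype(int)
binary_classification_report(y_test, y_pred)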
We call this function to compare the predicted labels to the true labels, and obtain the evaluation scores. Our F1 score here is 86.8%, which is much higher than the baseline that assigns labels randomly, which yields an F1 score of about 50%. This is a good result, especially considering the simplicity of the perceptron! In the next sections and chapters, we will discuss a battery of strategies to considerably improve this performance.

4.1.4 Binary Logistic Regression from Scratch

Using the same task, dataset, and evaluation, we will now implement a logistic regression classifier, as described in Algorithm 5 from Chapter 3. To give the reader hands-on experience with the implementation of the gradient calculations for logistic regression, we start by implementing it from scratch using NumPy. All the code shown in this section is available in the chap4_logistic_regression_numpy notebook. In the perceptron implementation, we represented the weights and the bias as two different variables. Here, however, we will use a different approach that will allow us to unify them into a single vector variable. Specifically, we take advantage of the similarity between the derivative of the cost function with respect to the weights (Equation 3.14) and the derivative of the cost with respect to the bias (Equation 3.15):

$\frac{d}{d w_j} C_i(\mathbf{w}, b) = (\sigma_i - y_i)\, x_{ij}$   (3.14 revisited)

$\frac{d}{d b} C_i(\mathbf{w}, b) = \sigma_i - y_i$   (3.15 revisited)

Note that the two derivative formulas are identical except that the former has a multiplication by $x_{ij}$, while the latter does not. However, since $\sigma_i - y_i = (\sigma_i - y_i) \cdot 1$, we can multiply the derivative of the cost with respect to the bias by one without changing the semantics. This gives an opportunity for combining the computations, doing them both in a single pass. The idea is that we can treat the bias as a weight corresponding to a feature that always has a value of one. To this end, we create a NumPy array of ones of the same length as the number of examples in our training set (i.e., the number of rows in the data matrix). Then we add this array as a new column to the data matrix, using NumPy's column_stack function. Next, we need to initialize our model. This time we will use a single NumPy array w of the same length as the number of columns in the data matrix. The weight vector w is initialized randomly with values between 0 and 1:
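In NumPy, these two steps can be sketched as follows (variable names are assumptions consistent with the text, not necessarily the notebook's exact code):

import numpy as np

# append a column of ones so the bias can be learned as just another weight
X_train = np.column_stack((X_train, np.ones(X_train.shape[0])))

# one weight per column of the augmented matrix, drawn uniformly from [0, 1)
w = np.random.random(X_train.shape[1])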
Before implementing the learning algorithm, we need an implementation of the logistic function. Recall that the logistic function is

$\sigma(x) = \frac{1}{1 + e^{-x}}$   (3.1 revisited)

This function can be implemented directly in NumPy. However, a naive implementation may produce an overflow warning during training. The term overflow indicates that the result of evaluating exp(-x) is a number so large that it can't be represented by a float (specifically, we're using float64 numbers). We will avoid this issue by not calling exp with values that will overflow. NumPy provides the function finfo that can be consulted to find the limits of floating point numbers. The log of the largest floating point number is the largest number for which exp() will not overflow, so we will use it as a threshold to filter out problematic values. We now have everything we need to implement Algorithm 4. The steps to follow for each example are: (1) use the model to make a prediction, (2) calculate the gradient of the loss function with respect to the model parameters, and (3) update the model parameters using the gradient. The size of the update is controlled by the learning rate.
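A from-scratch sketch of the numerically safe logistic function and of this training loop follows (the learning rate, the number of epochs, and the variable names are assumptions; X_train here already includes the extra column of ones):

import numpy as np
from tqdm.notebook import tqdm

# largest argument for which np.exp() does not overflow float64
max_exp = np.log(np.finfo(np.float64).max)

def logistic(x):
    # clip the argument so that exp(-x) never overflows
    if -x > max_exp:
        return 0.0
    return 1 / (1 + np.exp(-x))

lr = 0.1          # assumed learning rate
n_epochs = 10     # assumed number of epochs

indices = np.arange(X_train.shape[0])
for epoch in range(n_epochs):
    np.random.shuffle(indices)
    for i in tqdm(indices, desc=f'epoch {epoch+1}'):
        x, y_true = X_train[i], y_train[i]
        # (1) predict
        sigma = logistic(x @ w)
        # (2) gradient of the loss for this example (Equation 3.14)
        gradient = (sigma - y_true) * x
        # (3) update the parameters (the bias is the last element of w)
        w = w - lr * gradient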
Once the model has been trained, we evaluate it on the test dataset using our binary_classification_report function from the previous section. Loading and preprocessing the test dataset follows the same steps as with the previous classifier, so we omit the code for brevity. The performance is comparable with that of the perceptron. The difference in F1 scores between the two classifiers (84.9% here vs. 86.8% for the perceptron) is not significant. This parity is probably attributable to the fact that the signal distinguishing the two classes is easy to learn, so the simpler perceptron training algorithm is sufficient in this case. Nevertheless, this task is useful in showing how to implement the logistic regression model from scratch, i.e., by implementing the gradient calculation and parameter updates manually. Next, we will implement the same model again using PyTorch, highlighting how this machine learning library simplifies the process.

4.1.5 Binary Logistic Regression Utilizing PyTorch

While it is fairly straightforward to compute the derivatives for logistic regression and implement them directly in NumPy, this approach will not scale well to arbitrary neural architectures. Fortunately, there are libraries that automate the computation of the derivatives of the cost function (assuming it is differentiable!) for any neural network, and use the resulting gradients to perform gradient descent or other more sophisticated optimization procedures. To this end, we will use the PyTorch deep learning library (https://pytorch.org). The corresponding notebook for this section is chap4_logistic_regression_pytorch_bce. Our model for logistic regression corresponds to PyTorch's Linear layer. When we instantiate this layer, we specify the size of the inputs (the size of our vocabulary) and the size of the output, i.e., the number of output neurons (which is one because we're doing binary classification). The loss function we use is the binary cross-entropy loss (see Chapter 3), which is implemented as BCEWithLogitsLoss in PyTorch. In PyTorch, the gradients obtained from the loss function are applied to the model by an optimizer object, which implements and applies an optimization algorithm. Here we will use the vanilla stochastic gradient descent optimizer; we set its learning rate to 0.1. This is equivalent to the discussion in Section 3.2. Similarly to the manual implementation, the steps required to train the model for a given training example are: (1) ensure the gradients are set to zeros, (2) apply the model to obtain a prediction, (3) calculate the loss, (4) compute the gradient of the loss by back-propagation, and (5) update the model parameters. Recall that in our previous implementation everything was hardcoded: applying the model, computing the gradients, and optimizing the model parameters. Here, however, the implementation of the logistic regression is expressed at a higher level of abstraction. This means that we are describing the logical steps without specifying a particular implementation. Instead, implementation details are the responsibility of the chosen model, loss function, and optimizer. Thus, we could even choose a different model, loss function, and/or optimizer, and use the same training steps with little or no modification. This decoupling of the training logic from the implementation details is one of the main advantages of libraries such as PyTorch.
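The sketch below illustrates this setup and the five training steps (the number of epochs and the tensor conversion are assumptions; the notebook's exact code may differ):

import numpy as np
import torch
from torch import nn, optim
from tqdm.notebook import tqdm

# assumed: X_train is the dense document-term matrix and y_train the binary
# labels built earlier (without the extra bias column used in the NumPy version)
X = torch.tensor(X_train, dtype=torch.float32)
y = torch.tensor(y_train, dtype=torch.float32)

n_examples, n_features = X.shape
model = nn.Linear(n_features, 1)          # one output neuron
loss_func = nn.BCEWithLogitsLoss()        # binary cross-entropy on raw scores
optimizer = optim.SGD(model.parameters(), lr=0.1)

indices = np.arange(n_examples)
for epoch in range(10):                   # assumed number of epochs
    np.random.shuffle(indices)
    for i in tqdm(indices, desc=f'epoch {epoch+1}'):
        # (1) clear gradients accumulated in the previous step
        model.zero_grad()
        # (2) predict a score for this example
        y_pred = model(X[i])
        # (3) compute the loss
        loss = loss_func(y_pred, y[i].unsqueeze(0))
        # (4) back-propagate to obtain gradients
        loss.backward()
        # (5) update the model parameters
        optimizer.step()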
As shown in the code above, calling the model as a function, with the feature vectors as inputs, produces the predicted scores. Once again, a positive score corresponds to a positive label. When we evaluate this implementation on the test dataset, we obtain results that are in line with our previous models. Writing the perceptron and the logistic regression from scratch is a good exercise, as it exposes us to the fundamentals of implementing machine learning algorithms. However, this becomes cumbersome for more complex neural architectures. For this reason, from this point on, we will use PyTorch for all our coding examples.

4.2 Multiclass Classification

So far, in this chapter we have discussed implementing binary classifiers. Next, we will modify these binary classifiers to perform multiclass classification, following the discussion in Section 3.5.

4.2.1 AG News Dataset

Before explaining the actual training/testing code, we have to choose a new dataset that is suitable for multiclass classification. To this end, we will use the AG News Classification Dataset (Zhang et al., 2015), a subset of the larger AG corpus of news articles collected from thousands of different news sources (http://groups.di.unipi.it/~gulli/AG_corpus_of_news_articles.html). The classification dataset consists of four classes, and the data is equally balanced across all classes (30,000 articles per class for train, and 1,900 articles per class for testing). The goal of the task is to classify each article as one of the four classes: World, Sports, Business, or Sci/Tech.

4.2.2 Preparing the Dataset

The AG News Dataset is distributed as two CSV files (one for training and one for testing), each containing three columns: the class index, the title, and the description. The dataset also provides a text file that maps the above class indexes to more descriptive class labels. Because of the tabular nature of the dataset, pandas, a Python library for tabular data analysis (https://pandas.pydata.org), is a natural choice for loading and transforming it. To this end, our Jupyter notebook (chap4_multiclass_logistic_regression) demonstrates the sequence of steps required to handle the data, as well as model training and evaluation. First, we show how to load the CSV, add column names, and inspect the result:
[Output: first and last five rows of the training dataframe (120000 rows × 3 columns: class index, title, description).]

Since the class labels themselves are in a separate file, we manually add them to the pandas data structure (called dataframe in pandas' terminology) to increase the interpretability of the data. We use the class index column as a starting point, and use its map method to create a new column with the corresponding labels (technically a new Series object) that is added to the dataframe using its insert method, which allows us to insert the column in a specific position. Note that the label indices are one-based, so we subtract one to align them with their labels.
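This step is compact enough to show inline (it is taken, essentially verbatim, from the notebook reproduced in full at the end of this chapter; train_df is the dataframe loaded above):

# class labels, one per line, in the same order as the class indices
labels = open('data/ag_news_csv/classes.txt').read().splitlines()
# indices are one-based, hence the i-1
classes = train_df['class index'].map(lambda i: labels[i-1])
train_df.insert(1, 'class', classes)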
[Output: the dataframe with the new class column (120000 rows × 4 columns: class index, class, title, description).]

Next we will preprocess the text. First we lowercase the title and description, and then we concatenate them into a single string. Then we remove some spurious backslashes from the text. Once this is done, the preprocessed text is added to the dataframe as a new column. Note that pandas allows these steps to be applied to all rows simultaneously.
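Condensed from the accompanying notebook, the whole-column preprocessing looks like this:

# lowercase, concatenate, and replace the spurious backslashes with spaces
title = train_df['title'].str.lower()
descr = train_df['description'].str.lower()
train_df['text'] = (title + " " + descr).str.replace('\\', ' ', regex=False)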
[Output: the dataframe with the new text column (120000 rows × 5 columns).]

At this point, the text is ready to be tokenized. For this purpose we will use NLTK's word_tokenize function. This function can be applied to the whole column at once using the pandas map function, which returns a new column that we add to the dataframe. However, here we actually use the progress_map function, which provides a visual progress bar. This visual feedback is especially helpful for tasks that take more time to complete.
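The corresponding fragment, condensed from the notebook at the end of this chapter:

from nltk.tokenize import word_tokenize
from tqdm.notebook import tqdm

tqdm.pandas()  # enables progress_map on pandas Series and DataFrames
train_df['tokens'] = train_df['text'].progress_map(word_tokenize)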
[Output: the dataframe with the new tokens column (120000 rows × 6 columns).]

From the tokens we just created, we then create a vocabulary for our corpus. Here, we only keep the words that occur at least 10 times, decreasing the memory needed and reducing the likelihood that our vocabulary contains noisy tokens. Note that each row in the tokens column contains a list of tokens. In order to create the vocabulary, we will need to convert the Series of lists of tokens into a Series of tokens using the explode() Pandas method. Then we will use the value_counts() method to create a Series object in which the index are the tokens and the values are the number of times they appear in the corpus. The next step is removing the tokens with a count lower than our chosen threshold. Finally, we create a list with the remaining tokens, as well as a dictionary that maps tokens to token ids (i.e., the index of the token in the list). We include in the vocabulary a special token [UNK] that will be used as a placeholder for tokens that do not appear in our vocabulary after the frequency pruning. Using this vocabulary, we construct a feature vector for each news article in the corpus. This feature vector will be encoded as a dictionary, with keys corresponding to token ids, and values corresponding to the number of times the token appears in the article. As above, the feature vectors will be stored as a new column in the dataframe.
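The vocabulary construction and the feature-vector encoding are condensed below from the notebook reproduced at the end of this chapter (the threshold of 10 matches the text):

from collections import defaultdict

threshold = 10
counts = train_df['tokens'].explode().value_counts()
counts = counts[counts > threshold]

id_to_token = ['[UNK]'] + counts.index.tolist()
token_to_id = {w: i for i, w in enumerate(id_to_token)}

def make_feature_vector(tokens, unk_id=0):
    # count token ids; unseen tokens map to [UNK] (id 0)
    vector = defaultdict(int)
    for t in tokens:
        vector[token_to_id.get(t, unk_id)] += 1
    return vector

train_df['features'] = train_df['tokens'].progress_map(make_feature_vector)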
[Output: the dataframe with the new features column (120000 rows × 7 columns).]

The final preprocessing step is converting the features and the class indices into PyTorch tensors. Recall that we need to subtract one from the class indices to make them zero-based. At this point, the data is fully processed and we are ready to begin training.

4.2.3 Multiclass Logistic Regression Using PyTorch

The model itself is a single linear layer whose input size corresponds to the size of our vocabulary, and its output size corresponds to the number of classes in our corpus. PyTorch's Linear layer includes a bias by default, so there is no need to handle that manually the way we did for our perceptron example. The code for training this model (which implements Algorithm 6) is almost identical to that of the binary logistic regression. However, since we have to calculate a score for each of the four different classes, we need to replace the previous BCEWithLogitsLoss with CrossEntropyLoss, which applies a softmax over the scores to obtain probabilities for each class. For each example, the model predicts four scores – one for each label. The label with the highest score is selected using the argmax function. We evaluate the predictions of our model for each class using scikit-learn's classification_report, which handles the results of multiclass classification.

4.3 Summary

In this chapter, we used movie review and news article classification to illustrate the implementation of the previously described algorithms for the binary perceptron, binary logistic regression, and multiclass logistic regression. For the binary logistic regression, we made a direct comparison between the lower-level NumPy implementation and a higher-level version that made use of PyTorch. We hope that through this series of exercises the reader has noted several key takeaways. First, data preparation is important and should be done thoughtfully. Certain tasks (e.g., text normalization or sentence splitting) are going to be frequently needed if you continue with NLP, so using or creating generic functions can be very helpful. However, what works for one dataset and one language may not be suitable for another scenario. For example, in our case, we selected different tokenizers for each of our tasks to account for the different registers of English, as well as removing diacritics during normalization. Second, when it comes to implementing machine learning algorithms, it is often easier to use a higher-level library such as PyTorch instead of NumPy.
For example, with the former, the gradients are calculated by the library, whereas in NumPy we have to code them ourselves. This becomes cumbersome quickly. For example, even the derivative of the softmax is non-trivial. Third, PyTorch imposes a training structure that remains largely the same, regardless of what models are being trained. That is, at a high level, the same steps are always required: clearing the current gradients, predicting output scores for the provided inputs, calculating the loss, and optimizing. These features make PyTorch a very powerful and convenient deep learning library; we will continue to use it throughout the remainder of the book to implement more complex neural architectures.
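For reference, the complete script exported from the notebook that accompanies the multiclass experiments in this chapter (chap4_multiclass_logistic_regression) is reproduced below.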
#!/usr/bin/env python
# coding: utf-8

# # Multiclass Text Classification with
# # Logistic Regression Implemented with PyTorch and CE Loss

# First, we will do some initialization.

# In[1]:

import random
import torch
import numpy as np
import pandas as pd
from tqdm.notebook import tqdm

# enable tqdm in pandas
tqdm.pandas()

# set to True to use the gpu (if there is one available)
use_gpu = True

# select device
device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu')
print(f'device: {device.type}')

# random seed
seed = 1234

# set random seed
if seed is not None:
    print(f'random seed: {seed}')
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)

# We will be using the AG's News Topic Classification Dataset.
# It is stored in two CSV files: `train.csv` and `test.csv`, as well as a `classes.txt`
# that stores the labels of the classes to predict.
#
# First, we will load the training dataset using [pandas](https://pandas.pydata.org/)
# and take a quick look at the data.

# In[2]:

train_df = pd.read_csv('data/ag_news_csv/train.csv', header=None)
train_df.columns = ['class index', 'title', 'description']
train_df

# The dataset consists of 120,000 examples, each consisting of a class index, a title,
# and a description. The class labels are distributed in a separate file. We will add
# the labels to the dataset so that we can interpret the data more easily. Note that the
# label indexes are one-based, so we need to subtract one to retrieve them from the list.

# In[3]:

labels = open('data/ag_news_csv/classes.txt').read().splitlines()
classes = train_df['class index'].map(lambda i: labels[i-1])
train_df.insert(1, 'class', classes)
train_df

# Let's inspect how balanced our examples are by using a bar plot.

# In[4]:

pd.value_counts(train_df['class']).plot.bar()

# The classes are evenly distributed. That's great!
#
# However, the text contains some spurious backslashes in some parts of the text.
# They are meant to represent newlines in the original text.
# An example can be seen below, between the words "dwindling" and "band".

# In[5]:

print(train_df.loc[0, 'description'])

# We will replace the backslashes with spaces on the whole column using pandas replace method.

# In[6]:

title = train_df['title'].str.lower()
descr = train_df['description'].str.lower()
text = title + " " + descr
train_df['text'] = text.str.replace('\\', ' ', regex=False)
train_df

# Now we will proceed to tokenize the title and description columns using NLTK's word_tokenize().
# We will add a new column to our dataframe with the list of tokens.

# In[7]:

from nltk.tokenize import word_tokenize
train_df['tokens'] = train_df['text'].progress_map(word_tokenize)
train_df

# Now we will create a vocabulary from the training data. We will only keep the terms
# that repeat beyond some threshold established below.

# In[8]:

threshold = 10
tokens = train_df['tokens'].explode().value_counts()
tokens = tokens[tokens > threshold]
id_to_token = ['[UNK]'] + tokens.index.tolist()
token_to_id = {w: i for i, w in enumerate(id_to_token)}
vocabulary_size = len(id_to_token)
print(f'vocabulary size: {vocabulary_size:,}')

# In[9]:

from collections import defaultdict

def make_feature_vector(tokens, unk_id=0):
    vector = defaultdict(int)
    for t in tokens:
        i = token_to_id.get(t, unk_id)
        vector[i] += 1
    return vector

train_df['features'] = train_df['tokens'].progress_map(make_feature_vector)
train_df

# In[10]:

def make_dense(feats):
    x = np.zeros(vocabulary_size)
    for k, v in feats.items():
        x[k] = v
    return x

X_train = np.stack(train_df['features'].progress_map(make_dense))
y_train = train_df['class index'].to_numpy() - 1
X_train = torch.tensor(X_train, dtype=torch.float32)
y_train = torch.tensor(y_train)

# In[11]:

from torch import nn
from torch import optim

# hyperparameters
lr = 1.0
n_epochs = 5
n_examples = X_train.shape[0]
n_feats = X_train.shape[1]
n_classes = len(labels)

# initialize the model, loss function, optimizer, and data-loader
model = nn.Linear(n_feats, n_classes).to(device)
loss_func = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=lr)

# train the model
indices = np.arange(n_examples)
for epoch in range(n_epochs):
    np.random.shuffle(indices)
    for i in tqdm(indices, desc=f'epoch {epoch+1}'):
        # clear gradients
        model.zero_grad()
        # send datum to right device
        x = X_train[i].unsqueeze(0).to(device)
        y_true = y_train[i].unsqueeze(0).to(device)
        # predict label scores
        y_pred = model(x)
        # compute loss
        loss = loss_func(y_pred, y_true)
        # backpropagate
        loss.backward()
        # optimize model parameters
        optimizer.step()

# Next, we evaluate on the test dataset

# In[12]:

# repeat all preprocessing done above, this time on the test set
test_df = pd.read_csv('data/ag_news_csv/test.csv', header=None)
test_df.columns = ['class index', 'title', 'description']
test_df['text'] = test_df['title'].str.lower() + " " + test_df['description'].str.lower()
test_df['text'] = test_df['text'].str.replace('\\', ' ', regex=False)
test_df['tokens'] = test_df['text'].progress_map(word_tokenize)
test_df['features'] = test_df['tokens'].progress_map(make_feature_vector)
X_test = np.stack(test_df['features'].progress_map(make_dense))
y_test = test_df['class index'].to_numpy() - 1
X_test = torch.tensor(X_test, dtype=torch.float32)
y_test = torch.tensor(y_test)

# In[13]:

from sklearn.metrics import classification_report

# set model to evaluation mode
model.eval()

# don't store gradients
with torch.no_grad():
    X_test = X_test.to(device)
    y_pred = torch.argmax(model(X_test), dim=1)
    y_pred = y_pred.cpu().numpy()
    print(classification_report(y_test, y_pred, target_names=labels))
4 Implementing Text Classification Using Perceptron and Logistic Regression In the previous chapters we have discussed the theory behind the perceptron and logistic regression, including mathematical explanations of how and why they are able to learn from examples. In this chapter we will transition from math to code. Specifically, we will discuss how to implement these models in the Python programming language. All the code that we will introduce throughout this book is available online as well: http://clulab.github.io/gentlenlp/. The reader who is not familiar with the Python programming language is encouraged to read first Appendix A, for a brief introduction to the language, and Appendix B, for a discussion on how computers encode and preprocess text. Once done, please return here. To get a better understanding of how these algorithms work under the hood, we will start by implementing them from scratch. However, as the book progresses, we will introduce some of the popular tools and libraries that make Python the language of choice for machine learning, e.g., PyTorch,1 and Hugging Face’s transformers.2 The code for all the examples in the book is provided in the form of Jupyter notebooks.3 Important fragments of these notebooks will be presented in the implementation chapters so that the reader has the whole picture just by reading the book. However, we strongly encourage you to download the notebooks and execute them yourself. We also encourage you to modify them to conduct your own experiments! 1 https://pytorch.org
2 https://huggingface.co 3 https://jupyter.org/ 55 56 Implementing Text Classification Using Perceptron and LR 4.1 Binary Classification We begin this chapter with binary classification. That is, we aim to train classifiers that assign one of two labels to a given text. As the example for this task, we will train a review classifier using the the Large Movie Review Dataset (Maas et al., 2011).4 We tackle this task by implementing first a binary perceptron classifier, followed by a binary logistic regression one. We will implement the latter both from scratch as well as using PyTorch, so the reader has a clearer understanding on how PyTorch works “under the hood.” 4.1.1 Large Movie Review Dataset This dataset contains movie reviews and their associated scores (between 1 and 10) as provided by IMDb.5 converted these scores to binary labels by assigning each review a positive or negative label if the review score was above 6 or below 5, respectively. Reviews with scores 5 and 6 were considered too neutral and thus excluded. We follow the same protocol in this chapter. The dataset is divided in two even partitions called train and test, each containing 25,000 reviews. The dataset also provides additional unlabeled reviews, but we will not use those here. Each partition contains two directories called pos and neg where the positive and negative examples are stored. Each review is stored in an independent text file, whose name is composed of an id unique to the partition and the score associated with the review, separated by an underscore. An example of a positive and a negative review is shown in Table 4.1. 4.1.2 Bag-of-words Model As discussed in Section 2.2, we will encode the text to classify as a bag of words. That is, we encode each review as a list of numbers, with each position in the list corresponding to a word in our vocabulary, and the value stored in that position corresponding to the number of times the word appears in the review. For example, say we want to encode the following two reviews: 4 https://ai.stanford.edu/~amaas/data/sentiment/ 5 https://www.imdb.com/ Maas et al. 4.1 Binary Classification 57 Table 4.1 Two examples of movie reviews from IMDb. The first is a positive review of the movie Puss in Boots (1988). The second is a negative review of the movie Valentine (2001). These reviews can be found at https://www.imdb.com/review/rw0606396/ and https://www.imdb.com/review/rw0721861/, respectively. Filename Score Binary Label train/pos/24_8.txt 8/10 Positive train/neg/141_3.txt 3/10 Negative Review Text Although this was obviously a low-budget production, the performances and the songs in this movie are worth seeing. One of Walken’s few musical roles to date. (he is a marvelous dancer and singer and he demonstrates his acrobatic skills as well - watch for the cartwheel!) Also starring Jason Connery. A great children’s story and very likable characters. This stalk and slash turkey manages to bring nothing new to an increasingly stale genre. A masked killer stalks young, pert girls and slaughters them in a variety of gruesome ways, none of which are particularly inventive. It’s not scary, it’s not clever, and it’s not funny. So what was the point of it? Review 1: Review 2: "I liked the movie. My friend liked it too. " "I hated it. Would not recommend. " First, we need to create a vocabulary that maps each word to an id that uniquely identifies it. 
Each of these numbers will be used as the index in a list, so they must start at zero and grow by one for each word in the vocabulary. For example, one possible vocabulary that encodes the previous reviews is: {'would': 0, 'hated': 1, 58 Implementing Text Classification Using Perceptron and LR 'my': 2, 'liked': 3, 'not': 4, 'it': 5, 'movie': 6, 'recommend': 7, 'the': 8, 'I': 9, 'too': 10, 'friend': 11} Using this mapping, we can encode the two reviews as follows: Review1: [0,0,1,2,0,1,1,0,1,1,1,1] Review2: [1,1,0,0,1,1,0,1,0,1,0,0] Note that the word liked (fourth position) in the first review has a value of two. This is because this word appears twice in that review. This is a small example with a vocabulary of only 12 terms. Of course, the same process needs to be implemented for our whole training dataset. For this purpose we will use scikit-learn’s CountVectorizer class.6 Using the CountVectorizer class simplifies things, allowing us to get started quickly with a bag-of-words approach. However, note that it makes several simplifying assumptions (e.g., text is lowercased, and punctuation and single character tokens are removed). Some of these may not be adequate to other tasks. First, we need to obtain the filenames for the reviews in the training set: Once we have acquired the filenames for the training reviews, we need
to read them using the CountVectorizer. In order for the CountVectorizer to open and read the files for us, we make use of the input='filename' constructor parameter (otherwise it would expect the string content directly). The CountVectorizer provides three methods that will be use-
ful for us: a method called fit() that is used to acquire the vocabulary,
a method transform() that converts the text into the bag-of-words representation, and a method fit_transform() that conveniently acquires the vocabulary and transforms the data in a single step. The resulting object is referred to as a document-term matrix, where each row corre- 6 https://scikitlearn.org/stable/modules/generated/sklearn.feature_ extraction.text.CountVectorizer.html 4.1 Binary Classification 59 sponds to a document, and each column corresponds to a term in the vocabulary. As the output above indicates, the resulting matrix has 25,000 rows (one for each review), and 74,849 columns (one for each term). Also you may note that this matrix is sparse, with 3,445,861 stored elements. A regular matrix of shape 25,000×74,849 would have 1,871,225,000 elements. However, most of the elements in a document-term matrix are zeros because only a few words from the vocabulary appear in each document. A sparse matrix takes advantage of this fact by storing only the non-zero cells in order to reduce the memory required to store it. Thus, sparse matrices are convenient, especially when dealing with lots of data. Nevertheless, to simplify the downstream code in this example, we will convert it into a dense matrix, i.e., a regular two-dimensional NumPy array. Finally, we also need the labels of the reviews. We assign a label of one to positive reviews, and a label of zero to negative ones. Note that the first half of the reviews are positive and the second half are negative. The label at the ith position of the y_train array corresponds to the review encoded in the ith row of the X_train matrix. 4.1.3 Perceptron Now that we have defined our task and the data processing pipeline, we will implement a perceptron classifier that classifies the movie reviews as positive or negative. The entire code discussed in this section is available in the chap4_perceptron notebook. Recall from Section 2.4 that the perceptron is composed of a weight vector w and a bias term b. These will be represented as a NumPy array w of the same length as our document vectors, and a variable b for the bias term. Both will be initialized with zeros. The parameters w and b are learned through the following algorithm, which implements Algorithm 2 from Chapter 2: There are a couple of details to point out. Line 3 of Algorithm 2 indicates that we need to repeat the training loop until convergence. Theoretically, convergence is defined as predicting all training examples correctly. This is an ambitious requirement, which is not always possible in practice, so in this code we also include a stop condition if we reach a maximum number of epochs. Another crucial difference between our implementation here and the theoretical Algorithm 2, is that we randomize the order in which the training examples are seen at the beginning of 60 Implementing Text Classification Using Perceptron and LR each epoch. This simple (but highly recommended!) change is necessary to avoid the introduction of spurious biases due to the arbitrary order of the examples in the original training partition.7 We accomplish this by storing the indices corresponding to the X_train matrix rows in a NumPy array, and shuffling these indices at the beginning of each epoch. We shuffle the indices instead of the examples so that we can preserve the mapping between examples and labels. The training loop aligns closely with Algorithm 2. 
We start by iterating over each example in our training data, storing the current example in the variable x,8 and its corresponding label in the variable y_true. Next, we compute the perceptron decision function shown in Algorithm 1. Note that NumPy (as well as PyTorch) uses Python’s @ operator to indicate vector or matrix multiplication, depending on its operand types. Here we use it to calculate the dot product of the example x and the weights w. To this we add the bias b to obtain the predicted score, whose sign is used to assign a positive or negative predicted label. If the prediction is correct, then no update is needed, and we can move on to the next training example. However, if the prediction is incorrect, then we need to adjust w and b, as described in Algorithm 2. Sidebar 4.1 The tqdm function This is our first exposure to the tqdm function. tqdm is a progress bar that “make your loops show a smart progress meter.”9 The name tqdm comes from the Arabic word taqaddum which can mean “progress.” Using tqdm is as simple as wrapping it around the collection to be traversed. After training, we evaluate the model’s performance on the heldout test partition. The test data is loaded similarly to the training partition, but with one notable difference; we use CountVectorizer’s transform() method instead of the fit_transform() method so that the vocabulary is not adjusted for the test data. We won’t show here the loading of the test partition since it is so similar to the code already shown, but it is available in the Jupyter notebook that accompanies this section. . 7   As an extreme example, consider a dataset where all the positive examples appear first in the training partition. This would cause the perceptron to artificially inflate the weights of the features that occur in these examples, a situation from which the learning algorithm may struggle to recover. 
 . 8  We use typewriter font when we discuss variables in the code, to distinguish code from the theoretical discussion in the other chapters. 
 9 https://github.com/tqdm/tqdm 4.1 Binary Classification 61 Using the model to assign labels to all the test data is easily done in one step – we simply multiply the entire test data document-term matrix by the previously learned weights and add the bias. Scores greater than zero indicate a positive review, and those less than zero are negative. At this point we can evaluate the classifier’s performance, which we will do using precision, recall, and F1 scores for binary classification (described in Section 2.3). For this purpose, we implement a function called binary_classification_report that computes these metrics and returns them as a dictionary: We call this function to compare the predicted labels to the true labels, and obtain the evaluation scores. Our F1 score here is 86.8%, which is much higher than the baseline that assigns labels randomly, which yields an F1 score of about 50%. This is a good result, especially considering the simplicity of the perceptron! In the next sections and chapters, we will discuss a battery of strategies to considerably improve this performance. 4.1.4 Binary Logistic Regression from Scratch Using the same task, dataset, and evaluation, we will now implement a logistic regression classifier, as described in Algorithm 5 from Chapter 3. To give the reader hands-on experience with the implementation of the gradient calculations for logistic regression, we start by implementing it from scratch using NumPy. All the code shown in this section is available in the chap4_logistic_regression_numpy notebook. In the perceptron implementation, we represented the weights and the bias as two different variables. Here, however, we will use a different approach that will allow us to unify them into a single vector variable. Specifically, we take advantage of the similarity between the derivative of the cost function with respect to the weights (Equation 3.14) and the derivative of the cost with respect to the bias (Equation 3.15). d Ci(w, b) = (σi − yi)xij (3.14 revisited) dwj d Ci(w, b) = σi − yi (3.15 revisited) db Note that the two derivative formulas are identical except that the former has a multiplication by xij, while the latter does not. However, 62 Implementing Text Classification Using Perceptron and LR since σi − yi = (σi − yi)1 we can multiply the derivative of the cost with respect to the bias by one without changing the semantics. This gives an opportunity for combining the computations, doing them both in a single pass. The idea is that we can treat the bias as a weight corresponding to a feature that always has a value of one. As can be seen above, we created a NumPy array of ones of the same length as the number of examples in our training set (i.e., the number of rows in the data matrix). Then we add this array as a new column to the data matrix, using NumPy’s column_stack function. Next, we need to initialize our model. This time we will use a single NumPy array w of the same length as the number of columns in the data matrix. The weight vector w is initialized randomly with values between 0 and 1: Before implementing the learning algorithm, we need an implementation of the logistic function. 
Recall that the logistic function is σ(x) = 1 (3.1 revisited) 1+e−x This function can be easily implemented in NumPy as follows: However, this naive implementation may produce the following warning during training: The term overflow indicates that the result of evaluating exp(-x) is a number so large that it can’t be represented by a float (specifically, we’re using float64 numbers). We will avoid this issue by not calling exp with values that will overflow. NumPy provides the function finfo that can be consulted to find the limits of floating point numbers: The log of the largest floating point number is the largest number for which exp() will not overflow, so we will use it as a threshold to filter out problematic values: We now have everything we need to implement Algorithm 4. The steps to follow for each example are: (1) use the model to make a prediction, (2) calculate the gradient of the loss function with respect to the model parameters, and (3) update the model parameters using the gradient. The size of the update is controlled by the learning rate. Once the model has been trained, we evaluate it on the test dataset using our binary_classification_report function from the previous section. Loading and preprocessing the test dataset follows the same 4.1 Binary Classification 63 steps as with the previous classifier. We omit the code for brevity. These are the results: The performance is comparable with that of the perceptron. The difference in F1 scores between the two classifiers (84.9% here vs. 86.8% for the perceptron) is not significant. Classifier parity is probably attributable to the fact that the signal distinguishing the two classes being easy to learn and the simpler perceptron training algorithm being sufficient in this case. Nevertheless, this task is useful in showing how to implement the logistic regression model from scratch, i.e., by implementing the gradient calculation and parameter updates manually. Next, we will implement the same model again using PyTorch, highlighting how this machine learning library simplifies the process. 4.1.5 Binary Logistic Regression Utilizing PyTorch While it is fairly straightforward to compute the derivatives for logistic regression and implement then directly in NumPy, this will not scale well to arbitrary neural architectures. Fortunately, there are libraries that automate the computation of the derivatives of the cost function (assuming it is differentiable!) for any neural network, and use the resulting gradients to perform gradient descent or other more sophisticated optimization procedures. To this end, we will use the PyTorch deep learning library10. The corresponding notebook for this section is chap4_logistic_regression_pytorch_bce. Our model for logistic regression corresponds to PyTorch’s Linear layer. When we instantiate this layer, we specify the size of the inputs (the size of our vocabulary) and the size of the output, i.e., the number of output neurons (which is one because we’re doing binary classification). The loss function we use is the binary cross-entropy loss (see Chapter 3), which is implemented as BCEWithLogitsLoss in PyTorch. In PyTorch, the gradients obtained from the loss function are applied to the model by an optimizer object, which implements and applies an optimization algorithm. Here we will use the vanilla stochastic gradient descent optimizer; we set its learning rate to 0.1. This is equivalent to the discussion in Section 3.2. 
Similarly to the manual implementation, the steps required to train the model for a given training example are: (1) ensure the gradients are set to zero, (2) apply the model to obtain a prediction, (3) calculate the loss, (4) compute the gradient of the loss by back-propagation, and (5) update the model parameters.

Recall that in our previous implementation everything was hardcoded: applying the model, computing the gradients, and optimizing the model parameters. Here, however, the implementation of the logistic regression is expressed at a higher level of abstraction. This means that we are describing the logical steps without specifying a particular implementation. Instead, implementation details are the responsibility of the chosen model, loss function, and optimizer. Thus, we could even choose a different model, loss function, and/or optimizer, and use the same training steps with little or no modification. This decoupling of the training logic from the implementation details is one of the main advantages of libraries such as PyTorch.

As shown in the code above, calling the model as a function, with the feature vectors as inputs, produces the predicted scores. Once again, a positive score corresponds to a positive label. When we evaluate this implementation on the test dataset, we obtain results that are in line with our previous models.

Writing the perceptron and the logistic regression from scratch is a good exercise, as it exposes us to the fundamentals of implementing machine learning algorithms. However, this becomes cumbersome for more complex neural architectures. For this reason, from this point on, we will use PyTorch for all our coding examples.

4.2 Multiclass Classification

So far in this chapter we have discussed implementing binary classifiers. Next, we will modify these binary classifiers to perform multiclass classification, following the discussion in Section 3.5.

4.2.1 AG News Dataset

Before explaining the actual training/testing code, we have to choose a new dataset that is suitable for multiclass classification. To this end, we will use the AG News Classification Dataset (Zhang et al., 2015), a subset of the larger AG corpus of news articles collected from thousands of different news sources.11 The classification dataset consists of four classes, and the data is equally balanced across all classes (30,000 articles per class for training, and 1,900 articles per class for testing). The goal of the task is to classify each article as one of the four classes: World, Sports, Business, or Sci/Tech.

11 http://groups.di.unipi.it/~gulli/AG_corpus_of_news_articles.html

4.2.2 Preparing the Dataset

The AG News Dataset is distributed as two CSV files (one for training and one for testing), each containing three columns: the class index, the title, and the description. The dataset also provides a text file that maps these class indexes to more descriptive class labels. Because of the tabular nature of the dataset, pandas, a Python library for tabular data analysis,12 is a natural choice for loading and transforming it. To this end, our Jupyter notebook (chap4_multiclass_logistic_regression) demonstrates the sequence of steps required to handle the data, as well as model training and evaluation. First, we show how to load the CSV, add column names, and inspect the result:
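A sketch of this loading step might look as follows. The file path is hypothetical and depends on where the dataset was downloaded, and we assume, as in the common distribution of this dataset, that the CSV files have no header row, so we supply the column names ourselves:

import pandas as pd

# hypothetical location of the AG News training CSV
train_df = pd.read_csv(
    'data/ag_news_csv/train.csv',
    header=None,
    names=['class index', 'title', 'description'],
)
train_df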
[dataframe preview: 120,000 rows × 3 columns (class index, title, description)]

Since the class labels themselves are in a separate file, we manually add them to the pandas data structure (called dataframe in pandas' terminology) to increase the interpretability of the data. We use the class index column as a starting point, and use its map method to create a new column with the corresponding labels (technically a new Series object) that is added to the dataframe using its insert method, which allows us to insert the column in a specific position. Note that the label indices are one-based, so we subtract one to align them with their labels.

12 https://pandas.pydata.org
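The label mapping just described might be sketched as follows, assuming the class labels have already been read from the provided classes.txt file into a Python list in index order:

# class indices 1-4 correspond to these labels, in this order
labels = ['World', 'Sports', 'Business', 'Sci/Tech']

# indices are one-based, so subtract one before looking up the label;
# insert the new column right after the class index column
classes = train_df['class index'].map(lambda i: labels[i - 1])
train_df.insert(1, 'class', classes)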
[dataframe preview: 120,000 rows × 4 columns (class index, class, title, description)]

Next we will preprocess the text. First we lowercase the title and description, and then we concatenate them into a single string. Then we remove some spurious backslashes from the text. Once this is done, the preprocessed text is added to the dataframe as a new column. Note that pandas allows these steps to be applied to all rows simultaneously.
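These preprocessing steps might be sketched as follows; the exact clean-up in the notebook may differ slightly:

# lowercase the title and description, join them into a single string,
# and remove spurious backslashes; pandas applies this to every row at once
title = train_df['title'].str.lower()
description = train_df['description'].str.lower()
train_df['text'] = (title + ' ' + description).str.replace('\\', ' ', regex=False)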
[dataframe preview: 120,000 rows × 5 columns (class index, class, title, description, text)]

At this point, the text is ready to be tokenized. For this purpose we will use NLTK's word_tokenize function. This function can be applied to the whole column at once using the pandas map function, which returns a new column that we add to the dataframe. However, here we actually use the progress_map function, which provides a visual progress bar. This visual feedback is especially helpful for tasks that take more time to complete.
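A sketch of the tokenization step, assuming NLTK's punkt tokenizer models have already been downloaded:

from nltk.tokenize import word_tokenize
from tqdm.notebook import tqdm

# register progress_map()/progress_apply() on pandas objects
tqdm.pandas()

# tokenize every row, displaying a progress bar while doing so
train_df['tokens'] = train_df['text'].progress_map(word_tokenize)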
[dataframe preview: 120,000 rows × 6 columns (class index, class, title, description, text, tokens)]

From the tokens we just created, we then create a vocabulary for our corpus. Here, we only keep the words that occur at least 10 times, decreasing the memory needed and reducing the likelihood that our vocabulary contains noisy tokens. Note that each row in the tokens column contains a list of tokens. In order to create the vocabulary, we need to convert this Series of lists of tokens into a Series of individual tokens, using the explode() pandas method. Then we use the value_counts() method to create a Series object in which the index contains the tokens and the values are the number of times they appear in the corpus. The next step is removing the tokens with a count lower than our chosen threshold. Finally, we create a list with the remaining tokens, as well as a dictionary that maps tokens to token ids (i.e., the index of the token in the list). We include in the vocabulary a special token [UNK] that will be used as a placeholder for tokens that do not appear in our vocabulary after the frequency pruning.

Using this vocabulary, we construct a feature vector for each news article in the corpus. This feature vector will be encoded as a dictionary, with keys corresponding to token ids, and values corresponding to the number of times the token appears in the article. As above, the feature vectors will be stored as a new column in the dataframe.
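The vocabulary and feature vectors described above might be built roughly as follows. Whether the notebook places [UNK] at the beginning or the end of the vocabulary, and the exact helper names, are assumptions made here for illustration:

from collections import Counter

# flatten the per-row token lists into one long Series and count tokens,
# keeping only those that occur at least 10 times
counts = train_df['tokens'].explode().value_counts()
frequent = counts[counts >= 10]

# vocabulary: the frequent tokens plus a placeholder for everything else
vocabulary = list(frequent.index) + ['[UNK]']
token_to_id = {token: i for i, token in enumerate(vocabulary)}
unk_id = token_to_id['[UNK]']

def make_feature_vector(tokens):
    # map tokens to ids (unknown tokens map to [UNK]) and count occurrences
    ids = [token_to_id.get(t, unk_id) for t in tokens]
    return dict(Counter(ids))

train_df['features'] = train_df['tokens'].progress_map(make_feature_vector)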
[dataframe preview: 120,000 rows × 7 columns (class index, class, title, description, text, tokens, features)]

The final preprocessing step is converting the features and the class indices into PyTorch tensors. Recall that we need to subtract one from the class indices to make them zero-based. At this point, the data is fully processed and we are ready to begin training.

4.2.3 Multiclass Logistic Regression Using PyTorch

The model itself is a single linear layer whose input size corresponds to the size of our vocabulary, and whose output size corresponds to the number of classes in our corpus. PyTorch's Linear layer includes a bias by default, so there is no need to handle it manually the way we did for our perceptron example. The code for training this model (which implements Algorithm 6) is almost identical to that of the binary logistic regression. However, since we have to calculate a score for each of the four different classes, we need to replace the previous BCEWithLogitsLoss with CrossEntropyLoss, which applies a softmax over the scores to obtain probabilities for each class. For each example, the model predicts four scores, one for each label. The label with the highest score is selected using the argmax function. We evaluate the predictions of our model for each class using scikit-learn's classification_report, which handles the results of multiclass classification.
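A minimal sketch of this multiclass setup, and of how predictions might be obtained after training, follows. The variable names (vocabulary, X_test) are illustrative, and the learning rate shown is the one we used for the binary case; the notebook may use a different value:

import torch
from torch import nn

# one output neuron per class (four classes), over a vocabulary-sized input
model = nn.Linear(len(vocabulary), 4)
loss_fn = nn.CrossEntropyLoss()    # expects raw scores and zero-based integer labels
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# after training, each example gets four scores; argmax picks the predicted label
with torch.no_grad():
    scores = model(X_test)                # shape: (n_examples, 4)
    y_pred = torch.argmax(scores, dim=1)  # zero-based class indices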
4.3 Summary

In this chapter, we used movie review and news article classification to illustrate the implementation of the previously described algorithms for the binary perceptron, binary logistic regression, and multiclass logistic regression. For the binary logistic regression, we made a direct comparison between the lower-level NumPy implementation and a higher-level version that made use of PyTorch.

We hope that through this series of exercises the reader has noted several key takeaways. First, data preparation is important and should be done thoughtfully. Certain tasks (e.g., text normalization or sentence splitting) will be needed frequently if you continue with NLP, so using or creating generic functions can be very helpful. However, what works for one dataset and one language may not be suitable for another scenario. For example, in our case, we selected different tokenizers for each of our tasks to account for the different registers of English, and removed diacritics during normalization.

Second, when it comes to implementing machine learning algorithms, it is often easier to use a higher-level library such as PyTorch than NumPy. For example, with the former, the gradients are calculated by the library, whereas in NumPy we have to code them ourselves. This becomes cumbersome quickly: even the derivative of the softmax is non-trivial.

Third, PyTorch imposes a training structure that remains largely the same, regardless of which model is being trained. That is, at a high level, the same steps are always required: clearing the current gradients, predicting output scores for the provided inputs, calculating the loss, and optimizing. These features make PyTorch a very powerful and convenient deep learning library; we will continue to use it throughout the remainder of the book to implement more complex neural architectures.
#!/usr/bin/env python
# coding: utf-8

# # Binary Text Classification with Perceptron

# In[1]:

import random
import numpy as np
from tqdm.notebook import tqdm

# set this variable to a number to be used as the random seed
# or to None if you don't want to set a random seed
seed = 1234

if seed is not None:
    random.seed(seed)
    np.random.seed(seed)

# The dataset is divided in two directories called `train` and `test`.
# These directories contain the training and testing splits of the dataset.

# In[2]:

get_ipython().system('ls -lh data/aclImdb/')

# Both the `train` and `test` directories contain two directories called `pos` and `neg` that contain text files with the positive and negative reviews, respectively.

# In[3]:

get_ipython().system('ls -lh data/aclImdb/train/')

# We will now read the filenames of the positive and negative examples.

# In[4]:

from glob import glob

pos_files = glob('data/aclImdb/train/pos/*.txt')
neg_files = glob('data/aclImdb/train/neg/*.txt')

print('number of positive reviews:', len(pos_files))
print('number of negative reviews:', len(neg_files))

# Now, we will use a [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html) to read the text files, tokenize them, acquire a vocabulary from the training data, and encode it in a document-term matrix in which each row represents a review, and each column represents a term in the vocabulary. Each element $(i,j)$ in the matrix represents the number of times term $j$ appears in example $i$.

# In[5]:

from sklearn.feature_extraction.text import CountVectorizer

# initialize CountVectorizer indicating that we will give it a list of filenames that have to be read
cv = CountVectorizer(input='filename')

# learn vocabulary and return sparse document-term matrix
doc_term_matrix = cv.fit_transform(pos_files + neg_files)
doc_term_matrix

# Note in the message printed above that the matrix is of shape (25000, 74849).
# In other words, it has 1,871,225,000 elements.
# However, only 3,445,861 elements were stored.
# This is because most of the elements in the matrix are zeros.
# The reason is that the reviews are short and most words in the English language don't appear in each review.
# A matrix that only stores non-zero values is called *sparse*.
#
# Now we will convert it to a dense numpy array:

# In[6]:

X_train = doc_term_matrix.toarray()
X_train.shape

# We will also create a numpy array with the binary labels for the reviews.
# One indicates a positive review and zero a negative review.
# The label `y_train[i]` corresponds to the review encoded in row `i` of the `X_train` matrix.

# In[7]:

# training labels
y_pos = np.ones(len(pos_files))
y_neg = np.zeros(len(neg_files))
y_train = np.concatenate([y_pos, y_neg])
y_train

# Now we will initialize our model, in the form of an array of weights `w` of the same size as the number of features in our dataset (i.e., the number of words in the vocabulary acquired by [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html)), and a bias term `b`.
# Both are initialized to zeros.

# In[8]:

# initialize model: the feature vector and bias term are populated with zeros
n_examples, n_features = X_train.shape
w = np.zeros(n_features)
b = 0

# Now we will use the perceptron learning algorithm to learn the values of `w` and `b` from our training data.

# In[9]:

n_epochs = 10
indices = np.arange(n_examples)

for epoch in range(n_epochs):
    n_errors = 0
    # randomize the order in which training examples are seen in this epoch
    np.random.shuffle(indices)
    # traverse the training data
    for i in tqdm(indices, desc=f'epoch {epoch+1}'):
        x = X_train[i]
        y_true = y_train[i]
        # the perceptron decision based on the current model
        score = x @ w + b
        y_pred = 1 if score > 0 else 0
        # update the model if the prediction was incorrect
        if y_true == y_pred:
            continue
        elif y_true == 1 and y_pred == 0:
            w = w + x
            b = b + 1
            n_errors += 1
        elif y_true == 0 and y_pred == 1:
            w = w - x
            b = b - 1
            n_errors += 1
    if n_errors == 0:
        break

# The next step is evaluating the model on the test dataset.
# Note that this time we use the [`transform()`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html#sklearn.feature_extraction.text.CountVectorizer.transform) method of the [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html), instead of the [`fit_transform()`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html#sklearn.feature_extraction.text.CountVectorizer.fit_transform) method that we used above. This is because we want to use the learned vocabulary in the test set, instead of learning a new one.

# In[10]:

pos_files = glob('data/aclImdb/test/pos/*.txt')
neg_files = glob('data/aclImdb/test/neg/*.txt')

doc_term_matrix = cv.transform(pos_files + neg_files)
X_test = doc_term_matrix.toarray()

y_pos = np.ones(len(pos_files))
y_neg = np.zeros(len(neg_files))
y_test = np.concatenate([y_pos, y_neg])

# Using the model is easy: multiply the document-term matrix by the learned weights and add the bias.
# We use Python's `@` operator to perform the matrix-vector multiplication.

# In[11]:

y_pred = (X_test @ w + b) > 0

# Now we print an evaluation of the prediction results using scikit-learn's [`classification_report()`](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.classification_report.html) function.

# In[12]:

def binary_classification_report(y_true, y_pred):
    # count true positives, false positives, true negatives, and false negatives
    tp = fp = tn = fn = 0
    for gold, pred in zip(y_true, y_pred):
        if pred == True:
            if gold == True:
                tp += 1
            else:
                fp += 1
        else:
            if gold == False:
                tn += 1
            else:
                fn += 1
    # calculate precision and recall
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    # calculate f1 score
    fscore = 2 * precision * recall / (precision + recall)
    # calculate accuracy
    accuracy = (tp + tn) / len(y_true)
    # number of positive labels in y_true
    support = sum(y_true)
    return {
        "precision": precision,
        "recall": recall,
        "f1-score": fscore,
        "support": support,
        "accuracy": accuracy,
    }

# In[13]:

binary_classification_report(y_test, y_pred)
4 Implementing Text Classification Using Perceptron and Logistic Regression In the previous chapters we have discussed the theory behind the perceptron and logistic regression, including mathematical explanations of how and why they are able to learn from examples. In this chapter we will transition from math to code. Specifically, we will discuss how to implement these models in the Python programming language. All the code that we will introduce throughout this book is available online as well: http://clulab.github.io/gentlenlp/. The reader who is not familiar with the Python programming language is encouraged to read first Appendix A, for a brief introduction to the language, and Appendix B, for a discussion on how computers encode and preprocess text. Once done, please return here. To get a better understanding of how these algorithms work under the hood, we will start by implementing them from scratch. However, as the book progresses, we will introduce some of the popular tools and libraries that make Python the language of choice for machine learning, e.g., PyTorch,1 and Hugging Face’s transformers.2 The code for all the examples in the book is provided in the form of Jupyter notebooks.3 Important fragments of these notebooks will be presented in the implementation chapters so that the reader has the whole picture just by reading the book. However, we strongly encourage you to download the notebooks and execute them yourself. We also encourage you to modify them to conduct your own experiments! 1 https://pytorch.org
2 https://huggingface.co 3 https://jupyter.org/ 55 56 Implementing Text Classification Using Perceptron and LR 4.1 Binary Classification We begin this chapter with binary classification. That is, we aim to train classifiers that assign one of two labels to a given text. As the example for this task, we will train a review classifier using the the Large Movie Review Dataset (Maas et al., 2011).4 We tackle this task by implementing first a binary perceptron classifier, followed by a binary logistic regression one. We will implement the latter both from scratch as well as using PyTorch, so the reader has a clearer understanding on how PyTorch works “under the hood.” 4.1.1 Large Movie Review Dataset This dataset contains movie reviews and their associated scores (between 1 and 10) as provided by IMDb.5 converted these scores to binary labels by assigning each review a positive or negative label if the review score was above 6 or below 5, respectively. Reviews with scores 5 and 6 were considered too neutral and thus excluded. We follow the same protocol in this chapter. The dataset is divided in two even partitions called train and test, each containing 25,000 reviews. The dataset also provides additional unlabeled reviews, but we will not use those here. Each partition contains two directories called pos and neg where the positive and negative examples are stored. Each review is stored in an independent text file, whose name is composed of an id unique to the partition and the score associated with the review, separated by an underscore. An example of a positive and a negative review is shown in Table 4.1. 4.1.2 Bag-of-words Model As discussed in Section 2.2, we will encode the text to classify as a bag of words. That is, we encode each review as a list of numbers, with each position in the list corresponding to a word in our vocabulary, and the value stored in that position corresponding to the number of times the word appears in the review. For example, say we want to encode the following two reviews: 4 https://ai.stanford.edu/~amaas/data/sentiment/ 5 https://www.imdb.com/ Maas et al. 4.1 Binary Classification 57 Table 4.1 Two examples of movie reviews from IMDb. The first is a positive review of the movie Puss in Boots (1988). The second is a negative review of the movie Valentine (2001). These reviews can be found at https://www.imdb.com/review/rw0606396/ and https://www.imdb.com/review/rw0721861/, respectively. Filename Score Binary Label train/pos/24_8.txt 8/10 Positive train/neg/141_3.txt 3/10 Negative Review Text Although this was obviously a low-budget production, the performances and the songs in this movie are worth seeing. One of Walken’s few musical roles to date. (he is a marvelous dancer and singer and he demonstrates his acrobatic skills as well - watch for the cartwheel!) Also starring Jason Connery. A great children’s story and very likable characters. This stalk and slash turkey manages to bring nothing new to an increasingly stale genre. A masked killer stalks young, pert girls and slaughters them in a variety of gruesome ways, none of which are particularly inventive. It’s not scary, it’s not clever, and it’s not funny. So what was the point of it? Review 1: Review 2: "I liked the movie. My friend liked it too. " "I hated it. Would not recommend. " First, we need to create a vocabulary that maps each word to an id that uniquely identifies it. 
Each of these numbers will be used as the index in a list, so they must start at zero and grow by one for each word in the vocabulary. For example, one possible vocabulary that encodes the previous reviews is: {'would': 0, 'hated': 1, 58 Implementing Text Classification Using Perceptron and LR 'my': 2, 'liked': 3, 'not': 4, 'it': 5, 'movie': 6, 'recommend': 7, 'the': 8, 'I': 9, 'too': 10, 'friend': 11} Using this mapping, we can encode the two reviews as follows: Review1: [0,0,1,2,0,1,1,0,1,1,1,1] Review2: [1,1,0,0,1,1,0,1,0,1,0,0] Note that the word liked (fourth position) in the first review has a value of two. This is because this word appears twice in that review. This is a small example with a vocabulary of only 12 terms. Of course, the same process needs to be implemented for our whole training dataset. For this purpose we will use scikit-learn’s CountVectorizer class.6 Using the CountVectorizer class simplifies things, allowing us to get started quickly with a bag-of-words approach. However, note that it makes several simplifying assumptions (e.g., text is lowercased, and punctuation and single character tokens are removed). Some of these may not be adequate to other tasks. First, we need to obtain the filenames for the reviews in the training set: Once we have acquired the filenames for the training reviews, we need
to read them using the CountVectorizer. In order for the CountVectorizer to open and read the files for us, we make use of the input='filename' constructor parameter (otherwise it would expect the string content directly). The CountVectorizer provides three methods that will be use-
ful for us: a method called fit() that is used to acquire the vocabulary,
a method transform() that converts the text into the bag-of-words representation, and a method fit_transform() that conveniently acquires the vocabulary and transforms the data in a single step. The resulting object is referred to as a document-term matrix, where each row corre- 6 https://scikitlearn.org/stable/modules/generated/sklearn.feature_ extraction.text.CountVectorizer.html 4.1 Binary Classification 59 sponds to a document, and each column corresponds to a term in the vocabulary. As the output above indicates, the resulting matrix has 25,000 rows (one for each review), and 74,849 columns (one for each term). Also you may note that this matrix is sparse, with 3,445,861 stored elements. A regular matrix of shape 25,000×74,849 would have 1,871,225,000 elements. However, most of the elements in a document-term matrix are zeros because only a few words from the vocabulary appear in each document. A sparse matrix takes advantage of this fact by storing only the non-zero cells in order to reduce the memory required to store it. Thus, sparse matrices are convenient, especially when dealing with lots of data. Nevertheless, to simplify the downstream code in this example, we will convert it into a dense matrix, i.e., a regular two-dimensional NumPy array. Finally, we also need the labels of the reviews. We assign a label of one to positive reviews, and a label of zero to negative ones. Note that the first half of the reviews are positive and the second half are negative. The label at the ith position of the y_train array corresponds to the review encoded in the ith row of the X_train matrix. 4.1.3 Perceptron Now that we have defined our task and the data processing pipeline, we will implement a perceptron classifier that classifies the movie reviews as positive or negative. The entire code discussed in this section is available in the chap4_perceptron notebook. Recall from Section 2.4 that the perceptron is composed of a weight vector w and a bias term b. These will be represented as a NumPy array w of the same length as our document vectors, and a variable b for the bias term. Both will be initialized with zeros. The parameters w and b are learned through the following algorithm, which implements Algorithm 2 from Chapter 2: There are a couple of details to point out. Line 3 of Algorithm 2 indicates that we need to repeat the training loop until convergence. Theoretically, convergence is defined as predicting all training examples correctly. This is an ambitious requirement, which is not always possible in practice, so in this code we also include a stop condition if we reach a maximum number of epochs. Another crucial difference between our implementation here and the theoretical Algorithm 2, is that we randomize the order in which the training examples are seen at the beginning of 60 Implementing Text Classification Using Perceptron and LR each epoch. This simple (but highly recommended!) change is necessary to avoid the introduction of spurious biases due to the arbitrary order of the examples in the original training partition.7 We accomplish this by storing the indices corresponding to the X_train matrix rows in a NumPy array, and shuffling these indices at the beginning of each epoch. We shuffle the indices instead of the examples so that we can preserve the mapping between examples and labels. The training loop aligns closely with Algorithm 2. 
We start by iterating over each example in our training data, storing the current example in the variable x,8 and its corresponding label in the variable y_true. Next, we compute the perceptron decision function shown in Algorithm 1. Note that NumPy (as well as PyTorch) uses Python’s @ operator to indicate vector or matrix multiplication, depending on its operand types. Here we use it to calculate the dot product of the example x and the weights w. To this we add the bias b to obtain the predicted score, whose sign is used to assign a positive or negative predicted label. If the prediction is correct, then no update is needed, and we can move on to the next training example. However, if the prediction is incorrect, then we need to adjust w and b, as described in Algorithm 2. Sidebar 4.1 The tqdm function This is our first exposure to the tqdm function. tqdm is a progress bar that “make your loops show a smart progress meter.”9 The name tqdm comes from the Arabic word taqaddum which can mean “progress.” Using tqdm is as simple as wrapping it around the collection to be traversed. After training, we evaluate the model’s performance on the heldout test partition. The test data is loaded similarly to the training partition, but with one notable difference; we use CountVectorizer’s transform() method instead of the fit_transform() method so that the vocabulary is not adjusted for the test data. We won’t show here the loading of the test partition since it is so similar to the code already shown, but it is available in the Jupyter notebook that accompanies this section. . 7   As an extreme example, consider a dataset where all the positive examples appear first in the training partition. This would cause the perceptron to artificially inflate the weights of the features that occur in these examples, a situation from which the learning algorithm may struggle to recover. 
 . 8  We use typewriter font when we discuss variables in the code, to distinguish code from the theoretical discussion in the other chapters. 
 9 https://github.com/tqdm/tqdm 4.1 Binary Classification 61 Using the model to assign labels to all the test data is easily done in one step – we simply multiply the entire test data document-term matrix by the previously learned weights and add the bias. Scores greater than zero indicate a positive review, and those less than zero are negative. At this point we can evaluate the classifier’s performance, which we will do using precision, recall, and F1 scores for binary classification (described in Section 2.3). For this purpose, we implement a function called binary_classification_report that computes these metrics and returns them as a dictionary: We call this function to compare the predicted labels to the true labels, and obtain the evaluation scores. Our F1 score here is 86.8%, which is much higher than the baseline that assigns labels randomly, which yields an F1 score of about 50%. This is a good result, especially considering the simplicity of the perceptron! In the next sections and chapters, we will discuss a battery of strategies to considerably improve this performance. 4.1.4 Binary Logistic Regression from Scratch Using the same task, dataset, and evaluation, we will now implement a logistic regression classifier, as described in Algorithm 5 from Chapter 3. To give the reader hands-on experience with the implementation of the gradient calculations for logistic regression, we start by implementing it from scratch using NumPy. All the code shown in this section is available in the chap4_logistic_regression_numpy notebook. In the perceptron implementation, we represented the weights and the bias as two different variables. Here, however, we will use a different approach that will allow us to unify them into a single vector variable. Specifically, we take advantage of the similarity between the derivative of the cost function with respect to the weights (Equation 3.14) and the derivative of the cost with respect to the bias (Equation 3.15). d Ci(w, b) = (σi − yi)xij (3.14 revisited) dwj d Ci(w, b) = σi − yi (3.15 revisited) db Note that the two derivative formulas are identical except that the former has a multiplication by xij, while the latter does not. However, 62 Implementing Text Classification Using Perceptron and LR since σi − yi = (σi − yi)1 we can multiply the derivative of the cost with respect to the bias by one without changing the semantics. This gives an opportunity for combining the computations, doing them both in a single pass. The idea is that we can treat the bias as a weight corresponding to a feature that always has a value of one. As can be seen above, we created a NumPy array of ones of the same length as the number of examples in our training set (i.e., the number of rows in the data matrix). Then we add this array as a new column to the data matrix, using NumPy’s column_stack function. Next, we need to initialize our model. This time we will use a single NumPy array w of the same length as the number of columns in the data matrix. The weight vector w is initialized randomly with values between 0 and 1: Before implementing the learning algorithm, we need an implementation of the logistic function. 
Recall that the logistic function is σ(x) = 1 (3.1 revisited) 1+e−x This function can be easily implemented in NumPy as follows: However, this naive implementation may produce the following warning during training: The term overflow indicates that the result of evaluating exp(-x) is a number so large that it can’t be represented by a float (specifically, we’re using float64 numbers). We will avoid this issue by not calling exp with values that will overflow. NumPy provides the function finfo that can be consulted to find the limits of floating point numbers: The log of the largest floating point number is the largest number for which exp() will not overflow, so we will use it as a threshold to filter out problematic values: We now have everything we need to implement Algorithm 4. The steps to follow for each example are: (1) use the model to make a prediction, (2) calculate the gradient of the loss function with respect to the model parameters, and (3) update the model parameters using the gradient. The size of the update is controlled by the learning rate. Once the model has been trained, we evaluate it on the test dataset using our binary_classification_report function from the previous section. Loading and preprocessing the test dataset follows the same 4.1 Binary Classification 63 steps as with the previous classifier. We omit the code for brevity. These are the results: The performance is comparable with that of the perceptron. The difference in F1 scores between the two classifiers (84.9% here vs. 86.8% for the perceptron) is not significant. Classifier parity is probably attributable to the fact that the signal distinguishing the two classes being easy to learn and the simpler perceptron training algorithm being sufficient in this case. Nevertheless, this task is useful in showing how to implement the logistic regression model from scratch, i.e., by implementing the gradient calculation and parameter updates manually. Next, we will implement the same model again using PyTorch, highlighting how this machine learning library simplifies the process. 4.1.5 Binary Logistic Regression Utilizing PyTorch While it is fairly straightforward to compute the derivatives for logistic regression and implement then directly in NumPy, this will not scale well to arbitrary neural architectures. Fortunately, there are libraries that automate the computation of the derivatives of the cost function (assuming it is differentiable!) for any neural network, and use the resulting gradients to perform gradient descent or other more sophisticated optimization procedures. To this end, we will use the PyTorch deep learning library10. The corresponding notebook for this section is chap4_logistic_regression_pytorch_bce. Our model for logistic regression corresponds to PyTorch’s Linear layer. When we instantiate this layer, we specify the size of the inputs (the size of our vocabulary) and the size of the output, i.e., the number of output neurons (which is one because we’re doing binary classification). The loss function we use is the binary cross-entropy loss (see Chapter 3), which is implemented as BCEWithLogitsLoss in PyTorch. In PyTorch, the gradients obtained from the loss function are applied to the model by an optimizer object, which implements and applies an optimization algorithm. Here we will use the vanilla stochastic gradient descent optimizer; we set its learning rate to 0.1. This is equivalent to the discussion in Section 3.2. 
Similarly to the manual implementation, the steps required to train the model for a given training example are: (1) ensure the gradients are set to zeros, (2) apply the model to obtain a prediction, (3) calculate 10 https://pytorch.org/ 64 Implementing Text Classification Using Perceptron and LR the loss, (4) compute the gradient of the loss by back-propagation, and (5) update the model parameters. Recall that in our previous implementation everything was hardcoded: applying the model, computing the gradients, and optimizing the model parameters. Here, however, the implementation of the logistic regression is expressed at a higher level of abstraction. This means that we are describing the logical steps without specifying a particular implementation. Instead, implementation details are the responsability of the chosen model, loss function, and optimizer. Thus, we could even choose a different model, loss function, and/or optimizer, and use the same training steps with little or no modification. This decoupling of the training logic from the implementation details is one of the main advantages of libraries such as PyTorch. As shown in the code above, calling the model as a function, with the feature vectors as inputs, produces the predicted scores. Once again, a positive score corresponds to a positive label. When we evaluate this implementation on the test dataset, we obtain results that are in line with our previous models: Writing the perceptron and the logistic regression from scratch is a good exercise, as it exposes us to the fundamentals of implementing machine learning algorithms. However, this becomes cumbersome for more complex neural architectures. For this reason, from this point on, we will use PyTorch for all our coding examples. 4.2 Multiclass Classification So far, in this chapter we have discussed implementing binary classifiers. Next, we will modify these binary classifiers to perform multiclass classification, following the discussion in Section 3.5. 4.2.1 AG News Dataset Before explaining the actual training/testing code, we have to choose a new dataset that is suitable for multiclass classification. To this end, we will use the AG News Classification Dataset (Zhang et al., 2015), a subset of the larger AG corpus of news articles collected from thousands of different news sources.11 The classification dataset consists of four 11 http://groups.di.unipi.it/~gulli/AG_corpus_of_news_articles.html 4.2 Multiclass Classification 65 classes, and the data is equally balanced across all classes (30,000 articles per class for train, and 1,900 articles per class for testing). The goal of the task is to classify each article as one of the four classes: World, Sports, Business, or Sci/Tech. 4.2.2 Preparing the Dataset The AG News Dataset is distributed as two CSV files (one for training and one for testing), each containing three columns: the class index, the title, and the description. The dataset also provides a text file that maps the above class indexes to more descriptive class labels. Because of the tabular nature of the dataset, pandas, a Python library
for tabular data analysis,12 is a natural choice for loading and transform-
ing it. To this end, our Jupyter notebook (chap4_multiclass_logistic_regression) demonstrates the sequence of steps required to handle the data, as well
as model training and evaluation. First, we show how to load the CSV,
add column names, and inspect the result: class index . 0  3 
 . 1  3 
 . 2  3 
 . 3  3 
 . 4  3 
 ... ... . 119995  1 
 . 119996  2 
 . 119997  2 
 . 119998  2 
 . 119999  2 
 title Wall St. Bears Claw Back Into the Black (Reuters) Carlyle Looks Toward Commercial Aerospace (Reu... Oil and Economy Cloud Stocks' Outlook (Reuters) Iraq Halts Oil Exports from Main Southern Pipe... Oil prices soar to all-time record, posing new... ... Pakistan's Musharraf Says Won't Quit as Army C... Renteria signing a top-shelf deal Saban not going to Dolphins yet Today's NFL games Nets get Carter from Raptors description Reuters - Short-sellers, Wall Street's dwindli... Reuters - Private investment firm Carlyle Grou... Reuters - Soaring crude prices plus worries\ab... Reuters - Authorities have halted oil export\f... AFP - Tearaway world oil prices, toppling reco... ... KARACHI (Reuters) - Pakistani President Perve... Red Sox general manager Theo Epstein acknowled... The Miami Dolphins will put their courtship of... PITTSBURGH at NY GIANTS Time: 1:30 p.m. Line: ... INDIANAPOLIS -- All-Star Vince Carter was trad... 120000 rows × 3 columns Since the class labels themselves are in a separate file, we manually add them to the pandas data structure (called dataframe in pandas’ terminology) to increase the interpretability of the data. We use the class index column as a starting point, and use its map method to create a new column with the corresponding labels (technically a new Series object) that is added to the dataframe using its insert method, which allows us to insert the column in a specific position. Note that the label indices are one-based, so we subtract one to align them with their labels. 12 https://pandas.pydata.org 66 Implementing Text Classification Using Perceptron and LR class index . 0  3 
 . 1  3 
 . 2  3 
 . 3  3 
 . 4  3 
 ... ... . 119995  1 
 . 119996  2 
 . 119997  2 
 . 119998  2 
 . 119999  2 
 class Business Business Business Business Business ... World Sports Sports Sports Sports title Wall St. Bears Claw Back Into the Black (Reuters) Oil and Economy Cloud Stocks' Outlook (Reuters) Oil prices soar to all-time record, posing new... Pakistan's Musharraf Says Won't Quit as Army C... Saban not going to Dolphins yet Nets get Carter from Raptors description Reuters - Short-sellers, Wall Street's dwindli... Reuters - Soaring crude prices plus worries\ab... AFP - Tearaway world oil prices, toppling reco... KARACHI (Reuters) - Pakistani President Perve... The Miami Dolphins will put their courtship of... INDIANAPOLIS -- All-Star Vince Carter was trad... Iraq Halts Oil Exports from Main Southern Pipe... Reuters - Authorities have halted oil export\f... ... ... Renteria signing a top-shelf deal Red Sox general manager Theo Epstein acknowled... 120000 rows × 4 columns Carlyle Looks Toward Commercial Aerospace (Reu... Reuters - Private investment firm Carlyle Grou... Today's NFL games PITTSBURGH at NY GIANTS Time: 1:30 p.m. Line: ... Next we will preprocess the text. First we lowercase the title and description, and then we concatenate them into a single string. Then we remove some spurious backslashes from the text. Once this is done, the preprocessed text is added to the dataframe as a new column. Note that pandas allows these steps to be applied to all rows simultaneously. class index class title Wall St. Bears Claw Back Into the Black (Reuters) Oil and Economy Cloud Stocks' Outlook (Reuters) Oil prices soar to all-time record, posing new... ... Pakistan's Musharraf Says Won't Quit as Army C... Saban not going to Dolphins yet Nets get Carter from Raptors description Reuters - Short-sellers, Wall Street's dwindli... Reuters - Soaring crude prices plus worries\ab... AFP - Tearaway world oil prices, toppling reco... ... KARACHI (Reuters) - Pakistani President Perve... The Miami Dolphins will put their courtship of... INDIANAPOLIS -- All-Star Vince Carter was trad... text wall st. bears claw back into the black (reute... oil and economy cloud stocks' outlook (reuters... oil prices soar to all-time record, posing new... ... pakistan's musharraf says won't quit as army c... saban not going to dolphins yet the miami dolp... nets get carter from raptors indianapolis -- a... . 0  3 Business 
 . 1  3 Business 
 . 2  3 Business 
 . 3  3 Business 
 . 4  3 Business 
 ... ... ... . 119995  1 World 
 . 119996  2 Sports 
 . 119997  2 Sports 
 . 119998  2 Sports 
 . 119999  2 Sports 
 120000 rows × 5 columns Carlyle Looks Toward Commercial Reuters - Private investment firm Carlyle carlyle looks toward commercial Aerospace (Reu... Grou... aerospace (reu... Iraq Halts Oil Exports from Main Southern Pipe... Reuters - Authorities have halted oil export\f... iraq halts oil exports from main southern pipe... Renteria signing a top-shelf deal Red Sox general manager Theo Epstein renteria signing a top-shelf deal red sox acknowled... gene... Today's NFL games PITTSBURGH at NY GIANTS Time: 1:30 p.m. today's nfl games pittsburgh at ny giants Line: ... time... At this point, the text is ready to be tokenized. For this purpose we will use NLTK’s word_tokenize function. This function can be applied to the whole column at once using the pandas map function, which returns a new column which we add to the dataframe. However, here we actually use the progress_map function, which provides a visual progress bar. This visual feedback is especially helpful for tasks that take more time to complete. 4.2 Multiclass Classification 67 class index . 0  3 
 . 1  3 
 . 2  3 
 . 3  3 
 . 4  3 
 ... ... . 119995  1 
 . 119996  2 
 . 119997  2 
 . 119998  2 
 . 119999  2 
 class Business Business Business Business Business ... World Sports Sports Sports Sports title Wall St. Bears Claw Back Into the Black (Reuters) Oil and Economy Cloud Stocks' Outlook (Reuters) Oil prices soar to all-time record, posing new... ... Pakistan's Musharraf Says Won't Quit as Army C... Saban not going to Dolphins yet Nets get Carter from Raptors description Reuters - Short-sellers, Wall Street's dwindli... Reuters - Soaring crude prices plus worries\ab... AFP - Tearaway world oil prices, toppling reco... ... KARACHI (Reuters) - Pakistani President Perve... The Miami Dolphins will put their courtship of... INDIANAPOLIS -- All-Star Vince Carter was trad... text wall st. bears claw back into the black (reute... oil and economy cloud stocks' outlook (reuters... oil prices soar to all-time record, posing new... ... pakistan's musharraf says won't quit as army c... saban not going to dolphins yet the miami dolp... nets get carter from raptors indianapolis -- a... tokens [wall, st., bears, claw, back, into, the, blac... [oil, and, economy, cloud, stocks, ', outlook,... [oil, prices, soar, to, all-time, record, ,, p... ... [pakistan, 's, musharraf, says, wo, n't, quit,... [saban, not, going, to, dolphins, yet, the, mi... [nets, get, carter, from, raptors, indianapoli... 120000 rows × 6 columns Carlyle Looks Toward Commercial Reuters - Private investment firm carlyle looks toward commercial [carlyle, looks, toward, Aerospace (Reu... Carlyle Grou... aerospace (reu... commercial, aerospace... Iraq Halts Oil Exports from Main Reuters - Authorities have halted iraq halts oil exports from main [iraq, halts, oil, exports, from, Southern Pipe... oil export\f... southern pipe... main, southe... Renteria signing a top-shelf deal Red Sox general manager Theo renteria signing a top-shelf deal [renteria, signing, a, top-shelf, Epstein acknowled... red sox gene... deal, red, s... Today's NFL games PITTSBURGH at NY GIANTS today's nfl games pittsburgh at [today, 's, nfl, games, Time: 1:30 p.m. Line: ... ny giants time... pittsburgh, at, ny, gi... From the tokens we just created, we then create a vocabulary for our corpus. Here, we only keep the words that occur at least 10 times, decreasing the memory needed and reducing the likelihood that our vocabulary contains noisy tokens. Note that each row in the tokens column contains a list of tokens. In order to create the vocabulary, we will need to convert the Series of lists of tokens into a Series of tokens using the explode() Pandas method. Then we will use the value_counts() method to create a Series object in which the index are the tokens and the values are the number of times they appear in the corpus. The next step is removing the tokens with a count lower than our chosen threshold. Finally, we create a list with the remaining tokens, as well as a dictionary that maps tokens to token ids (i.e., the index of the token in the list). We include in the vocabulary a special token [UNK] that will be used as a placeholder for tokens that do not appear in our vocabulary after the frequency pruning. Using this vocabulary, we construct a feature vector for each news article in the corpus. This feature vector will be encoded as a dictionary, with keys corresponding to token ids, and values corresponding to the number of times the token appears in the article. As above, the feature vectors will be stored as a new column in the dataframe. 68 Implementing Text Classification Using Perceptron and LR class index class title Wall St. 
Bears Claw Back Into the Black (Reuters) Oil and Economy Cloud Stocks' Outlook (Reuters) Oil prices soar to all-time record, posing new... ... Pakistan's Musharraf Says Won't Quit as Army C... Saban not going to Dolphins yet Nets get Carter from Raptors description Reuters - Short-sellers, Wall Street's dwindli... Reuters - Soaring crude prices plus worries\ab... AFP - Tearaway world oil prices, toppling reco... ... KARACHI (Reuters) - Pakistani President Perve... The Miami Dolphins will put their courtship of... INDIANAPOLIS -- All-Star Vince Carter was trad... text wall st. bears claw back into the black (reute... oil and economy cloud stocks' outlook (reuters... oil prices soar to all-time record, posing new... ... pakistan's musharraf says won't quit as army c... saban not going to dolphins yet the miami dolp... nets get carter from raptors indianapolis -- a... tokens [wall, st., bears, claw, back, into, the, blac... [oil, and, economy, cloud, stocks, ', outlook,... [oil, prices, soar, to, alltime, record, ,, p... ... [pakistan, 's, musharraf, says, wo, n't, quit,... [saban, not, going, to, dolphins, yet, the, mi... [nets, get, carter, from, raptors, indianapoli... features {427: 2, 563: 1, 1607: 1, 15062: 1, 120: 1, 73... {66: 1, 9: 2, 351: 2, 4565: 1, 158: 1, 116: 1,... {66: 2, 99: 2, 4390: 1, 4: 2, 3595: 1, 149: 1,... ... {383: 1, 23: 1, 1626: 2, 91: 1, 1809: 1, 285: ... {7762: 2, 68: 1, 661: 1, 4: 2, 1439: 2, 703: 1... {2170: 2, 226: 1, 2402: 2, 32: 1, 2995: 2, 219... . 0  3 Business 
 . 1  3 Business 
 . 2  3 Business 
 . 3  3 Business 
 . 4  3 Business 
 ... ... ... . 119995  1 World 
 . 119996  2 Sports 
 . 119997  2 Sports 
 . 119998  2 Sports 
 . 119999  2 Sports 
 120000 rows × 7 columns Carlyle Looks Toward Commercial Aerospace (Reu... Reuters - Private investment firm Carlyle Grou... carlyle looks toward commercial aerospace (reu... Iraq Halts Oil Exports from Reuters - Authorities have iraq halts oil exports from Main Southern Pipe... halted oil export\f... main southern pipe... Renteria signing a top-shelf Red Sox general manager renteria signing a topdeal Theo Epstein acknowled... shelf deal red sox gene... PITTSBURGH at NY Today's NFL games GIANTS Time: 1:30 p.m. Line: ... today's nfl games pittsburgh at ny giants time... [carlyle, looks, toward, {15999: 2, 1076: 1, 855: commercial, aerospace... 1, 1286: 1, 4251: 1, ... [iraq, halts, oil, exports, {77: 2, 7380: 1, 66: 3, from, main, southe... 1787: 1, 32: 2, 900: 2... [renteria, signing, a, top- {8428: 2, 2638: 1, 5: 4, shelf, deal, red, s... 0: 3, 127: 1, 202: 3,... [today, 's, nfl, games, {106: 1, 23: 1, 729: 1, pittsburgh, at, ny, gi... 225: 1, 1586: 1, 22: 1... The final preprocessing step is converting the features and the class indices into PyTorch tensors. Recall that we need to subtract one from the class indices to make them zero-based. At this point, the data is fully processed and we are ready to begin training. 4.2.3 Multiclass Logistic Regression Using PyTorch The model itself is a single linear layer whose input size corresponds to the size of our vocabulary, and its output size corresponds to the number of classes in our corpus. PyTorch’s Linear layer includes a bias by default, so there is no need to handle that manually the way we did for our perceptron example. The code for training this model (which implements Algorithm 6) is almost identical to that of the binary logistic repression. However, since we have to calculate a score for each of the four different classes, we need to replace the previous BCEWithLogitsLoss with CrossEntropyLoss, which applies a softmax over the scores to obtain probabilities for each class. For each example, the model predicts 4 scores – one for each label. The label with the highest score is selected using the argmax function. We evaluate the predictions of our model for each class using Scikitlearn’s classification_report, which handles the results of multiclass classification. 4.3 Summary 69 4.3 Summary In this chapter, we used movie review and news article classification to illustrate the implementation of the previously described algorithms for the binary perceptron, binary logistic regression, and multiclass logistic regression. For the binary logistic regression, we made a direct comparison between the lower-level NumPy implementation and a higher-level version that made use of PyTorch. We hope that through this series of exercises the reader has noted several key takeaways. First, data preparation is important and should be done thoughtfully. Certain tasks (e.g., text normalization or sentence splitting) are going to be frequently needed if you continue with NLP, so using or creating generic functions can be very helpful. However, what works for one dataset and one language may not be suitable for another scenario. For example, in our case, we selected different tokenizers for each of our tasks to account for the different registers of English, as well as removing diacritics during normalization. Second, when it comes to implementing machine learning algorithms, it is often easier to use a higher-level library such as PyTorch instead of NumPy. 
For example, with the former, the gradients are calculated by the library, whereas in NumPy we have to code them ourselves. This becomes cumbersome quickly. For example, even the derivative of the softmax is non-trivial. Third, PyTorch imposes a training structure that remains largely the same, regardless of what models are being trained. That is, at a high level, the same steps are always required: clearing the current gradients, predicting output scores for the provided inputs, calculating the loss, and optimizing. These features make PyTorch a very powerful and convenient deep learning library; we will continue to use it throughout the remainder of the book to implement more complex neural architectures.
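The evaluation step mentioned in Section 4.2.3 above could look roughly like the following sketch; the names model, X_test, y_test, and label_names are assumptions made for illustration rather than the notebook's exact code:

import torch
from sklearn.metrics import classification_report

label_names = ['World', 'Sports', 'Business', 'Sci/Tech']

# predict four scores per test example and keep the highest-scoring class
with torch.no_grad():
    scores = model(X_test)                 # shape: (n_examples, 4)
    y_pred = torch.argmax(scores, dim=1)

# per-class precision, recall, and F1 for the multiclass predictions
print(classification_report(y_test.numpy(), y_pred.numpy(),
                            target_names=label_names))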
16,652
16,761
#!/usr/bin/env python # coding: utf-8 # # Binary Text Classification with # # Logistic Regression Implemented from Scratch # In[1]: import random import numpy as np from tqdm.notebook import tqdm # set this variable to a number to be used as the random seed # or to None if you don't want to set a random seed seed = 1234 if seed is not None: random.seed(seed) np.random.seed(seed) # The dataset is divided in two directories called `train` and `test`. # These directories contain the training and testing splits of the dataset. # In[2]: get_ipython().system('ls -lh data/aclImdb/') # Both the `train` and `test` directories contain two directories called `pos` and `neg` that contain text files with the positive and negative reviews, respectively. # In[3]: get_ipython().system('ls -lh data/aclImdb/train/') # We will now read the filenames of the positive and negative examples. # In[4]: from glob import glob pos_files = glob('data/aclImdb/train/pos/*.txt') neg_files = glob('data/aclImdb/train/neg/*.txt') print('number of positive reviews:', len(pos_files)) print('number of negative reviews:', len(neg_files)) # Now, we will use a [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html) to read the text files, tokenize them, acquire a vocabulary from the training data, and encode it in a document-term matrix in which each row represents a review, and each column represents a term in the vocabulary. Each element $(i,j)$ in the matrix represents the number of times term $j$ appears in example $i$. # In[5]: from sklearn.feature_extraction.text import CountVectorizer # initialize CountVectorizer indicating that we will give it a list of filenames that have to be read cv = CountVectorizer(input='filename') # learn vocabulary and return sparse document-term matrix doc_term_matrix = cv.fit_transform(pos_files + neg_files) doc_term_matrix # Note in the message printed above that the matrix is of shape (25000, 74894). # In other words, it has 1,871,225,000 elements. # However, only 3,445,861 elements were stored. # This is because most of the elements in the matrix are zeros. # The reason is that the reviews are short and most words in the english language don't appear in each review. # A matrix that only stores non-zero values is called *sparse*. # # Now we will convert it to a dense numpy array: # In[6]: X_train = doc_term_matrix.toarray() X_train.shape # In[7]: # Append 1s to the xs; this will allow us to multiply by the weights and # the bias in a single pass. # Make an array with a one for each row/data point ones = np.ones(X_train.shape[0]) # Concatenate these ones to existing feature vectors X_train = np.column_stack((X_train, ones)) X_train.shape # We will also create a numpy array with the binary labels for the reviews. # One indicates a positive review and zero a negative review. # The label `y_train[i]` corresponds to the review encoded in row `i` of the `X_train` matrix. # In[8]: # training labels y_pos = np.ones(len(pos_files)) y_neg = np.zeros(len(neg_files)) y_train = np.concatenate([y_pos, y_neg]) y_train # Now we will initialize our model, in the form of an array of weights `w` of the same size as the number of features in our dataset (i.e., the number of words in the vocabulary acquired by [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html)), and a bias term `b`. # Both are initialized to zeros. 
# In[9]: # initialize model: the feature vector and bias term are populated with zeros n_examples, n_features = X_train.shape w = np.random.random(n_features) # Now we will use the logistic regression learning algorithm to learn the values of `w` and `b` from our training data. # In[10]: # from scipy.special import expit as sigmoid def sigmoid(z): if -z > np.log(np.finfo(float).max): return 0.0 return 1 / (1 + np.exp(-z)) # In[11]: lr = 1e-1 n_epochs = 10 indices = np.arange(n_examples) for epoch in range(10): # randomize the order in which training examples are seen in this epoch np.random.shuffle(indices) # traverse the training data for i in tqdm(indices, desc=f'epoch {epoch+1}'): x = X_train[i] y = y_train[i] # calculate the derivative of the cost function for this batch deriv_cost = (sigmoid(x @ w) - y) * x # update the weights w = w - lr * deriv_cost # The next step is evaluating the model on the test dataset. # Note that this time we use the [`transform()`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html#sklearn.feature_extraction.text.CountVectorizer.transform) method of the [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html), instead of the [`fit_transform()`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html#sklearn.feature_extraction.text.CountVectorizer.fit_transform) method that we used above. This is because we want to use the learned vocabulary in the test set, instead of learning a new one. # In[12]: pos_files = glob('data/aclImdb/test/pos/*.txt') neg_files = glob('data/aclImdb/test/neg/*.txt') doc_term_matrix = cv.transform(pos_files + neg_files) X_test = doc_term_matrix.toarray() X_test = np.column_stack((X_test, np.ones(X_test.shape[0]))) y_pos = np.ones(len(pos_files)) y_neg = np.zeros(len(neg_files)) y_test = np.concatenate([y_pos, y_neg]) # Using the model is easy: multiply the document-term matrix by the learned weights and add the bias. # We use Python's `@` operator to perform the matrix-vector multiplication. # In[13]: y_pred = X_test @ w > 0 # Now we print an evaluation of the prediction results using scikit-learn's [`classification_report()`](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.classification_report.html) function. # In[14]: def binary_classification_report(y_true, y_pred): # count true positives, false positives, true negatives, and false negatives tp = fp = tn = fn = 0 for gold, pred in zip(y_true, y_pred): if pred == True: if gold == True: tp += 1 else: fp += 1 else: if gold == False: tn += 1 else: fn += 1 # calculate precision and recall precision = tp / (tp + fp) recall = tp / (tp + fn) # calculate f1 score fscore = 2 * precision * recall / (precision + recall) # calculate accuracy accuracy = (tp + tn) / len(y_true) # number of positive labels in y_true support = sum(y_true) return { "precision": precision, "recall": recall, "f1-score": fscore, "support": support, "accuracy": accuracy, } # In[15]: binary_classification_report(y_test, y_pred)
6,061
6,111
22
chap04-23
chap04-23
4 Implementing Text Classification Using Perceptron and Logistic Regression In the previous chapters we have discussed the theory behind the perceptron and logistic regression, including mathematical explanations of how and why they are able to learn from examples. In this chapter we will transition from math to code. Specifically, we will discuss how to implement these models in the Python programming language. All the code that we will introduce throughout this book is available online as well: http://clulab.github.io/gentlenlp/. The reader who is not familiar with the Python programming language is encouraged to read first Appendix A, for a brief introduction to the language, and Appendix B, for a discussion on how computers encode and preprocess text. Once done, please return here. To get a better understanding of how these algorithms work under the hood, we will start by implementing them from scratch. However, as the book progresses, we will introduce some of the popular tools and libraries that make Python the language of choice for machine learning, e.g., PyTorch,1 and Hugging Face’s transformers.2 The code for all the examples in the book is provided in the form of Jupyter notebooks.3 Important fragments of these notebooks will be presented in the implementation chapters so that the reader has the whole picture just by reading the book. However, we strongly encourage you to download the notebooks and execute them yourself. We also encourage you to modify them to conduct your own experiments! 1 https://pytorch.org
2 https://huggingface.co 3 https://jupyter.org/ 55 56 Implementing Text Classification Using Perceptron and LR 4.1 Binary Classification We begin this chapter with binary classification. That is, we aim to train classifiers that assign one of two labels to a given text. As the example for this task, we will train a review classifier using the the Large Movie Review Dataset (Maas et al., 2011).4 We tackle this task by implementing first a binary perceptron classifier, followed by a binary logistic regression one. We will implement the latter both from scratch as well as using PyTorch, so the reader has a clearer understanding on how PyTorch works “under the hood.” 4.1.1 Large Movie Review Dataset This dataset contains movie reviews and their associated scores (between 1 and 10) as provided by IMDb.5 converted these scores to binary labels by assigning each review a positive or negative label if the review score was above 6 or below 5, respectively. Reviews with scores 5 and 6 were considered too neutral and thus excluded. We follow the same protocol in this chapter. The dataset is divided in two even partitions called train and test, each containing 25,000 reviews. The dataset also provides additional unlabeled reviews, but we will not use those here. Each partition contains two directories called pos and neg where the positive and negative examples are stored. Each review is stored in an independent text file, whose name is composed of an id unique to the partition and the score associated with the review, separated by an underscore. An example of a positive and a negative review is shown in Table 4.1. 4.1.2 Bag-of-words Model As discussed in Section 2.2, we will encode the text to classify as a bag of words. That is, we encode each review as a list of numbers, with each position in the list corresponding to a word in our vocabulary, and the value stored in that position corresponding to the number of times the word appears in the review. For example, say we want to encode the following two reviews: 4 https://ai.stanford.edu/~amaas/data/sentiment/ 5 https://www.imdb.com/ Maas et al. 4.1 Binary Classification 57 Table 4.1 Two examples of movie reviews from IMDb. The first is a positive review of the movie Puss in Boots (1988). The second is a negative review of the movie Valentine (2001). These reviews can be found at https://www.imdb.com/review/rw0606396/ and https://www.imdb.com/review/rw0721861/, respectively. Filename Score Binary Label train/pos/24_8.txt 8/10 Positive train/neg/141_3.txt 3/10 Negative Review Text Although this was obviously a low-budget production, the performances and the songs in this movie are worth seeing. One of Walken’s few musical roles to date. (he is a marvelous dancer and singer and he demonstrates his acrobatic skills as well - watch for the cartwheel!) Also starring Jason Connery. A great children’s story and very likable characters. This stalk and slash turkey manages to bring nothing new to an increasingly stale genre. A masked killer stalks young, pert girls and slaughters them in a variety of gruesome ways, none of which are particularly inventive. It’s not scary, it’s not clever, and it’s not funny. So what was the point of it? Review 1: Review 2: "I liked the movie. My friend liked it too. " "I hated it. Would not recommend. " First, we need to create a vocabulary that maps each word to an id that uniquely identifies it. 
Each of these numbers will be used as the index in a list, so they must start at zero and grow by one for each word in the vocabulary. For example, one possible vocabulary that encodes the previous reviews is: {'would': 0, 'hated': 1, 58 Implementing Text Classification Using Perceptron and LR 'my': 2, 'liked': 3, 'not': 4, 'it': 5, 'movie': 6, 'recommend': 7, 'the': 8, 'I': 9, 'too': 10, 'friend': 11} Using this mapping, we can encode the two reviews as follows: Review1: [0,0,1,2,0,1,1,0,1,1,1,1] Review2: [1,1,0,0,1,1,0,1,0,1,0,0] Note that the word liked (fourth position) in the first review has a value of two. This is because this word appears twice in that review. This is a small example with a vocabulary of only 12 terms. Of course, the same process needs to be implemented for our whole training dataset. For this purpose we will use scikit-learn’s CountVectorizer class.6 Using the CountVectorizer class simplifies things, allowing us to get started quickly with a bag-of-words approach. However, note that it makes several simplifying assumptions (e.g., text is lowercased, and punctuation and single character tokens are removed). Some of these may not be adequate to other tasks. First, we need to obtain the filenames for the reviews in the training set: Once we have acquired the filenames for the training reviews, we need
to read them using the CountVectorizer. In order for the CountVectorizer to open and read the files for us, we make use of the input='filename' constructor parameter (otherwise it would expect the string content directly). The CountVectorizer provides three methods that will be use-
ful for us: a method called fit() that is used to acquire the vocabulary,
a method transform() that converts the text into the bag-of-words representation, and a method fit_transform() that conveniently acquires the vocabulary and transforms the data in a single step. The resulting object is referred to as a document-term matrix, where each row corre- 6 https://scikitlearn.org/stable/modules/generated/sklearn.feature_ extraction.text.CountVectorizer.html 4.1 Binary Classification 59 sponds to a document, and each column corresponds to a term in the vocabulary. As the output above indicates, the resulting matrix has 25,000 rows (one for each review), and 74,849 columns (one for each term). Also you may note that this matrix is sparse, with 3,445,861 stored elements. A regular matrix of shape 25,000×74,849 would have 1,871,225,000 elements. However, most of the elements in a document-term matrix are zeros because only a few words from the vocabulary appear in each document. A sparse matrix takes advantage of this fact by storing only the non-zero cells in order to reduce the memory required to store it. Thus, sparse matrices are convenient, especially when dealing with lots of data. Nevertheless, to simplify the downstream code in this example, we will convert it into a dense matrix, i.e., a regular two-dimensional NumPy array. Finally, we also need the labels of the reviews. We assign a label of one to positive reviews, and a label of zero to negative ones. Note that the first half of the reviews are positive and the second half are negative. The label at the ith position of the y_train array corresponds to the review encoded in the ith row of the X_train matrix. 4.1.3 Perceptron Now that we have defined our task and the data processing pipeline, we will implement a perceptron classifier that classifies the movie reviews as positive or negative. The entire code discussed in this section is available in the chap4_perceptron notebook. Recall from Section 2.4 that the perceptron is composed of a weight vector w and a bias term b. These will be represented as a NumPy array w of the same length as our document vectors, and a variable b for the bias term. Both will be initialized with zeros. The parameters w and b are learned through the following algorithm, which implements Algorithm 2 from Chapter 2: There are a couple of details to point out. Line 3 of Algorithm 2 indicates that we need to repeat the training loop until convergence. Theoretically, convergence is defined as predicting all training examples correctly. This is an ambitious requirement, which is not always possible in practice, so in this code we also include a stop condition if we reach a maximum number of epochs. Another crucial difference between our implementation here and the theoretical Algorithm 2, is that we randomize the order in which the training examples are seen at the beginning of 60 Implementing Text Classification Using Perceptron and LR each epoch. This simple (but highly recommended!) change is necessary to avoid the introduction of spurious biases due to the arbitrary order of the examples in the original training partition.7 We accomplish this by storing the indices corresponding to the X_train matrix rows in a NumPy array, and shuffling these indices at the beginning of each epoch. We shuffle the indices instead of the examples so that we can preserve the mapping between examples and labels. The training loop aligns closely with Algorithm 2. 
We start by iterating over each example in our training data, storing the current example in the variable x,8 and its corresponding label in the variable y_true. Next, we compute the perceptron decision function shown in Algorithm 1. Note that NumPy (as well as PyTorch) uses Python’s @ operator to indicate vector or matrix multiplication, depending on its operand types. Here we use it to calculate the dot product of the example x and the weights w. To this we add the bias b to obtain the predicted score, whose sign is used to assign a positive or negative predicted label. If the prediction is correct, then no update is needed, and we can move on to the next training example. However, if the prediction is incorrect, then we need to adjust w and b, as described in Algorithm 2. Sidebar 4.1 The tqdm function This is our first exposure to the tqdm function. tqdm is a progress bar that “make your loops show a smart progress meter.”9 The name tqdm comes from the Arabic word taqaddum which can mean “progress.” Using tqdm is as simple as wrapping it around the collection to be traversed. After training, we evaluate the model’s performance on the heldout test partition. The test data is loaded similarly to the training partition, but with one notable difference; we use CountVectorizer’s transform() method instead of the fit_transform() method so that the vocabulary is not adjusted for the test data. We won’t show here the loading of the test partition since it is so similar to the code already shown, but it is available in the Jupyter notebook that accompanies this section. . 7   As an extreme example, consider a dataset where all the positive examples appear first in the training partition. This would cause the perceptron to artificially inflate the weights of the features that occur in these examples, a situation from which the learning algorithm may struggle to recover. 
 . 8  We use typewriter font when we discuss variables in the code, to distinguish code from the theoretical discussion in the other chapters. 
 9 https://github.com/tqdm/tqdm 4.1 Binary Classification 61 Using the model to assign labels to all the test data is easily done in one step – we simply multiply the entire test data document-term matrix by the previously learned weights and add the bias. Scores greater than zero indicate a positive review, and those less than zero are negative. At this point we can evaluate the classifier’s performance, which we will do using precision, recall, and F1 scores for binary classification (described in Section 2.3). For this purpose, we implement a function called binary_classification_report that computes these metrics and returns them as a dictionary: We call this function to compare the predicted labels to the true labels, and obtain the evaluation scores. Our F1 score here is 86.8%, which is much higher than the baseline that assigns labels randomly, which yields an F1 score of about 50%. This is a good result, especially considering the simplicity of the perceptron! In the next sections and chapters, we will discuss a battery of strategies to considerably improve this performance. 4.1.4 Binary Logistic Regression from Scratch Using the same task, dataset, and evaluation, we will now implement a logistic regression classifier, as described in Algorithm 5 from Chapter 3. To give the reader hands-on experience with the implementation of the gradient calculations for logistic regression, we start by implementing it from scratch using NumPy. All the code shown in this section is available in the chap4_logistic_regression_numpy notebook. In the perceptron implementation, we represented the weights and the bias as two different variables. Here, however, we will use a different approach that will allow us to unify them into a single vector variable. Specifically, we take advantage of the similarity between the derivative of the cost function with respect to the weights (Equation 3.14) and the derivative of the cost with respect to the bias (Equation 3.15). d Ci(w, b) = (σi − yi)xij (3.14 revisited) dwj d Ci(w, b) = σi − yi (3.15 revisited) db Note that the two derivative formulas are identical except that the former has a multiplication by xij, while the latter does not. However, 62 Implementing Text Classification Using Perceptron and LR since σi − yi = (σi − yi)1 we can multiply the derivative of the cost with respect to the bias by one without changing the semantics. This gives an opportunity for combining the computations, doing them both in a single pass. The idea is that we can treat the bias as a weight corresponding to a feature that always has a value of one. As can be seen above, we created a NumPy array of ones of the same length as the number of examples in our training set (i.e., the number of rows in the data matrix). Then we add this array as a new column to the data matrix, using NumPy’s column_stack function. Next, we need to initialize our model. This time we will use a single NumPy array w of the same length as the number of columns in the data matrix. The weight vector w is initialized randomly with values between 0 and 1: Before implementing the learning algorithm, we need an implementation of the logistic function. 
Recall that the logistic function is σ(x) = 1 / (1 + e^(-x)) (3.1 revisited). This function can be easily implemented in NumPy as follows: However, this naive implementation may produce the following warning during training: The term overflow indicates that the result of evaluating exp(-x) is a number so large that it can't be represented by a float (specifically, we're using float64 numbers). We will avoid this issue by not calling exp with values that will overflow. NumPy provides the function finfo that can be consulted to find the limits of floating point numbers: The log of the largest floating point number is the largest number for which exp() will not overflow, so we will use it as a threshold to filter out problematic values: We now have everything we need to implement Algorithm 4. The steps to follow for each example are: (1) use the model to make a prediction, (2) calculate the gradient of the loss function with respect to the model parameters, and (3) update the model parameters using the gradient. The size of the update is controlled by the learning rate. Once the model has been trained, we evaluate it on the test dataset using our binary_classification_report function from the previous section. Loading and preprocessing the test dataset follows the same steps as with the previous classifier. We omit the code for brevity. These are the results: The performance is comparable with that of the perceptron. The difference in F1 scores between the two classifiers (84.9% here vs. 86.8% for the perceptron) is not significant. Classifier parity is probably attributable to the signal distinguishing the two classes being easy to learn, which makes the simpler perceptron training algorithm sufficient in this case. Nevertheless, this task is useful in showing how to implement the logistic regression model from scratch, i.e., by implementing the gradient calculation and parameter updates manually. Next, we will implement the same model again using PyTorch, highlighting how this machine learning library simplifies the process.
4.1.5 Binary Logistic Regression Utilizing PyTorch
While it is fairly straightforward to compute the derivatives for logistic regression and implement them directly in NumPy, this will not scale well to arbitrary neural architectures. Fortunately, there are libraries that automate the computation of the derivatives of the cost function (assuming it is differentiable!) for any neural network, and use the resulting gradients to perform gradient descent or other more sophisticated optimization procedures. To this end, we will use the PyTorch deep learning library.10 The corresponding notebook for this section is chap4_logistic_regression_pytorch_bce. Our model for logistic regression corresponds to PyTorch's Linear layer. When we instantiate this layer, we specify the size of the inputs (the size of our vocabulary) and the size of the output, i.e., the number of output neurons (which is one because we're doing binary classification). The loss function we use is the binary cross-entropy loss (see Chapter 3), which is implemented as BCEWithLogitsLoss in PyTorch. In PyTorch, the gradients obtained from the loss function are applied to the model by an optimizer object, which implements and applies an optimization algorithm. Here we will use the vanilla stochastic gradient descent optimizer; we set its learning rate to 0.1. This is equivalent to the discussion in Section 3.2.
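These PyTorch pieces could be wired together roughly as follows; the variable names and the conversion of X_train/y_train to tensors are assumptions made for this sketch, not the notebook's exact code:

import torch
from torch import nn, optim

# assumed: X_train is the dense document-term matrix and y_train the 0/1 labels
X = torch.tensor(X_train, dtype=torch.float32)
y = torch.tensor(y_train, dtype=torch.float32)
n_examples, n_features = X.shape

model = nn.Linear(n_features, 1)        # one output neuron for binary classification
loss_func = nn.BCEWithLogitsLoss()      # sigmoid + binary cross-entropy in one step
optimizer = optim.SGD(model.parameters(), lr=0.1)

for epoch in range(10):
    for i in torch.randperm(n_examples):
        optimizer.zero_grad()                        # (1) clear old gradients
        score = model(X[i])                          # (2) predict a raw score
        loss = loss_func(score, y[i].unsqueeze(0))   # (3) compute the loss
        loss.backward()                              # (4) back-propagate
        optimizer.step()                             # (5) update the parameters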
Similarly to the manual implementation, the steps required to train the model for a given training example are: (1) ensure the gradients are set to zero, (2) apply the model to obtain a prediction, (3) calculate the loss, (4) compute the gradient of the loss by back-propagation, and (5) update the model parameters.
10 https://pytorch.org/
Recall that in our previous implementation everything was hardcoded: applying the model, computing the gradients, and optimizing the model parameters. Here, however, the implementation of the logistic regression is expressed at a higher level of abstraction. This means that we are describing the logical steps without specifying a particular implementation. Instead, implementation details are the responsibility of the chosen model, loss function, and optimizer. Thus, we could even choose a different model, loss function, and/or optimizer, and use the same training steps with little or no modification. This decoupling of the training logic from the implementation details is one of the main advantages of libraries such as PyTorch. As shown in the code above, calling the model as a function, with the feature vectors as inputs, produces the predicted scores. Once again, a positive score corresponds to a positive label. When we evaluate this implementation on the test dataset, we obtain results that are in line with our previous models: Writing the perceptron and the logistic regression from scratch is a good exercise, as it exposes us to the fundamentals of implementing machine learning algorithms. However, this becomes cumbersome for more complex neural architectures. For this reason, from this point on, we will use PyTorch for all our coding examples.
4.2 Multiclass Classification
So far, in this chapter we have discussed implementing binary classifiers. Next, we will modify these binary classifiers to perform multiclass classification, following the discussion in Section 3.5.
4.2.1 AG News Dataset
Before explaining the actual training/testing code, we have to choose a new dataset that is suitable for multiclass classification. To this end, we will use the AG News Classification Dataset (Zhang et al., 2015), a subset of the larger AG corpus of news articles collected from thousands of different news sources.11 The classification dataset consists of four classes, and the data is equally balanced across all classes (30,000 articles per class for train, and 1,900 articles per class for testing). The goal of the task is to classify each article as one of the four classes: World, Sports, Business, or Sci/Tech.
11 http://groups.di.unipi.it/~gulli/AG_corpus_of_news_articles.html
4.2.2 Preparing the Dataset
The AG News Dataset is distributed as two CSV files (one for training and one for testing), each containing three columns: the class index, the title, and the description. The dataset also provides a text file that maps the above class indexes to more descriptive class labels. Because of the tabular nature of the dataset, pandas, a Python library
for tabular data analysis,12 is a natural choice for loading and transforming it. To this end, our Jupyter notebook (chap4_multiclass_logistic_regression) demonstrates the sequence of steps required to handle the data, as well as model training and evaluation. First, we show how to load the CSV,
add column names, and inspect the result:
[pandas output: 120,000 rows × 3 columns (class index, title, description).]
Since the class labels themselves are in a separate file, we manually add them to the pandas data structure (called dataframe in pandas' terminology) to increase the interpretability of the data. We use the class index column as a starting point, and use its map method to create a new column with the corresponding labels (technically a new Series object) that is added to the dataframe using its insert method, which allows us to insert the column in a specific position. Note that the label indices are one-based, so we subtract one to align them with their labels.
12 https://pandas.pydata.org
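A rough sketch of these loading and label-mapping steps follows; the file paths data/ag_news_csv/train.csv and data/ag_news_csv/classes.txt are assumptions, not necessarily the paths used in the notebook:

import pandas as pd

# assumed file locations; adjust to wherever the AG News CSVs live
train_df = pd.read_csv('data/ag_news_csv/train.csv', header=None,
                       names=['class index', 'title', 'description'])

# classes.txt lists the label names, one per line (World, Sports, Business, Sci/Tech)
with open('data/ag_news_csv/classes.txt') as f:
    labels = f.read().splitlines()

# class indices are one-based, so subtract one before looking up the label,
# then insert the new column right after 'class index'
classes = train_df['class index'].map(lambda i: labels[i - 1])
train_df.insert(1, 'class', classes)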
[pandas output: 120,000 rows × 4 columns (class index, class, title, description); the new class column shows labels such as Business, World, and Sports.]
Next we will preprocess the text. First we lowercase the title and description, and then we concatenate them into a single string. Then we remove some spurious backslashes from the text. Once this is done, the preprocessed text is added to the dataframe as a new column. Note that pandas allows these steps to be applied to all rows simultaneously.
[pandas output: 120,000 rows × 5 columns (class index, class, title, description, text); the text column contains the lowercased, concatenated title and description.]
At this point, the text is ready to be tokenized. For this purpose we will use NLTK's word_tokenize function. This function can be applied to the whole column at once using the pandas map function, which returns a new column which we add to the dataframe. However, here we actually use the progress_map function, which provides a visual progress bar. This visual feedback is especially helpful for tasks that take more time to complete.
[pandas output: 120,000 rows × 6 columns (class index, class, title, description, text, tokens); the tokens column holds the token lists produced by word_tokenize.]
From the tokens we just created, we then create a vocabulary for our corpus. Here, we only keep the words that occur at least 10 times, decreasing the memory needed and reducing the likelihood that our vocabulary contains noisy tokens. Note that each row in the tokens column contains a list of tokens. In order to create the vocabulary, we will need to convert the Series of lists of tokens into a Series of tokens using the explode() Pandas method. Then we will use the value_counts() method to create a Series object in which the index are the tokens and the values are the number of times they appear in the corpus. The next step is removing the tokens with a count lower than our chosen threshold. Finally, we create a list with the remaining tokens, as well as a dictionary that maps tokens to token ids (i.e., the index of the token in the list). We include in the vocabulary a special token [UNK] that will be used as a placeholder for tokens that do not appear in our vocabulary after the frequency pruning. Using this vocabulary, we construct a feature vector for each news article in the corpus. This feature vector will be encoded as a dictionary, with keys corresponding to token ids, and values corresponding to the number of times the token appears in the article. As above, the feature vectors will be stored as a new column in the dataframe.
[pandas output: 120,000 rows × 7 columns (class index, class, title, description, text, tokens, features); the features column stores dictionaries mapping token ids to counts.]
The final preprocessing step is converting the features and the class indices into PyTorch tensors. Recall that we need to subtract one from the class indices to make them zero-based. At this point, the data is fully processed and we are ready to begin training.
4.2.3 Multiclass Logistic Regression Using PyTorch
The model itself is a single linear layer whose input size corresponds to the size of our vocabulary, and its output size corresponds to the number of classes in our corpus. PyTorch's Linear layer includes a bias by default, so there is no need to handle that manually the way we did for our perceptron example. The code for training this model (which implements Algorithm 6) is almost identical to that of the binary logistic regression. However, since we have to calculate a score for each of the four different classes, we need to replace the previous BCEWithLogitsLoss with CrossEntropyLoss, which applies a softmax over the scores to obtain probabilities for each class. For each example, the model predicts four scores, one for each label. The label with the highest score is selected using the argmax function. We evaluate the predictions of our model for each class using Scikit-learn's classification_report, which handles the results of multiclass classification.
4.3 Summary
In this chapter, we used movie review and news article classification to illustrate the implementation of the previously described algorithms for the binary perceptron, binary logistic regression, and multiclass logistic regression. For the binary logistic regression, we made a direct comparison between the lower-level NumPy implementation and a higher-level version that made use of PyTorch. We hope that through this series of exercises the reader has noted several key takeaways. First, data preparation is important and should be done thoughtfully. Certain tasks (e.g., text normalization or sentence splitting) are going to be frequently needed if you continue with NLP, so using or creating generic functions can be very helpful. However, what works for one dataset and one language may not be suitable for another scenario. For example, in our case, we selected different tokenizers for each of our tasks to account for the different registers of English, as well as removing diacritics during normalization. Second, when it comes to implementing machine learning algorithms, it is often easier to use a higher-level library such as PyTorch instead of NumPy.
For example, with the former, the gradients are calculated by the library, whereas in NumPy we have to code them ourselves. This becomes cumbersome quickly. For example, even the derivative of the softmax is non-trivial. Third, PyTorch imposes a training structure that remains largely the same, regardless of what models are being trained. That is, at a high level, the same steps are always required: clearing the current gradients, predicting output scores for the provided inputs, calculating the loss, and optimizing. These features make PyTorch a very powerful and convenient deep learning library; we will continue to use it throughout the remainder of the book to implement more complex neural architectures.
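To make Section 4.2.3 concrete, here is a minimal sketch of the tensor conversion and the multiclass training loop with CrossEntropyLoss; the hyperparameters, variable names, and the dense feature matrix (which is memory-hungry at this scale) are illustrative assumptions rather than the notebook's exact code:

import torch
from torch import nn, optim

# assumed inputs: train_df['features'] holds {token_id: count} dictionaries,
# train_df['class index'] holds one-based labels, and vocabulary is the pruned token list
n_examples = len(train_df)
n_features = len(vocabulary)
n_classes = 4

# dense feature matrix and zero-based label vector
X_train = torch.zeros(n_examples, n_features)
for row, features in enumerate(train_df['features']):
    for token_id, count in features.items():
        X_train[row, token_id] = count
y_train = torch.tensor(train_df['class index'].values) - 1

# one linear layer; CrossEntropyLoss applies the softmax internally
model = nn.Linear(n_features, n_classes)
loss_func = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)

for epoch in range(5):
    for i in torch.randperm(n_examples):
        x = X_train[i].unsqueeze(0)       # shape (1, n_features)
        y = y_train[i].unsqueeze(0)       # shape (1,)
        optimizer.zero_grad()             # clear the current gradients
        loss = loss_func(model(x), y)     # predict scores and compute the loss
        loss.backward()                   # back-propagate
        optimizer.step()                  # update the parameters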
9,580
9,738
#!/usr/bin/env python # coding: utf-8 # # Binary Text Classification with Perceptron # In[1]: import random import numpy as np from tqdm.notebook import tqdm # set this variable to a number to be used as the random seed # or to None if you don't want to set a random seed seed = 1234 if seed is not None: random.seed(seed) np.random.seed(seed) # The dataset is divided in two directories called `train` and `test`. # These directories contain the training and testing splits of the dataset. # In[2]: get_ipython().system('ls -lh data/aclImdb/') # Both the `train` and `test` directories contain two directories called `pos` and `neg` that contain text files with the positive and negative reviews, respectively. # In[3]: get_ipython().system('ls -lh data/aclImdb/train/') # We will now read the filenames of the positive and negative examples. # In[4]: from glob import glob pos_files = glob('data/aclImdb/train/pos/*.txt') neg_files = glob('data/aclImdb/train/neg/*.txt') print('number of positive reviews:', len(pos_files)) print('number of negative reviews:', len(neg_files)) # Now, we will use a [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html) to read the text files, tokenize them, acquire a vocabulary from the training data, and encode it in a document-term matrix in which each row represents a review, and each column represents a term in the vocabulary. Each element $(i,j)$ in the matrix represents the number of times term $j$ appears in example $i$. # In[5]: from sklearn.feature_extraction.text import CountVectorizer # initialize CountVectorizer indicating that we will give it a list of filenames that have to be read cv = CountVectorizer(input='filename') # learn vocabulary and return sparse document-term matrix doc_term_matrix = cv.fit_transform(pos_files + neg_files) doc_term_matrix # Note in the message printed above that the matrix is of shape (25000, 74894). # In other words, it has 1,871,225,000 elements. # However, only 3,445,861 elements were stored. # This is because most of the elements in the matrix are zeros. # The reason is that the reviews are short and most words in the english language don't appear in each review. # A matrix that only stores non-zero values is called *sparse*. # # Now we will convert it to a dense numpy array: # In[6]: X_train = doc_term_matrix.toarray() X_train.shape # We will also create a numpy array with the binary labels for the reviews. # One indicates a positive review and zero a negative review. # The label `y_train[i]` corresponds to the review encoded in row `i` of the `X_train` matrix. # In[7]: # training labels y_pos = np.ones(len(pos_files)) y_neg = np.zeros(len(neg_files)) y_train = np.concatenate([y_pos, y_neg]) y_train # Now we will initialize our model, in the form of an array of weights `w` of the same size as the number of features in our dataset (i.e., the number of words in the vocabulary acquired by [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html)), and a bias term `b`. # Both are initialized to zeros. # In[8]: # initialize model: the feature vector and bias term are populated with zeros n_examples, n_features = X_train.shape w = np.zeros(n_features) b = 0 # Now we will use the perceptron learning algorithm to learn the values of `w` and `b` from our training data. 
# In[9]: n_epochs = 10 indices = np.arange(n_examples) for epoch in range(10): n_errors = 0 # randomize the order in which training examples are seen in this epoch np.random.shuffle(indices) # traverse the training data for i in tqdm(indices, desc=f'epoch {epoch+1}'): x = X_train[i] y_true = y_train[i] # the perceptron decision based on the current model score = x @ w + b y_pred = 1 if score > 0 else 0 # update the model is the prediction was incorrect if y_true == y_pred: continue elif y_true == 1 and y_pred == 0: w = w + x b = b + 1 n_errors += 1 elif y_true == 0 and y_pred == 1: w = w - x b = b - 1 n_errors += 1 if n_errors == 0: break # The next step is evaluating the model on the test dataset. # Note that this time we use the [`transform()`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html#sklearn.feature_extraction.text.CountVectorizer.transform) method of the [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html), instead of the [`fit_transform()`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html#sklearn.feature_extraction.text.CountVectorizer.fit_transform) method that we used above. This is because we want to use the learned vocabulary in the test set, instead of learning a new one. # In[10]: pos_files = glob('data/aclImdb/test/pos/*.txt') neg_files = glob('data/aclImdb/test/neg/*.txt') doc_term_matrix = cv.transform(pos_files + neg_files) X_test = doc_term_matrix.toarray() y_pos = np.ones(len(pos_files)) y_neg = np.zeros(len(neg_files)) y_test = np.concatenate([y_pos, y_neg]) # Using the model is easy: multiply the document-term matrix by the learned weights and add the bias. # We use Python's `@` operator to perform the matrix-vector multiplication. # In[11]: y_pred = (X_test @ w + b) > 0 # Now we print an evaluation of the prediction results using scikit-learn's [`classification_report()`](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.classification_report.html) function. # In[12]: def binary_classification_report(y_true, y_pred): # count true positives, false positives, true negatives, and false negatives tp = fp = tn = fn = 0 for gold, pred in zip(y_true, y_pred): if pred == True: if gold == True: tp += 1 else: fp += 1 else: if gold == False: tn += 1 else: fn += 1 # calculate precision and recall precision = tp / (tp + fp) recall = tp / (tp + fn) # calculate f1 score fscore = 2 * precision * recall / (precision + recall) # calculate accuracy accuracy = (tp + tn) / len(y_true) # number of positive labels in y_true support = sum(y_true) return { "precision": precision, "recall": recall, "f1-score": fscore, "support": support, "accuracy": accuracy, } # In[13]: binary_classification_report(y_test, y_pred)
3,651
3,682
23
chap04-24
chap04-24
4 Implementing Text Classification Using Perceptron and Logistic Regression In the previous chapters we have discussed the theory behind the perceptron and logistic regression, including mathematical explanations of how and why they are able to learn from examples. In this chapter we will transition from math to code. Specifically, we will discuss how to implement these models in the Python programming language. All the code that we will introduce throughout this book is available online as well: http://clulab.github.io/gentlenlp/. The reader who is not familiar with the Python programming language is encouraged to read first Appendix A, for a brief introduction to the language, and Appendix B, for a discussion on how computers encode and preprocess text. Once done, please return here. To get a better understanding of how these algorithms work under the hood, we will start by implementing them from scratch. However, as the book progresses, we will introduce some of the popular tools and libraries that make Python the language of choice for machine learning, e.g., PyTorch,1 and Hugging Face’s transformers.2 The code for all the examples in the book is provided in the form of Jupyter notebooks.3 Important fragments of these notebooks will be presented in the implementation chapters so that the reader has the whole picture just by reading the book. However, we strongly encourage you to download the notebooks and execute them yourself. We also encourage you to modify them to conduct your own experiments! 1 https://pytorch.org
2 https://huggingface.co 3 https://jupyter.org/ 55 56 Implementing Text Classification Using Perceptron and LR 4.1 Binary Classification We begin this chapter with binary classification. That is, we aim to train classifiers that assign one of two labels to a given text. As the example for this task, we will train a review classifier using the the Large Movie Review Dataset (Maas et al., 2011).4 We tackle this task by implementing first a binary perceptron classifier, followed by a binary logistic regression one. We will implement the latter both from scratch as well as using PyTorch, so the reader has a clearer understanding on how PyTorch works “under the hood.” 4.1.1 Large Movie Review Dataset This dataset contains movie reviews and their associated scores (between 1 and 10) as provided by IMDb.5 converted these scores to binary labels by assigning each review a positive or negative label if the review score was above 6 or below 5, respectively. Reviews with scores 5 and 6 were considered too neutral and thus excluded. We follow the same protocol in this chapter. The dataset is divided in two even partitions called train and test, each containing 25,000 reviews. The dataset also provides additional unlabeled reviews, but we will not use those here. Each partition contains two directories called pos and neg where the positive and negative examples are stored. Each review is stored in an independent text file, whose name is composed of an id unique to the partition and the score associated with the review, separated by an underscore. An example of a positive and a negative review is shown in Table 4.1. 4.1.2 Bag-of-words Model As discussed in Section 2.2, we will encode the text to classify as a bag of words. That is, we encode each review as a list of numbers, with each position in the list corresponding to a word in our vocabulary, and the value stored in that position corresponding to the number of times the word appears in the review. For example, say we want to encode the following two reviews: 4 https://ai.stanford.edu/~amaas/data/sentiment/ 5 https://www.imdb.com/ Maas et al. 4.1 Binary Classification 57 Table 4.1 Two examples of movie reviews from IMDb. The first is a positive review of the movie Puss in Boots (1988). The second is a negative review of the movie Valentine (2001). These reviews can be found at https://www.imdb.com/review/rw0606396/ and https://www.imdb.com/review/rw0721861/, respectively. Filename Score Binary Label train/pos/24_8.txt 8/10 Positive train/neg/141_3.txt 3/10 Negative Review Text Although this was obviously a low-budget production, the performances and the songs in this movie are worth seeing. One of Walken’s few musical roles to date. (he is a marvelous dancer and singer and he demonstrates his acrobatic skills as well - watch for the cartwheel!) Also starring Jason Connery. A great children’s story and very likable characters. This stalk and slash turkey manages to bring nothing new to an increasingly stale genre. A masked killer stalks young, pert girls and slaughters them in a variety of gruesome ways, none of which are particularly inventive. It’s not scary, it’s not clever, and it’s not funny. So what was the point of it? Review 1: Review 2: "I liked the movie. My friend liked it too. " "I hated it. Would not recommend. " First, we need to create a vocabulary that maps each word to an id that uniquely identifies it. 
Each of these numbers will be used as the index in a list, so they must start at zero and grow by one for each word in the vocabulary. For example, one possible vocabulary that encodes the previous reviews is: {'would': 0, 'hated': 1, 58 Implementing Text Classification Using Perceptron and LR 'my': 2, 'liked': 3, 'not': 4, 'it': 5, 'movie': 6, 'recommend': 7, 'the': 8, 'I': 9, 'too': 10, 'friend': 11} Using this mapping, we can encode the two reviews as follows: Review1: [0,0,1,2,0,1,1,0,1,1,1,1] Review2: [1,1,0,0,1,1,0,1,0,1,0,0] Note that the word liked (fourth position) in the first review has a value of two. This is because this word appears twice in that review. This is a small example with a vocabulary of only 12 terms. Of course, the same process needs to be implemented for our whole training dataset. For this purpose we will use scikit-learn’s CountVectorizer class.6 Using the CountVectorizer class simplifies things, allowing us to get started quickly with a bag-of-words approach. However, note that it makes several simplifying assumptions (e.g., text is lowercased, and punctuation and single character tokens are removed). Some of these may not be adequate to other tasks. First, we need to obtain the filenames for the reviews in the training set: Once we have acquired the filenames for the training reviews, we need
to read them using the CountVectorizer. In order for the CountVectorizer to open and read the files for us, we make use of the input='filename' constructor parameter (otherwise it would expect the string content directly). The CountVectorizer provides three methods that will be useful for us: a method called fit() that is used to acquire the vocabulary,
a method transform() that converts the text into the bag-of-words representation, and a method fit_transform() that conveniently acquires the vocabulary and transforms the data in a single step. The resulting object is referred to as a document-term matrix, where each row corre- 6 https://scikitlearn.org/stable/modules/generated/sklearn.feature_ extraction.text.CountVectorizer.html 4.1 Binary Classification 59 sponds to a document, and each column corresponds to a term in the vocabulary. As the output above indicates, the resulting matrix has 25,000 rows (one for each review), and 74,849 columns (one for each term). Also you may note that this matrix is sparse, with 3,445,861 stored elements. A regular matrix of shape 25,000×74,849 would have 1,871,225,000 elements. However, most of the elements in a document-term matrix are zeros because only a few words from the vocabulary appear in each document. A sparse matrix takes advantage of this fact by storing only the non-zero cells in order to reduce the memory required to store it. Thus, sparse matrices are convenient, especially when dealing with lots of data. Nevertheless, to simplify the downstream code in this example, we will convert it into a dense matrix, i.e., a regular two-dimensional NumPy array. Finally, we also need the labels of the reviews. We assign a label of one to positive reviews, and a label of zero to negative ones. Note that the first half of the reviews are positive and the second half are negative. The label at the ith position of the y_train array corresponds to the review encoded in the ith row of the X_train matrix. 4.1.3 Perceptron Now that we have defined our task and the data processing pipeline, we will implement a perceptron classifier that classifies the movie reviews as positive or negative. The entire code discussed in this section is available in the chap4_perceptron notebook. Recall from Section 2.4 that the perceptron is composed of a weight vector w and a bias term b. These will be represented as a NumPy array w of the same length as our document vectors, and a variable b for the bias term. Both will be initialized with zeros. The parameters w and b are learned through the following algorithm, which implements Algorithm 2 from Chapter 2: There are a couple of details to point out. Line 3 of Algorithm 2 indicates that we need to repeat the training loop until convergence. Theoretically, convergence is defined as predicting all training examples correctly. This is an ambitious requirement, which is not always possible in practice, so in this code we also include a stop condition if we reach a maximum number of epochs. Another crucial difference between our implementation here and the theoretical Algorithm 2, is that we randomize the order in which the training examples are seen at the beginning of 60 Implementing Text Classification Using Perceptron and LR each epoch. This simple (but highly recommended!) change is necessary to avoid the introduction of spurious biases due to the arbitrary order of the examples in the original training partition.7 We accomplish this by storing the indices corresponding to the X_train matrix rows in a NumPy array, and shuffling these indices at the beginning of each epoch. We shuffle the indices instead of the examples so that we can preserve the mapping between examples and labels. The training loop aligns closely with Algorithm 2. 
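To make this data preparation concrete, here is a minimal sketch of the steps just described, mirroring the data-loading code in the chapter's notebooks: collecting the review filenames, building the document-term matrix with CountVectorizer, converting it to a dense array, and creating the label vector. The data/aclImdb paths follow the dataset layout described earlier.

from glob import glob
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

# filenames of the positive and negative training reviews
pos_files = glob('data/aclImdb/train/pos/*.txt')
neg_files = glob('data/aclImdb/train/neg/*.txt')

# let CountVectorizer open the files itself, acquire the vocabulary,
# and build the sparse document-term matrix
cv = CountVectorizer(input='filename')
doc_term_matrix = cv.fit_transform(pos_files + neg_files)

# dense two-dimensional NumPy array, one row per review
X_train = doc_term_matrix.toarray()

# labels: one for positive reviews, zero for negative ones,
# in the same order as the rows of X_train
y_train = np.concatenate([np.ones(len(pos_files)), np.zeros(len(neg_files))])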
We start by iterating over each example in our training data, storing the current example in the variable x,8 and its corresponding label in the variable y_true. Next, we compute the perceptron decision function shown in Algorithm 1. Note that NumPy (as well as PyTorch) uses Python’s @ operator to indicate vector or matrix multiplication, depending on its operand types. Here we use it to calculate the dot product of the example x and the weights w. To this we add the bias b to obtain the predicted score, whose sign is used to assign a positive or negative predicted label. If the prediction is correct, then no update is needed, and we can move on to the next training example. However, if the prediction is incorrect, then we need to adjust w and b, as described in Algorithm 2. Sidebar 4.1 The tqdm function This is our first exposure to the tqdm function. tqdm is a progress bar that “make your loops show a smart progress meter.”9 The name tqdm comes from the Arabic word taqaddum which can mean “progress.” Using tqdm is as simple as wrapping it around the collection to be traversed. After training, we evaluate the model’s performance on the heldout test partition. The test data is loaded similarly to the training partition, but with one notable difference; we use CountVectorizer’s transform() method instead of the fit_transform() method so that the vocabulary is not adjusted for the test data. We won’t show here the loading of the test partition since it is so similar to the code already shown, but it is available in the Jupyter notebook that accompanies this section. . 7   As an extreme example, consider a dataset where all the positive examples appear first in the training partition. This would cause the perceptron to artificially inflate the weights of the features that occur in these examples, a situation from which the learning algorithm may struggle to recover. 
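The training loop itself can be sketched as follows. This is a minimal version of the procedure described above, assuming X_train and y_train were built as in the previous section and that the update rule is the standard perceptron update from Algorithm 2; the fixed number of epochs is the stop condition mentioned earlier.

import numpy as np
from tqdm.notebook import tqdm

n_examples, n_features = X_train.shape
w = np.zeros(n_features)  # weight vector, initialized with zeros
b = 0.0                   # bias term, initialized with zero

n_epochs = 10  # maximum number of epochs, in case training never converges
indices = np.arange(n_examples)

for epoch in range(n_epochs):
    # see the training examples in a different random order in each epoch
    np.random.shuffle(indices)
    for i in tqdm(indices, desc=f'epoch {epoch + 1}'):
        x, y_true = X_train[i], y_train[i]
        # perceptron decision function: dot product plus bias
        y_pred = 1 if x @ w + b > 0 else 0
        # adjust the parameters only when the prediction is wrong
        if y_pred != y_true:
            if y_true == 1:   # false negative: push the score up
                w, b = w + x, b + 1
            else:             # false positive: push the score down
                w, b = w - x, b - 1

A full implementation would also check whether an epoch finished without any mistakes, in which case training has converged and the loop can stop early.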
 . 8  We use typewriter font when we discuss variables in the code, to distinguish code from the theoretical discussion in the other chapters. 
9 https://github.com/tqdm/tqdm
Using the model to assign labels to all the test data is easily done in one step: we simply multiply the entire test data document-term matrix by the previously learned weights and add the bias. Scores greater than zero indicate a positive review, and those less than zero are negative. At this point we can evaluate the classifier's performance, which we will do using precision, recall, and F1 scores for binary classification (described in Section 2.3). For this purpose, we implement a function called binary_classification_report that computes these metrics and returns them as a dictionary: We call this function to compare the predicted labels to the true labels, and obtain the evaluation scores. Our F1 score here is 86.8%, which is much higher than the baseline that assigns labels randomly, which yields an F1 score of about 50%. This is a good result, especially considering the simplicity of the perceptron! In the next sections and chapters, we will discuss a battery of strategies to considerably improve this performance.

4.1.4 Binary Logistic Regression from Scratch Using the same task, dataset, and evaluation, we will now implement a logistic regression classifier, as described in Algorithm 5 from Chapter 3. To give the reader hands-on experience with the implementation of the gradient calculations for logistic regression, we start by implementing it from scratch using NumPy. All the code shown in this section is available in the chap4_logistic_regression_numpy notebook. In the perceptron implementation, we represented the weights and the bias as two different variables. Here, however, we will use a different approach that will allow us to unify them into a single vector variable. Specifically, we take advantage of the similarity between the derivative of the cost function with respect to the weights (Equation 3.14) and the derivative of the cost with respect to the bias (Equation 3.15): dCi(w, b)/dwj = (σi − yi) xij (3.14 revisited) and dCi(w, b)/db = σi − yi (3.15 revisited). Note that the two derivative formulas are identical except that the former has a multiplication by xij, while the latter does not. However, since σi − yi = (σi − yi) · 1, we can multiply the derivative of the cost with respect to the bias by one without changing the semantics. This gives an opportunity for combining the computations, doing them both in a single pass. The idea is that we can treat the bias as a weight corresponding to a feature that always has a value of one. As can be seen above, we created a NumPy array of ones of the same length as the number of examples in our training set (i.e., the number of rows in the data matrix). Then we add this array as a new column to the data matrix, using NumPy's column_stack function. Next, we need to initialize our model. This time we will use a single NumPy array w of the same length as the number of columns in the data matrix. The weight vector w is initialized randomly with values between 0 and 1: Before implementing the learning algorithm, we need an implementation of the logistic function.
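Concretely, these two steps look like the following short sketch, taken in essence from the chap4_logistic_regression_numpy notebook; X_train is the dense document-term matrix built earlier.

import numpy as np

# append a constant feature of one to every example, so that the last
# weight plays the role of the bias term
ones = np.ones(X_train.shape[0])
X_train = np.column_stack((X_train, ones))

# a single weight vector now covers all features plus the bias;
# it is initialized randomly with values between 0 and 1
n_examples, n_features = X_train.shape
w = np.random.random(n_features)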
Recall that the logistic function is σ(x) = 1 / (1 + e^(−x)) (3.1 revisited). This function can be easily implemented in NumPy as follows: However, this naive implementation may produce the following warning during training: The term overflow indicates that the result of evaluating exp(-x) is a number so large that it can't be represented by a float (specifically, we're using float64 numbers). We will avoid this issue by not calling exp with values that will overflow. NumPy provides the function finfo that can be consulted to find the limits of floating point numbers: The log of the largest floating point number is the largest number for which exp() will not overflow, so we will use it as a threshold to filter out problematic values: We now have everything we need to implement Algorithm 4. The steps to follow for each example are: (1) use the model to make a prediction, (2) calculate the gradient of the loss function with respect to the model parameters, and (3) update the model parameters using the gradient. The size of the update is controlled by the learning rate. Once the model has been trained, we evaluate it on the test dataset using our binary_classification_report function from the previous section. Loading and preprocessing the test dataset follows the same steps as with the previous classifier. We omit the code for brevity. These are the results: The performance is comparable with that of the perceptron. The difference in F1 scores between the two classifiers (84.9% here vs. 86.8% for the perceptron) is not significant. This parity is probably attributable to the fact that the signal distinguishing the two classes is easy to learn, so the simpler perceptron training algorithm is sufficient in this case. Nevertheless, this task is useful in showing how to implement the logistic regression model from scratch, i.e., by implementing the gradient calculation and parameter updates manually. Next, we will implement the same model again using PyTorch, highlighting how this machine learning library simplifies the process.

4.1.5 Binary Logistic Regression Utilizing PyTorch While it is fairly straightforward to compute the derivatives for logistic regression and implement them directly in NumPy, this approach will not scale well to arbitrary neural architectures. Fortunately, there are libraries that automate the computation of the derivatives of the cost function (assuming it is differentiable!) for any neural network, and use the resulting gradients to perform gradient descent or other more sophisticated optimization procedures. To this end, we will use the PyTorch deep learning library.10 The corresponding notebook for this section is chap4_logistic_regression_pytorch_bce. Our model for logistic regression corresponds to PyTorch's Linear layer. When we instantiate this layer, we specify the size of the inputs (the size of our vocabulary) and the size of the output, i.e., the number of output neurons (which is one because we're doing binary classification). The loss function we use is the binary cross-entropy loss (see Chapter 3), which is implemented as BCEWithLogitsLoss in PyTorch. In PyTorch, the gradients obtained from the loss function are applied to the model by an optimizer object, which implements and applies an optimization algorithm. Here we will use the vanilla stochastic gradient descent optimizer; we set its learning rate to 0.1. This is equivalent to the discussion in Section 3.2.
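The sketch below shows this PyTorch setup, together with a per-example training loop; the numbered steps it performs are spelled out next. The model, loss, and optimizer follow the text (a Linear layer, BCEWithLogitsLoss, and SGD with a learning rate of 0.1), while the number of epochs, the shuffling, and the tensor conversion details are assumptions made to keep the example self-contained. X_train here is the plain document-term matrix; the Linear layer supplies its own bias.

import torch
from torch import nn, optim

n_features = X_train.shape[1]   # size of the vocabulary

model = nn.Linear(n_features, 1)          # one output neuron: binary classification
loss_func = nn.BCEWithLogitsLoss()        # binary cross-entropy on raw scores
optimizer = optim.SGD(model.parameters(), lr=0.1)

X = torch.tensor(X_train, dtype=torch.float32)
y = torch.tensor(y_train, dtype=torch.float32)

n_epochs = 10  # assumption, not necessarily the notebook's setting
for epoch in range(n_epochs):
    for i in torch.randperm(len(y)):
        x_i, y_i = X[i], y[i]
        optimizer.zero_grad()               # (1) clear the old gradients
        y_score = model(x_i)                # (2) forward pass: predicted score
        loss = loss_func(y_score[0], y_i)   # (3) binary cross-entropy loss
        loss.backward()                     # (4) back-propagate the gradients
        optimizer.step()                    # (5) update the model parameters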
Similarly to the manual implementation, the steps required to train the model for a given training example are: (1) ensure the gradients are set to zeros, (2) apply the model to obtain a prediction, (3) calculate the loss, (4) compute the gradient of the loss by back-propagation, and (5) update the model parameters. Recall that in our previous implementation everything was hardcoded: applying the model, computing the gradients, and optimizing the model parameters. Here, however, the implementation of the logistic regression is expressed at a higher level of abstraction. This means that we are describing the logical steps without specifying a particular implementation. Instead, implementation details are the responsibility of the chosen model, loss function, and optimizer. Thus, we could even choose a different model, loss function, and/or optimizer, and use the same training steps with little or no modification. This decoupling of the training logic from the implementation details is one of the main advantages of libraries such as PyTorch. As shown in the code above, calling the model as a function, with the feature vectors as inputs, produces the predicted scores. Once again, a positive score corresponds to a positive label. When we evaluate this implementation on the test dataset, we obtain results that are in line with our previous models: Writing the perceptron and the logistic regression from scratch is a good exercise, as it exposes us to the fundamentals of implementing machine learning algorithms. However, this becomes cumbersome for more complex neural architectures. For this reason, from this point on, we will use PyTorch for all our coding examples.
10 https://pytorch.org/

4.2 Multiclass Classification So far, in this chapter we have discussed implementing binary classifiers. Next, we will modify these binary classifiers to perform multiclass classification, following the discussion in Section 3.5.

4.2.1 AG News Dataset Before explaining the actual training/testing code, we have to choose a new dataset that is suitable for multiclass classification. To this end, we will use the AG News Classification Dataset (Zhang et al., 2015), a subset of the larger AG corpus of news articles collected from thousands of different news sources.11 The classification dataset consists of four classes, and the data is equally balanced across all classes (30,000 articles per class for train, and 1,900 articles per class for testing). The goal of the task is to classify each article as one of the four classes: World, Sports, Business, or Sci/Tech.
11 http://groups.di.unipi.it/~gulli/AG_corpus_of_news_articles.html

4.2.2 Preparing the Dataset The AG News Dataset is distributed as two CSV files (one for training and one for testing), each containing three columns: the class index, the title, and the description. The dataset also provides a text file that maps the above class indexes to more descriptive class labels. Because of the tabular nature of the dataset, pandas, a Python library
for tabular data analysis,12 is a natural choice for loading and transforming it. To this end, our Jupyter notebook (chap4_multiclass_logistic_regression) demonstrates the sequence of steps required to handle the data, as well as model training and evaluation. First, we show how to load the CSV,
add column names, and inspect the result:
[dataframe preview omitted: 120,000 rows × 3 columns (class index, title, description)]
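The loading step shown above might look like the following sketch. The file path is an assumption about where the training CSV was unpacked, and header=None assumes the distributed CSV files have no header row, which matches the three-column description above.

import pandas as pd

train_df = pd.read_csv(
    'data/ag_news_csv/train.csv',                    # path is an assumption
    header=None,
    names=['class index', 'title', 'description'],   # column names added by us
)
print(train_df.shape)   # expected: (120000, 3)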
Since the class labels themselves are in a separate file, we manually add them to the pandas data structure (called dataframe in pandas' terminology) to increase the interpretability of the data. We use the class index column as a starting point, and use its map method to create a new column with the corresponding labels (technically a new Series object) that is added to the dataframe using its insert method, which allows us to insert the column in a specific position. Note that the label indices are one-based, so we subtract one to align them with their labels.
12 https://pandas.pydata.org
[dataframe preview omitted: 120,000 rows × 4 columns (class index, class, title, description)]
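A sketch of that mapping step, assuming the classes file lists the labels in index order (1 = World, 2 = Sports, 3 = Business, 4 = Sci/Tech), which is consistent with the previews above:

# descriptive labels, in the order given by the classes file
labels = ['World', 'Sports', 'Business', 'Sci/Tech']

# class indices are one-based, so subtract one before looking up the label
classes = train_df['class index'].map(lambda i: labels[i - 1])

# insert the new column right after the class index column
train_df.insert(1, 'class', classes)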
Next we will preprocess the text. First we lowercase the title and description, and then we concatenate them into a single string. Then we remove some spurious backslashes from the text. Once this is done, the preprocessed text is added to the dataframe as a new column. Note that pandas allows these steps to be applied to all rows simultaneously.
[dataframe preview omitted: 120,000 rows × 5 columns (class index, class, title, description, text)]
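A sketch of this preprocessing, using pandas' vectorized string methods; the exact characters stripped out are an assumption based on the escaped backslashes visible in the raw descriptions.

# lowercase the title and description and join them into a single string
text = train_df['title'].str.lower() + ' ' + train_df['description'].str.lower()

# drop the spurious backslashes and store the result as a new column
train_df['text'] = text.str.replace('\\', ' ', regex=False)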
At this point, the text is ready to be tokenized. For this purpose we will use NLTK's word_tokenize function. This function can be applied to the whole column at once using the pandas map function, which returns a new column which we add to the dataframe. However, here we actually use the progress_map function, which provides a visual progress bar. This visual feedback is especially helpful for tasks that take more time to complete.
[dataframe preview omitted: 120,000 rows × 6 columns (class index, class, title, description, text, tokens)]
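The tokenization step might be sketched as follows; calling tqdm.pandas() is what registers the progress_map method mentioned above, and word_tokenize assumes the NLTK tokenizer models have been downloaded.

from nltk.tokenize import word_tokenize
from tqdm.notebook import tqdm

# patch pandas so that Series.progress_map shows a progress bar
tqdm.pandas()

# tokenize every row of the text column and store the lists of tokens
train_df['tokens'] = train_df['text'].progress_map(word_tokenize)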
From the tokens we just created, we then create a vocabulary for our corpus. Here, we only keep the words that occur at least 10 times, decreasing the memory needed and reducing the likelihood that our vocabulary contains noisy tokens. Note that each row in the tokens column contains a list of tokens. In order to create the vocabulary, we will need to convert the Series of lists of tokens into a Series of tokens using the explode() Pandas method. Then we will use the value_counts() method to create a Series object in which the index are the tokens and the values are the number of times they appear in the corpus. The next step is removing the tokens with a count lower than our chosen threshold. Finally, we create a list with the remaining tokens, as well as a dictionary that maps tokens to token ids (i.e., the index of the token in the list). We include in the vocabulary a special token [UNK] that will be used as a placeholder for tokens that do not appear in our vocabulary after the frequency pruning. Using this vocabulary, we construct a feature vector for each news article in the corpus. This feature vector will be encoded as a dictionary, with keys corresponding to token ids, and values corresponding to the number of times the token appears in the article. As above, the feature vectors will be stored as a new column in the dataframe.
[dataframe preview omitted: 120,000 rows × 7 columns (class index, class, title, description, text, tokens, features)]
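Under the choices described above (a minimum frequency of 10 and an [UNK] placeholder), the vocabulary and the per-article feature dictionaries could be built roughly as follows; the position of [UNK] in the vocabulary and the use of Counter are implementation choices made for this sketch.

from collections import Counter

threshold = 10

# flatten the lists of tokens into a single Series and count each token
counts = train_df['tokens'].explode().value_counts()

# keep the frequent tokens and reserve an entry for everything else
vocabulary = ['[UNK]'] + counts[counts >= threshold].index.tolist()
token_to_id = {token: i for i, token in enumerate(vocabulary)}
unk_id = token_to_id['[UNK]']

def make_feature_vector(tokens):
    # map tokens to ids (unknown tokens share the [UNK] id) and count them
    ids = [token_to_id.get(t, unk_id) for t in tokens]
    return dict(Counter(ids))

train_df['features'] = train_df['tokens'].map(make_feature_vector)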
The final preprocessing step is converting the features and the class indices into PyTorch tensors. Recall that we need to subtract one from the class indices to make them zero-based. At this point, the data is fully processed and we are ready to begin training.

4.2.3 Multiclass Logistic Regression Using PyTorch The model itself is a single linear layer whose input size corresponds to the size of our vocabulary, and its output size corresponds to the number of classes in our corpus. PyTorch's Linear layer includes a bias by default, so there is no need to handle that manually the way we did for our perceptron example. The code for training this model (which implements Algorithm 6) is almost identical to that of the binary logistic regression. However, since we have to calculate a score for each of the four different classes, we need to replace the previous BCEWithLogitsLoss with CrossEntropyLoss, which applies a softmax over the scores to obtain probabilities for each class. For each example, the model predicts four scores, one for each label. The label with the highest score is selected using the argmax function. We evaluate the predictions of our model for each class using scikit-learn's classification_report, which handles the results of multiclass classification.

4.3 Summary In this chapter, we used movie review and news article classification to illustrate the implementation of the previously described algorithms for the binary perceptron, binary logistic regression, and multiclass logistic regression. For the binary logistic regression, we made a direct comparison between the lower-level NumPy implementation and a higher-level version that made use of PyTorch. We hope that through this series of exercises the reader has noted several key takeaways. First, data preparation is important and should be done thoughtfully. Certain tasks (e.g., text normalization or sentence splitting) are going to be frequently needed if you continue with NLP, so using or creating generic functions can be very helpful. However, what works for one dataset and one language may not be suitable for another scenario. For example, in our case, we selected different tokenizers for each of our tasks to account for the different registers of English, as well as removing diacritics during normalization. Second, when it comes to implementing machine learning algorithms, it is often easier to use a higher-level library such as PyTorch instead of NumPy.
For example, with the former, the gradients are calculated by the library, whereas in NumPy we have to code them ourselves. This becomes cumbersome quickly. For example, even the derivative of the softmax is non-trivial. Third, PyTorch imposes a training structure that remains largely the same, regardless of what models are being trained. That is, at a high level, the same steps are always required: clearing the current gradients, predicting output scores for the provided inputs, calculating the loss, and optimizing. These features make PyTorch a very powerful and convenient deep learning library; we will continue to use it throughout the remainder of the book to implement more complex neural architectures.
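To tie the multiclass pieces together before moving on, here is a compact sketch of the model, loss, and training step described in Section 4.2.3. The tensors X_train, y_train, and X_test are assumed to hold the dense feature vectors and zero-based class indices produced by the preprocessing above; the learning rate, number of epochs, and per-example loop are assumptions, not the notebook's exact settings.

import torch
from torch import nn, optim

n_classes = 4
model = nn.Linear(len(vocabulary), n_classes)   # one score per class; bias included by default
loss_func = nn.CrossEntropyLoss()               # softmax over the four scores
optimizer = optim.SGD(model.parameters(), lr=0.1)

for epoch in range(5):
    for i in torch.randperm(len(y_train)):
        optimizer.zero_grad()
        scores = model(X_train[i])                  # four scores for this example
        loss = loss_func(scores.unsqueeze(0),       # CrossEntropyLoss expects a batch dimension
                         y_train[i].unsqueeze(0))   # y_train holds long class indices
        loss.backward()
        optimizer.step()

# at prediction time, the label with the highest score wins
y_pred = torch.argmax(model(X_test), dim=1)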
#!/usr/bin/env python # coding: utf-8 # # Binary Text Classification with # # Logistic Regression Implemented from Scratch # In[1]: import random import numpy as np from tqdm.notebook import tqdm # set this variable to a number to be used as the random seed # or to None if you don't want to set a random seed seed = 1234 if seed is not None: random.seed(seed) np.random.seed(seed) # The dataset is divided in two directories called `train` and `test`. # These directories contain the training and testing splits of the dataset. # In[2]: get_ipython().system('ls -lh data/aclImdb/') # Both the `train` and `test` directories contain two directories called `pos` and `neg` that contain text files with the positive and negative reviews, respectively. # In[3]: get_ipython().system('ls -lh data/aclImdb/train/') # We will now read the filenames of the positive and negative examples. # In[4]: from glob import glob pos_files = glob('data/aclImdb/train/pos/*.txt') neg_files = glob('data/aclImdb/train/neg/*.txt') print('number of positive reviews:', len(pos_files)) print('number of negative reviews:', len(neg_files)) # Now, we will use a [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html) to read the text files, tokenize them, acquire a vocabulary from the training data, and encode it in a document-term matrix in which each row represents a review, and each column represents a term in the vocabulary. Each element $(i,j)$ in the matrix represents the number of times term $j$ appears in example $i$. # In[5]: from sklearn.feature_extraction.text import CountVectorizer # initialize CountVectorizer indicating that we will give it a list of filenames that have to be read cv = CountVectorizer(input='filename') # learn vocabulary and return sparse document-term matrix doc_term_matrix = cv.fit_transform(pos_files + neg_files) doc_term_matrix # Note in the message printed above that the matrix is of shape (25000, 74894). # In other words, it has 1,871,225,000 elements. # However, only 3,445,861 elements were stored. # This is because most of the elements in the matrix are zeros. # The reason is that the reviews are short and most words in the english language don't appear in each review. # A matrix that only stores non-zero values is called *sparse*. # # Now we will convert it to a dense numpy array: # In[6]: X_train = doc_term_matrix.toarray() X_train.shape # In[7]: # Append 1s to the xs; this will allow us to multiply by the weights and # the bias in a single pass. # Make an array with a one for each row/data point ones = np.ones(X_train.shape[0]) # Concatenate these ones to existing feature vectors X_train = np.column_stack((X_train, ones)) X_train.shape # We will also create a numpy array with the binary labels for the reviews. # One indicates a positive review and zero a negative review. # The label `y_train[i]` corresponds to the review encoded in row `i` of the `X_train` matrix. # In[8]: # training labels y_pos = np.ones(len(pos_files)) y_neg = np.zeros(len(neg_files)) y_train = np.concatenate([y_pos, y_neg]) y_train # Now we will initialize our model, in the form of an array of weights `w` of the same size as the number of features in our dataset (i.e., the number of words in the vocabulary acquired by [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html)), and a bias term `b`. # Both are initialized to zeros. 
# In[9]: # initialize model: the feature vector and bias term are populated with zeros n_examples, n_features = X_train.shape w = np.random.random(n_features) # Now we will use the logistic regression learning algorithm to learn the values of `w` and `b` from our training data. # In[10]: # from scipy.special import expit as sigmoid def sigmoid(z): if -z > np.log(np.finfo(float).max): return 0.0 return 1 / (1 + np.exp(-z)) # In[11]: lr = 1e-1 n_epochs = 10 indices = np.arange(n_examples) for epoch in range(10): # randomize the order in which training examples are seen in this epoch np.random.shuffle(indices) # traverse the training data for i in tqdm(indices, desc=f'epoch {epoch+1}'): x = X_train[i] y = y_train[i] # calculate the derivative of the cost function for this batch deriv_cost = (sigmoid(x @ w) - y) * x # update the weights w = w - lr * deriv_cost # The next step is evaluating the model on the test dataset. # Note that this time we use the [`transform()`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html#sklearn.feature_extraction.text.CountVectorizer.transform) method of the [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html), instead of the [`fit_transform()`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html#sklearn.feature_extraction.text.CountVectorizer.fit_transform) method that we used above. This is because we want to use the learned vocabulary in the test set, instead of learning a new one. # In[12]: pos_files = glob('data/aclImdb/test/pos/*.txt') neg_files = glob('data/aclImdb/test/neg/*.txt') doc_term_matrix = cv.transform(pos_files + neg_files) X_test = doc_term_matrix.toarray() X_test = np.column_stack((X_test, np.ones(X_test.shape[0]))) y_pos = np.ones(len(pos_files)) y_neg = np.zeros(len(neg_files)) y_test = np.concatenate([y_pos, y_neg]) # Using the model is easy: multiply the document-term matrix by the learned weights and add the bias. # We use Python's `@` operator to perform the matrix-vector multiplication. # In[13]: y_pred = X_test @ w > 0 # Now we print an evaluation of the prediction results using scikit-learn's [`classification_report()`](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.classification_report.html) function. # In[14]: def binary_classification_report(y_true, y_pred): # count true positives, false positives, true negatives, and false negatives tp = fp = tn = fn = 0 for gold, pred in zip(y_true, y_pred): if pred == True: if gold == True: tp += 1 else: fp += 1 else: if gold == False: tn += 1 else: fn += 1 # calculate precision and recall precision = tp / (tp + fp) recall = tp / (tp + fn) # calculate f1 score fscore = 2 * precision * recall / (precision + recall) # calculate accuracy accuracy = (tp + tn) / len(y_true) # number of positive labels in y_true support = sum(y_true) return { "precision": precision, "recall": recall, "f1-score": fscore, "support": support, "accuracy": accuracy, } # In[15]: binary_classification_report(y_test, y_pred)
3,641
3,713
24
chap04-25
chap04-25
4 Implementing Text Classification Using Perceptron and Logistic Regression In the previous chapters we have discussed the theory behind the perceptron and logistic regression, including mathematical explanations of how and why they are able to learn from examples. In this chapter we will transition from math to code. Specifically, we will discuss how to implement these models in the Python programming language. All the code that we will introduce throughout this book is available online as well: http://clulab.github.io/gentlenlp/. The reader who is not familiar with the Python programming language is encouraged to read first Appendix A, for a brief introduction to the language, and Appendix B, for a discussion on how computers encode and preprocess text. Once done, please return here. To get a better understanding of how these algorithms work under the hood, we will start by implementing them from scratch. However, as the book progresses, we will introduce some of the popular tools and libraries that make Python the language of choice for machine learning, e.g., PyTorch,1 and Hugging Face’s transformers.2 The code for all the examples in the book is provided in the form of Jupyter notebooks.3 Important fragments of these notebooks will be presented in the implementation chapters so that the reader has the whole picture just by reading the book. However, we strongly encourage you to download the notebooks and execute them yourself. We also encourage you to modify them to conduct your own experiments! 1 https://pytorch.org
2 https://huggingface.co 3 https://jupyter.org/ 55 56 Implementing Text Classification Using Perceptron and LR 4.1 Binary Classification We begin this chapter with binary classification. That is, we aim to train classifiers that assign one of two labels to a given text. As the example for this task, we will train a review classifier using the the Large Movie Review Dataset (Maas et al., 2011).4 We tackle this task by implementing first a binary perceptron classifier, followed by a binary logistic regression one. We will implement the latter both from scratch as well as using PyTorch, so the reader has a clearer understanding on how PyTorch works “under the hood.” 4.1.1 Large Movie Review Dataset This dataset contains movie reviews and their associated scores (between 1 and 10) as provided by IMDb.5 converted these scores to binary labels by assigning each review a positive or negative label if the review score was above 6 or below 5, respectively. Reviews with scores 5 and 6 were considered too neutral and thus excluded. We follow the same protocol in this chapter. The dataset is divided in two even partitions called train and test, each containing 25,000 reviews. The dataset also provides additional unlabeled reviews, but we will not use those here. Each partition contains two directories called pos and neg where the positive and negative examples are stored. Each review is stored in an independent text file, whose name is composed of an id unique to the partition and the score associated with the review, separated by an underscore. An example of a positive and a negative review is shown in Table 4.1. 4.1.2 Bag-of-words Model As discussed in Section 2.2, we will encode the text to classify as a bag of words. That is, we encode each review as a list of numbers, with each position in the list corresponding to a word in our vocabulary, and the value stored in that position corresponding to the number of times the word appears in the review. For example, say we want to encode the following two reviews: 4 https://ai.stanford.edu/~amaas/data/sentiment/ 5 https://www.imdb.com/ Maas et al. 4.1 Binary Classification 57 Table 4.1 Two examples of movie reviews from IMDb. The first is a positive review of the movie Puss in Boots (1988). The second is a negative review of the movie Valentine (2001). These reviews can be found at https://www.imdb.com/review/rw0606396/ and https://www.imdb.com/review/rw0721861/, respectively. Filename Score Binary Label train/pos/24_8.txt 8/10 Positive train/neg/141_3.txt 3/10 Negative Review Text Although this was obviously a low-budget production, the performances and the songs in this movie are worth seeing. One of Walken’s few musical roles to date. (he is a marvelous dancer and singer and he demonstrates his acrobatic skills as well - watch for the cartwheel!) Also starring Jason Connery. A great children’s story and very likable characters. This stalk and slash turkey manages to bring nothing new to an increasingly stale genre. A masked killer stalks young, pert girls and slaughters them in a variety of gruesome ways, none of which are particularly inventive. It’s not scary, it’s not clever, and it’s not funny. So what was the point of it? Review 1: Review 2: "I liked the movie. My friend liked it too. " "I hated it. Would not recommend. " First, we need to create a vocabulary that maps each word to an id that uniquely identifies it. 
Each of these numbers will be used as the index in a list, so they must start at zero and grow by one for each word in the vocabulary. For example, one possible vocabulary that encodes the previous reviews is: {'would': 0, 'hated': 1, 58 Implementing Text Classification Using Perceptron and LR 'my': 2, 'liked': 3, 'not': 4, 'it': 5, 'movie': 6, 'recommend': 7, 'the': 8, 'I': 9, 'too': 10, 'friend': 11} Using this mapping, we can encode the two reviews as follows: Review1: [0,0,1,2,0,1,1,0,1,1,1,1] Review2: [1,1,0,0,1,1,0,1,0,1,0,0] Note that the word liked (fourth position) in the first review has a value of two. This is because this word appears twice in that review. This is a small example with a vocabulary of only 12 terms. Of course, the same process needs to be implemented for our whole training dataset. For this purpose we will use scikit-learn’s CountVectorizer class.6 Using the CountVectorizer class simplifies things, allowing us to get started quickly with a bag-of-words approach. However, note that it makes several simplifying assumptions (e.g., text is lowercased, and punctuation and single character tokens are removed). Some of these may not be adequate to other tasks. First, we need to obtain the filenames for the reviews in the training set: Once we have acquired the filenames for the training reviews, we need
to read them using the CountVectorizer. In order for the CountVectorizer to open and read the files for us, we make use of the input='filename' constructor parameter (otherwise it would expect the string content directly). The CountVectorizer provides three methods that will be use-
ful for us: a method called fit() that is used to acquire the vocabulary,
a method transform() that converts the text into the bag-of-words representation, and a method fit_transform() that conveniently acquires the vocabulary and transforms the data in a single step. The resulting object is referred to as a document-term matrix, where each row corre- 6 https://scikitlearn.org/stable/modules/generated/sklearn.feature_ extraction.text.CountVectorizer.html 4.1 Binary Classification 59 sponds to a document, and each column corresponds to a term in the vocabulary. As the output above indicates, the resulting matrix has 25,000 rows (one for each review), and 74,849 columns (one for each term). Also you may note that this matrix is sparse, with 3,445,861 stored elements. A regular matrix of shape 25,000×74,849 would have 1,871,225,000 elements. However, most of the elements in a document-term matrix are zeros because only a few words from the vocabulary appear in each document. A sparse matrix takes advantage of this fact by storing only the non-zero cells in order to reduce the memory required to store it. Thus, sparse matrices are convenient, especially when dealing with lots of data. Nevertheless, to simplify the downstream code in this example, we will convert it into a dense matrix, i.e., a regular two-dimensional NumPy array. Finally, we also need the labels of the reviews. We assign a label of one to positive reviews, and a label of zero to negative ones. Note that the first half of the reviews are positive and the second half are negative. The label at the ith position of the y_train array corresponds to the review encoded in the ith row of the X_train matrix. 4.1.3 Perceptron Now that we have defined our task and the data processing pipeline, we will implement a perceptron classifier that classifies the movie reviews as positive or negative. The entire code discussed in this section is available in the chap4_perceptron notebook. Recall from Section 2.4 that the perceptron is composed of a weight vector w and a bias term b. These will be represented as a NumPy array w of the same length as our document vectors, and a variable b for the bias term. Both will be initialized with zeros. The parameters w and b are learned through the following algorithm, which implements Algorithm 2 from Chapter 2: There are a couple of details to point out. Line 3 of Algorithm 2 indicates that we need to repeat the training loop until convergence. Theoretically, convergence is defined as predicting all training examples correctly. This is an ambitious requirement, which is not always possible in practice, so in this code we also include a stop condition if we reach a maximum number of epochs. Another crucial difference between our implementation here and the theoretical Algorithm 2, is that we randomize the order in which the training examples are seen at the beginning of 60 Implementing Text Classification Using Perceptron and LR each epoch. This simple (but highly recommended!) change is necessary to avoid the introduction of spurious biases due to the arbitrary order of the examples in the original training partition.7 We accomplish this by storing the indices corresponding to the X_train matrix rows in a NumPy array, and shuffling these indices at the beginning of each epoch. We shuffle the indices instead of the examples so that we can preserve the mapping between examples and labels. The training loop aligns closely with Algorithm 2. 
We start by iterating over each example in our training data, storing the current example in the variable x,8 and its corresponding label in the variable y_true. Next, we compute the perceptron decision function shown in Algorithm 1. Note that NumPy (as well as PyTorch) uses Python’s @ operator to indicate vector or matrix multiplication, depending on its operand types. Here we use it to calculate the dot product of the example x and the weights w. To this we add the bias b to obtain the predicted score, whose sign is used to assign a positive or negative predicted label. If the prediction is correct, then no update is needed, and we can move on to the next training example. However, if the prediction is incorrect, then we need to adjust w and b, as described in Algorithm 2. Sidebar 4.1 The tqdm function This is our first exposure to the tqdm function. tqdm is a progress bar that “make your loops show a smart progress meter.”9 The name tqdm comes from the Arabic word taqaddum which can mean “progress.” Using tqdm is as simple as wrapping it around the collection to be traversed. After training, we evaluate the model’s performance on the heldout test partition. The test data is loaded similarly to the training partition, but with one notable difference; we use CountVectorizer’s transform() method instead of the fit_transform() method so that the vocabulary is not adjusted for the test data. We won’t show here the loading of the test partition since it is so similar to the code already shown, but it is available in the Jupyter notebook that accompanies this section. . 7   As an extreme example, consider a dataset where all the positive examples appear first in the training partition. This would cause the perceptron to artificially inflate the weights of the features that occur in these examples, a situation from which the learning algorithm may struggle to recover. 
 . 8  We use typewriter font when we discuss variables in the code, to distinguish code from the theoretical discussion in the other chapters. 
 9 https://github.com/tqdm/tqdm 4.1 Binary Classification 61 Using the model to assign labels to all the test data is easily done in one step – we simply multiply the entire test data document-term matrix by the previously learned weights and add the bias. Scores greater than zero indicate a positive review, and those less than zero are negative. At this point we can evaluate the classifier’s performance, which we will do using precision, recall, and F1 scores for binary classification (described in Section 2.3). For this purpose, we implement a function called binary_classification_report that computes these metrics and returns them as a dictionary: We call this function to compare the predicted labels to the true labels, and obtain the evaluation scores. Our F1 score here is 86.8%, which is much higher than the baseline that assigns labels randomly, which yields an F1 score of about 50%. This is a good result, especially considering the simplicity of the perceptron! In the next sections and chapters, we will discuss a battery of strategies to considerably improve this performance. 4.1.4 Binary Logistic Regression from Scratch Using the same task, dataset, and evaluation, we will now implement a logistic regression classifier, as described in Algorithm 5 from Chapter 3. To give the reader hands-on experience with the implementation of the gradient calculations for logistic regression, we start by implementing it from scratch using NumPy. All the code shown in this section is available in the chap4_logistic_regression_numpy notebook. In the perceptron implementation, we represented the weights and the bias as two different variables. Here, however, we will use a different approach that will allow us to unify them into a single vector variable. Specifically, we take advantage of the similarity between the derivative of the cost function with respect to the weights (Equation 3.14) and the derivative of the cost with respect to the bias (Equation 3.15). d Ci(w, b) = (σi − yi)xij (3.14 revisited) dwj d Ci(w, b) = σi − yi (3.15 revisited) db Note that the two derivative formulas are identical except that the former has a multiplication by xij, while the latter does not. However, 62 Implementing Text Classification Using Perceptron and LR since σi − yi = (σi − yi)1 we can multiply the derivative of the cost with respect to the bias by one without changing the semantics. This gives an opportunity for combining the computations, doing them both in a single pass. The idea is that we can treat the bias as a weight corresponding to a feature that always has a value of one. As can be seen above, we created a NumPy array of ones of the same length as the number of examples in our training set (i.e., the number of rows in the data matrix). Then we add this array as a new column to the data matrix, using NumPy’s column_stack function. Next, we need to initialize our model. This time we will use a single NumPy array w of the same length as the number of columns in the data matrix. The weight vector w is initialized randomly with values between 0 and 1: Before implementing the learning algorithm, we need an implementation of the logistic function. 
Recall that the logistic function is σ(x) = 1 (3.1 revisited) 1+e−x This function can be easily implemented in NumPy as follows: However, this naive implementation may produce the following warning during training: The term overflow indicates that the result of evaluating exp(-x) is a number so large that it can’t be represented by a float (specifically, we’re using float64 numbers). We will avoid this issue by not calling exp with values that will overflow. NumPy provides the function finfo that can be consulted to find the limits of floating point numbers: The log of the largest floating point number is the largest number for which exp() will not overflow, so we will use it as a threshold to filter out problematic values: We now have everything we need to implement Algorithm 4. The steps to follow for each example are: (1) use the model to make a prediction, (2) calculate the gradient of the loss function with respect to the model parameters, and (3) update the model parameters using the gradient. The size of the update is controlled by the learning rate. Once the model has been trained, we evaluate it on the test dataset using our binary_classification_report function from the previous section. Loading and preprocessing the test dataset follows the same 4.1 Binary Classification 63 steps as with the previous classifier. We omit the code for brevity. These are the results: The performance is comparable with that of the perceptron. The difference in F1 scores between the two classifiers (84.9% here vs. 86.8% for the perceptron) is not significant. Classifier parity is probably attributable to the fact that the signal distinguishing the two classes being easy to learn and the simpler perceptron training algorithm being sufficient in this case. Nevertheless, this task is useful in showing how to implement the logistic regression model from scratch, i.e., by implementing the gradient calculation and parameter updates manually. Next, we will implement the same model again using PyTorch, highlighting how this machine learning library simplifies the process. 4.1.5 Binary Logistic Regression Utilizing PyTorch While it is fairly straightforward to compute the derivatives for logistic regression and implement then directly in NumPy, this will not scale well to arbitrary neural architectures. Fortunately, there are libraries that automate the computation of the derivatives of the cost function (assuming it is differentiable!) for any neural network, and use the resulting gradients to perform gradient descent or other more sophisticated optimization procedures. To this end, we will use the PyTorch deep learning library10. The corresponding notebook for this section is chap4_logistic_regression_pytorch_bce. Our model for logistic regression corresponds to PyTorch’s Linear layer. When we instantiate this layer, we specify the size of the inputs (the size of our vocabulary) and the size of the output, i.e., the number of output neurons (which is one because we’re doing binary classification). The loss function we use is the binary cross-entropy loss (see Chapter 3), which is implemented as BCEWithLogitsLoss in PyTorch. In PyTorch, the gradients obtained from the loss function are applied to the model by an optimizer object, which implements and applies an optimization algorithm. Here we will use the vanilla stochastic gradient descent optimizer; we set its learning rate to 0.1. This is equivalent to the discussion in Section 3.2. 
Similarly to the manual implementation, the steps required to train the model for a given training example are: (1) ensure the gradients are set to zeros, (2) apply the model to obtain a prediction, (3) calculate 10 https://pytorch.org/ 64 Implementing Text Classification Using Perceptron and LR the loss, (4) compute the gradient of the loss by back-propagation, and (5) update the model parameters. Recall that in our previous implementation everything was hardcoded: applying the model, computing the gradients, and optimizing the model parameters. Here, however, the implementation of the logistic regression is expressed at a higher level of abstraction. This means that we are describing the logical steps without specifying a particular implementation. Instead, implementation details are the responsability of the chosen model, loss function, and optimizer. Thus, we could even choose a different model, loss function, and/or optimizer, and use the same training steps with little or no modification. This decoupling of the training logic from the implementation details is one of the main advantages of libraries such as PyTorch. As shown in the code above, calling the model as a function, with the feature vectors as inputs, produces the predicted scores. Once again, a positive score corresponds to a positive label. When we evaluate this implementation on the test dataset, we obtain results that are in line with our previous models: Writing the perceptron and the logistic regression from scratch is a good exercise, as it exposes us to the fundamentals of implementing machine learning algorithms. However, this becomes cumbersome for more complex neural architectures. For this reason, from this point on, we will use PyTorch for all our coding examples. 4.2 Multiclass Classification So far, in this chapter we have discussed implementing binary classifiers. Next, we will modify these binary classifiers to perform multiclass classification, following the discussion in Section 3.5. 4.2.1 AG News Dataset Before explaining the actual training/testing code, we have to choose a new dataset that is suitable for multiclass classification. To this end, we will use the AG News Classification Dataset (Zhang et al., 2015), a subset of the larger AG corpus of news articles collected from thousands of different news sources.11 The classification dataset consists of four 11 http://groups.di.unipi.it/~gulli/AG_corpus_of_news_articles.html 4.2 Multiclass Classification 65 classes, and the data is equally balanced across all classes (30,000 articles per class for train, and 1,900 articles per class for testing). The goal of the task is to classify each article as one of the four classes: World, Sports, Business, or Sci/Tech. 4.2.2 Preparing the Dataset The AG News Dataset is distributed as two CSV files (one for training and one for testing), each containing three columns: the class index, the title, and the description. The dataset also provides a text file that maps the above class indexes to more descriptive class labels. Because of the tabular nature of the dataset, pandas, a Python library
for tabular data analysis,12 is a natural choice for loading and transforming it. To this end, our Jupyter notebook (chap4_multiclass_logistic_regression) demonstrates the sequence of steps required to handle the data, as well as model training and evaluation.
12 https://pandas.pydata.org
First, we show how to load the CSV, add column names, and inspect the result:
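The corresponding notebook cell (the full notebook source is included later in this document) is essentially the following; the comments are added here for readability:

import pandas as pd

# the CSV has no header row, so we name the columns ourselves
train_df = pd.read_csv('data/ag_news_csv/train.csv', header=None)
train_df.columns = ['class index', 'title', 'description']
train_df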
[Output: the first and last rows of the training dataframe, 120,000 rows × 3 columns (class index, title, description).]
Since the class labels themselves are in a separate file, we manually add them to the pandas data structure (called dataframe in pandas’ terminology) to increase the interpretability of the data. We use the class index column as a starting point, and use its map method to create a new column with the corresponding labels (technically a new Series object) that is added to the dataframe using its insert method, which allows us to insert the column in a specific position. Note that the label indices are one-based, so we subtract one to align them with their labels.
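Again following the notebook source included later in this document, this step amounts to:

# read the label names, one per line
labels = open('data/ag_news_csv/classes.txt').read().splitlines()
# map one-based class indices to their label names
classes = train_df['class index'].map(lambda i: labels[i-1])
# insert the new column right after the class index column
train_df.insert(1, 'class', classes)
train_df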
[Output: the dataframe with the new class column, now 120,000 rows × 4 columns (class index, class, title, description).]
Next we will preprocess the text. First we lowercase the title and description, and then we concatenate them into a single string. Then we remove some spurious backslashes from the text. Once this is done, the preprocessed text is added to the dataframe as a new column. Note that pandas allows these steps to be applied to all rows simultaneously.
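Mirroring the notebook source included later in this document, the preprocessing step is:

# lowercase the title and description and concatenate them
title = train_df['title'].str.lower()
descr = train_df['description'].str.lower()
text = title + " " + descr
# remove the spurious backslashes (a literal replacement, not a regex)
train_df['text'] = text.str.replace('\\', ' ', regex=False)
train_df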
[Output: the dataframe with the new text column, now 120,000 rows × 5 columns (class index, class, title, description, text).]
At this point, the text is ready to be tokenized. For this purpose we will use NLTK’s word_tokenize function. This function can be applied to the whole column at once using the pandas map function, which returns a new column that we add to the dataframe. However, here we actually use the progress_map function, which provides a visual progress bar. This visual feedback is especially helpful for tasks that take more time to complete.
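The tokenization cell in the notebook included later in this document is essentially:

from nltk.tokenize import word_tokenize
from tqdm.notebook import tqdm

tqdm.pandas()  # registers progress_map/progress_apply on pandas objects

# tokenize every row of the text column, showing a progress bar
train_df['tokens'] = train_df['text'].progress_map(word_tokenize)
train_df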
[Output: the dataframe with the new tokens column, now 120,000 rows × 6 columns (class index, class, title, description, text, tokens).]
From the tokens we just created, we then create a vocabulary for our corpus. Here, we only keep the words that occur at least 10 times, decreasing the memory needed and reducing the likelihood that our vocabulary contains noisy tokens. Note that each row in the tokens column contains a list of tokens. In order to create the vocabulary, we will need to convert the Series of lists of tokens into a Series of tokens using the explode() pandas method. Then we will use the value_counts() method to create a Series object in which the index contains the tokens and the values are the number of times they appear in the corpus. The next step is removing the tokens with a count lower than our chosen threshold. Finally, we create a list with the remaining tokens, as well as a dictionary that maps tokens to token ids (i.e., the index of the token in the list). We include in the vocabulary a special token [UNK] that will be used as a placeholder for tokens that do not appear in our vocabulary after the frequency pruning.
Using this vocabulary, we construct a feature vector for each news article in the corpus. This feature vector will be encoded as a dictionary, with keys corresponding to token ids, and values corresponding to the number of times the token appears in the article. As above, the feature vectors will be stored as a new column in the dataframe.
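A sketch of these two steps, closely following the notebook source included later in this document (the variable names match the notebook’s):

threshold = 10

# corpus-level token counts, pruning rare tokens
tokens = train_df['tokens'].explode().value_counts()
tokens = tokens[tokens > threshold]

# index 0 is reserved for the unknown-token placeholder
id_to_token = ['[UNK]'] + tokens.index.tolist()
token_to_id = {w: i for i, w in enumerate(id_to_token)}
vocabulary_size = len(id_to_token)

from collections import defaultdict

def make_feature_vector(tokens, unk_id=0):
    # bag of words: token id -> number of occurrences in the article
    vector = defaultdict(int)
    for t in tokens:
        vector[token_to_id.get(t, unk_id)] += 1
    return vector

train_df['features'] = train_df['tokens'].progress_map(make_feature_vector)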
[Output: the dataframe with the new features column, now 120,000 rows × 7 columns (class index, class, title, description, text, tokens, features).]
The final preprocessing step is converting the features and the class indices into PyTorch tensors. Recall that we need to subtract one from the class indices to make them zero-based. At this point, the data is fully processed and we are ready to begin training.
4.2.3 Multiclass Logistic Regression Using PyTorch
The model itself is a single linear layer whose input size corresponds to the size of our vocabulary, and its output size corresponds to the number of classes in our corpus. PyTorch’s Linear layer includes a bias by default, so there is no need to handle that manually the way we did for our perceptron example. The code for training this model (which implements Algorithm 6) is almost identical to that of the binary logistic regression. However, since we have to calculate a score for each of the four different classes, we need to replace the previous BCEWithLogitsLoss with CrossEntropyLoss, which applies a softmax over the scores to obtain probabilities for each class. For each example, the model predicts 4 scores – one for each label. The label with the highest score is selected using the argmax function. We evaluate the predictions of our model for each class using scikit-learn’s classification_report, which handles the results of multiclass classification.
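To make this concrete, here is a compact sketch in the spirit of the notebook included later in this document. The tensor conversion and the model, loss, and optimizer follow that notebook (including its learning rate of 1.0); the training loop is elided because it matches the binary case shown earlier, and X_test and y_test are assumed to have been built from the test CSV with the same preprocessing.

import numpy as np
import torch
from torch import nn, optim

# expand each sparse feature dictionary into a dense count vector
def make_dense(feats):
    x = np.zeros(vocabulary_size)
    for k, v in feats.items():
        x[k] = v
    return x

X_train = torch.tensor(np.stack(train_df['features'].map(make_dense)), dtype=torch.float32)
y_train = torch.tensor(train_df['class index'].to_numpy() - 1)  # zero-based class ids

# one output neuron per class; CrossEntropyLoss applies the softmax internally
model = nn.Linear(vocabulary_size, len(labels))
loss_func = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=1.0)

# ... per-example training loop as in the binary case ...

# at evaluation time, pick the highest-scoring label for each article
from sklearn.metrics import classification_report
with torch.no_grad():
    y_pred = torch.argmax(model(X_test), dim=1)
print(classification_report(y_test, y_pred.numpy(), target_names=labels))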
4.3 Summary
In this chapter, we used movie review and news article classification to illustrate the implementation of the previously described algorithms for the binary perceptron, binary logistic regression, and multiclass logistic regression. For the binary logistic regression, we made a direct comparison between the lower-level NumPy implementation and a higher-level version that made use of PyTorch. We hope that through this series of exercises the reader has noted several key takeaways.
First, data preparation is important and should be done thoughtfully. Certain tasks (e.g., text normalization or sentence splitting) are going to be frequently needed if you continue with NLP, so using or creating generic functions can be very helpful. However, what works for one dataset and one language may not be suitable for another scenario. For example, in our case, we selected different tokenizers for each of our tasks to account for the different registers of English, and removed diacritics during normalization.
Second, when it comes to implementing machine learning algorithms, it is often easier to use a higher-level library such as PyTorch instead of NumPy.
For example, with the former, the gradients are calculated by the library, whereas in NumPy we have to code them ourselves. This becomes cumbersome quickly. For example, even the derivative of the softmax is non-trivial. Third, PyTorch imposes a training structure that remains largely the same, regardless of what models are being trained. That is, at a high level, the same steps are always required: clearing the current gradients, predicting output scores for the provided inputs, calculating the loss, and optimizing. These features make PyTorch a very powerful and convenient deep learning library; we will continue to use it throughout the remainder of the book to implement more complex neural architectures.
<![CDATA[
#!/usr/bin/env python
# coding: utf-8

# # Multiclass Text Classification with
# # Logistic Regression Implemented with PyTorch and CE Loss

# First, we will do some initialization.

# In[1]:

import random
import torch
import numpy as np
import pandas as pd
from tqdm.notebook import tqdm

# enable tqdm in pandas
tqdm.pandas()

# set to True to use the gpu (if there is one available)
use_gpu = True

# select device
device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu')
print(f'device: {device.type}')

# random seed
seed = 1234

# set random seed
if seed is not None:
    print(f'random seed: {seed}')
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)

# We will be using the AG's News Topic Classification Dataset.
# It is stored in two CSV files: `train.csv` and `test.csv`, as well as a `classes.txt` that stores the labels of the classes to predict.
#
# First, we will load the training dataset using [pandas](https://pandas.pydata.org/) and take a quick look at how the data.

# In[2]:

train_df = pd.read_csv('data/ag_news_csv/train.csv', header=None)
train_df.columns = ['class index', 'title', 'description']
train_df

# The dataset consists of 120,000 examples, each consisting of a class index, a title, and a description.
# The class labels are distributed in a separated file. We will add the labels to the dataset so that we can interpret the data more easily. Note that the label indexes are one-based, so we need to subtract one to retrieve them from the list.

# In[3]:

labels = open('data/ag_news_csv/classes.txt').read().splitlines()
classes = train_df['class index'].map(lambda i: labels[i-1])
train_df.insert(1, 'class', classes)
train_df

# Let's inspect how balanced our examples are by using a bar plot.

# In[4]:

pd.value_counts(train_df['class']).plot.bar()

# The classes are evenly distributed. That's great!
#
# However, the text contains some spurious backslashes in some parts of the text.
# They are meant to represent newlines in the original text.
# An example can be seen below, between the words "dwindling" and "band".

# In[5]:

print(train_df.loc[0, 'description'])

# We will replace the backslashes with spaces on the whole column using pandas replace method.

# In[6]:

title = train_df['title'].str.lower()
descr = train_df['description'].str.lower()
text = title + " " + descr
train_df['text'] = text.str.replace('\\', ' ', regex=False)
train_df

# Now we will proceed to tokenize the title and description columns using NLTK's word_tokenize().
# We will add a new column to our dataframe with the list of tokens.

# In[7]:

from nltk.tokenize import word_tokenize
train_df['tokens'] = train_df['text'].progress_map(word_tokenize)
train_df

# Now we will create a vocabulary from the training data. We will only keep the terms that repeat beyond some threshold established below.

# In[8]:

threshold = 10
tokens = train_df['tokens'].explode().value_counts()
tokens = tokens[tokens > threshold]
id_to_token = ['[UNK]'] + tokens.index.tolist()
token_to_id = {w:i for i,w in enumerate(id_to_token)}
vocabulary_size = len(id_to_token)
print(f'vocabulary size: {vocabulary_size:,}')

# In[9]:

from collections import defaultdict

def make_feature_vector(tokens, unk_id=0):
    vector = defaultdict(int)
    for t in tokens:
        i = token_to_id.get(t, unk_id)
        vector[i] += 1
    return vector

train_df['features'] = train_df['tokens'].progress_map(make_feature_vector)
train_df

# In[10]:

def make_dense(feats):
    x = np.zeros(vocabulary_size)
    for k,v in feats.items():
        x[k] = v
    return x

X_train = np.stack(train_df['features'].progress_map(make_dense))
y_train = train_df['class index'].to_numpy() - 1
X_train = torch.tensor(X_train, dtype=torch.float32)
y_train = torch.tensor(y_train)

# In[11]:

from torch import nn
from torch import optim

# hyperparameters
lr = 1.0
n_epochs = 5
n_examples = X_train.shape[0]
n_feats = X_train.shape[1]
n_classes = len(labels)

# initialize the model, loss function, optimizer, and data-loader
model = nn.Linear(n_feats, n_classes).to(device)
loss_func = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=lr)

# train the model
indices = np.arange(n_examples)
for epoch in range(n_epochs):
    np.random.shuffle(indices)
    for i in tqdm(indices, desc=f'epoch {epoch+1}'):
        # clear gradients
        model.zero_grad()
        # send datum to right device
        x = X_train[i].unsqueeze(0).to(device)
        y_true = y_train[i].unsqueeze(0).to(device)
        # predict label scores
        y_pred = model(x)
        # compute loss
        loss = loss_func(y_pred, y_true)
        # backpropagate
        loss.backward()
        # optimize model parameters
        optimizer.step()

# Next, we evaluate on the test dataset

# In[12]:

# repeat all preprocessing done above, this time on the test set
test_df = pd.read_csv('data/ag_news_csv/test.csv', header=None)
test_df.columns = ['class index', 'title', 'description']
test_df['text'] = test_df['title'].str.lower() + " " + test_df['description'].str.lower()
test_df['text'] = test_df['text'].str.replace('\\', ' ', regex=False)
test_df['tokens'] = test_df['text'].progress_map(word_tokenize)
test_df['features'] = test_df['tokens'].progress_map(make_feature_vector)
X_test = np.stack(test_df['features'].progress_map(make_dense))
y_test = test_df['class index'].to_numpy() - 1
X_test = torch.tensor(X_test, dtype=torch.float32)
y_test = torch.tensor(y_test)

# In[13]:

from sklearn.metrics import classification_report

# set model to evaluation mode
model.eval()

# don't store gradients
with torch.no_grad():
    X_test = X_test.to(device)
    y_pred = torch.argmax(model(X_test), dim=1)
    y_pred = y_pred.cpu().numpy()
    print(classification_report(y_test, y_pred, target_names=labels))
]]>
#!/usr/bin/env python
# coding: utf-8

# # Binary Text Classification with
# # Logistic Regression Implemented from Scratch

# In[1]:

import random
import numpy as np
from tqdm.notebook import tqdm

# set this variable to a number to be used as the random seed
# or to None if you don't want to set a random seed
seed = 1234

if seed is not None:
    random.seed(seed)
    np.random.seed(seed)

# The dataset is divided in two directories called `train` and `test`.
# These directories contain the training and testing splits of the dataset.

# In[2]:

get_ipython().system('ls -lh data/aclImdb/')

# Both the `train` and `test` directories contain two directories called `pos` and `neg`
# that contain text files with the positive and negative reviews, respectively.

# In[3]:

get_ipython().system('ls -lh data/aclImdb/train/')

# We will now read the filenames of the positive and negative examples.

# In[4]:

from glob import glob

pos_files = glob('data/aclImdb/train/pos/*.txt')
neg_files = glob('data/aclImdb/train/neg/*.txt')

print('number of positive reviews:', len(pos_files))
print('number of negative reviews:', len(neg_files))

# Now, we will use a [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html)
# to read the text files, tokenize them, acquire a vocabulary from the training data,
# and encode it in a document-term matrix in which each row represents a review,
# and each column represents a term in the vocabulary.
# Each element $(i,j)$ in the matrix represents the number of times term $j$ appears in example $i$.

# In[5]:

from sklearn.feature_extraction.text import CountVectorizer

# initialize CountVectorizer indicating that we will give it a list of filenames that have to be read
cv = CountVectorizer(input='filename')

# learn vocabulary and return sparse document-term matrix
doc_term_matrix = cv.fit_transform(pos_files + neg_files)
doc_term_matrix

# Note in the message printed above that the matrix is of shape (25000, 74894).
# In other words, it has 1,871,225,000 elements.
# However, only 3,445,861 elements were stored.
# This is because most of the elements in the matrix are zeros.
# The reason is that the reviews are short and most words in the English language don't appear in each review.
# A matrix that only stores non-zero values is called *sparse*.
#
# Now we will convert it to a dense numpy array:

# In[6]:

X_train = doc_term_matrix.toarray()
X_train.shape

# In[7]:

# Append 1s to the xs; this will allow us to multiply by the weights and
# the bias in a single pass.
# Make an array with a one for each row/data point
ones = np.ones(X_train.shape[0])
# Concatenate these ones to existing feature vectors
X_train = np.column_stack((X_train, ones))
X_train.shape

# We will also create a numpy array with the binary labels for the reviews.
# One indicates a positive review and zero a negative review.
# The label `y_train[i]` corresponds to the review encoded in row `i` of the `X_train` matrix.

# In[8]:

# training labels
y_pos = np.ones(len(pos_files))
y_neg = np.zeros(len(neg_files))
y_train = np.concatenate([y_pos, y_neg])
y_train

# Now we will initialize our model, in the form of an array of weights `w` with one weight per feature
# in our dataset (i.e., the words in the vocabulary acquired by
# [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html),
# plus the extra column of ones that plays the role of the bias term `b`).
# The weights are initialized randomly with values between 0 and 1.

# In[9]:

# initialize model: one weight per feature (the last one plays the role of the bias),
# drawn uniformly at random from [0, 1)
n_examples, n_features = X_train.shape
w = np.random.random(n_features)

# Now we will use the logistic regression learning algorithm to learn the values of `w` from our training data.

# In[10]:

# from scipy.special import expit as sigmoid
def sigmoid(z):
    # guard against overflow in exp() for very negative z
    if -z > np.log(np.finfo(float).max):
        return 0.0
    return 1 / (1 + np.exp(-z))

# In[11]:

lr = 1e-1
n_epochs = 10

indices = np.arange(n_examples)
for epoch in range(n_epochs):
    # randomize the order in which training examples are seen in this epoch
    np.random.shuffle(indices)
    # traverse the training data
    for i in tqdm(indices, desc=f'epoch {epoch+1}'):
        x = X_train[i]
        y = y_train[i]
        # calculate the derivative of the cost function for this example
        deriv_cost = (sigmoid(x @ w) - y) * x
        # update the weights
        w = w - lr * deriv_cost

# The next step is evaluating the model on the test dataset.
# Note that this time we use the [`transform()`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html#sklearn.feature_extraction.text.CountVectorizer.transform)
# method of the [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html),
# instead of the [`fit_transform()`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html#sklearn.feature_extraction.text.CountVectorizer.fit_transform)
# method that we used above. This is because we want to use the learned vocabulary in the test set,
# instead of learning a new one.

# In[12]:

pos_files = glob('data/aclImdb/test/pos/*.txt')
neg_files = glob('data/aclImdb/test/neg/*.txt')
doc_term_matrix = cv.transform(pos_files + neg_files)
X_test = doc_term_matrix.toarray()
X_test = np.column_stack((X_test, np.ones(X_test.shape[0])))
y_pos = np.ones(len(pos_files))
y_neg = np.zeros(len(neg_files))
y_test = np.concatenate([y_pos, y_neg])

# Using the model is easy: multiply the document-term matrix by the learned weights
# (the bias is the last weight, matched by the appended column of ones).
# We use Python's `@` operator to perform the matrix-vector multiplication.

# In[13]:

y_pred = X_test @ w > 0

# Now we print an evaluation of the prediction results using a `binary_classification_report()`
# function modeled after scikit-learn's [`classification_report()`](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.classification_report.html).

# In[14]:

def binary_classification_report(y_true, y_pred):
    # count true positives, false positives, true negatives, and false negatives
    tp = fp = tn = fn = 0
    for gold, pred in zip(y_true, y_pred):
        if pred == True:
            if gold == True:
                tp += 1
            else:
                fp += 1
        else:
            if gold == False:
                tn += 1
            else:
                fn += 1
    # calculate precision and recall
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    # calculate f1 score
    fscore = 2 * precision * recall / (precision + recall)
    # calculate accuracy
    accuracy = (tp + tn) / len(y_true)
    # number of positive labels in y_true
    support = sum(y_true)
    return {
        "precision": precision,
        "recall": recall,
        "f1-score": fscore,
        "support": support,
        "accuracy": accuracy,
    }

# In[15]:

binary_classification_report(y_test, y_pred)
Each of these numbers will be used as the index in a list, so they must start at zero and grow by one for each word in the vocabulary. For example, one possible vocabulary that encodes the previous reviews is:

{'would': 0, 'hated': 1, 'my': 2, 'liked': 3, 'not': 4, 'it': 5, 'movie': 6, 'recommend': 7, 'the': 8, 'I': 9, 'too': 10, 'friend': 11}

Using this mapping, we can encode the two reviews as follows:

Review 1: [0, 0, 1, 2, 0, 1, 1, 0, 1, 1, 1, 1]
Review 2: [1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0]

Note that the word liked (fourth position) in the first review has a value of two. This is because this word appears twice in that review. This is a small example with a vocabulary of only 12 terms. Of course, the same process needs to be implemented for our whole training dataset. For this purpose we will use scikit-learn's CountVectorizer class.6 Using the CountVectorizer class simplifies things, allowing us to get started quickly with a bag-of-words approach. However, note that it makes several simplifying assumptions (e.g., text is lowercased, and punctuation and single-character tokens are removed). Some of these may not be adequate for other tasks. First, we need to obtain the filenames for the reviews in the training set: Once we have acquired the filenames for the training reviews, we need
to read them using the CountVectorizer. In order for the CountVectorizer to open and read the files for us, we make use of the input='filename' constructor parameter (otherwise it would expect the string content directly). The CountVectorizer provides three methods that will be useful for us: a method called fit() that is used to acquire the vocabulary,
a method transform() that converts the text into the bag-of-words representation, and a method fit_transform() that conveniently acquires the vocabulary and transforms the data in a single step. The resulting object is referred to as a document-term matrix, where each row corresponds to a document, and each column corresponds to a term in the vocabulary.
6 https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html
As the output above indicates, the resulting matrix has 25,000 rows (one for each review), and 74,849 columns (one for each term). Also you may note that this matrix is sparse, with 3,445,861 stored elements. A regular matrix of shape 25,000×74,849 would have 1,871,225,000 elements. However, most of the elements in a document-term matrix are zeros because only a few words from the vocabulary appear in each document. A sparse matrix takes advantage of this fact by storing only the non-zero cells in order to reduce the memory required to store it. Thus, sparse matrices are convenient, especially when dealing with lots of data. Nevertheless, to simplify the downstream code in this example, we will convert it into a dense matrix, i.e., a regular two-dimensional NumPy array. Finally, we also need the labels of the reviews. We assign a label of one to positive reviews, and a label of zero to negative ones. Note that the first half of the reviews are positive and the second half are negative. The label at the ith position of the y_train array corresponds to the review encoded in the ith row of the X_train matrix.

4.1.3 Perceptron

Now that we have defined our task and the data processing pipeline, we will implement a perceptron classifier that classifies the movie reviews as positive or negative. The entire code discussed in this section is available in the chap4_perceptron notebook. Recall from Section 2.4 that the perceptron is composed of a weight vector w and a bias term b. These will be represented as a NumPy array w of the same length as our document vectors, and a variable b for the bias term. Both will be initialized with zeros. The parameters w and b are learned through the following algorithm, which implements Algorithm 2 from Chapter 2: There are a couple of details to point out. Line 3 of Algorithm 2 indicates that we need to repeat the training loop until convergence. Theoretically, convergence is defined as predicting all training examples correctly. This is an ambitious requirement, which is not always possible in practice, so in this code we also include a stop condition if we reach a maximum number of epochs. Another crucial difference between our implementation here and the theoretical Algorithm 2, is that we randomize the order in which the training examples are seen at the beginning of each epoch. This simple (but highly recommended!) change is necessary to avoid the introduction of spurious biases due to the arbitrary order of the examples in the original training partition.7 We accomplish this by storing the indices corresponding to the X_train matrix rows in a NumPy array, and shuffling these indices at the beginning of each epoch. We shuffle the indices instead of the examples so that we can preserve the mapping between examples and labels. The training loop aligns closely with Algorithm 2.
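For concreteness, here is a condensed sketch of that loop, distilled from the chap4_perceptron notebook reproduced later in this section; it assumes the X_train matrix, the y_train labels, and the zero-initialized w and b described above.

import numpy as np
from tqdm.notebook import tqdm

n_epochs = 10  # upper bound in case the data is not linearly separable
indices = np.arange(X_train.shape[0])

for epoch in range(n_epochs):
    n_errors = 0
    # randomize the order in which training examples are seen in this epoch
    np.random.shuffle(indices)
    for i in tqdm(indices, desc=f'epoch {epoch+1}'):
        x, y_true = X_train[i], y_train[i]
        # perceptron decision function: sign of the score x @ w + b
        y_pred = 1 if x @ w + b > 0 else 0
        if y_pred == y_true:
            continue
        # wrong prediction: nudge the decision boundary toward (or away from) x
        if y_true == 1:
            w, b = w + x, b + 1
        else:
            w, b = w - x, b - 1
        n_errors += 1
    if n_errors == 0:
        break  # convergence: an entire epoch with no mistakes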
We start by iterating over each example in our training data, storing the current example in the variable x,8 and its corresponding label in the variable y_true. Next, we compute the perceptron decision function shown in Algorithm 1. Note that NumPy (as well as PyTorch) uses Python’s @ operator to indicate vector or matrix multiplication, depending on its operand types. Here we use it to calculate the dot product of the example x and the weights w. To this we add the bias b to obtain the predicted score, whose sign is used to assign a positive or negative predicted label. If the prediction is correct, then no update is needed, and we can move on to the next training example. However, if the prediction is incorrect, then we need to adjust w and b, as described in Algorithm 2. Sidebar 4.1 The tqdm function This is our first exposure to the tqdm function. tqdm is a progress bar that “make your loops show a smart progress meter.”9 The name tqdm comes from the Arabic word taqaddum which can mean “progress.” Using tqdm is as simple as wrapping it around the collection to be traversed. After training, we evaluate the model’s performance on the heldout test partition. The test data is loaded similarly to the training partition, but with one notable difference; we use CountVectorizer’s transform() method instead of the fit_transform() method so that the vocabulary is not adjusted for the test data. We won’t show here the loading of the test partition since it is so similar to the code already shown, but it is available in the Jupyter notebook that accompanies this section. . 7   As an extreme example, consider a dataset where all the positive examples appear first in the training partition. This would cause the perceptron to artificially inflate the weights of the features that occur in these examples, a situation from which the learning algorithm may struggle to recover. 
 . 8  We use typewriter font when we discuss variables in the code, to distinguish code from the theoretical discussion in the other chapters. 
9 https://github.com/tqdm/tqdm
Using the model to assign labels to all the test data is easily done in one step – we simply multiply the entire test data document-term matrix by the previously learned weights and add the bias. Scores greater than zero indicate a positive review, and those less than zero are negative. At this point we can evaluate the classifier's performance, which we will do using precision, recall, and F1 scores for binary classification (described in Section 2.3). For this purpose, we implement a function called binary_classification_report that computes these metrics and returns them as a dictionary: We call this function to compare the predicted labels to the true labels, and obtain the evaluation scores. Our F1 score here is 86.8%, which is much higher than the baseline that assigns labels randomly, which yields an F1 score of about 50%. This is a good result, especially considering the simplicity of the perceptron! In the next sections and chapters, we will discuss a battery of strategies to considerably improve this performance.

4.1.4 Binary Logistic Regression from Scratch

Using the same task, dataset, and evaluation, we will now implement a logistic regression classifier, as described in Algorithm 5 from Chapter 3. To give the reader hands-on experience with the implementation of the gradient calculations for logistic regression, we start by implementing it from scratch using NumPy. All the code shown in this section is available in the chap4_logistic_regression_numpy notebook. In the perceptron implementation, we represented the weights and the bias as two different variables. Here, however, we will use a different approach that will allow us to unify them into a single vector variable. Specifically, we take advantage of the similarity between the derivative of the cost function with respect to the weights (Equation 3.14) and the derivative of the cost with respect to the bias (Equation 3.15).

$\frac{d}{dw_j} C_i(\mathbf{w}, b) = (\sigma_i - y_i) x_{ij}$   (3.14 revisited)

$\frac{d}{db} C_i(\mathbf{w}, b) = \sigma_i - y_i$   (3.15 revisited)

Note that the two derivative formulas are identical except that the former has a multiplication by $x_{ij}$, while the latter does not. However, since $\sigma_i - y_i = (\sigma_i - y_i) \cdot 1$, we can multiply the derivative of the cost with respect to the bias by one without changing the semantics. This gives an opportunity for combining the computations, doing them both in a single pass. The idea is that we can treat the bias as a weight corresponding to a feature that always has a value of one. As can be seen above, we created a NumPy array of ones of the same length as the number of examples in our training set (i.e., the number of rows in the data matrix). Then we add this array as a new column to the data matrix, using NumPy's column_stack function. Next, we need to initialize our model. This time we will use a single NumPy array w of the same length as the number of columns in the data matrix. The weight vector w is initialized randomly with values between 0 and 1: Before implementing the learning algorithm, we need an implementation of the logistic function.
Recall that the logistic function is

$\sigma(x) = \frac{1}{1 + e^{-x}}$   (3.1 revisited)

This function can be easily implemented in NumPy as follows: However, this naive implementation may produce the following warning during training: The term overflow indicates that the result of evaluating exp(-x) is a number so large that it can't be represented by a float (specifically, we're using float64 numbers). We will avoid this issue by not calling exp with values that will overflow. NumPy provides the function finfo that can be consulted to find the limits of floating point numbers: The log of the largest floating point number is the largest number for which exp() will not overflow, so we will use it as a threshold to filter out problematic values: We now have everything we need to implement Algorithm 4. The steps to follow for each example are: (1) use the model to make a prediction, (2) calculate the gradient of the loss function with respect to the model parameters, and (3) update the model parameters using the gradient. The size of the update is controlled by the learning rate. Once the model has been trained, we evaluate it on the test dataset using our binary_classification_report function from the previous section. Loading and preprocessing the test dataset follows the same steps as with the previous classifier. We omit the code for brevity. These are the results: The performance is comparable with that of the perceptron. The difference in F1 scores between the two classifiers (84.9% here vs. 86.8% for the perceptron) is not significant. Classifier parity is probably attributable to the fact that the signal distinguishing the two classes is easy to learn, so the simpler perceptron training algorithm is sufficient in this case. Nevertheless, this task is useful in showing how to implement the logistic regression model from scratch, i.e., by implementing the gradient calculation and parameter updates manually. Next, we will implement the same model again using PyTorch, highlighting how this machine learning library simplifies the process.

4.1.5 Binary Logistic Regression Utilizing PyTorch

While it is fairly straightforward to compute the derivatives for logistic regression and implement them directly in NumPy, this will not scale well to arbitrary neural architectures. Fortunately, there are libraries that automate the computation of the derivatives of the cost function (assuming it is differentiable!) for any neural network, and use the resulting gradients to perform gradient descent or other more sophisticated optimization procedures. To this end, we will use the PyTorch deep learning library.10 The corresponding notebook for this section is chap4_logistic_regression_pytorch_bce. Our model for logistic regression corresponds to PyTorch's Linear layer. When we instantiate this layer, we specify the size of the inputs (the size of our vocabulary) and the size of the output, i.e., the number of output neurons (which is one because we're doing binary classification). The loss function we use is the binary cross-entropy loss (see Chapter 3), which is implemented as BCEWithLogitsLoss in PyTorch. In PyTorch, the gradients obtained from the loss function are applied to the model by an optimizer object, which implements and applies an optimization algorithm. Here we will use the vanilla stochastic gradient descent optimizer; we set its learning rate to 0.1. This is equivalent to the discussion in Section 3.2.
10 https://pytorch.org/
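A minimal sketch of this setup, built from the components named above (nn.Linear, BCEWithLogitsLoss, and SGD with a learning rate of 0.1); the variables n_features, x, and y are assumed to come from the same preprocessing as before, with x a float tensor of shape [n_features] and y a float tensor of shape [1]:

import torch
from torch import nn, optim

# one linear layer with a single output neuron produces the logistic regression score
model = nn.Linear(n_features, 1)
loss_func = nn.BCEWithLogitsLoss()   # sigmoid + binary cross-entropy, computed stably
optimizer = optim.SGD(model.parameters(), lr=0.1)

# one training step for a single example (x, y)
optimizer.zero_grad()            # clear gradients left over from the previous step
y_score = model(x)               # raw score (logit) for this example
loss = loss_func(y_score, y)     # binary cross-entropy loss
loss.backward()                  # back-propagate to compute gradients
optimizer.step()                 # update the parameters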
Similarly to the manual implementation, the steps required to train the model for a given training example are: (1) ensure the gradients are set to zeros, (2) apply the model to obtain a prediction, (3) calculate the loss, (4) compute the gradient of the loss by back-propagation, and (5) update the model parameters. Recall that in our previous implementation everything was hardcoded: applying the model, computing the gradients, and optimizing the model parameters. Here, however, the implementation of the logistic regression is expressed at a higher level of abstraction. This means that we are describing the logical steps without specifying a particular implementation. Instead, implementation details are the responsibility of the chosen model, loss function, and optimizer. Thus, we could even choose a different model, loss function, and/or optimizer, and use the same training steps with little or no modification. This decoupling of the training logic from the implementation details is one of the main advantages of libraries such as PyTorch. As shown in the code above, calling the model as a function, with the feature vectors as inputs, produces the predicted scores. Once again, a positive score corresponds to a positive label. When we evaluate this implementation on the test dataset, we obtain results that are in line with our previous models: Writing the perceptron and the logistic regression from scratch is a good exercise, as it exposes us to the fundamentals of implementing machine learning algorithms. However, this becomes cumbersome for more complex neural architectures. For this reason, from this point on, we will use PyTorch for all our coding examples.

4.2 Multiclass Classification

So far, in this chapter we have discussed implementing binary classifiers. Next, we will modify these binary classifiers to perform multiclass classification, following the discussion in Section 3.5.

4.2.1 AG News Dataset

Before explaining the actual training/testing code, we have to choose a new dataset that is suitable for multiclass classification. To this end, we will use the AG News Classification Dataset (Zhang et al., 2015), a subset of the larger AG corpus of news articles collected from thousands of different news sources.11 The classification dataset consists of four classes, and the data is equally balanced across all classes (30,000 articles per class for train, and 1,900 articles per class for testing). The goal of the task is to classify each article as one of the four classes: World, Sports, Business, or Sci/Tech.
11 http://groups.di.unipi.it/~gulli/AG_corpus_of_news_articles.html

4.2.2 Preparing the Dataset

The AG News Dataset is distributed as two CSV files (one for training and one for testing), each containing three columns: the class index, the title, and the description. The dataset also provides a text file that maps the above class indexes to more descriptive class labels. Because of the tabular nature of the dataset, pandas, a Python library
for tabular data analysis,12 is a natural choice for loading and transforming it. To this end, our Jupyter notebook (chap4_multiclass_logistic_regression) demonstrates the sequence of steps required to handle the data, as well as model training and evaluation. First, we show how to load the CSV, add column names, and inspect the result:
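A sketch of this loading step, roughly following the notebook's approach (the file path and the exact column names used here are illustrative assumptions):

import pandas as pd

# the AG News CSVs ship without a header row; the three columns are,
# in order, the class index, the title, and the description
train_df = pd.read_csv(
    'data/ag_news_csv/train.csv',                     # adjust to your local path
    header=None,
    names=['class index', 'title', 'description'],
)
train_df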
[Dataframe output omitted: 120,000 rows × 3 columns (class index, title, description).]

Since the class labels themselves are in a separate file, we manually add them to the pandas data structure (called dataframe in pandas’ terminology) to increase the interpretability of the data. We use the class index column as a starting point, and use its map method to create a new column with the corresponding labels (technically a new Series object) that is added to the dataframe using its insert method, which allows us to insert the column in a specific position. Note that the label indices are one-based, so we subtract one to align them with their labels.
12 https://pandas.pydata.org
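One way to express this mapping, mirroring the description above (the label order matches the one-based class indices of AG News: World, Sports, Business, Sci/Tech; the column names follow the earlier sketch):

# class indices are one-based, so index i maps to labels[i - 1]
labels = ['World', 'Sports', 'Business', 'Sci/Tech']

class_column = train_df['class index'].map(lambda i: labels[i - 1])
# insert the new column right after the class index (position 1)
train_df.insert(1, 'class', class_column)
train_df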
[Dataframe output omitted: 120,000 rows × 4 columns (class index, class, title, description).]

Next we will preprocess the text. First we lowercase the title and description, and then we concatenate them into a single string. Then we remove some spurious backslashes from the text. Once this is done, the preprocessed text is added to the dataframe as a new column. Note that pandas allows these steps to be applied to all rows simultaneously.
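A sketch of these steps with pandas' vectorized string methods (the column names continue the assumptions above):

# lowercase the title and description, join them into one string,
# and replace the spurious backslashes with spaces
title = train_df['title'].str.lower()
description = train_df['description'].str.lower()
train_df['text'] = (title + ' ' + description).str.replace('\\', ' ', regex=False)
train_df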
[Dataframe output omitted: 120,000 rows × 5 columns (class index, class, title, description, text).]

At this point, the text is ready to be tokenized. For this purpose we will use NLTK's word_tokenize function. This function can be applied to the whole column at once using the pandas map function, which returns a new column which we add to the dataframe. However, here we actually use the progress_map function, which provides a visual progress bar. This visual feedback is especially helpful for tasks that take more time to complete.
[Dataframe output omitted: 120,000 rows × 6 columns (class index, class, title, description, text, tokens).]

From the tokens we just created, we then create a vocabulary for our corpus. Here, we only keep the words that occur at least 10 times, decreasing the memory needed and reducing the likelihood that our vocabulary contains noisy tokens. Note that each row in the tokens column contains a list of tokens. In order to create the vocabulary, we will need to convert the Series of lists of tokens into a Series of tokens using the explode() Pandas method. Then we will use the value_counts() method to create a Series object in which the index contains the tokens and the values are the number of times they appear in the corpus. The next step is removing the tokens with a count lower than our chosen threshold. Finally, we create a list with the remaining tokens, as well as a dictionary that maps tokens to token ids (i.e., the index of the token in the list). We include in the vocabulary a special token [UNK] that will be used as a placeholder for tokens that do not appear in our vocabulary after the frequency pruning. Using this vocabulary, we construct a feature vector for each news article in the corpus. This feature vector will be encoded as a dictionary, with keys corresponding to token ids, and values corresponding to the number of times the token appears in the article. As above, the feature vectors will be stored as a new column in the dataframe.
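A sketch of the vocabulary and feature-vector construction described above; the exact token ids will differ from the accompanying notebook (in particular, placing [UNK] at id 0 is an arbitrary choice made here):

from collections import Counter

threshold = 10  # keep only tokens that occur at least this many times

# flatten the lists of tokens into a single Series and count each token
token_counts = train_df['tokens'].explode().value_counts()
token_counts = token_counts[token_counts >= threshold]

# vocabulary list and token-to-id mapping; [UNK] covers pruned or unseen tokens
vocabulary = ['[UNK]'] + token_counts.index.tolist()
token_to_id = {token: i for i, token in enumerate(vocabulary)}
unk_id = token_to_id['[UNK]']

def make_feature_vector(tokens):
    # dictionary from token id to the number of times the token appears
    return dict(Counter(token_to_id.get(t, unk_id) for t in tokens))

train_df['features'] = train_df['tokens'].progress_map(make_feature_vector)
train_df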
[Dataframe output omitted: 120,000 rows × 7 columns (class index, class, title, description, text, tokens, features).]

The final preprocessing step is converting the features and the class indices into PyTorch tensors. Recall that we need to subtract one from the class indices to make them zero-based. At this point, the data is fully processed and we are ready to begin training.

4.2.3 Multiclass Logistic Regression Using PyTorch

The model itself is a single linear layer whose input size corresponds to the size of our vocabulary, and its output size corresponds to the number of classes in our corpus. PyTorch's Linear layer includes a bias by default, so there is no need to handle that manually the way we did for our perceptron example. The code for training this model (which implements Algorithm 6) is almost identical to that of the binary logistic regression. However, since we have to calculate a score for each of the four different classes, we need to replace the previous BCEWithLogitsLoss with CrossEntropyLoss, which applies a softmax over the scores to obtain probabilities for each class. For each example, the model predicts 4 scores – one for each label. The label with the highest score is selected using the argmax function. We evaluate the predictions of our model for each class using Scikit-learn's classification_report, which handles the results of multiclass classification.

4.3 Summary

In this chapter, we used movie review and news article classification to illustrate the implementation of the previously described algorithms for the binary perceptron, binary logistic regression, and multiclass logistic regression. For the binary logistic regression, we made a direct comparison between the lower-level NumPy implementation and a higher-level version that made use of PyTorch. We hope that through this series of exercises the reader has noted several key takeaways. First, data preparation is important and should be done thoughtfully. Certain tasks (e.g., text normalization or sentence splitting) are going to be frequently needed if you continue with NLP, so using or creating generic functions can be very helpful. However, what works for one dataset and one language may not be suitable for another scenario. For example, in our case, we selected different tokenizers for each of our tasks to account for the different registers of English, as well as removing diacritics during normalization. Second, when it comes to implementing machine learning algorithms, it is often easier to use a higher-level library such as PyTorch instead of NumPy.
For example, with the former, the gradients are calculated by the library, whereas in NumPy we have to code them ourselves. This becomes cumbersome quickly. For example, even the derivative of the softmax is non-trivial. Third, PyTorch imposes a training structure that remains largely the same, regardless of what models are being trained. That is, at a high level, the same steps are always required: clearing the current gradients, predicting output scores for the provided inputs, calculating the loss, and optimizing. These features make PyTorch a very powerful and convenient deep learning library; we will continue to use it throughout the remainder of the book to implement more complex neural architectures.
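To complement Section 4.2.3, a minimal sketch of the multiclass setup is shown below; it assumes the feature dictionaries have already been densified into a float tensor X_train of shape [n_examples, vocabulary_size], a long tensor y_train of zero-based class indices, and a matching X_test, and the learning rate and epoch count used here are illustrative:

import torch
from torch import nn, optim

n_classes = 4
model = nn.Linear(X_train.shape[1], n_classes)
loss_func = nn.CrossEntropyLoss()  # softmax + negative log-likelihood over the four scores
optimizer = optim.SGD(model.parameters(), lr=0.1)

n_epochs = 5  # illustrative
for epoch in range(n_epochs):
    # visit the training examples in a random order
    for i in torch.randperm(X_train.shape[0]):
        x, y = X_train[i], y_train[i]
        optimizer.zero_grad()
        scores = model(x)                                      # one score per class
        loss = loss_func(scores.unsqueeze(0), y.unsqueeze(0))  # batch dimension of 1
        loss.backward()
        optimizer.step()

# prediction: pick the class with the highest score
y_pred = model(X_test).argmax(dim=1)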
#!/usr/bin/env python
# coding: utf-8

# # Binary Text Classification with Perceptron

# In[1]:

import random
import numpy as np
from tqdm.notebook import tqdm

# set this variable to a number to be used as the random seed
# or to None if you don't want to set a random seed
seed = 1234

if seed is not None:
    random.seed(seed)
    np.random.seed(seed)

# The dataset is divided in two directories called `train` and `test`.
# These directories contain the training and testing splits of the dataset.

# In[2]:

get_ipython().system('ls -lh data/aclImdb/')

# Both the `train` and `test` directories contain two directories called `pos` and `neg`
# that contain text files with the positive and negative reviews, respectively.

# In[3]:

get_ipython().system('ls -lh data/aclImdb/train/')

# We will now read the filenames of the positive and negative examples.

# In[4]:

from glob import glob

pos_files = glob('data/aclImdb/train/pos/*.txt')
neg_files = glob('data/aclImdb/train/neg/*.txt')

print('number of positive reviews:', len(pos_files))
print('number of negative reviews:', len(neg_files))

# Now, we will use a [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html)
# to read the text files, tokenize them, acquire a vocabulary from the training data,
# and encode it in a document-term matrix in which each row represents a review,
# and each column represents a term in the vocabulary.
# Each element $(i,j)$ in the matrix represents the number of times term $j$ appears in example $i$.

# In[5]:

from sklearn.feature_extraction.text import CountVectorizer

# initialize CountVectorizer indicating that we will give it a list of filenames that have to be read
cv = CountVectorizer(input='filename')

# learn vocabulary and return sparse document-term matrix
doc_term_matrix = cv.fit_transform(pos_files + neg_files)
doc_term_matrix

# Note in the message printed above that the matrix is of shape (25000, 74894).
# In other words, it has 1,871,225,000 elements.
# However, only 3,445,861 elements were stored.
# This is because most of the elements in the matrix are zeros.
# The reason is that the reviews are short and most words in the English language don't appear in each review.
# A matrix that only stores non-zero values is called *sparse*.
#
# Now we will convert it to a dense numpy array:

# In[6]:

X_train = doc_term_matrix.toarray()
X_train.shape

# We will also create a numpy array with the binary labels for the reviews.
# One indicates a positive review and zero a negative review.
# The label `y_train[i]` corresponds to the review encoded in row `i` of the `X_train` matrix.

# In[7]:

# training labels
y_pos = np.ones(len(pos_files))
y_neg = np.zeros(len(neg_files))
y_train = np.concatenate([y_pos, y_neg])
y_train

# Now we will initialize our model, in the form of an array of weights `w` of the same size
# as the number of features in our dataset (i.e., the number of words in the vocabulary acquired
# by [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html)),
# and a bias term `b`. Both are initialized to zeros.

# In[8]:

# initialize model: the weight vector and bias term are populated with zeros
n_examples, n_features = X_train.shape
w = np.zeros(n_features)
b = 0

# Now we will use the perceptron learning algorithm to learn the values of `w` and `b` from our training data.

# In[9]:

n_epochs = 10

indices = np.arange(n_examples)
for epoch in range(n_epochs):
    n_errors = 0
    # randomize the order in which training examples are seen in this epoch
    np.random.shuffle(indices)
    # traverse the training data
    for i in tqdm(indices, desc=f'epoch {epoch+1}'):
        x = X_train[i]
        y_true = y_train[i]
        # the perceptron decision based on the current model
        score = x @ w + b
        y_pred = 1 if score > 0 else 0
        # update the model if the prediction was incorrect
        if y_true == y_pred:
            continue
        elif y_true == 1 and y_pred == 0:
            w = w + x
            b = b + 1
            n_errors += 1
        elif y_true == 0 and y_pred == 1:
            w = w - x
            b = b - 1
            n_errors += 1
    if n_errors == 0:
        break

# The next step is evaluating the model on the test dataset.
# Note that this time we use the [`transform()`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html#sklearn.feature_extraction.text.CountVectorizer.transform)
# method of the [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html),
# instead of the [`fit_transform()`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html#sklearn.feature_extraction.text.CountVectorizer.fit_transform)
# method that we used above. This is because we want to use the learned vocabulary in the test set,
# instead of learning a new one.

# In[10]:

pos_files = glob('data/aclImdb/test/pos/*.txt')
neg_files = glob('data/aclImdb/test/neg/*.txt')
doc_term_matrix = cv.transform(pos_files + neg_files)
X_test = doc_term_matrix.toarray()
y_pos = np.ones(len(pos_files))
y_neg = np.zeros(len(neg_files))
y_test = np.concatenate([y_pos, y_neg])

# Using the model is easy: multiply the document-term matrix by the learned weights and add the bias.
# We use Python's `@` operator to perform the matrix-vector multiplication.

# In[11]:

y_pred = (X_test @ w + b) > 0

# Now we print an evaluation of the prediction results using a `binary_classification_report()`
# function modeled after scikit-learn's [`classification_report()`](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.classification_report.html).

# In[12]:

def binary_classification_report(y_true, y_pred):
    # count true positives, false positives, true negatives, and false negatives
    tp = fp = tn = fn = 0
    for gold, pred in zip(y_true, y_pred):
        if pred == True:
            if gold == True:
                tp += 1
            else:
                fp += 1
        else:
            if gold == False:
                tn += 1
            else:
                fn += 1
    # calculate precision and recall
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    # calculate f1 score
    fscore = 2 * precision * recall / (precision + recall)
    # calculate accuracy
    accuracy = (tp + tn) / len(y_true)
    # number of positive labels in y_true
    support = sum(y_true)
    return {
        "precision": precision,
        "recall": recall,
        "f1-score": fscore,
        "support": support,
        "accuracy": accuracy,
    }

# In[13]:

binary_classification_report(y_test, y_pred)
Each of these numbers will be used as the index in a list, so they must start at zero and grow by one for each word in the vocabulary. For example, one possible vocabulary that encodes the previous reviews is: {'would': 0, 'hated': 1, 58 Implementing Text Classification Using Perceptron and LR 'my': 2, 'liked': 3, 'not': 4, 'it': 5, 'movie': 6, 'recommend': 7, 'the': 8, 'I': 9, 'too': 10, 'friend': 11} Using this mapping, we can encode the two reviews as follows: Review1: [0,0,1,2,0,1,1,0,1,1,1,1] Review2: [1,1,0,0,1,1,0,1,0,1,0,0] Note that the word liked (fourth position) in the first review has a value of two. This is because this word appears twice in that review. This is a small example with a vocabulary of only 12 terms. Of course, the same process needs to be implemented for our whole training dataset. For this purpose we will use scikit-learn’s CountVectorizer class.6 Using the CountVectorizer class simplifies things, allowing us to get started quickly with a bag-of-words approach. However, note that it makes several simplifying assumptions (e.g., text is lowercased, and punctuation and single character tokens are removed). Some of these may not be adequate to other tasks. First, we need to obtain the filenames for the reviews in the training set: Once we have acquired the filenames for the training reviews, we need
Once we have acquired the filenames for the training reviews, we need to read them using the CountVectorizer. In order for the CountVectorizer to open and read the files for us, we make use of the input='filename' constructor parameter (otherwise it would expect the string content directly). The CountVectorizer provides three methods that will be useful for us: a method called fit() that is used to acquire the vocabulary,
a method transform() that converts the text into the bag-of-words representation, and a method fit_transform() that conveniently acquires the vocabulary and transforms the data in a single step. The resulting object is referred to as a document-term matrix, where each row corresponds to a document, and each column corresponds to a term in the vocabulary.

6 https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html

As the output of fit_transform() indicates, the resulting matrix has 25,000 rows (one for each review), and 74,849 columns (one for each term). You may also note that this matrix is sparse, with 3,445,861 stored elements. A regular matrix of shape 25,000×74,849 would have 1,871,225,000 elements. However, most of the elements in a document-term matrix are zeros because only a few words from the vocabulary appear in each document. A sparse matrix takes advantage of this fact by storing only the non-zero cells in order to reduce the memory required to store it. Thus, sparse matrices are convenient, especially when dealing with lots of data. Nevertheless, to simplify the downstream code in this example, we will convert it into a dense matrix, i.e., a regular two-dimensional NumPy array. Finally, we also need the labels of the reviews. We assign a label of one to positive reviews, and a label of zero to negative ones. Note that the first half of the reviews are positive and the second half are negative. The label at the ith position of the y_train array corresponds to the review encoded in the ith row of the X_train matrix.

4.1.3 Perceptron

Now that we have defined our task and the data processing pipeline, we will implement a perceptron classifier that classifies the movie reviews as positive or negative. The entire code discussed in this section is available in the chap4_perceptron notebook. Recall from Section 2.4 that the perceptron is composed of a weight vector w and a bias term b. These will be represented as a NumPy array w of the same length as our document vectors, and a variable b for the bias term. Both will be initialized with zeros. The parameters w and b are learned through the following algorithm, which implements Algorithm 2 from Chapter 2 (a sketch of this training loop is shown below). There are a couple of details to point out. Line 3 of Algorithm 2 indicates that we need to repeat the training loop until convergence. Theoretically, convergence is defined as predicting all training examples correctly. This is an ambitious requirement, which is not always possible in practice, so in this code we also include a stop condition if we reach a maximum number of epochs. Another crucial difference between our implementation here and the theoretical Algorithm 2 is that we randomize the order in which the training examples are seen at the beginning of each epoch. This simple (but highly recommended!) change is necessary to avoid the introduction of spurious biases due to the arbitrary order of the examples in the original training partition.7 We accomplish this by storing the indices corresponding to the X_train matrix rows in a NumPy array, and shuffling these indices at the beginning of each epoch. We shuffle the indices instead of the examples so that we can preserve the mapping between examples and labels. The training loop aligns closely with Algorithm 2.
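The following is a minimal sketch of this loop; it follows the description in this section rather than reproducing the chap4_perceptron notebook verbatim, and assumes X_train and y_train are the NumPy arrays built above, with labels of one and zero.

import numpy as np
from tqdm.notebook import tqdm

n_examples, n_features = X_train.shape
w = np.zeros(n_features)  # weight vector, initialized with zeros
b = 0.0                   # bias term, initialized with zero
n_epochs = 10             # stop condition in case we never fully converge

indices = np.arange(n_examples)
for epoch in range(n_epochs):
    n_errors = 0
    # randomize the order in which training examples are seen in this epoch
    np.random.shuffle(indices)
    for i in tqdm(indices, desc=f'epoch {epoch+1}'):
        x = X_train[i]
        y_true = y_train[i]
        # perceptron decision function: dot product plus bias (Algorithm 1)
        score = x @ w + b
        y_pred = 1 if score > 0 else 0
        # adjust w and b only when the prediction is wrong (Algorithm 2)
        if y_pred != y_true:
            n_errors += 1
            if y_true == 1:
                w = w + x
                b = b + 1
            else:
                w = w - x
                b = b - 1
    if n_errors == 0:
        break  # convergence: every training example was predicted correctly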
We start by iterating over each example in our training data, storing the current example in the variable x,8 and its corresponding label in the variable y_true. Next, we compute the perceptron decision function shown in Algorithm 1. Note that NumPy (as well as PyTorch) uses Python's @ operator to indicate vector or matrix multiplication, depending on its operand types. Here we use it to calculate the dot product of the example x and the weights w. To this we add the bias b to obtain the predicted score, whose sign is used to assign a positive or negative predicted label. If the prediction is correct, then no update is needed, and we can move on to the next training example. However, if the prediction is incorrect, then we need to adjust w and b, as described in Algorithm 2.

Sidebar 4.1 The tqdm function

This is our first exposure to the tqdm function. tqdm is a progress bar that "make[s] your loops show a smart progress meter."9 The name tqdm comes from the Arabic word taqaddum, which can mean "progress." Using tqdm is as simple as wrapping it around the collection to be traversed.

After training, we evaluate the model's performance on the held-out test partition. The test data is loaded similarly to the training partition, but with one notable difference: we use CountVectorizer's transform() method instead of the fit_transform() method, so that the vocabulary is not adjusted for the test data. We won't show here the loading of the test partition since it is so similar to the code already shown, but it is available in the Jupyter notebook that accompanies this section.

7 As an extreme example, consider a dataset where all the positive examples appear first in the training partition. This would cause the perceptron to artificially inflate the weights of the features that occur in these examples, a situation from which the learning algorithm may struggle to recover.
8 We use typewriter font when we discuss variables in the code, to distinguish code from the theoretical discussion in the other chapters.
9 https://github.com/tqdm/tqdm

Using the model to assign labels to all the test data is easily done in one step: we simply multiply the entire test data document-term matrix by the previously learned weights and add the bias. Scores greater than zero indicate a positive review, and those less than zero are negative. At this point we can evaluate the classifier's performance, which we will do using precision, recall, and F1 scores for binary classification (described in Section 2.3). For this purpose, we implement a function called binary_classification_report that computes these metrics and returns them as a dictionary (its full definition appears in the notebook listing at the end of this chapter). We call this function to compare the predicted labels to the true labels, and obtain the evaluation scores. Our F1 score here is 86.8%, which is much higher than the baseline that assigns labels randomly, which yields an F1 score of about 50%. This is a good result, especially considering the simplicity of the perceptron! In the next sections and chapters, we will discuss a battery of strategies to considerably improve this performance.

4.1.4 Binary Logistic Regression from Scratch

Using the same task, dataset, and evaluation, we will now implement a logistic regression classifier, as described in Algorithm 5 from Chapter 3. To give the reader hands-on experience with the implementation of the gradient calculations for logistic regression, we start by implementing it from scratch using NumPy. All the code shown in this section is available in the chap4_logistic_regression_numpy notebook. In the perceptron implementation, we represented the weights and the bias as two different variables. Here, however, we will use a different approach that will allow us to unify them into a single vector variable. Specifically, we take advantage of the similarity between the derivative of the cost function with respect to the weights (Equation 3.14) and the derivative of the cost with respect to the bias (Equation 3.15):

\frac{d\,C_i(\mathbf{w}, b)}{d w_j} = (\sigma_i - y_i)\, x_{ij} \qquad \text{(3.14 revisited)}

\frac{d\,C_i(\mathbf{w}, b)}{d b} = \sigma_i - y_i \qquad \text{(3.15 revisited)}

Note that the two derivative formulas are identical except that the former has a multiplication by xij, while the latter does not. However, since σi − yi = (σi − yi) · 1, we can multiply the derivative of the cost with respect to the bias by one without changing the semantics. This gives an opportunity for combining the computations, doing them both in a single pass. The idea is that we can treat the bias as a weight corresponding to a feature that always has a value of one. To implement this, we create a NumPy array of ones of the same length as the number of examples in our training set (i.e., the number of rows in the data matrix), and add this array as a new column to the data matrix using NumPy's column_stack function (see the consolidated sketch below). Next, we need to initialize our model. This time we will use a single NumPy array w of the same length as the number of columns in the data matrix. The weight vector w is initialized randomly with values between 0 and 1. Before implementing the learning algorithm, we need an implementation of the logistic function. Recall that the logistic function is

\sigma(x) = \frac{1}{1 + e^{-x}} \qquad \text{(3.1 revisited)}

This function can be implemented in NumPy as a direct translation of the formula. However, such a naive implementation may produce an overflow warning during training. The term overflow indicates that the result of evaluating exp(-x) is a number so large that it can't be represented by a float (specifically, we're using float64 numbers). We will avoid this issue by not calling exp with values that will overflow. NumPy provides the function finfo that can be consulted to find the limits of floating point numbers: the log of the largest floating point number is the largest number for which exp() will not overflow, so we will use it as a threshold to filter out problematic values. We now have everything we need to implement Algorithm 4. The steps to follow for each example are: (1) use the model to make a prediction, (2) calculate the gradient of the loss function with respect to the model parameters, and (3) update the model parameters using the gradient. The size of the update is controlled by the learning rate.
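The following consolidated sketch is abridged from the chap4_logistic_regression_numpy notebook reproduced at the end of this chapter; it covers the bias column, the overflow-safe logistic function, and the per-example updates of Algorithm 4.

import numpy as np
from tqdm.notebook import tqdm

# bias trick: append a feature that is always one, so the bias becomes a weight
ones = np.ones(X_train.shape[0])
X_train = np.column_stack((X_train, ones))

# initialize the weights (the last one plays the role of the bias)
n_examples, n_features = X_train.shape
w = np.random.random(n_features)

def sigmoid(z):
    # avoid calling exp with values that would overflow float64
    if -z > np.log(np.finfo(float).max):
        return 0.0
    return 1 / (1 + np.exp(-z))

lr = 1e-1
n_epochs = 10
indices = np.arange(n_examples)
for epoch in range(n_epochs):
    # randomize the order in which training examples are seen in this epoch
    np.random.shuffle(indices)
    for i in tqdm(indices, desc=f'epoch {epoch+1}'):
        x = X_train[i]
        y = y_train[i]
        # (1) prediction and (2) gradient of the cost for this example
        deriv_cost = (sigmoid(x @ w) - y) * x
        # (3) update the weights, scaled by the learning rate
        w = w - lr * deriv_cost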
Once the model has been trained, we evaluate it on the test dataset using our binary_classification_report function from the previous section. Loading and preprocessing the test dataset follows the same steps as with the previous classifier, so we omit the code for brevity. The performance is comparable with that of the perceptron: the difference in F1 scores between the two classifiers (84.9% here vs. 86.8% for the perceptron) is not significant. This parity is probably attributable to the fact that the signal distinguishing the two classes is easy to learn, so the simpler perceptron training algorithm is sufficient in this case. Nevertheless, this task is useful in showing how to implement the logistic regression model from scratch, i.e., by implementing the gradient calculation and parameter updates manually. Next, we will implement the same model again using PyTorch, highlighting how this machine learning library simplifies the process.

4.1.5 Binary Logistic Regression Utilizing PyTorch

While it is fairly straightforward to compute the derivatives for logistic regression and implement them directly in NumPy, this will not scale well to arbitrary neural architectures. Fortunately, there are libraries that automate the computation of the derivatives of the cost function (assuming it is differentiable!) for any neural network, and use the resulting gradients to perform gradient descent or other more sophisticated optimization procedures. To this end, we will use the PyTorch deep learning library.10 The corresponding notebook for this section is chap4_logistic_regression_pytorch_bce.

10 https://pytorch.org/

Our model for logistic regression corresponds to PyTorch's Linear layer. When we instantiate this layer, we specify the size of the inputs (the size of our vocabulary) and the size of the output, i.e., the number of output neurons (which is one because we're doing binary classification). The loss function we use is the binary cross-entropy loss (see Chapter 3), which is implemented as BCEWithLogitsLoss in PyTorch. In PyTorch, the gradients obtained from the loss function are applied to the model by an optimizer object, which implements and applies an optimization algorithm. Here we will use the vanilla stochastic gradient descent optimizer; we set its learning rate to 0.1. This is equivalent to the discussion in Section 3.2.
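A minimal sketch of this setup, together with the training loop described next, is shown below. It assumes X_train and y_train are the document-term matrix and labels from Section 4.1.2 (without the extra column of ones, since Linear already includes a bias); the chap4_logistic_regression_pytorch_bce notebook follows the same structure, but the variable names here are illustrative.

import torch
from torch import nn, optim

n_features = X_train.shape[1]

model = nn.Linear(n_features, 1)      # one output neuron: binary classification
loss_func = nn.BCEWithLogitsLoss()    # binary cross-entropy on raw scores (logits)
optimizer = optim.SGD(model.parameters(), lr=0.1)

X = torch.tensor(X_train, dtype=torch.float32)
y = torch.tensor(y_train, dtype=torch.float32)

n_epochs = 10
for epoch in range(n_epochs):
    # visit the training examples in a random order
    for i in torch.randperm(X.shape[0]):
        x_i, y_i = X[i], y[i]
        optimizer.zero_grad()             # (1) ensure the gradients are set to zeros
        y_pred = model(x_i)               # (2) apply the model to obtain a prediction
        loss = loss_func(y_pred[0], y_i)  # (3) calculate the loss
        loss.backward()                   # (4) back-propagate the gradient of the loss
        optimizer.step()                  # (5) update the model parameters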
Similarly to the manual implementation, the steps required to train the model for a given training example are: (1) ensure the gradients are set to zeros, (2) apply the model to obtain a prediction, (3) calculate the loss, (4) compute the gradient of the loss by back-propagation, and (5) update the model parameters. Recall that in our previous implementation everything was hardcoded: applying the model, computing the gradients, and optimizing the model parameters. Here, however, the implementation of the logistic regression is expressed at a higher level of abstraction. This means that we are describing the logical steps without specifying a particular implementation. Instead, implementation details are the responsibility of the chosen model, loss function, and optimizer. Thus, we could even choose a different model, loss function, and/or optimizer, and use the same training steps with little or no modification. This decoupling of the training logic from the implementation details is one of the main advantages of libraries such as PyTorch. As shown in the code above, calling the model as a function, with the feature vectors as inputs, produces the predicted scores. Once again, a positive score corresponds to a positive label. When we evaluate this implementation on the test dataset, we obtain results that are in line with our previous models. Writing the perceptron and the logistic regression from scratch is a good exercise, as it exposes us to the fundamentals of implementing machine learning algorithms. However, this becomes cumbersome for more complex neural architectures. For this reason, from this point on, we will use PyTorch for all our coding examples.

4.2 Multiclass Classification

So far in this chapter we have discussed implementing binary classifiers. Next, we will modify these binary classifiers to perform multiclass classification, following the discussion in Section 3.5.

4.2.1 AG News Dataset

Before explaining the actual training/testing code, we have to choose a new dataset that is suitable for multiclass classification. To this end, we will use the AG News Classification Dataset (Zhang et al., 2015), a subset of the larger AG corpus of news articles collected from thousands of different news sources.11 The classification dataset consists of four classes, and the data is equally balanced across all classes (30,000 articles per class for train, and 1,900 articles per class for testing). The goal of the task is to classify each article as one of the four classes: World, Sports, Business, or Sci/Tech.

11 http://groups.di.unipi.it/~gulli/AG_corpus_of_news_articles.html

4.2.2 Preparing the Dataset

The AG News Dataset is distributed as two CSV files (one for training and one for testing), each containing three columns: the class index, the title, and the description. The dataset also provides a text file that maps the above class indexes to more descriptive class labels. Because of the tabular nature of the dataset, pandas, a Python library
for tabular data analysis,12 is a natural choice for loading and transforming it. To this end, our Jupyter notebook (chap4_multiclass_logistic_regression) demonstrates the sequence of steps required to handle the data, as well as model training and evaluation. First, we show how to load the CSV,
add column names, and inspect the result:

[Output: a dataframe with 120,000 rows and three columns (class index, title, description). The first rows are Business articles (class index 3), such as "Wall St. Bears Claw Back Into the Black (Reuters)"; the last rows are Sports articles (class index 2), such as "Nets get Carter from Raptors".]

12 https://pandas.pydata.org

Since the class labels themselves are in a separate file, we manually add them to the pandas data structure (called dataframe in pandas' terminology) to increase the interpretability of the data. We use the class index column as a starting point, and use its map method to create a new column with the corresponding labels (technically a new Series object) that is added to the dataframe using its insert method, which allows us to insert the column in a specific position. Note that the label indices are one-based, so we subtract one to align them with their labels.
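A sketch of the loading and label-mapping steps just described; the file path and the column names below are illustrative assumptions, not necessarily the ones used in the notebook.

import pandas as pd

labels = ['World', 'Sports', 'Business', 'Sci/Tech']

# load the CSV and name its three columns
train_df = pd.read_csv(
    'data/ag_news_csv/train.csv',
    names=['class index', 'title', 'description'],
)

# class indices are one-based, so subtract one before looking up the label;
# insert the new column right after the class index column
train_df.insert(1, 'class', train_df['class index'].map(lambda i: labels[i - 1]))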
[Output: the dataframe now has 120,000 rows and four columns; the new class column contains the label corresponding to each class index (e.g., Business for class index 3).]

Next we will preprocess the text. First we lowercase the title and description, and then we concatenate them into a single string. Then we remove some spurious backslashes from the text. Once this is done, the preprocessed text is added to the dataframe as a new column. Note that pandas allows these steps to be applied to all rows simultaneously.
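For example, a minimal sketch of this preprocessing, assuming the column names from above (the exact cleanup in the notebook may differ slightly):

# lowercase the title and description, concatenate them,
# and remove spurious backslashes
title = train_df['title'].str.lower()
description = train_df['description'].str.lower()
train_df['text'] = (title + ' ' + description).str.replace('\\', ' ', regex=False)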
[Output: the dataframe now has 120,000 rows and five columns; the new text column holds the lowercased, concatenated title and description, e.g., "wall st. bears claw back into the black (reute...".]

At this point, the text is ready to be tokenized. For this purpose we will use NLTK's word_tokenize function. This function can be applied to the whole column at once using the pandas map function, which returns a new column that we add to the dataframe. However, here we actually use the progress_map function, which provides a visual progress bar. This visual feedback is especially helpful for tasks that take more time to complete.
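For example (a sketch; progress_map becomes available after calling tqdm.pandas(), and word_tokenize assumes NLTK's punkt model has been downloaded):

from nltk.tokenize import word_tokenize
from tqdm.notebook import tqdm

tqdm.pandas()  # registers progress_map on pandas objects
train_df['tokens'] = train_df['text'].progress_map(word_tokenize)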
[Output: the dataframe now has 120,000 rows and six columns; the new tokens column holds the list of tokens for each article, e.g., [wall, st., bears, claw, back, into, the, blac...].]

From the tokens we just created, we then create a vocabulary for our corpus. Here, we only keep the words that occur at least 10 times, decreasing the memory needed and reducing the likelihood that our vocabulary contains noisy tokens. Note that each row in the tokens column contains a list of tokens. In order to create the vocabulary, we will need to convert the Series of lists of tokens into a Series of tokens using the explode() pandas method. Then we will use the value_counts() method to create a Series object in which the index are the tokens and the values are the number of times they appear in the corpus. The next step is removing the tokens with a count lower than our chosen threshold. Finally, we create a list with the remaining tokens, as well as a dictionary that maps tokens to token ids (i.e., the index of the token in the list). We include in the vocabulary a special token [UNK] that will be used as a placeholder for tokens that do not appear in our vocabulary after the frequency pruning. Using this vocabulary, we construct a feature vector for each news article in the corpus. This feature vector will be encoded as a dictionary, with keys corresponding to token ids, and values corresponding to the number of times the token appears in the article. As above, the feature vectors will be stored as a new column in the dataframe.
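A sketch of these steps is shown below; whether [UNK] receives id 0 or is appended at the end is an implementation choice, and the notebook may differ.

from collections import Counter

# count how many times each token appears in the corpus
counts = train_df['tokens'].explode().value_counts()

# keep tokens that occur at least 10 times, plus a placeholder for unknown tokens
vocabulary = ['[UNK]'] + counts[counts >= 10].index.tolist()
token_to_id = {token: i for i, token in enumerate(vocabulary)}
unk_id = token_to_id['[UNK]']

def make_features(tokens):
    # map each token to its id (unknown tokens map to [UNK]) and count occurrences
    return dict(Counter(token_to_id.get(t, unk_id) for t in tokens))

train_df['features'] = train_df['tokens'].map(make_features)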
[Output: the dataframe now has 120,000 rows and seven columns; the new features column holds, for each article, a dictionary mapping token ids to counts, e.g., {427: 2, 563: 1, 1607: 1, ...}.]

The final preprocessing step is converting the features and the class indices into PyTorch tensors. Recall that we need to subtract one from the class indices to make them zero-based. At this point, the data is fully processed and we are ready to begin training.

4.2.3 Multiclass Logistic Regression Using PyTorch

The model itself is a single linear layer whose input size corresponds to the size of our vocabulary, and its output size corresponds to the number of classes in our corpus. PyTorch's Linear layer includes a bias by default, so there is no need to handle that manually the way we did for our perceptron example. The code for training this model (which implements Algorithm 6) is almost identical to that of the binary logistic regression. However, since we have to calculate a score for each of the four different classes, we need to replace the previous BCEWithLogitsLoss with CrossEntropyLoss, which applies a softmax over the scores to obtain probabilities for each class. For each example, the model predicts four scores, one for each label. The label with the highest score is selected using the argmax function. We evaluate the predictions of our model for each class using scikit-learn's classification_report, which handles the results of multiclass classification.
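A minimal sketch of the model, the loss, and the prediction step (the training loop itself follows the same five steps as in Section 4.1.5); X_test, y_test, and labels are assumed to be the test tensors and the label list built above.

import torch
from torch import nn
from sklearn.metrics import classification_report

model = nn.Linear(len(vocabulary), 4)  # one score per class; bias included by default
loss_func = nn.CrossEntropyLoss()      # softmax over the four scores + cross-entropy

# ... train with the usual five steps: zero_grad, forward, loss, backward, step ...

# at prediction time, the label with the highest score wins
with torch.no_grad():
    y_pred = model(X_test).argmax(dim=1).numpy()
print(classification_report(y_test.numpy(), y_pred, target_names=labels))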
4.3 Summary

In this chapter, we used movie review and news article classification to illustrate the implementation of the previously described algorithms for the binary perceptron, binary logistic regression, and multiclass logistic regression. For the binary logistic regression, we made a direct comparison between the lower-level NumPy implementation and a higher-level version that made use of PyTorch. We hope that through this series of exercises the reader has noted several key takeaways. First, data preparation is important and should be done thoughtfully. Certain tasks (e.g., text normalization or sentence splitting) are going to be frequently needed if you continue with NLP, so using or creating generic functions can be very helpful. However, what works for one dataset and one language may not be suitable for another scenario. For example, in our case, we selected different tokenizers for each of our tasks to account for the different registers of English, as well as removing diacritics during normalization. Second, when it comes to implementing machine learning algorithms, it is often easier to use a higher-level library such as PyTorch instead of NumPy. For example, with the former, the gradients are calculated by the library, whereas in NumPy we have to code them ourselves. This becomes cumbersome quickly. For example, even the derivative of the softmax is non-trivial. Third, PyTorch imposes a training structure that remains largely the same, regardless of what models are being trained. That is, at a high level, the same steps are always required: clearing the current gradients, predicting output scores for the provided inputs, calculating the loss, and optimizing. These features make PyTorch a very powerful and convenient deep learning library; we will continue to use it throughout the remainder of the book to implement more complex neural architectures.
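For reference, the chap4_logistic_regression_numpy notebook discussed in Section 4.1.4 is reproduced below as a plain Python script.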
#!/usr/bin/env python
# coding: utf-8

# # Binary Text Classification with
# # Logistic Regression Implemented from Scratch

# In[1]:

import random
import numpy as np
from tqdm.notebook import tqdm

# set this variable to a number to be used as the random seed
# or to None if you don't want to set a random seed
seed = 1234

if seed is not None:
    random.seed(seed)
    np.random.seed(seed)

# The dataset is divided in two directories called `train` and `test`.
# These directories contain the training and testing splits of the dataset.

# In[2]:

get_ipython().system('ls -lh data/aclImdb/')

# Both the `train` and `test` directories contain two directories called `pos` and `neg`
# that contain text files with the positive and negative reviews, respectively.

# In[3]:

get_ipython().system('ls -lh data/aclImdb/train/')

# We will now read the filenames of the positive and negative examples.

# In[4]:

from glob import glob

pos_files = glob('data/aclImdb/train/pos/*.txt')
neg_files = glob('data/aclImdb/train/neg/*.txt')
print('number of positive reviews:', len(pos_files))
print('number of negative reviews:', len(neg_files))

# Now, we will use a [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html)
# to read the text files, tokenize them, acquire a vocabulary from the training data, and
# encode it in a document-term matrix in which each row represents a review, and each column
# represents a term in the vocabulary. Each element $(i,j)$ in the matrix represents the
# number of times term $j$ appears in example $i$.

# In[5]:

from sklearn.feature_extraction.text import CountVectorizer

# initialize CountVectorizer indicating that we will give it a list of filenames that have to be read
cv = CountVectorizer(input='filename')

# learn vocabulary and return sparse document-term matrix
doc_term_matrix = cv.fit_transform(pos_files + neg_files)
doc_term_matrix

# Note in the message printed above that the matrix is of shape (25000, 74849).
# In other words, it has 1,871,225,000 elements.
# However, only 3,445,861 elements were stored.
# This is because most of the elements in the matrix are zeros.
# The reason is that the reviews are short and most words in the English language
# don't appear in each review.
# A matrix that only stores non-zero values is called *sparse*.
#
# Now we will convert it to a dense numpy array:

# In[6]:

X_train = doc_term_matrix.toarray()
X_train.shape

# In[7]:

# Append 1s to the xs; this will allow us to multiply by the weights and
# the bias in a single pass.
# Make an array with a one for each row/data point
ones = np.ones(X_train.shape[0])
# Concatenate these ones to existing feature vectors
X_train = np.column_stack((X_train, ones))
X_train.shape

# We will also create a numpy array with the binary labels for the reviews.
# One indicates a positive review and zero a negative review.
# The label `y_train[i]` corresponds to the review encoded in row `i` of the `X_train` matrix.

# In[8]:

# training labels
y_pos = np.ones(len(pos_files))
y_neg = np.zeros(len(neg_files))
y_train = np.concatenate([y_pos, y_neg])
y_train

# Now we will initialize our model, in the form of an array of weights `w` with one entry
# per column of `X_train` (i.e., the vocabulary acquired by
# [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html)
# plus the extra column of ones that plays the role of the bias).
# The weights are initialized randomly with values between 0 and 1.

# In[9]:

# initialize model: the weight vector (whose last element acts as the bias)
# is populated with random values between 0 and 1
n_examples, n_features = X_train.shape
w = np.random.random(n_features)

# Now we will use the logistic regression learning algorithm to learn the values of `w`
# (which includes the bias) from our training data.

# In[10]:

# from scipy.special import expit as sigmoid
def sigmoid(z):
    # avoid calling exp with values that would overflow float64
    if -z > np.log(np.finfo(float).max):
        return 0.0
    return 1 / (1 + np.exp(-z))

# In[11]:

lr = 1e-1
n_epochs = 10

indices = np.arange(n_examples)
for epoch in range(n_epochs):
    # randomize the order in which training examples are seen in this epoch
    np.random.shuffle(indices)
    # traverse the training data
    for i in tqdm(indices, desc=f'epoch {epoch+1}'):
        x = X_train[i]
        y = y_train[i]
        # calculate the derivative of the cost function for this example
        deriv_cost = (sigmoid(x @ w) - y) * x
        # update the weights
        w = w - lr * deriv_cost

# The next step is evaluating the model on the test dataset.
# Note that this time we use the [`transform()`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html#sklearn.feature_extraction.text.CountVectorizer.transform)
# method of the [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html),
# instead of the [`fit_transform()`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html#sklearn.feature_extraction.text.CountVectorizer.fit_transform)
# method that we used above. This is because we want to use the learned vocabulary in the
# test set, instead of learning a new one.

# In[12]:

pos_files = glob('data/aclImdb/test/pos/*.txt')
neg_files = glob('data/aclImdb/test/neg/*.txt')
doc_term_matrix = cv.transform(pos_files + neg_files)
X_test = doc_term_matrix.toarray()
X_test = np.column_stack((X_test, np.ones(X_test.shape[0])))
y_pos = np.ones(len(pos_files))
y_neg = np.zeros(len(neg_files))
y_test = np.concatenate([y_pos, y_neg])

# Using the model is easy: multiply the document-term matrix by the learned weights
# (the bias is included as the last weight, matched by the column of ones).
# We use Python's `@` operator to perform the matrix-vector multiplication.

# In[13]:

y_pred = X_test @ w > 0

# Now we print an evaluation of the prediction results using a `binary_classification_report()`
# function modeled after scikit-learn's
# [`classification_report()`](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.classification_report.html).

# In[14]:

def binary_classification_report(y_true, y_pred):
    # count true positives, false positives, true negatives, and false negatives
    tp = fp = tn = fn = 0
    for gold, pred in zip(y_true, y_pred):
        if pred == True:
            if gold == True:
                tp += 1
            else:
                fp += 1
        else:
            if gold == False:
                tn += 1
            else:
                fn += 1
    # calculate precision and recall
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    # calculate f1 score
    fscore = 2 * precision * recall / (precision + recall)
    # calculate accuracy
    accuracy = (tp + tn) / len(y_true)
    # number of positive labels in y_true
    support = sum(y_true)
    return {
        "precision": precision,
        "recall": recall,
        "f1-score": fscore,
        "support": support,
        "accuracy": accuracy,
    }

# In[15]:

binary_classification_report(y_test, y_pred)
4 Implementing Text Classification Using Perceptron and Logistic Regression In the previous chapters we have discussed the theory behind the perceptron and logistic regression, including mathematical explanations of how and why they are able to learn from examples. In this chapter we will transition from math to code. Specifically, we will discuss how to implement these models in the Python programming language. All the code that we will introduce throughout this book is available online as well: http://clulab.github.io/gentlenlp/. The reader who is not familiar with the Python programming language is encouraged to read first Appendix A, for a brief introduction to the language, and Appendix B, for a discussion on how computers encode and preprocess text. Once done, please return here. To get a better understanding of how these algorithms work under the hood, we will start by implementing them from scratch. However, as the book progresses, we will introduce some of the popular tools and libraries that make Python the language of choice for machine learning, e.g., PyTorch,1 and Hugging Face’s transformers.2 The code for all the examples in the book is provided in the form of Jupyter notebooks.3 Important fragments of these notebooks will be presented in the implementation chapters so that the reader has the whole picture just by reading the book. However, we strongly encourage you to download the notebooks and execute them yourself. We also encourage you to modify them to conduct your own experiments! 1 https://pytorch.org
2 https://huggingface.co 3 https://jupyter.org/ 55 56 Implementing Text Classification Using Perceptron and LR 4.1 Binary Classification We begin this chapter with binary classification. That is, we aim to train classifiers that assign one of two labels to a given text. As the example for this task, we will train a review classifier using the the Large Movie Review Dataset (Maas et al., 2011).4 We tackle this task by implementing first a binary perceptron classifier, followed by a binary logistic regression one. We will implement the latter both from scratch as well as using PyTorch, so the reader has a clearer understanding on how PyTorch works “under the hood.” 4.1.1 Large Movie Review Dataset This dataset contains movie reviews and their associated scores (between 1 and 10) as provided by IMDb.5 converted these scores to binary labels by assigning each review a positive or negative label if the review score was above 6 or below 5, respectively. Reviews with scores 5 and 6 were considered too neutral and thus excluded. We follow the same protocol in this chapter. The dataset is divided in two even partitions called train and test, each containing 25,000 reviews. The dataset also provides additional unlabeled reviews, but we will not use those here. Each partition contains two directories called pos and neg where the positive and negative examples are stored. Each review is stored in an independent text file, whose name is composed of an id unique to the partition and the score associated with the review, separated by an underscore. An example of a positive and a negative review is shown in Table 4.1. 4.1.2 Bag-of-words Model As discussed in Section 2.2, we will encode the text to classify as a bag of words. That is, we encode each review as a list of numbers, with each position in the list corresponding to a word in our vocabulary, and the value stored in that position corresponding to the number of times the word appears in the review. For example, say we want to encode the following two reviews: 4 https://ai.stanford.edu/~amaas/data/sentiment/ 5 https://www.imdb.com/ Maas et al. 4.1 Binary Classification 57 Table 4.1 Two examples of movie reviews from IMDb. The first is a positive review of the movie Puss in Boots (1988). The second is a negative review of the movie Valentine (2001). These reviews can be found at https://www.imdb.com/review/rw0606396/ and https://www.imdb.com/review/rw0721861/, respectively. Filename Score Binary Label train/pos/24_8.txt 8/10 Positive train/neg/141_3.txt 3/10 Negative Review Text Although this was obviously a low-budget production, the performances and the songs in this movie are worth seeing. One of Walken’s few musical roles to date. (he is a marvelous dancer and singer and he demonstrates his acrobatic skills as well - watch for the cartwheel!) Also starring Jason Connery. A great children’s story and very likable characters. This stalk and slash turkey manages to bring nothing new to an increasingly stale genre. A masked killer stalks young, pert girls and slaughters them in a variety of gruesome ways, none of which are particularly inventive. It’s not scary, it’s not clever, and it’s not funny. So what was the point of it? Review 1: Review 2: "I liked the movie. My friend liked it too. " "I hated it. Would not recommend. " First, we need to create a vocabulary that maps each word to an id that uniquely identifies it. 
Each of these numbers will be used as the index in a list, so they must start at zero and grow by one for each word in the vocabulary. For example, one possible vocabulary that encodes the previous reviews is: {'would': 0, 'hated': 1, 58 Implementing Text Classification Using Perceptron and LR 'my': 2, 'liked': 3, 'not': 4, 'it': 5, 'movie': 6, 'recommend': 7, 'the': 8, 'I': 9, 'too': 10, 'friend': 11} Using this mapping, we can encode the two reviews as follows: Review1: [0,0,1,2,0,1,1,0,1,1,1,1] Review2: [1,1,0,0,1,1,0,1,0,1,0,0] Note that the word liked (fourth position) in the first review has a value of two. This is because this word appears twice in that review. This is a small example with a vocabulary of only 12 terms. Of course, the same process needs to be implemented for our whole training dataset. For this purpose we will use scikit-learn’s CountVectorizer class.6 Using the CountVectorizer class simplifies things, allowing us to get started quickly with a bag-of-words approach. However, note that it makes several simplifying assumptions (e.g., text is lowercased, and punctuation and single character tokens are removed). Some of these may not be adequate to other tasks. First, we need to obtain the filenames for the reviews in the training set: Once we have acquired the filenames for the training reviews, we need
to read them using the CountVectorizer. In order for the CountVectorizer to open and read the files for us, we make use of the input='filename' constructor parameter (otherwise it would expect the string content directly). The CountVectorizer provides three methods that will be use-
ful for us: a method called fit() that is used to acquire the vocabulary,
a method transform() that converts the text into the bag-of-words representation, and a method fit_transform() that conveniently acquires the vocabulary and transforms the data in a single step. The resulting object is referred to as a document-term matrix, where each row corre- 6 https://scikitlearn.org/stable/modules/generated/sklearn.feature_ extraction.text.CountVectorizer.html 4.1 Binary Classification 59 sponds to a document, and each column corresponds to a term in the vocabulary. As the output above indicates, the resulting matrix has 25,000 rows (one for each review), and 74,849 columns (one for each term). Also you may note that this matrix is sparse, with 3,445,861 stored elements. A regular matrix of shape 25,000×74,849 would have 1,871,225,000 elements. However, most of the elements in a document-term matrix are zeros because only a few words from the vocabulary appear in each document. A sparse matrix takes advantage of this fact by storing only the non-zero cells in order to reduce the memory required to store it. Thus, sparse matrices are convenient, especially when dealing with lots of data. Nevertheless, to simplify the downstream code in this example, we will convert it into a dense matrix, i.e., a regular two-dimensional NumPy array. Finally, we also need the labels of the reviews. We assign a label of one to positive reviews, and a label of zero to negative ones. Note that the first half of the reviews are positive and the second half are negative. The label at the ith position of the y_train array corresponds to the review encoded in the ith row of the X_train matrix. 4.1.3 Perceptron Now that we have defined our task and the data processing pipeline, we will implement a perceptron classifier that classifies the movie reviews as positive or negative. The entire code discussed in this section is available in the chap4_perceptron notebook. Recall from Section 2.4 that the perceptron is composed of a weight vector w and a bias term b. These will be represented as a NumPy array w of the same length as our document vectors, and a variable b for the bias term. Both will be initialized with zeros. The parameters w and b are learned through the following algorithm, which implements Algorithm 2 from Chapter 2: There are a couple of details to point out. Line 3 of Algorithm 2 indicates that we need to repeat the training loop until convergence. Theoretically, convergence is defined as predicting all training examples correctly. This is an ambitious requirement, which is not always possible in practice, so in this code we also include a stop condition if we reach a maximum number of epochs. Another crucial difference between our implementation here and the theoretical Algorithm 2, is that we randomize the order in which the training examples are seen at the beginning of 60 Implementing Text Classification Using Perceptron and LR each epoch. This simple (but highly recommended!) change is necessary to avoid the introduction of spurious biases due to the arbitrary order of the examples in the original training partition.7 We accomplish this by storing the indices corresponding to the X_train matrix rows in a NumPy array, and shuffling these indices at the beginning of each epoch. We shuffle the indices instead of the examples so that we can preserve the mapping between examples and labels. The training loop aligns closely with Algorithm 2. 
We start by iterating over each example in our training data, storing the current example in the variable x,8 and its corresponding label in the variable y_true. Next, we compute the perceptron decision function shown in Algorithm 1. Note that NumPy (as well as PyTorch) uses Python’s @ operator to indicate vector or matrix multiplication, depending on its operand types. Here we use it to calculate the dot product of the example x and the weights w. To this we add the bias b to obtain the predicted score, whose sign is used to assign a positive or negative predicted label. If the prediction is correct, then no update is needed, and we can move on to the next training example. However, if the prediction is incorrect, then we need to adjust w and b, as described in Algorithm 2. Sidebar 4.1 The tqdm function This is our first exposure to the tqdm function. tqdm is a progress bar that “make your loops show a smart progress meter.”9 The name tqdm comes from the Arabic word taqaddum which can mean “progress.” Using tqdm is as simple as wrapping it around the collection to be traversed. After training, we evaluate the model’s performance on the heldout test partition. The test data is loaded similarly to the training partition, but with one notable difference; we use CountVectorizer’s transform() method instead of the fit_transform() method so that the vocabulary is not adjusted for the test data. We won’t show here the loading of the test partition since it is so similar to the code already shown, but it is available in the Jupyter notebook that accompanies this section. . 7   As an extreme example, consider a dataset where all the positive examples appear first in the training partition. This would cause the perceptron to artificially inflate the weights of the features that occur in these examples, a situation from which the learning algorithm may struggle to recover. 
 . 8  We use typewriter font when we discuss variables in the code, to distinguish code from the theoretical discussion in the other chapters. 
 9 https://github.com/tqdm/tqdm 4.1 Binary Classification 61 Using the model to assign labels to all the test data is easily done in one step – we simply multiply the entire test data document-term matrix by the previously learned weights and add the bias. Scores greater than zero indicate a positive review, and those less than zero are negative. At this point we can evaluate the classifier’s performance, which we will do using precision, recall, and F1 scores for binary classification (described in Section 2.3). For this purpose, we implement a function called binary_classification_report that computes these metrics and returns them as a dictionary: We call this function to compare the predicted labels to the true labels, and obtain the evaluation scores. Our F1 score here is 86.8%, which is much higher than the baseline that assigns labels randomly, which yields an F1 score of about 50%. This is a good result, especially considering the simplicity of the perceptron! In the next sections and chapters, we will discuss a battery of strategies to considerably improve this performance. 4.1.4 Binary Logistic Regression from Scratch Using the same task, dataset, and evaluation, we will now implement a logistic regression classifier, as described in Algorithm 5 from Chapter 3. To give the reader hands-on experience with the implementation of the gradient calculations for logistic regression, we start by implementing it from scratch using NumPy. All the code shown in this section is available in the chap4_logistic_regression_numpy notebook. In the perceptron implementation, we represented the weights and the bias as two different variables. Here, however, we will use a different approach that will allow us to unify them into a single vector variable. Specifically, we take advantage of the similarity between the derivative of the cost function with respect to the weights (Equation 3.14) and the derivative of the cost with respect to the bias (Equation 3.15). d Ci(w, b) = (σi − yi)xij (3.14 revisited) dwj d Ci(w, b) = σi − yi (3.15 revisited) db Note that the two derivative formulas are identical except that the former has a multiplication by xij, while the latter does not. However, 62 Implementing Text Classification Using Perceptron and LR since σi − yi = (σi − yi)1 we can multiply the derivative of the cost with respect to the bias by one without changing the semantics. This gives an opportunity for combining the computations, doing them both in a single pass. The idea is that we can treat the bias as a weight corresponding to a feature that always has a value of one. As can be seen above, we created a NumPy array of ones of the same length as the number of examples in our training set (i.e., the number of rows in the data matrix). Then we add this array as a new column to the data matrix, using NumPy’s column_stack function. Next, we need to initialize our model. This time we will use a single NumPy array w of the same length as the number of columns in the data matrix. The weight vector w is initialized randomly with values between 0 and 1: Before implementing the learning algorithm, we need an implementation of the logistic function. 
Recall that the logistic function is σ(x) = 1 (3.1 revisited) 1+e−x This function can be easily implemented in NumPy as follows: However, this naive implementation may produce the following warning during training: The term overflow indicates that the result of evaluating exp(-x) is a number so large that it can’t be represented by a float (specifically, we’re using float64 numbers). We will avoid this issue by not calling exp with values that will overflow. NumPy provides the function finfo that can be consulted to find the limits of floating point numbers: The log of the largest floating point number is the largest number for which exp() will not overflow, so we will use it as a threshold to filter out problematic values: We now have everything we need to implement Algorithm 4. The steps to follow for each example are: (1) use the model to make a prediction, (2) calculate the gradient of the loss function with respect to the model parameters, and (3) update the model parameters using the gradient. The size of the update is controlled by the learning rate. Once the model has been trained, we evaluate it on the test dataset using our binary_classification_report function from the previous section. Loading and preprocessing the test dataset follows the same 4.1 Binary Classification 63 steps as with the previous classifier. We omit the code for brevity. These are the results: The performance is comparable with that of the perceptron. The difference in F1 scores between the two classifiers (84.9% here vs. 86.8% for the perceptron) is not significant. Classifier parity is probably attributable to the fact that the signal distinguishing the two classes being easy to learn and the simpler perceptron training algorithm being sufficient in this case. Nevertheless, this task is useful in showing how to implement the logistic regression model from scratch, i.e., by implementing the gradient calculation and parameter updates manually. Next, we will implement the same model again using PyTorch, highlighting how this machine learning library simplifies the process. 4.1.5 Binary Logistic Regression Utilizing PyTorch While it is fairly straightforward to compute the derivatives for logistic regression and implement then directly in NumPy, this will not scale well to arbitrary neural architectures. Fortunately, there are libraries that automate the computation of the derivatives of the cost function (assuming it is differentiable!) for any neural network, and use the resulting gradients to perform gradient descent or other more sophisticated optimization procedures. To this end, we will use the PyTorch deep learning library10. The corresponding notebook for this section is chap4_logistic_regression_pytorch_bce. Our model for logistic regression corresponds to PyTorch’s Linear layer. When we instantiate this layer, we specify the size of the inputs (the size of our vocabulary) and the size of the output, i.e., the number of output neurons (which is one because we’re doing binary classification). The loss function we use is the binary cross-entropy loss (see Chapter 3), which is implemented as BCEWithLogitsLoss in PyTorch. In PyTorch, the gradients obtained from the loss function are applied to the model by an optimizer object, which implements and applies an optimization algorithm. Here we will use the vanilla stochastic gradient descent optimizer; we set its learning rate to 0.1. This is equivalent to the discussion in Section 3.2. 
Similarly to the manual implementation, the steps required to train the model for a given training example are: (1) ensure the gradients are set to zeros, (2) apply the model to obtain a prediction, (3) calculate 10 https://pytorch.org/ 64 Implementing Text Classification Using Perceptron and LR the loss, (4) compute the gradient of the loss by back-propagation, and (5) update the model parameters. Recall that in our previous implementation everything was hardcoded: applying the model, computing the gradients, and optimizing the model parameters. Here, however, the implementation of the logistic regression is expressed at a higher level of abstraction. This means that we are describing the logical steps without specifying a particular implementation. Instead, implementation details are the responsability of the chosen model, loss function, and optimizer. Thus, we could even choose a different model, loss function, and/or optimizer, and use the same training steps with little or no modification. This decoupling of the training logic from the implementation details is one of the main advantages of libraries such as PyTorch. As shown in the code above, calling the model as a function, with the feature vectors as inputs, produces the predicted scores. Once again, a positive score corresponds to a positive label. When we evaluate this implementation on the test dataset, we obtain results that are in line with our previous models: Writing the perceptron and the logistic regression from scratch is a good exercise, as it exposes us to the fundamentals of implementing machine learning algorithms. However, this becomes cumbersome for more complex neural architectures. For this reason, from this point on, we will use PyTorch for all our coding examples. 4.2 Multiclass Classification So far, in this chapter we have discussed implementing binary classifiers. Next, we will modify these binary classifiers to perform multiclass classification, following the discussion in Section 3.5. 4.2.1 AG News Dataset Before explaining the actual training/testing code, we have to choose a new dataset that is suitable for multiclass classification. To this end, we will use the AG News Classification Dataset (Zhang et al., 2015), a subset of the larger AG corpus of news articles collected from thousands of different news sources.11 The classification dataset consists of four 11 http://groups.di.unipi.it/~gulli/AG_corpus_of_news_articles.html 4.2 Multiclass Classification 65 classes, and the data is equally balanced across all classes (30,000 articles per class for train, and 1,900 articles per class for testing). The goal of the task is to classify each article as one of the four classes: World, Sports, Business, or Sci/Tech. 4.2.2 Preparing the Dataset The AG News Dataset is distributed as two CSV files (one for training and one for testing), each containing three columns: the class index, the title, and the description. The dataset also provides a text file that maps the above class indexes to more descriptive class labels. Because of the tabular nature of the dataset, pandas, a Python library
Because of the tabular nature of the dataset, pandas, a Python library for tabular data analysis,12 is a natural choice for loading and transforming it. To this end, our Jupyter notebook (chap4_multiclass_logistic_regression) demonstrates the sequence of steps required to handle the data, as well as model training and evaluation. First, we show how to load the CSV, add column names, and inspect the result.

12 https://pandas.pydata.org
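A sketch of this loading step, following the accompanying notebook (the path data/ag_news_csv/train.csv is the one used there; adjust it to wherever the dataset is stored):

import pandas as pd

# the CSV has no header row, so we supply the column names ourselves
train_df = pd.read_csv('data/ag_news_csv/train.csv', header=None)
train_df.columns = ['class index', 'title', 'description']
train_df  # in a notebook, this displays the dataframe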
[Output: a dataframe with 120,000 rows × 3 columns (class index, title, description). For example, row 0 has class index 3, title "Wall St. Bears Claw Back Into the Black (Reuters)", and description "Reuters - Short-sellers, Wall Street's dwindli...".]

Since the class labels themselves are in a separate file, we manually add them to the pandas data structure (called a dataframe in pandas' terminology) to increase the interpretability of the data. We use the class index column as a starting point, and use its map method to create a new column with the corresponding labels (technically a new Series object), which is added to the dataframe using its insert method; this allows us to insert the column in a specific position. Note that the label indices are one-based, so we subtract one to align them with their labels.
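A sketch of the label-mapping step, again following the accompanying notebook:

# read the label names, one per line: World, Sports, Business, Sci/Tech
labels = open('data/ag_news_csv/classes.txt').read().splitlines()

# map the one-based class indexes to their labels and insert the result as a new column
classes = train_df['class index'].map(lambda i: labels[i - 1])
train_df.insert(1, 'class', classes)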
[Output: the dataframe now has 120,000 rows × 4 columns; the new class column shows Business for the first rows and World and Sports for the last ones.]

Next we will preprocess the text. First we lowercase the title and description, and then we concatenate them into a single string. We then remove some spurious backslashes from the text. Once this is done, the preprocessed text is added to the dataframe as a new column. Note that pandas allows these steps to be applied to all rows simultaneously.
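A sketch of this preprocessing, as in the accompanying notebook:

# lowercase the title and description, and join them into a single string
title = train_df['title'].str.lower()
descr = train_df['description'].str.lower()
text = title + " " + descr

# the backslashes stand in for newlines in the original articles; replace them with spaces
train_df['text'] = text.str.replace('\\', ' ', regex=False)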
[Output: the dataframe now has 120,000 rows × 5 columns; the new text column holds the lowercased, concatenated title and description, e.g., "wall st. bears claw back into the black (reute...".]

At this point, the text is ready to be tokenized. For this purpose we will use NLTK's word_tokenize function. This function can be applied to the whole column at once using the pandas map function, which returns a new column that we add to the dataframe. However, here we actually use the progress_map function, which provides a visual progress bar. This visual feedback is especially helpful for tasks that take longer to complete.
[Output: the dataframe now has 120,000 rows × 6 columns; the new tokens column holds the token lists, e.g., [wall, st., bears, claw, back, into, the, blac...].]

From the tokens we just created, we then build a vocabulary for our corpus. Here, we only keep the words that occur at least 10 times, decreasing the memory needed and reducing the likelihood that our vocabulary contains noisy tokens. Note that each row in the tokens column contains a list of tokens. In order to create the vocabulary, we first convert the Series of lists of tokens into a Series of individual tokens using the explode() pandas method. Then we use the value_counts() method to create a Series object in which the index contains the tokens and the values are the number of times they appear in the corpus. The next step is removing the tokens with a count lower than our chosen threshold. Finally, we create a list with the remaining tokens, as well as a dictionary that maps tokens to token ids (i.e., the index of the token in the list). We include in the vocabulary a special token [UNK] that will be used as a placeholder for tokens that do not appear in the vocabulary after the frequency pruning.

Using this vocabulary, we construct a feature vector for each news article in the corpus. This feature vector is encoded as a dictionary, with keys corresponding to token ids and values corresponding to the number of times the token appears in the article. As above, the feature vectors are stored as a new column in the dataframe. (A sketch of both steps follows.)
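A minimal sketch of these two steps under the assumptions above; the threshold value, the position of the [UNK] token, and the helper name make_feature_vector are illustrative choices, not necessarily those of the book's notebook:

from collections import Counter

# count token frequencies over the whole corpus
counts = train_df['tokens'].explode().value_counts()

# keep tokens that appear at least 10 times, plus a special [UNK] placeholder
threshold = 10
vocabulary = ['[UNK]'] + counts[counts >= threshold].index.tolist()
token_to_id = {token: i for i, token in enumerate(vocabulary)}
unk_id = token_to_id['[UNK]']

def make_feature_vector(tokens):
    # map each token to its id (unknown tokens map to [UNK]) and count occurrences
    ids = [token_to_id.get(token, unk_id) for token in tokens]
    return dict(Counter(ids))

train_df['features'] = train_df['tokens'].map(make_feature_vector)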
[Output: the dataframe now has 120,000 rows × 7 columns; the new features column holds dictionaries mapping token ids to counts, e.g., {427: 2, 563: 1, 1607: 1, 15062: 1, 120: 1, ...}.]

The final preprocessing step is converting the features and the class indices into PyTorch tensors. Recall that we need to subtract one from the class indices to make them zero-based. At this point, the data is fully processed and we are ready to begin training.

4.2.3 Multiclass Logistic Regression Using PyTorch

The model itself is a single linear layer whose input size corresponds to the size of our vocabulary, and whose output size corresponds to the number of classes in our corpus. PyTorch's Linear layer includes a bias by default, so there is no need to handle it manually the way we did in our perceptron example. The code for training this model (which implements Algorithm 6) is almost identical to that of the binary logistic regression. However, since we have to calculate a score for each of the four classes, we need to replace the previous BCEWithLogitsLoss with CrossEntropyLoss, which applies a softmax over the scores to obtain probabilities for each class. For each example, the model predicts four scores, one for each label, and the label with the highest score is selected using the argmax function. We evaluate the predictions of our model for each class using scikit-learn's classification_report, which handles the results of multiclass classification.
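A minimal sketch of this multiclass setup, training loop, and evaluation; X_train, y_train, X_test, and y_test are assumed to be the tensors produced by the preprocessing above (with zero-based class indices as long tensors), and the number of epochs is illustrative:

import torch
from torch import nn, optim
from sklearn.metrics import classification_report

num_classes = 4
model = nn.Linear(len(vocabulary), num_classes)   # one score per class
loss_func = nn.CrossEntropyLoss()                 # softmax + cross-entropy over the class scores
optimizer = optim.SGD(model.parameters(), lr=0.1)

n_epochs = 5                                      # illustrative value
for epoch in range(n_epochs):
    for x, y in zip(X_train, y_train):            # x: float feature vector, y: zero-based class index
        optimizer.zero_grad()
        scores = model(x)                         # four scores, one per class
        loss = loss_func(scores.unsqueeze(0), y.unsqueeze(0))
        loss.backward()
        optimizer.step()

# predict by taking the argmax over the class scores, then evaluate per class
with torch.no_grad():
    y_pred = model(X_test).argmax(dim=1)
print(classification_report(y_test, y_pred, target_names=labels))  # labels read from classes.txt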
4.3 Summary

In this chapter, we used movie review and news article classification to illustrate the implementation of the previously described algorithms for the binary perceptron, binary logistic regression, and multiclass logistic regression. For the binary logistic regression, we made a direct comparison between the lower-level NumPy implementation and a higher-level version that made use of PyTorch. We hope that through this series of exercises the reader has noted several key takeaways.

First, data preparation is important and should be done thoughtfully. Certain tasks (e.g., text normalization or sentence splitting) will be needed frequently if you continue with NLP, so using or creating generic functions can be very helpful. However, what works for one dataset and one language may not be suitable for another scenario. For example, in our case, we selected different tokenizers for each of our tasks to account for the different registers of English, as well as removing diacritics during normalization.

Second, when it comes to implementing machine learning algorithms, it is often easier to use a higher-level library such as PyTorch than NumPy. For example, with the former, the gradients are calculated by the library, whereas in NumPy we have to code them ourselves. This becomes cumbersome quickly; for example, even the derivative of the softmax is non-trivial.

Third, PyTorch imposes a training structure that remains largely the same, regardless of which model is being trained. That is, at a high level, the same steps are always required: clearing the current gradients, predicting output scores for the provided inputs, calculating the loss, and optimizing. These features make PyTorch a very powerful and convenient deep learning library; we will continue to use it throughout the remainder of the book to implement more complex neural architectures.
#!/usr/bin/env python
# coding: utf-8

# # Binary Text Classification with
# # Logistic Regression Implemented from Scratch

# In[1]:

import random
import numpy as np
from tqdm.notebook import tqdm

# set this variable to a number to be used as the random seed
# or to None if you don't want to set a random seed
seed = 1234

if seed is not None:
    random.seed(seed)
    np.random.seed(seed)

# The dataset is divided in two directories called `train` and `test`.
# These directories contain the training and testing splits of the dataset.

# In[2]:

get_ipython().system('ls -lh data/aclImdb/')

# Both the `train` and `test` directories contain two directories called `pos` and `neg`
# that contain text files with the positive and negative reviews, respectively.

# In[3]:

get_ipython().system('ls -lh data/aclImdb/train/')

# We will now read the filenames of the positive and negative examples.

# In[4]:

from glob import glob

pos_files = glob('data/aclImdb/train/pos/*.txt')
neg_files = glob('data/aclImdb/train/neg/*.txt')

print('number of positive reviews:', len(pos_files))
print('number of negative reviews:', len(neg_files))

# Now, we will use a [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html)
# to read the text files, tokenize them, acquire a vocabulary from the training data,
# and encode it in a document-term matrix in which each row represents a review,
# and each column represents a term in the vocabulary.
# Each element $(i,j)$ in the matrix represents the number of times term $j$ appears in example $i$.

# In[5]:

from sklearn.feature_extraction.text import CountVectorizer

# initialize CountVectorizer indicating that we will give it a list of filenames that have to be read
cv = CountVectorizer(input='filename')

# learn vocabulary and return sparse document-term matrix
doc_term_matrix = cv.fit_transform(pos_files + neg_files)
doc_term_matrix

# Note in the message printed above that the matrix is of shape (25000, 74849).
# In other words, it has 1,871,225,000 elements.
# However, only 3,445,861 elements were stored.
# This is because most of the elements in the matrix are zeros.
# The reason is that the reviews are short and most words in the English language don't appear in each review.
# A matrix that only stores non-zero values is called *sparse*.
#
# Now we will convert it to a dense numpy array:

# In[6]:

X_train = doc_term_matrix.toarray()
X_train.shape

# In[7]:

# Append 1s to the xs; this will allow us to multiply by the weights and
# the bias in a single pass.
# Make an array with a one for each row/data point
ones = np.ones(X_train.shape[0])
# Concatenate these ones to existing feature vectors
X_train = np.column_stack((X_train, ones))
X_train.shape

# We will also create a numpy array with the binary labels for the reviews.
# One indicates a positive review and zero a negative review.
# The label `y_train[i]` corresponds to the review encoded in row `i` of the `X_train` matrix.

# In[8]:

# training labels
y_pos = np.ones(len(pos_files))
y_neg = np.zeros(len(neg_files))
y_train = np.concatenate([y_pos, y_neg])
y_train

# Now we will initialize our model, in the form of an array of weights `w` of the same size
# as the number of features in our dataset (i.e., the number of words in the vocabulary acquired by
# [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html),
# plus one for the bias, which is folded into `w` through the column of ones appended above).
# The weights are initialized randomly with values between 0 and 1.

# In[9]:

# initialize model: one weight per feature (the last weight acts as the bias)
n_examples, n_features = X_train.shape
w = np.random.random(n_features)

# Now we will use the logistic regression learning algorithm to learn the values of `w` from our training data.

# In[10]:

# from scipy.special import expit as sigmoid
def sigmoid(z):
    # guard against overflow in exp() for very negative z
    if -z > np.log(np.finfo(float).max):
        return 0.0
    return 1 / (1 + np.exp(-z))

# In[11]:

lr = 1e-1
n_epochs = 10

indices = np.arange(n_examples)
for epoch in range(n_epochs):
    # randomize the order in which training examples are seen in this epoch
    np.random.shuffle(indices)
    # traverse the training data
    for i in tqdm(indices, desc=f'epoch {epoch+1}'):
        x = X_train[i]
        y = y_train[i]
        # calculate the derivative of the cost function for this example
        deriv_cost = (sigmoid(x @ w) - y) * x
        # update the weights
        w = w - lr * deriv_cost

# The next step is evaluating the model on the test dataset.
# Note that this time we use the [`transform()`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html#sklearn.feature_extraction.text.CountVectorizer.transform)
# method of the [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html),
# instead of the [`fit_transform()`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html#sklearn.feature_extraction.text.CountVectorizer.fit_transform)
# method that we used above. This is because we want to use the learned vocabulary on the test set,
# instead of learning a new one.

# In[12]:

pos_files = glob('data/aclImdb/test/pos/*.txt')
neg_files = glob('data/aclImdb/test/neg/*.txt')
doc_term_matrix = cv.transform(pos_files + neg_files)
X_test = doc_term_matrix.toarray()
X_test = np.column_stack((X_test, np.ones(X_test.shape[0])))
y_pos = np.ones(len(pos_files))
y_neg = np.zeros(len(neg_files))
y_test = np.concatenate([y_pos, y_neg])

# Using the model is easy: multiply the document-term matrix by the learned weights (which include the bias).
# We use Python's `@` operator to perform the matrix-vector multiplication.

# In[13]:

y_pred = X_test @ w > 0

# Now we print an evaluation of the prediction results using the `binary_classification_report()`
# function defined below, which computes precision, recall, F1 score, and accuracy.

# In[14]:

def binary_classification_report(y_true, y_pred):
    # count true positives, false positives, true negatives, and false negatives
    tp = fp = tn = fn = 0
    for gold, pred in zip(y_true, y_pred):
        if pred == True:
            if gold == True:
                tp += 1
            else:
                fp += 1
        else:
            if gold == False:
                tn += 1
            else:
                fn += 1
    # calculate precision and recall
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    # calculate f1 score
    fscore = 2 * precision * recall / (precision + recall)
    # calculate accuracy
    accuracy = (tp + tn) / len(y_true)
    # number of positive labels in y_true
    support = sum(y_true)
    return {
        "precision": precision,
        "recall": recall,
        "f1-score": fscore,
        "support": support,
        "accuracy": accuracy,
    }

# In[15]:

binary_classification_report(y_test, y_pred)
35,970
36,124
#!/usr/bin/env python # coding: utf-8 # # Multiclass Text Classification with # # Logistic Regression Implemented with PyTorch and CE Loss # First, we will do some initialization. # In[1]: import random import torch import numpy as np import pandas as pd from tqdm.notebook import tqdm # enable tqdm in pandas tqdm.pandas() # set to True to use the gpu (if there is one available) use_gpu = True # select device device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu') print(f'device: {device.type}') # random seed seed = 1234 # set random seed if seed is not None: print(f'random seed: {seed}') random.seed(seed) np.random.seed(seed) torch.manual_seed(seed) # We will be using the AG's News Topic Classification Dataset. # It is stored in two CSV files: `train.csv` and `test.csv`, as well as a `classes.txt` that stores the labels of the classes to predict. # # First, we will load the training dataset using [pandas](https://pandas.pydata.org/) and take a quick look at how the data. # In[2]: train_df = pd.read_csv('data/ag_news_csv/train.csv', header=None) train_df.columns = ['class index', 'title', 'description'] train_df # The dataset consists of 120,000 examples, each consisting of a class index, a title, and a description. # The class labels are distributed in a separated file. We will add the labels to the dataset so that we can interpret the data more easily. Note that the label indexes are one-based, so we need to subtract one to retrieve them from the list. # In[3]: labels = open('data/ag_news_csv/classes.txt').read().splitlines() classes = train_df['class index'].map(lambda i: labels[i-1]) train_df.insert(1, 'class', classes) train_df # Let's inspect how balanced our examples are by using a bar plot. # In[4]: pd.value_counts(train_df['class']).plot.bar() # The classes are evenly distributed. That's great! # # However, the text contains some spurious backslashes in some parts of the text. # They are meant to represent newlines in the original text. # An example can be seen below, between the words "dwindling" and "band". # In[5]: print(train_df.loc[0, 'description']) # We will replace the backslashes with spaces on the whole column using pandas replace method. # In[6]: title = train_df['title'].str.lower() descr = train_df['description'].str.lower() text = title + " " + descr train_df['text'] = text.str.replace('\\', ' ', regex=False) train_df # Now we will proceed to tokenize the title and description columns using NLTK's word_tokenize(). # We will add a new column to our dataframe with the list of tokens. # In[7]: from nltk.tokenize import word_tokenize train_df['tokens'] = train_df['text'].progress_map(word_tokenize) train_df # Now we will create a vocabulary from the training data. We will only keep the terms that repeat beyond some threshold established below. 
# In[8]: threshold = 10 tokens = train_df['tokens'].explode().value_counts() tokens = tokens[tokens > threshold] id_to_token = ['[UNK]'] + tokens.index.tolist() token_to_id = {w:i for i,w in enumerate(id_to_token)} vocabulary_size = len(id_to_token) print(f'vocabulary size: {vocabulary_size:,}') # In[9]: from collections import defaultdict def make_feature_vector(tokens, unk_id=0): vector = defaultdict(int) for t in tokens: i = token_to_id.get(t, unk_id) vector[i] += 1 return vector train_df['features'] = train_df['tokens'].progress_map(make_feature_vector) train_df # In[10]: def make_dense(feats): x = np.zeros(vocabulary_size) for k,v in feats.items(): x[k] = v return x X_train = np.stack(train_df['features'].progress_map(make_dense)) y_train = train_df['class index'].to_numpy() - 1 X_train = torch.tensor(X_train, dtype=torch.float32) y_train = torch.tensor(y_train) # In[11]: from torch import nn from torch import optim # hyperparameters lr = 1.0 n_epochs = 5 n_examples = X_train.shape[0] n_feats = X_train.shape[1] n_classes = len(labels) # initialize the model, loss function, optimizer, and data-loader model = nn.Linear(n_feats, n_classes).to(device) loss_func = nn.CrossEntropyLoss() optimizer = optim.SGD(model.parameters(), lr=lr) # train the model indices = np.arange(n_examples) for epoch in range(n_epochs): np.random.shuffle(indices) for i in tqdm(indices, desc=f'epoch {epoch+1}'): # clear gradients model.zero_grad() # send datum to right device x = X_train[i].unsqueeze(0).to(device) y_true = y_train[i].unsqueeze(0).to(device) # predict label scores y_pred = model(x) # compute loss loss = loss_func(y_pred, y_true) # backpropagate loss.backward() # optimize model parameters optimizer.step() # Next, we evaluate on the test dataset # In[12]: # repeat all preprocessing done above, this time on the test set test_df = pd.read_csv('data/ag_news_csv/test.csv', header=None) test_df.columns = ['class index', 'title', 'description'] test_df['text'] = test_df['title'].str.lower() + " " + test_df['description'].str.lower() test_df['text'] = test_df['text'].str.replace('\\', ' ', regex=False) test_df['tokens'] = test_df['text'].progress_map(word_tokenize) test_df['features'] = test_df['tokens'].progress_map(make_feature_vector) X_test = np.stack(test_df['features'].progress_map(make_dense)) y_test = test_df['class index'].to_numpy() - 1 X_test = torch.tensor(X_test, dtype=torch.float32) y_test = torch.tensor(y_test) # In[13]: from sklearn.metrics import classification_report # set model to evaluation mode model.eval() # don't store gradients with torch.no_grad(): X_test = X_test.to(device) y_pred = torch.argmax(model(X_test), dim=1) y_pred = y_pred.cpu().numpy() print(classification_report(y_test, y_pred, target_names=labels))
5,808
5,878
30
chap04-31
chap04-31
Each of these numbers will be used as the index in a list, so they must start at zero and grow by one for each word in the vocabulary. For example, one possible vocabulary that encodes the previous reviews is: {'would': 0, 'hated': 1, 58 Implementing Text Classification Using Perceptron and LR 'my': 2, 'liked': 3, 'not': 4, 'it': 5, 'movie': 6, 'recommend': 7, 'the': 8, 'I': 9, 'too': 10, 'friend': 11} Using this mapping, we can encode the two reviews as follows: Review1: [0,0,1,2,0,1,1,0,1,1,1,1] Review2: [1,1,0,0,1,1,0,1,0,1,0,0] Note that the word liked (fourth position) in the first review has a value of two. This is because this word appears twice in that review. This is a small example with a vocabulary of only 12 terms. Of course, the same process needs to be implemented for our whole training dataset. For this purpose we will use scikit-learn’s CountVectorizer class.6 Using the CountVectorizer class simplifies things, allowing us to get started quickly with a bag-of-words approach. However, note that it makes several simplifying assumptions (e.g., text is lowercased, and punctuation and single character tokens are removed). Some of these may not be adequate to other tasks. First, we need to obtain the filenames for the reviews in the training set: Once we have acquired the filenames for the training reviews, we need
to read them using the CountVectorizer. In order for the CountVectorizer to open and read the files for us, we make use of the input='filename' constructor parameter (otherwise it would expect the string content directly). The CountVectorizer provides three methods that will be useful for us: a method called fit() that is used to acquire the vocabulary,
a method transform() that converts the text into the bag-of-words representation, and a method fit_transform() that conveniently acquires the vocabulary and transforms the data in a single step. The resulting object is referred to as a document-term matrix, where each row corresponds to a document, and each column corresponds to a term in the vocabulary. 6 https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html As the output above indicates, the resulting matrix has 25,000 rows (one for each review), and 74,849 columns (one for each term). Also you may note that this matrix is sparse, with 3,445,861 stored elements. A regular matrix of shape 25,000×74,849 would have 1,871,225,000 elements. However, most of the elements in a document-term matrix are zeros because only a few words from the vocabulary appear in each document. A sparse matrix takes advantage of this fact by storing only the non-zero cells in order to reduce the memory required to store it. Thus, sparse matrices are convenient, especially when dealing with lots of data. Nevertheless, to simplify the downstream code in this example, we will convert it into a dense matrix, i.e., a regular two-dimensional NumPy array. Finally, we also need the labels of the reviews. We assign a label of one to positive reviews, and a label of zero to negative ones. Note that the first half of the reviews are positive and the second half are negative. The label at the ith position of the y_train array corresponds to the review encoded in the ith row of the X_train matrix. 4.1.3 Perceptron Now that we have defined our task and the data processing pipeline, we will implement a perceptron classifier that classifies the movie reviews as positive or negative. The entire code discussed in this section is available in the chap4_perceptron notebook. Recall from Section 2.4 that the perceptron is composed of a weight vector w and a bias term b. These will be represented as a NumPy array w of the same length as our document vectors, and a variable b for the bias term. Both will be initialized with zeros. The parameters w and b are learned through the following algorithm, which implements Algorithm 2 from Chapter 2: There are a couple of details to point out. Line 3 of Algorithm 2 indicates that we need to repeat the training loop until convergence. Theoretically, convergence is defined as predicting all training examples correctly. This is an ambitious requirement, which is not always possible in practice, so in this code we also include a stop condition if we reach a maximum number of epochs. Another crucial difference between our implementation here and the theoretical Algorithm 2 is that we randomize the order in which the training examples are seen at the beginning of each epoch. This simple (but highly recommended!) change is necessary to avoid the introduction of spurious biases due to the arbitrary order of the examples in the original training partition.7 We accomplish this by storing the indices corresponding to the X_train matrix rows in a NumPy array, and shuffling these indices at the beginning of each epoch. We shuffle the indices instead of the examples so that we can preserve the mapping between examples and labels. The training loop aligns closely with Algorithm 2.
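A minimal sketch of this loop, assuming X_train is the dense document-term matrix and y_train holds the 0/1 labels (the maximum number of epochs and the exact variable names are illustrative; the chap4_perceptron notebook is the authoritative version):

import numpy as np

n_epochs = 10                      # stop condition if we never converge
n_examples, n_features = X_train.shape
w = np.zeros(n_features)           # weight vector
b = 0.0                            # bias term
indices = np.arange(n_examples)

for epoch in range(n_epochs):
    np.random.shuffle(indices)     # shuffle indices, not the examples themselves
    errors = 0
    for i in indices:
        x = X_train[i]
        y_true = y_train[i]
        score = x @ w + b          # perceptron decision function
        y_pred = 1 if score > 0 else 0
        if y_pred != y_true:
            # mistake-driven update from Algorithm 2
            if y_true == 1:
                w = w + x
                b = b + 1
            else:
                w = w - x
                b = b - 1
            errors += 1
    if errors == 0:                # converged: all training examples classified correctly
        break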
We start by iterating over each example in our training data, storing the current example in the variable x,8 and its corresponding label in the variable y_true. Next, we compute the perceptron decision function shown in Algorithm 1. Note that NumPy (as well as PyTorch) uses Python’s @ operator to indicate vector or matrix multiplication, depending on its operand types. Here we use it to calculate the dot product of the example x and the weights w. To this we add the bias b to obtain the predicted score, whose sign is used to assign a positive or negative predicted label. If the prediction is correct, then no update is needed, and we can move on to the next training example. However, if the prediction is incorrect, then we need to adjust w and b, as described in Algorithm 2. Sidebar 4.1 The tqdm function This is our first exposure to the tqdm function. tqdm is a progress bar that “make your loops show a smart progress meter.”9 The name tqdm comes from the Arabic word taqaddum which can mean “progress.” Using tqdm is as simple as wrapping it around the collection to be traversed. After training, we evaluate the model’s performance on the heldout test partition. The test data is loaded similarly to the training partition, but with one notable difference; we use CountVectorizer’s transform() method instead of the fit_transform() method so that the vocabulary is not adjusted for the test data. We won’t show here the loading of the test partition since it is so similar to the code already shown, but it is available in the Jupyter notebook that accompanies this section. . 7   As an extreme example, consider a dataset where all the positive examples appear first in the training partition. This would cause the perceptron to artificially inflate the weights of the features that occur in these examples, a situation from which the learning algorithm may struggle to recover. 
 . 8  We use typewriter font when we discuss variables in the code, to distinguish code from the theoretical discussion in the other chapters. 
9 https://github.com/tqdm/tqdm
Using the model to assign labels to all the test data is easily done in one step – we simply multiply the entire test data document-term matrix by the previously learned weights and add the bias. Scores greater than zero indicate a positive review, and those less than zero are negative. At this point we can evaluate the classifier's performance, which we will do using precision, recall, and F1 scores for binary classification (described in Section 2.3). For this purpose, we implement a function called binary_classification_report that computes these metrics and returns them as a dictionary: We call this function to compare the predicted labels to the true labels, and obtain the evaluation scores. Our F1 score here is 86.8%, which is much higher than the baseline that assigns labels randomly, which yields an F1 score of about 50%. This is a good result, especially considering the simplicity of the perceptron! In the next sections and chapters, we will discuss a battery of strategies to considerably improve this performance. 4.1.4 Binary Logistic Regression from Scratch Using the same task, dataset, and evaluation, we will now implement a logistic regression classifier, as described in Algorithm 5 from Chapter 3. To give the reader hands-on experience with the implementation of the gradient calculations for logistic regression, we start by implementing it from scratch using NumPy. All the code shown in this section is available in the chap4_logistic_regression_numpy notebook. In the perceptron implementation, we represented the weights and the bias as two different variables. Here, however, we will use a different approach that will allow us to unify them into a single vector variable. Specifically, we take advantage of the similarity between the derivative of the cost function with respect to the weights (Equation 3.14) and the derivative of the cost with respect to the bias (Equation 3.15):

$\frac{d\,C_i(\mathbf{w}, b)}{d w_j} = (\sigma_i - y_i)\, x_{ij}$   (3.14 revisited)

$\frac{d\,C_i(\mathbf{w}, b)}{d b} = \sigma_i - y_i$   (3.15 revisited)

Note that the two derivative formulas are identical except that the former has a multiplication by $x_{ij}$, while the latter does not. However, since $\sigma_i - y_i = (\sigma_i - y_i) \cdot 1$, we can multiply the derivative of the cost with respect to the bias by one without changing the semantics. This gives an opportunity for combining the computations, doing them both in a single pass. The idea is that we can treat the bias as a weight corresponding to a feature that always has a value of one. As can be seen above, we created a NumPy array of ones of the same length as the number of examples in our training set (i.e., the number of rows in the data matrix). Then we add this array as a new column to the data matrix, using NumPy's column_stack function. Next, we need to initialize our model. This time we will use a single NumPy array w of the same length as the number of columns in the data matrix. The weight vector w is initialized randomly with values between 0 and 1: Before implementing the learning algorithm, we need an implementation of the logistic function.
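The pieces just described, together with the numerically safe logistic function and the per-example update discussed in the next paragraphs, can be sketched as follows (a rough NumPy sketch; the learning rate and number of epochs are illustrative, and the chap4_logistic_regression_numpy notebook may differ in its details):

import numpy as np

# treat the bias as one extra weight over a feature that is always 1
X_train = np.column_stack((X_train, np.ones(X_train.shape[0])))
w = np.random.rand(X_train.shape[1])          # random initialization in [0, 1)

max_exp = np.log(np.finfo(np.float64).max)    # largest argument for which exp() is safe

def logistic(x):
    # cap the argument so that exp() never overflows
    return 1.0 / (1.0 + np.exp(np.minimum(-x, max_exp)))

lr = 1e-1                                     # learning rate (illustrative value)
n_epochs = 10
for epoch in range(n_epochs):
    for i in np.random.permutation(X_train.shape[0]):
        x, y_true = X_train[i], y_train[i]
        sigma = logistic(x @ w)               # (1) prediction
        gradient = (sigma - y_true) * x       # (2) gradient of the loss for this example
        w = w - lr * gradient                 # (3) update, scaled by the learning rate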
Recall that the logistic function is

$\sigma(x) = \frac{1}{1 + e^{-x}}$   (3.1 revisited)

This function can be easily implemented in NumPy as follows: However, this naive implementation may produce the following warning during training: The term overflow indicates that the result of evaluating exp(-x) is a number so large that it can't be represented by a float (specifically, we're using float64 numbers). We will avoid this issue by not calling exp with values that will overflow. NumPy provides the function finfo that can be consulted to find the limits of floating point numbers: The log of the largest floating point number is the largest number for which exp() will not overflow, so we will use it as a threshold to filter out problematic values: We now have everything we need to implement Algorithm 4. The steps to follow for each example are: (1) use the model to make a prediction, (2) calculate the gradient of the loss function with respect to the model parameters, and (3) update the model parameters using the gradient. The size of the update is controlled by the learning rate. Once the model has been trained, we evaluate it on the test dataset using our binary_classification_report function from the previous section. Loading and preprocessing the test dataset follows the same steps as with the previous classifier. We omit the code for brevity. These are the results: The performance is comparable with that of the perceptron. The difference in F1 scores between the two classifiers (84.9% here vs. 86.8% for the perceptron) is not significant. Classifier parity is probably attributable to the fact that the signal distinguishing the two classes is easy to learn, and that the simpler perceptron training algorithm is sufficient in this case. Nevertheless, this task is useful in showing how to implement the logistic regression model from scratch, i.e., by implementing the gradient calculation and parameter updates manually. Next, we will implement the same model again using PyTorch, highlighting how this machine learning library simplifies the process. 4.1.5 Binary Logistic Regression Utilizing PyTorch While it is fairly straightforward to compute the derivatives for logistic regression and implement them directly in NumPy, this will not scale well to arbitrary neural architectures. Fortunately, there are libraries that automate the computation of the derivatives of the cost function (assuming it is differentiable!) for any neural network, and use the resulting gradients to perform gradient descent or other more sophisticated optimization procedures. To this end, we will use the PyTorch deep learning library.10 The corresponding notebook for this section is chap4_logistic_regression_pytorch_bce. Our model for logistic regression corresponds to PyTorch's Linear layer. When we instantiate this layer, we specify the size of the inputs (the size of our vocabulary) and the size of the output, i.e., the number of output neurons (which is one because we're doing binary classification). The loss function we use is the binary cross-entropy loss (see Chapter 3), which is implemented as BCEWithLogitsLoss in PyTorch. In PyTorch, the gradients obtained from the loss function are applied to the model by an optimizer object, which implements and applies an optimization algorithm. Here we will use the vanilla stochastic gradient descent optimizer; we set its learning rate to 0.1. This is equivalent to the discussion in Section 3.2. 10 https://pytorch.org/
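A minimal sketch of this setup, and of the per-example training steps enumerated in the next paragraph (assuming the document-term matrix and labels have already been converted to float tensors X_train and y_train, and that vocabulary_size holds the number of features; the number of epochs is illustrative):

import torch
from torch import nn, optim

n_epochs = 5                                     # illustrative value
model = nn.Linear(vocabulary_size, 1)            # one output neuron: binary classification
loss_func = nn.BCEWithLogitsLoss()               # binary cross-entropy over raw scores
optimizer = optim.SGD(model.parameters(), lr=0.1)

for epoch in range(n_epochs):
    for i in torch.randperm(X_train.shape[0]):
        x = X_train[i].unsqueeze(0)              # shape: (1, vocabulary_size)
        y_true = y_train[i].view(1, 1)           # shape: (1, 1), float
        model.zero_grad()                        # (1) clear the gradients
        y_pred = model(x)                        # (2) predict a raw score
        loss = loss_func(y_pred, y_true)         # (3) compute the loss
        loss.backward()                          # (4) backpropagate
        optimizer.step()                         # (5) update the parameters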
Similarly to the manual implementation, the steps required to train the model for a given training example are: (1) ensure the gradients are set to zeros, (2) apply the model to obtain a prediction, (3) calculate the loss, (4) compute the gradient of the loss by back-propagation, and (5) update the model parameters. Recall that in our previous implementation everything was hardcoded: applying the model, computing the gradients, and optimizing the model parameters. Here, however, the implementation of the logistic regression is expressed at a higher level of abstraction. This means that we are describing the logical steps without specifying a particular implementation. Instead, implementation details are the responsibility of the chosen model, loss function, and optimizer. Thus, we could even choose a different model, loss function, and/or optimizer, and use the same training steps with little or no modification. This decoupling of the training logic from the implementation details is one of the main advantages of libraries such as PyTorch. As shown in the code above, calling the model as a function, with the feature vectors as inputs, produces the predicted scores. Once again, a positive score corresponds to a positive label. When we evaluate this implementation on the test dataset, we obtain results that are in line with our previous models: Writing the perceptron and the logistic regression from scratch is a good exercise, as it exposes us to the fundamentals of implementing machine learning algorithms. However, this becomes cumbersome for more complex neural architectures. For this reason, from this point on, we will use PyTorch for all our coding examples. 4.2 Multiclass Classification So far, in this chapter we have discussed implementing binary classifiers. Next, we will modify these binary classifiers to perform multiclass classification, following the discussion in Section 3.5. 4.2.1 AG News Dataset Before explaining the actual training/testing code, we have to choose a new dataset that is suitable for multiclass classification. To this end, we will use the AG News Classification Dataset (Zhang et al., 2015), a subset of the larger AG corpus of news articles collected from thousands of different news sources.11 The classification dataset consists of four classes, and the data is equally balanced across all classes (30,000 articles per class for train, and 1,900 articles per class for testing). The goal of the task is to classify each article as one of the four classes: World, Sports, Business, or Sci/Tech. 11 http://groups.di.unipi.it/~gulli/AG_corpus_of_news_articles.html 4.2.2 Preparing the Dataset The AG News Dataset is distributed as two CSV files (one for training and one for testing), each containing three columns: the class index, the title, and the description. The dataset also provides a text file that maps the above class indexes to more descriptive class labels. Because of the tabular nature of the dataset, pandas, a Python library for tabular data analysis,12 is a natural choice for loading and transforming it. To this end, our Jupyter notebook (chap4_multiclass_logistic_regression) demonstrates the sequence of steps required to handle the data, as well
as model training and evaluation. First, we show how to load the CSV,
add column names, and inspect the result:
[train_df: 120,000 rows × 3 columns (class index, title, description)]
Since the class labels themselves are in a separate file, we manually add them to the pandas data structure (called dataframe in pandas' terminology) to increase the interpretability of the data. We use the class index column as a starting point, and use its map method to create a new column with the corresponding labels (technically a new Series object) that is added to the dataframe using its insert method, which allows us to insert the column in a specific position. Note that the label indices are one-based, so we subtract one to align them with their labels. 12 https://pandas.pydata.org
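Concretely, the loading and labeling steps above look roughly like this, following the accompanying chap4_multiclass_logistic_regression notebook (the file paths are the ones used there):

import pandas as pd

train_df = pd.read_csv('data/ag_news_csv/train.csv', header=None)
train_df.columns = ['class index', 'title', 'description']

# class labels live in a separate file; indices in the CSV are one-based
labels = open('data/ag_news_csv/classes.txt').read().splitlines()
classes = train_df['class index'].map(lambda i: labels[i - 1])
train_df.insert(1, 'class', classes)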
[train_df: 120,000 rows × 4 columns (the new class column inserted next to class index)]
Next we will preprocess the text. First we lowercase the title and description, and then we concatenate them into a single string. Then we remove some spurious backslashes from the text. Once this is done, the preprocessed text is added to the dataframe as a new column. Note that pandas allows these steps to be applied to all rows simultaneously.
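Along the lines of the accompanying notebook (regex=False makes replace treat the backslash as a literal string rather than a regular expression):

title = train_df['title'].str.lower()
descr = train_df['description'].str.lower()
text = title + " " + descr
# the backslashes stand in for newlines in the original articles
train_df['text'] = text.str.replace('\\', ' ', regex=False)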
[train_df: 120,000 rows × 5 columns (the preprocessed text column added)]
At this point, the text is ready to be tokenized. For this purpose we will use NLTK's word_tokenize function. This function can be applied to the whole column at once using the pandas map function, which returns a new column which we add to the dataframe. However, here we actually use the progress_map function, which provides a visual progress bar. This visual feedback is especially helpful for tasks that take more time to complete.
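Roughly, following the accompanying notebook (progress_map becomes available on pandas objects after calling tqdm.pandas()):

from nltk.tokenize import word_tokenize
from tqdm.notebook import tqdm

tqdm.pandas()  # register progress_map on pandas objects
train_df['tokens'] = train_df['text'].progress_map(word_tokenize)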
[train_df: 120,000 rows × 6 columns (the tokens column added)]
From the tokens we just created, we then create a vocabulary for our corpus. Here, we only keep the words that occur at least 10 times, decreasing the memory needed and reducing the likelihood that our vocabulary contains noisy tokens. Note that each row in the tokens column contains a list of tokens. In order to create the vocabulary, we will need to convert the Series of lists of tokens into a Series of tokens using the explode() Pandas method. Then we will use the value_counts() method to create a Series object in which the index are the tokens and the values are the number of times they appear in the corpus. The next step is removing the tokens with a count lower than our chosen threshold. Finally, we create a list with the remaining tokens, as well as a dictionary that maps tokens to token ids (i.e., the index of the token in the list). We include in the vocabulary a special token [UNK] that will be used as a placeholder for tokens that do not appear in our vocabulary after the frequency pruning. Using this vocabulary, we construct a feature vector for each news article in the corpus. This feature vector will be encoded as a dictionary, with keys corresponding to token ids, and values corresponding to the number of times the token appears in the article. As above, the feature vectors will be stored as a new column in the dataframe.
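A sketch of the vocabulary and feature-vector construction, following the accompanying notebook (the threshold of 10 matches the value used there):

from collections import defaultdict

threshold = 10
tokens = train_df['tokens'].explode().value_counts()
tokens = tokens[tokens > threshold]                 # prune rare tokens
id_to_token = ['[UNK]'] + tokens.index.tolist()     # reserve id 0 for unknown tokens
token_to_id = {w: i for i, w in enumerate(id_to_token)}

def make_feature_vector(tokens, unk_id=0):
    vector = defaultdict(int)
    for t in tokens:
        vector[token_to_id.get(t, unk_id)] += 1     # count token occurrences by id
    return vector

train_df['features'] = train_df['tokens'].progress_map(make_feature_vector)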
[train_df: 120,000 rows × 7 columns (the features column added)]
The final preprocessing step is converting the features and the class indices into PyTorch tensors. Recall that we need to subtract one from the class indices to make them zero-based. At this point, the data is fully processed and we are ready to begin training. 4.2.3 Multiclass Logistic Regression Using PyTorch The model itself is a single linear layer whose input size corresponds to the size of our vocabulary, and its output size corresponds to the number of classes in our corpus. PyTorch's Linear layer includes a bias by default, so there is no need to handle that manually the way we did for our perceptron example. The code for training this model (which implements Algorithm 6) is almost identical to that of the binary logistic regression. However, since we have to calculate a score for each of the four different classes, we need to replace the previous BCEWithLogitsLoss with CrossEntropyLoss, which applies a softmax over the scores to obtain probabilities for each class. For each example, the model predicts 4 scores – one for each label. The label with the highest score is selected using the argmax function. We evaluate the predictions of our model for each class using Scikit-learn's classification_report, which handles the results of multiclass classification. 4.3 Summary In this chapter, we used movie review and news article classification to illustrate the implementation of the previously described algorithms for the binary perceptron, binary logistic regression, and multiclass logistic regression. For the binary logistic regression, we made a direct comparison between the lower-level NumPy implementation and a higher-level version that made use of PyTorch. We hope that through this series of exercises the reader has noted several key takeaways. First, data preparation is important and should be done thoughtfully. Certain tasks (e.g., text normalization or sentence splitting) are going to be frequently needed if you continue with NLP, so using or creating generic functions can be very helpful. However, what works for one dataset and one language may not be suitable for another scenario. For example, in our case, we selected different tokenizers for each of our tasks to account for the different registers of English, as well as removing diacritics during normalization. Second, when it comes to implementing machine learning algorithms, it is often easier to use a higher-level library such as PyTorch instead of NumPy.
For example, with the former, the gradients are calculated by the library, whereas in NumPy we have to code them ourselves. This becomes cumbersome quickly. For instance, even the derivative of the softmax is non-trivial. Third, PyTorch imposes a training structure that remains largely the same, regardless of what models are being trained. That is, at a high level, the same steps are always required: clearing the current gradients, predicting output scores for the provided inputs, calculating the loss, and optimizing. These features make PyTorch a very powerful and convenient deep learning library; we will continue to use it throughout the remainder of the book to implement more complex neural architectures.
35,675
35,827
4,140
4,174
31
chap04-32
chap04-32
Each of these numbers will be used as the index in a list, so they must start at zero and grow by one for each word in the vocabulary. For example, one possible vocabulary that encodes the previous reviews is: {'would': 0, 'hated': 1, 58 Implementing Text Classification Using Perceptron and LR 'my': 2, 'liked': 3, 'not': 4, 'it': 5, 'movie': 6, 'recommend': 7, 'the': 8, 'I': 9, 'too': 10, 'friend': 11} Using this mapping, we can encode the two reviews as follows: Review1: [0,0,1,2,0,1,1,0,1,1,1,1] Review2: [1,1,0,0,1,1,0,1,0,1,0,0] Note that the word liked (fourth position) in the first review has a value of two. This is because this word appears twice in that review. This is a small example with a vocabulary of only 12 terms. Of course, the same process needs to be implemented for our whole training dataset. For this purpose we will use scikit-learn’s CountVectorizer class.6 Using the CountVectorizer class simplifies things, allowing us to get started quickly with a bag-of-words approach. However, note that it makes several simplifying assumptions (e.g., text is lowercased, and punctuation and single character tokens are removed). Some of these may not be adequate to other tasks. First, we need to obtain the filenames for the reviews in the training set: Once we have acquired the filenames for the training reviews, we need
to read them using the CountVectorizer. In order for the CountVectorizer to open and read the files for us, we make use of the input='filename' constructor parameter (otherwise it would expect the string content directly). The CountVectorizer provides three methods that will be use-
ful for us: a method called fit() that is used to acquire the vocabulary,
a method transform() that converts the text into the bag-of-words representation, and a method fit_transform() that conveniently acquires the vocabulary and transforms the data in a single step. The resulting object is referred to as a document-term matrix, where each row corre- 6 https://scikitlearn.org/stable/modules/generated/sklearn.feature_ extraction.text.CountVectorizer.html 4.1 Binary Classification 59 sponds to a document, and each column corresponds to a term in the vocabulary. As the output above indicates, the resulting matrix has 25,000 rows (one for each review), and 74,849 columns (one for each term). Also you may note that this matrix is sparse, with 3,445,861 stored elements. A regular matrix of shape 25,000×74,849 would have 1,871,225,000 elements. However, most of the elements in a document-term matrix are zeros because only a few words from the vocabulary appear in each document. A sparse matrix takes advantage of this fact by storing only the non-zero cells in order to reduce the memory required to store it. Thus, sparse matrices are convenient, especially when dealing with lots of data. Nevertheless, to simplify the downstream code in this example, we will convert it into a dense matrix, i.e., a regular two-dimensional NumPy array. Finally, we also need the labels of the reviews. We assign a label of one to positive reviews, and a label of zero to negative ones. Note that the first half of the reviews are positive and the second half are negative. The label at the ith position of the y_train array corresponds to the review encoded in the ith row of the X_train matrix. 4.1.3 Perceptron Now that we have defined our task and the data processing pipeline, we will implement a perceptron classifier that classifies the movie reviews as positive or negative. The entire code discussed in this section is available in the chap4_perceptron notebook. Recall from Section 2.4 that the perceptron is composed of a weight vector w and a bias term b. These will be represented as a NumPy array w of the same length as our document vectors, and a variable b for the bias term. Both will be initialized with zeros. The parameters w and b are learned through the following algorithm, which implements Algorithm 2 from Chapter 2: There are a couple of details to point out. Line 3 of Algorithm 2 indicates that we need to repeat the training loop until convergence. Theoretically, convergence is defined as predicting all training examples correctly. This is an ambitious requirement, which is not always possible in practice, so in this code we also include a stop condition if we reach a maximum number of epochs. Another crucial difference between our implementation here and the theoretical Algorithm 2, is that we randomize the order in which the training examples are seen at the beginning of 60 Implementing Text Classification Using Perceptron and LR each epoch. This simple (but highly recommended!) change is necessary to avoid the introduction of spurious biases due to the arbitrary order of the examples in the original training partition.7 We accomplish this by storing the indices corresponding to the X_train matrix rows in a NumPy array, and shuffling these indices at the beginning of each epoch. We shuffle the indices instead of the examples so that we can preserve the mapping between examples and labels. The training loop aligns closely with Algorithm 2. 
We start by iterating over each example in our training data, storing the current example in the variable x,8 and its corresponding label in the variable y_true. Next, we compute the perceptron decision function shown in Algorithm 1. Note that NumPy (as well as PyTorch) uses Python’s @ operator to indicate vector or matrix multiplication, depending on its operand types. Here we use it to calculate the dot product of the example x and the weights w. To this we add the bias b to obtain the predicted score, whose sign is used to assign a positive or negative predicted label. If the prediction is correct, then no update is needed, and we can move on to the next training example. However, if the prediction is incorrect, then we need to adjust w and b, as described in Algorithm 2.
Sidebar 4.1 The tqdm function
This is our first exposure to the tqdm function. tqdm is a progress bar designed to “make your loops show a smart progress meter.”9 The name tqdm comes from the Arabic word taqaddum which can mean “progress.” Using tqdm is as simple as wrapping it around the collection to be traversed.
After training, we evaluate the model’s performance on the held-out test partition. The test data is loaded similarly to the training partition, but with one notable difference: we use CountVectorizer’s transform() method instead of the fit_transform() method so that the vocabulary is not adjusted for the test data. We won’t show here the loading of the test partition since it is so similar to the code already shown, but it is available in the Jupyter notebook that accompanies this section.
7 As an extreme example, consider a dataset where all the positive examples appear first in the training partition. This would cause the perceptron to artificially inflate the weights of the features that occur in these examples, a situation from which the learning algorithm may struggle to recover.
8 We use typewriter font when we discuss variables in the code, to distinguish code from the theoretical discussion in the other chapters.
9 https://github.com/tqdm/tqdm
Using the model to assign labels to all the test data is easily done in one step – we simply multiply the entire test data document-term matrix by the previously learned weights and add the bias. Scores greater than zero indicate a positive review, and those less than zero are negative. At this point we can evaluate the classifier’s performance, which we will do using precision, recall, and F1 scores for binary classification (described in Section 2.3). For this purpose, we implement a function called binary_classification_report that computes these metrics and returns them as a dictionary:
We call this function to compare the predicted labels to the true labels, and obtain the evaluation scores. Our F1 score here is 86.8%, which is much higher than the baseline that assigns labels randomly, which yields an F1 score of about 50%. This is a good result, especially considering the simplicity of the perceptron! In the next sections and chapters, we will discuss a battery of strategies to considerably improve this performance.
4.1.4 Binary Logistic Regression from Scratch
Using the same task, dataset, and evaluation, we will now implement a logistic regression classifier, as described in Algorithm 5 from Chapter 3. To give the reader hands-on experience with the implementation of the gradient calculations for logistic regression, we start by implementing it from scratch using NumPy. All the code shown in this section is available in the chap4_logistic_regression_numpy notebook.
In the perceptron implementation, we represented the weights and the bias as two different variables. Here, however, we will use a different approach that will allow us to unify them into a single vector variable. Specifically, we take advantage of the similarity between the derivative of the cost function with respect to the weights (Equation 3.14) and the derivative of the cost with respect to the bias (Equation 3.15).
d Ci(w, b)/dwj = (σi − yi) xij (3.14 revisited)
d Ci(w, b)/db = σi − yi (3.15 revisited)
Note that the two derivative formulas are identical except that the former has a multiplication by xij, while the latter does not. However, since σi − yi = (σi − yi) · 1, we can multiply the derivative of the cost with respect to the bias by one without changing the semantics. This gives an opportunity for combining the computations, doing them both in a single pass. The idea is that we can treat the bias as a weight corresponding to a feature that always has a value of one.
As can be seen above, we created a NumPy array of ones of the same length as the number of examples in our training set (i.e., the number of rows in the data matrix). Then we add this array as a new column to the data matrix, using NumPy’s column_stack function. Next, we need to initialize our model. This time we will use a single NumPy array w of the same length as the number of columns in the data matrix. The weight vector w is initialized randomly with values between 0 and 1:
Before implementing the learning algorithm, we need an implementation of the logistic function.
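Before moving on, here is a minimal sketch of the bias trick and the random initialization just described, following the variable names used in the accompanying notebook (X_train is the dense document-term matrix from before):

import numpy as np

# treat the bias as a weight whose feature always has the value one:
# append a column of ones to the training data
ones = np.ones(X_train.shape[0])
X_train = np.column_stack((X_train, ones))

# a single weight vector now holds the weights and the bias;
# it is initialized randomly with values between 0 and 1
n_examples, n_features = X_train.shape
w = np.random.random(n_features)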
Recall that the logistic function is
σ(x) = 1 / (1 + e^{−x}) (3.1 revisited)
This function can be easily implemented in NumPy as follows:
However, this naive implementation may produce the following warning during training:
The term overflow indicates that the result of evaluating exp(-x) is a number so large that it can’t be represented by a float (specifically, we’re using float64 numbers). We will avoid this issue by not calling exp with values that will overflow. NumPy provides the function finfo that can be consulted to find the limits of floating point numbers:
The log of the largest floating point number is the largest number for which exp() will not overflow, so we will use it as a threshold to filter out problematic values:
We now have everything we need to implement Algorithm 4. The steps to follow for each example are: (1) use the model to make a prediction, (2) calculate the gradient of the loss function with respect to the model parameters, and (3) update the model parameters using the gradient. The size of the update is controlled by the learning rate.
Once the model has been trained, we evaluate it on the test dataset using our binary_classification_report function from the previous section. Loading and preprocessing the test dataset follows the same steps as with the previous classifier. We omit the code for brevity. These are the results:
The performance is comparable with that of the perceptron. The difference in F1 scores between the two classifiers (84.9% here vs. 86.8% for the perceptron) is not significant. This parity is probably due to the fact that the signal distinguishing the two classes is easy to learn, so the simpler perceptron training algorithm is sufficient in this case. Nevertheless, this task is useful in showing how to implement the logistic regression model from scratch, i.e., by implementing the gradient calculation and parameter updates manually. Next, we will implement the same model again using PyTorch, highlighting how this machine learning library simplifies the process.
4.1.5 Binary Logistic Regression Utilizing PyTorch
While it is fairly straightforward to compute the derivatives for logistic regression and implement them directly in NumPy, this will not scale well to arbitrary neural architectures. Fortunately, there are libraries that automate the computation of the derivatives of the cost function (assuming it is differentiable!) for any neural network, and use the resulting gradients to perform gradient descent or other more sophisticated optimization procedures. To this end, we will use the PyTorch deep learning library.10 The corresponding notebook for this section is chap4_logistic_regression_pytorch_bce.
Our model for logistic regression corresponds to PyTorch’s Linear layer. When we instantiate this layer, we specify the size of the inputs (the size of our vocabulary) and the size of the output, i.e., the number of output neurons (which is one because we’re doing binary classification). The loss function we use is the binary cross-entropy loss (see Chapter 3), which is implemented as BCEWithLogitsLoss in PyTorch. In PyTorch, the gradients obtained from the loss function are applied to the model by an optimizer object, which implements and applies an optimization algorithm. Here we will use the vanilla stochastic gradient descent optimizer; we set its learning rate to 0.1. This is equivalent to the discussion in Section 3.2.
10 https://pytorch.org/
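For reference, a minimal sketch of this setup, together with the per-example training steps described next, could look as follows. This is not the notebook’s exact code: n_epochs is illustrative, and X_train and y_train are assumed to be float tensors holding the document-term matrix and the binary labels.

import torch
from torch import nn, optim

n_features = X_train.shape[1]   # size of the vocabulary
lr = 0.1
n_epochs = 10

# logistic regression as a single Linear layer with one output neuron
model = nn.Linear(n_features, 1)
# binary cross-entropy loss that works directly on raw scores (logits)
loss_func = nn.BCEWithLogitsLoss()
# vanilla stochastic gradient descent
optimizer = optim.SGD(model.parameters(), lr=lr)

for epoch in range(n_epochs):
    for x, y_true in zip(X_train, y_train):
        # (1) ensure the gradients are set to zero
        model.zero_grad()
        # (2) apply the model to obtain a predicted score (a logit)
        y_pred = model(x)
        # (3) calculate the loss with respect to the gold label
        loss = loss_func(y_pred.squeeze(), y_true)
        # (4) compute the gradient of the loss by back-propagation
        loss.backward()
        # (5) update the model parameters
        optimizer.step()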
Similarly to the manual implementation, the steps required to train the model for a given training example are: (1) ensure the gradients are set to zero, (2) apply the model to obtain a prediction, (3) calculate the loss, (4) compute the gradient of the loss by back-propagation, and (5) update the model parameters.
Recall that in our previous implementation everything was hardcoded: applying the model, computing the gradients, and optimizing the model parameters. Here, however, the implementation of the logistic regression is expressed at a higher level of abstraction. This means that we are describing the logical steps without specifying a particular implementation. Instead, implementation details are the responsibility of the chosen model, loss function, and optimizer. Thus, we could even choose a different model, loss function, and/or optimizer, and use the same training steps with little or no modification. This decoupling of the training logic from the implementation details is one of the main advantages of libraries such as PyTorch.
As shown in the code above, calling the model as a function, with the feature vectors as inputs, produces the predicted scores. Once again, a positive score corresponds to a positive label. When we evaluate this implementation on the test dataset, we obtain results that are in line with our previous models:
Writing the perceptron and the logistic regression from scratch is a good exercise, as it exposes us to the fundamentals of implementing machine learning algorithms. However, this becomes cumbersome for more complex neural architectures. For this reason, from this point on, we will use PyTorch for all our coding examples.
4.2 Multiclass Classification
So far, in this chapter we have discussed implementing binary classifiers. Next, we will modify these binary classifiers to perform multiclass classification, following the discussion in Section 3.5.
4.2.1 AG News Dataset
Before explaining the actual training/testing code, we have to choose a new dataset that is suitable for multiclass classification. To this end, we will use the AG News Classification Dataset (Zhang et al., 2015), a subset of the larger AG corpus of news articles collected from thousands of different news sources.11 The classification dataset consists of four classes, and the data is equally balanced across all classes (30,000 articles per class for train, and 1,900 articles per class for testing). The goal of the task is to classify each article as one of the four classes: World, Sports, Business, or Sci/Tech.
11 http://groups.di.unipi.it/~gulli/AG_corpus_of_news_articles.html
4.2.2 Preparing the Dataset
The AG News Dataset is distributed as two CSV files (one for training and one for testing), each containing three columns: the class index, the title, and the description. The dataset also provides a text file that maps the above class indexes to more descriptive class labels. Because of the tabular nature of the dataset, pandas, a Python library
for tabular data analysis,12 is a natural choice for loading and transforming it. To this end, our Jupyter notebook (chap4_multiclass_logistic_regression) demonstrates the sequence of steps required to handle the data, as well
as model training and evaluation. First, we show how to load the CSV,
add column names, and inspect the result:
[Dataframe output omitted: 120,000 rows × 3 columns (class index, title, description).]
Since the class labels themselves are in a separate file, we manually add them to the pandas data structure (called dataframe in pandas’ terminology) to increase the interpretability of the data. We use the class index column as a starting point, and use its map method to create a new column with the corresponding labels (technically a new Series object) that is added to the dataframe using its insert method, which allows us to insert the column in a specific position. Note that the label indices are one-based, so we subtract one to align them with their labels.
12 https://pandas.pydata.org
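As a concrete sketch of loading the CSV and adding the class column (the dataframe name train_df is ours, and the file paths and the name of the labels file are assumptions about how the AG News distribution is stored locally, not necessarily what the notebook uses):

import pandas as pd

# the AG News CSV files have no header row, so we supply the column names
train_df = pd.read_csv(
    'data/ag_news_csv/train.csv',          # assumed local path
    header=None,
    names=['class index', 'title', 'description'],
)

# read the descriptive labels (World, Sports, Business, Sci/Tech), one per line
with open('data/ag_news_csv/classes.txt') as f:   # assumed file name
    labels = [line.strip() for line in f]

# class indices are one-based, so subtract one before mapping them to labels;
# insert the new column right after the class index column
train_df.insert(1, 'class', train_df['class index'].map(lambda i: labels[i - 1]))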
[Dataframe output omitted: 120,000 rows × 4 columns (class index, class, title, description).]
Next we will preprocess the text. First we lowercase the title and description, and then we concatenate them into a single string. Then we remove some spurious backslashes from the text. Once this is done, the preprocessed text is added to the dataframe as a new column. Note that pandas allows these steps to be applied to all rows simultaneously.
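A sketch of this preprocessing, applied to all rows at once (continuing with the train_df name from the previous sketch; replacing backslashes with spaces is an assumption, and the exact cleanup in the notebook may differ slightly):

# lowercase the title and description and concatenate them into a single string
text = train_df['title'].str.lower() + ' ' + train_df['description'].str.lower()

# remove the spurious backslashes left over in the original text
text = text.str.replace('\\', ' ', regex=False)

# store the preprocessed text as a new column of the dataframe
train_df['text'] = text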
[Dataframe output omitted: 120,000 rows × 5 columns; the new text column contains the lowercased, concatenated title and description.]
At this point, the text is ready to be tokenized. For this purpose we will use NLTK’s word_tokenize function. This function can be applied to the whole column at once using the pandas map function, which returns a new column that we add to the dataframe. However, here we actually use the progress_map function, which provides a visual progress bar. This visual feedback is especially helpful for tasks that take more time to complete.
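A sketch of the tokenization step (this assumes the NLTK punkt tokenizer models are installed; tqdm.pandas() is what registers the progress_map method on pandas objects):

from nltk.tokenize import word_tokenize
from tqdm.notebook import tqdm

# register the progress_map/progress_apply methods on pandas objects
tqdm.pandas()

# tokenize every row of the text column, with a visual progress bar
train_df['tokens'] = train_df['text'].progress_map(word_tokenize)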
[Dataframe output omitted: 120,000 rows × 6 columns; the new tokens column holds the list of tokens for each article.]
From the tokens we just created, we then create a vocabulary for our corpus. Here, we only keep the words that occur at least 10 times, decreasing the memory needed and reducing the likelihood that our vocabulary contains noisy tokens. Note that each row in the tokens column contains a list of tokens. In order to create the vocabulary, we will need to convert the Series of lists of tokens into a Series of tokens using the explode() Pandas method. Then we will use the value_counts() method to create a Series object in which the index contains the tokens and the values are the number of times they appear in the corpus. The next step is removing the tokens with a count lower than our chosen threshold. Finally, we create a list with the remaining tokens, as well as a dictionary that maps tokens to token ids (i.e., the index of the token in the list). We include in the vocabulary a special token [UNK] that will be used as a placeholder for tokens that do not appear in our vocabulary after the frequency pruning.
Using this vocabulary, we construct a feature vector for each news article in the corpus. This feature vector will be encoded as a dictionary, with keys corresponding to token ids, and values corresponding to the number of times the token appears in the article. As above, the feature vectors will be stored as a new column in the dataframe.
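A sketch of the vocabulary construction and of the feature extraction just described (the position of [UNK] in the vocabulary and the helper name make_feature_vector are our own choices, so the resulting token ids will not match the dataframe output summarized below):

# flatten the Series of token lists into a single Series of tokens
# and count how many times each token appears in the corpus
counts = train_df['tokens'].explode().value_counts()

# keep only the tokens that occur at least 10 times
counts = counts[counts >= 10]

# the vocabulary: a list of tokens plus the [UNK] placeholder,
# and a dictionary mapping each token to its id (its position in the list)
vocabulary = list(counts.index) + ['[UNK]']
token_to_id = {tok: i for i, tok in enumerate(vocabulary)}
unk_id = token_to_id['[UNK]']

def make_feature_vector(tokens):
    # bag-of-words features encoded as a dictionary: token id -> count
    features = {}
    for tok in tokens:
        tok_id = token_to_id.get(tok, unk_id)
        features[tok_id] = features.get(tok_id, 0) + 1
    return features

# store the feature vectors as a new column of the dataframe
train_df['features'] = train_df['tokens'].map(make_feature_vector)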
[Dataframe output omitted: 120,000 rows × 7 columns; the new features column maps token ids to counts, e.g. {427: 2, 563: 1, 1607: 1, ...}.]
The final preprocessing step is converting the features and the class indices into PyTorch tensors. Recall that we need to subtract one from the class indices to make them zero-based. At this point, the data is fully processed and we are ready to begin training.
4.2.3 Multiclass Logistic Regression Using PyTorch
The model itself is a single linear layer whose input size corresponds to the size of our vocabulary, and its output size corresponds to the number of classes in our corpus. PyTorch’s Linear layer includes a bias by default, so there is no need to handle that manually the way we did for our perceptron example. The code for training this model (which implements Algorithm 6) is almost identical to that of the binary logistic regression. However, since we have to calculate a score for each of the four different classes, we need to replace the previous BCEWithLogitsLoss with CrossEntropyLoss, which applies a softmax over the scores to obtain probabilities for each class. For each example, the model predicts 4 scores – one for each label. The label with the highest score is selected using the argmax function. We evaluate the predictions of our model for each class using Scikit-learn’s classification_report, which handles the results of multiclass classification.
4.3 Summary
In this chapter, we used movie review and news article classification to illustrate the implementation of the previously described algorithms for the binary perceptron, binary logistic regression, and multiclass logistic regression. For the binary logistic regression, we made a direct comparison between the lower-level NumPy implementation and a higher-level version that made use of PyTorch. We hope that through this series of exercises the reader has noted several key takeaways.
First, data preparation is important and should be done thoughtfully. Certain tasks (e.g., text normalization or sentence splitting) are going to be frequently needed if you continue with NLP, so using or creating generic functions can be very helpful. However, what works for one dataset and one language may not be suitable for another scenario. For example, in our case, we selected different tokenizers for each of our tasks to account for the different registers of English, as well as removing diacritics during normalization.
Second, when it comes to implementing machine learning algorithms, it is often easier to use a higher-level library such as PyTorch instead of NumPy.
For example, with the former, the gradients are calculated by the library, whereas in NumPy we have to code them ourselves. This becomes cumbersome quickly: even the derivative of the softmax is non-trivial. Third, PyTorch imposes a training structure that remains largely the same, regardless of what models are being trained. That is, at a high level, the same steps are always required: clearing the current gradients, predicting output scores for the provided inputs, calculating the loss, and optimizing. These features make PyTorch a very powerful and convenient deep learning library; we will continue to use it throughout the remainder of the book to implement more complex neural architectures.
6,304
6,379
#!/usr/bin/env python
# coding: utf-8

# # Binary Text Classification with
# # Logistic Regression Implemented from Scratch

# In[1]:

import random
import numpy as np
from tqdm.notebook import tqdm

# set this variable to a number to be used as the random seed
# or to None if you don't want to set a random seed
seed = 1234

if seed is not None:
    random.seed(seed)
    np.random.seed(seed)

# The dataset is divided in two directories called `train` and `test`.
# These directories contain the training and testing splits of the dataset.

# In[2]:

get_ipython().system('ls -lh data/aclImdb/')

# Both the `train` and `test` directories contain two directories called `pos` and `neg`
# that contain text files with the positive and negative reviews, respectively.

# In[3]:

get_ipython().system('ls -lh data/aclImdb/train/')

# We will now read the filenames of the positive and negative examples.

# In[4]:

from glob import glob
pos_files = glob('data/aclImdb/train/pos/*.txt')
neg_files = glob('data/aclImdb/train/neg/*.txt')
print('number of positive reviews:', len(pos_files))
print('number of negative reviews:', len(neg_files))

# Now, we will use a [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html)
# to read the text files, tokenize them, acquire a vocabulary from the training data,
# and encode it in a document-term matrix in which each row represents a review,
# and each column represents a term in the vocabulary.
# Each element $(i,j)$ in the matrix represents the number of times term $j$ appears in example $i$.

# In[5]:

from sklearn.feature_extraction.text import CountVectorizer
# initialize CountVectorizer indicating that we will give it a list of filenames that have to be read
cv = CountVectorizer(input='filename')
# learn vocabulary and return sparse document-term matrix
doc_term_matrix = cv.fit_transform(pos_files + neg_files)
doc_term_matrix

# Note in the message printed above that the matrix is of shape (25000, 74849).
# In other words, it has 1,871,225,000 elements.
# However, only 3,445,861 elements were stored.
# This is because most of the elements in the matrix are zeros.
# The reason is that the reviews are short and most words in the English language don't appear in each review.
# A matrix that only stores non-zero values is called *sparse*.
#
# Now we will convert it to a dense numpy array:

# In[6]:

X_train = doc_term_matrix.toarray()
X_train.shape

# In[7]:

# Append 1s to the xs; this will allow us to multiply by the weights and
# the bias in a single pass.
# Make an array with a one for each row/data point
ones = np.ones(X_train.shape[0])
# Concatenate these ones to existing feature vectors
X_train = np.column_stack((X_train, ones))
X_train.shape

# We will also create a numpy array with the binary labels for the reviews.
# One indicates a positive review and zero a negative review.
# The label `y_train[i]` corresponds to the review encoded in row `i` of the `X_train` matrix.

# In[8]:

# training labels
y_pos = np.ones(len(pos_files))
y_neg = np.zeros(len(neg_files))
y_train = np.concatenate([y_pos, y_neg])
y_train

# Now we will initialize our model, in the form of an array of weights `w` of the same size
# as the number of features in our dataset (i.e., the number of words in the vocabulary acquired by
# [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html)),
# with the bias folded in as the weight of the extra feature that is always one.
# The weights are initialized randomly with values between 0 and 1.

# In[9]:

# initialize model: a single weight vector (bias included), with random values between 0 and 1
n_examples, n_features = X_train.shape
w = np.random.random(n_features)

# Now we will use the logistic regression learning algorithm to learn the values of `w`
# (which includes the bias) from our training data.

# In[10]:

# from scipy.special import expit as sigmoid
def sigmoid(z):
    if -z > np.log(np.finfo(float).max):
        return 0.0
    return 1 / (1 + np.exp(-z))

# In[11]:

lr = 1e-1
n_epochs = 10

indices = np.arange(n_examples)
for epoch in range(n_epochs):
    # randomize the order in which training examples are seen in this epoch
    np.random.shuffle(indices)
    # traverse the training data
    for i in tqdm(indices, desc=f'epoch {epoch+1}'):
        x = X_train[i]
        y = y_train[i]
        # calculate the derivative of the cost function for this example
        deriv_cost = (sigmoid(x @ w) - y) * x
        # update the weights
        w = w - lr * deriv_cost

# The next step is evaluating the model on the test dataset.
# Note that this time we use the [`transform()`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html#sklearn.feature_extraction.text.CountVectorizer.transform)
# method of the [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html),
# instead of the [`fit_transform()`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html#sklearn.feature_extraction.text.CountVectorizer.fit_transform)
# method that we used above. This is because we want to use the learned vocabulary in the test set,
# instead of learning a new one.

# In[12]:

pos_files = glob('data/aclImdb/test/pos/*.txt')
neg_files = glob('data/aclImdb/test/neg/*.txt')
doc_term_matrix = cv.transform(pos_files + neg_files)
X_test = doc_term_matrix.toarray()
X_test = np.column_stack((X_test, np.ones(X_test.shape[0])))
y_pos = np.ones(len(pos_files))
y_neg = np.zeros(len(neg_files))
y_test = np.concatenate([y_pos, y_neg])

# Using the model is easy: multiply the document-term matrix by the learned weights and add the bias.
# We use Python's `@` operator to perform the matrix-vector multiplication.

# In[13]:

y_pred = X_test @ w > 0

# Now we print an evaluation of the prediction results using the `binary_classification_report()` function defined below.

# In[14]:

def binary_classification_report(y_true, y_pred):
    # count true positives, false positives, true negatives, and false negatives
    tp = fp = tn = fn = 0
    for gold, pred in zip(y_true, y_pred):
        if pred == True:
            if gold == True:
                tp += 1
            else:
                fp += 1
        else:
            if gold == False:
                tn += 1
            else:
                fn += 1
    # calculate precision and recall
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    # calculate f1 score
    fscore = 2 * precision * recall / (precision + recall)
    # calculate accuracy
    accuracy = (tp + tn) / len(y_true)
    # number of positive labels in y_true
    support = sum(y_true)
    return {
        "precision": precision,
        "recall": recall,
        "f1-score": fscore,
        "support": support,
        "accuracy": accuracy,
    }

# In[15]:

binary_classification_report(y_test, y_pred)
944
994
32
chap04-33
chap04-33
4 Implementing Text Classification Using Perceptron and Logistic Regression In the previous chapters we have discussed the theory behind the perceptron and logistic regression, including mathematical explanations of how and why they are able to learn from examples. In this chapter we will transition from math to code. Specifically, we will discuss how to implement these models in the Python programming language. All the code that we will introduce throughout this book is available online as well: http://clulab.github.io/gentlenlp/. The reader who is not familiar with the Python programming language is encouraged to read first Appendix A, for a brief introduction to the language, and Appendix B, for a discussion on how computers encode and preprocess text. Once done, please return here. To get a better understanding of how these algorithms work under the hood, we will start by implementing them from scratch. However, as the book progresses, we will introduce some of the popular tools and libraries that make Python the language of choice for machine learning, e.g., PyTorch,1 and Hugging Face’s transformers.2 The code for all the examples in the book is provided in the form of Jupyter notebooks.3 Important fragments of these notebooks will be presented in the implementation chapters so that the reader has the whole picture just by reading the book. However, we strongly encourage you to download the notebooks and execute them yourself. We also encourage you to modify them to conduct your own experiments! 1 https://pytorch.org
2 https://huggingface.co 3 https://jupyter.org/ 55 56 Implementing Text Classification Using Perceptron and LR 4.1 Binary Classification We begin this chapter with binary classification. That is, we aim to train classifiers that assign one of two labels to a given text. As the example for this task, we will train a review classifier using the the Large Movie Review Dataset (Maas et al., 2011).4 We tackle this task by implementing first a binary perceptron classifier, followed by a binary logistic regression one. We will implement the latter both from scratch as well as using PyTorch, so the reader has a clearer understanding on how PyTorch works “under the hood.” 4.1.1 Large Movie Review Dataset This dataset contains movie reviews and their associated scores (between 1 and 10) as provided by IMDb.5 converted these scores to binary labels by assigning each review a positive or negative label if the review score was above 6 or below 5, respectively. Reviews with scores 5 and 6 were considered too neutral and thus excluded. We follow the same protocol in this chapter. The dataset is divided in two even partitions called train and test, each containing 25,000 reviews. The dataset also provides additional unlabeled reviews, but we will not use those here. Each partition contains two directories called pos and neg where the positive and negative examples are stored. Each review is stored in an independent text file, whose name is composed of an id unique to the partition and the score associated with the review, separated by an underscore. An example of a positive and a negative review is shown in Table 4.1. 4.1.2 Bag-of-words Model As discussed in Section 2.2, we will encode the text to classify as a bag of words. That is, we encode each review as a list of numbers, with each position in the list corresponding to a word in our vocabulary, and the value stored in that position corresponding to the number of times the word appears in the review. For example, say we want to encode the following two reviews: 4 https://ai.stanford.edu/~amaas/data/sentiment/ 5 https://www.imdb.com/ Maas et al. 4.1 Binary Classification 57 Table 4.1 Two examples of movie reviews from IMDb. The first is a positive review of the movie Puss in Boots (1988). The second is a negative review of the movie Valentine (2001). These reviews can be found at https://www.imdb.com/review/rw0606396/ and https://www.imdb.com/review/rw0721861/, respectively. Filename Score Binary Label train/pos/24_8.txt 8/10 Positive train/neg/141_3.txt 3/10 Negative Review Text Although this was obviously a low-budget production, the performances and the songs in this movie are worth seeing. One of Walken’s few musical roles to date. (he is a marvelous dancer and singer and he demonstrates his acrobatic skills as well - watch for the cartwheel!) Also starring Jason Connery. A great children’s story and very likable characters. This stalk and slash turkey manages to bring nothing new to an increasingly stale genre. A masked killer stalks young, pert girls and slaughters them in a variety of gruesome ways, none of which are particularly inventive. It’s not scary, it’s not clever, and it’s not funny. So what was the point of it? Review 1: Review 2: "I liked the movie. My friend liked it too. " "I hated it. Would not recommend. " First, we need to create a vocabulary that maps each word to an id that uniquely identifies it. 
Each of these numbers will be used as the index in a list, so they must start at zero and grow by one for each word in the vocabulary. For example, one possible vocabulary that encodes the previous reviews is: {'would': 0, 'hated': 1, 58 Implementing Text Classification Using Perceptron and LR 'my': 2, 'liked': 3, 'not': 4, 'it': 5, 'movie': 6, 'recommend': 7, 'the': 8, 'I': 9, 'too': 10, 'friend': 11} Using this mapping, we can encode the two reviews as follows: Review1: [0,0,1,2,0,1,1,0,1,1,1,1] Review2: [1,1,0,0,1,1,0,1,0,1,0,0] Note that the word liked (fourth position) in the first review has a value of two. This is because this word appears twice in that review. This is a small example with a vocabulary of only 12 terms. Of course, the same process needs to be implemented for our whole training dataset. For this purpose we will use scikit-learn’s CountVectorizer class.6 Using the CountVectorizer class simplifies things, allowing us to get started quickly with a bag-of-words approach. However, note that it makes several simplifying assumptions (e.g., text is lowercased, and punctuation and single character tokens are removed). Some of these may not be adequate to other tasks. First, we need to obtain the filenames for the reviews in the training set: Once we have acquired the filenames for the training reviews, we need
to read them using the CountVectorizer. In order for the CountVectorizer to open and read the files for us, we make use of the input='filename' constructor parameter (otherwise it would expect the string content directly). The CountVectorizer provides three methods that will be use-
ful for us: a method called fit() that is used to acquire the vocabulary,
a method transform() that converts the text into the bag-of-words representation, and a method fit_transform() that conveniently acquires the vocabulary and transforms the data in a single step. The resulting object is referred to as a document-term matrix, where each row corre- 6 https://scikitlearn.org/stable/modules/generated/sklearn.feature_ extraction.text.CountVectorizer.html 4.1 Binary Classification 59 sponds to a document, and each column corresponds to a term in the vocabulary. As the output above indicates, the resulting matrix has 25,000 rows (one for each review), and 74,849 columns (one for each term). Also you may note that this matrix is sparse, with 3,445,861 stored elements. A regular matrix of shape 25,000×74,849 would have 1,871,225,000 elements. However, most of the elements in a document-term matrix are zeros because only a few words from the vocabulary appear in each document. A sparse matrix takes advantage of this fact by storing only the non-zero cells in order to reduce the memory required to store it. Thus, sparse matrices are convenient, especially when dealing with lots of data. Nevertheless, to simplify the downstream code in this example, we will convert it into a dense matrix, i.e., a regular two-dimensional NumPy array. Finally, we also need the labels of the reviews. We assign a label of one to positive reviews, and a label of zero to negative ones. Note that the first half of the reviews are positive and the second half are negative. The label at the ith position of the y_train array corresponds to the review encoded in the ith row of the X_train matrix. 4.1.3 Perceptron Now that we have defined our task and the data processing pipeline, we will implement a perceptron classifier that classifies the movie reviews as positive or negative. The entire code discussed in this section is available in the chap4_perceptron notebook. Recall from Section 2.4 that the perceptron is composed of a weight vector w and a bias term b. These will be represented as a NumPy array w of the same length as our document vectors, and a variable b for the bias term. Both will be initialized with zeros. The parameters w and b are learned through the following algorithm, which implements Algorithm 2 from Chapter 2: There are a couple of details to point out. Line 3 of Algorithm 2 indicates that we need to repeat the training loop until convergence. Theoretically, convergence is defined as predicting all training examples correctly. This is an ambitious requirement, which is not always possible in practice, so in this code we also include a stop condition if we reach a maximum number of epochs. Another crucial difference between our implementation here and the theoretical Algorithm 2, is that we randomize the order in which the training examples are seen at the beginning of 60 Implementing Text Classification Using Perceptron and LR each epoch. This simple (but highly recommended!) change is necessary to avoid the introduction of spurious biases due to the arbitrary order of the examples in the original training partition.7 We accomplish this by storing the indices corresponding to the X_train matrix rows in a NumPy array, and shuffling these indices at the beginning of each epoch. We shuffle the indices instead of the examples so that we can preserve the mapping between examples and labels. The training loop aligns closely with Algorithm 2. 
We start by iterating over each example in our training data, storing the current example in the variable x,8 and its corresponding label in the variable y_true. Next, we compute the perceptron decision function shown in Algorithm 1. Note that NumPy (as well as PyTorch) uses Python’s @ operator to indicate vector or matrix multiplication, depending on its operand types. Here we use it to calculate the dot product of the example x and the weights w. To this we add the bias b to obtain the predicted score, whose sign is used to assign a positive or negative predicted label. If the prediction is correct, then no update is needed, and we can move on to the next training example. However, if the prediction is incorrect, then we need to adjust w and b, as described in Algorithm 2. Sidebar 4.1 The tqdm function This is our first exposure to the tqdm function. tqdm is a progress bar that “make your loops show a smart progress meter.”9 The name tqdm comes from the Arabic word taqaddum which can mean “progress.” Using tqdm is as simple as wrapping it around the collection to be traversed. After training, we evaluate the model’s performance on the heldout test partition. The test data is loaded similarly to the training partition, but with one notable difference; we use CountVectorizer’s transform() method instead of the fit_transform() method so that the vocabulary is not adjusted for the test data. We won’t show here the loading of the test partition since it is so similar to the code already shown, but it is available in the Jupyter notebook that accompanies this section. . 7   As an extreme example, consider a dataset where all the positive examples appear first in the training partition. This would cause the perceptron to artificially inflate the weights of the features that occur in these examples, a situation from which the learning algorithm may struggle to recover. 
 . 8  We use typewriter font when we discuss variables in the code, to distinguish code from the theoretical discussion in the other chapters. 
 9 https://github.com/tqdm/tqdm 4.1 Binary Classification 61 Using the model to assign labels to all the test data is easily done in one step – we simply multiply the entire test data document-term matrix by the previously learned weights and add the bias. Scores greater than zero indicate a positive review, and those less than zero are negative. At this point we can evaluate the classifier’s performance, which we will do using precision, recall, and F1 scores for binary classification (described in Section 2.3). For this purpose, we implement a function called binary_classification_report that computes these metrics and returns them as a dictionary: We call this function to compare the predicted labels to the true labels, and obtain the evaluation scores. Our F1 score here is 86.8%, which is much higher than the baseline that assigns labels randomly, which yields an F1 score of about 50%. This is a good result, especially considering the simplicity of the perceptron! In the next sections and chapters, we will discuss a battery of strategies to considerably improve this performance. 4.1.4 Binary Logistic Regression from Scratch Using the same task, dataset, and evaluation, we will now implement a logistic regression classifier, as described in Algorithm 5 from Chapter 3. To give the reader hands-on experience with the implementation of the gradient calculations for logistic regression, we start by implementing it from scratch using NumPy. All the code shown in this section is available in the chap4_logistic_regression_numpy notebook. In the perceptron implementation, we represented the weights and the bias as two different variables. Here, however, we will use a different approach that will allow us to unify them into a single vector variable. Specifically, we take advantage of the similarity between the derivative of the cost function with respect to the weights (Equation 3.14) and the derivative of the cost with respect to the bias (Equation 3.15). d Ci(w, b) = (σi − yi)xij (3.14 revisited) dwj d Ci(w, b) = σi − yi (3.15 revisited) db Note that the two derivative formulas are identical except that the former has a multiplication by xij, while the latter does not. However, 62 Implementing Text Classification Using Perceptron and LR since σi − yi = (σi − yi)1 we can multiply the derivative of the cost with respect to the bias by one without changing the semantics. This gives an opportunity for combining the computations, doing them both in a single pass. The idea is that we can treat the bias as a weight corresponding to a feature that always has a value of one. As can be seen above, we created a NumPy array of ones of the same length as the number of examples in our training set (i.e., the number of rows in the data matrix). Then we add this array as a new column to the data matrix, using NumPy’s column_stack function. Next, we need to initialize our model. This time we will use a single NumPy array w of the same length as the number of columns in the data matrix. The weight vector w is initialized randomly with values between 0 and 1: Before implementing the learning algorithm, we need an implementation of the logistic function. 
Recall that the logistic function is σ(x) = 1 (3.1 revisited) 1+e−x This function can be easily implemented in NumPy as follows: However, this naive implementation may produce the following warning during training: The term overflow indicates that the result of evaluating exp(-x) is a number so large that it can’t be represented by a float (specifically, we’re using float64 numbers). We will avoid this issue by not calling exp with values that will overflow. NumPy provides the function finfo that can be consulted to find the limits of floating point numbers: The log of the largest floating point number is the largest number for which exp() will not overflow, so we will use it as a threshold to filter out problematic values: We now have everything we need to implement Algorithm 4. The steps to follow for each example are: (1) use the model to make a prediction, (2) calculate the gradient of the loss function with respect to the model parameters, and (3) update the model parameters using the gradient. The size of the update is controlled by the learning rate. Once the model has been trained, we evaluate it on the test dataset using our binary_classification_report function from the previous section. Loading and preprocessing the test dataset follows the same 4.1 Binary Classification 63 steps as with the previous classifier. We omit the code for brevity. These are the results: The performance is comparable with that of the perceptron. The difference in F1 scores between the two classifiers (84.9% here vs. 86.8% for the perceptron) is not significant. Classifier parity is probably attributable to the fact that the signal distinguishing the two classes being easy to learn and the simpler perceptron training algorithm being sufficient in this case. Nevertheless, this task is useful in showing how to implement the logistic regression model from scratch, i.e., by implementing the gradient calculation and parameter updates manually. Next, we will implement the same model again using PyTorch, highlighting how this machine learning library simplifies the process. 4.1.5 Binary Logistic Regression Utilizing PyTorch While it is fairly straightforward to compute the derivatives for logistic regression and implement then directly in NumPy, this will not scale well to arbitrary neural architectures. Fortunately, there are libraries that automate the computation of the derivatives of the cost function (assuming it is differentiable!) for any neural network, and use the resulting gradients to perform gradient descent or other more sophisticated optimization procedures. To this end, we will use the PyTorch deep learning library10. The corresponding notebook for this section is chap4_logistic_regression_pytorch_bce. Our model for logistic regression corresponds to PyTorch’s Linear layer. When we instantiate this layer, we specify the size of the inputs (the size of our vocabulary) and the size of the output, i.e., the number of output neurons (which is one because we’re doing binary classification). The loss function we use is the binary cross-entropy loss (see Chapter 3), which is implemented as BCEWithLogitsLoss in PyTorch. In PyTorch, the gradients obtained from the loss function are applied to the model by an optimizer object, which implements and applies an optimization algorithm. Here we will use the vanilla stochastic gradient descent optimizer; we set its learning rate to 0.1. This is equivalent to the discussion in Section 3.2. 
Similarly to the manual implementation, the steps required to train the model for a given training example are: (1) ensure the gradients are set to zeros, (2) apply the model to obtain a prediction, (3) calculate 10 https://pytorch.org/ 64 Implementing Text Classification Using Perceptron and LR the loss, (4) compute the gradient of the loss by back-propagation, and (5) update the model parameters. Recall that in our previous implementation everything was hardcoded: applying the model, computing the gradients, and optimizing the model parameters. Here, however, the implementation of the logistic regression is expressed at a higher level of abstraction. This means that we are describing the logical steps without specifying a particular implementation. Instead, implementation details are the responsability of the chosen model, loss function, and optimizer. Thus, we could even choose a different model, loss function, and/or optimizer, and use the same training steps with little or no modification. This decoupling of the training logic from the implementation details is one of the main advantages of libraries such as PyTorch. As shown in the code above, calling the model as a function, with the feature vectors as inputs, produces the predicted scores. Once again, a positive score corresponds to a positive label. When we evaluate this implementation on the test dataset, we obtain results that are in line with our previous models: Writing the perceptron and the logistic regression from scratch is a good exercise, as it exposes us to the fundamentals of implementing machine learning algorithms. However, this becomes cumbersome for more complex neural architectures. For this reason, from this point on, we will use PyTorch for all our coding examples. 4.2 Multiclass Classification So far, in this chapter we have discussed implementing binary classifiers. Next, we will modify these binary classifiers to perform multiclass classification, following the discussion in Section 3.5. 4.2.1 AG News Dataset Before explaining the actual training/testing code, we have to choose a new dataset that is suitable for multiclass classification. To this end, we will use the AG News Classification Dataset (Zhang et al., 2015), a subset of the larger AG corpus of news articles collected from thousands of different news sources.11 The classification dataset consists of four 11 http://groups.di.unipi.it/~gulli/AG_corpus_of_news_articles.html 4.2 Multiclass Classification 65 classes, and the data is equally balanced across all classes (30,000 articles per class for train, and 1,900 articles per class for testing). The goal of the task is to classify each article as one of the four classes: World, Sports, Business, or Sci/Tech. 4.2.2 Preparing the Dataset The AG News Dataset is distributed as two CSV files (one for training and one for testing), each containing three columns: the class index, the title, and the description. The dataset also provides a text file that maps the above class indexes to more descriptive class labels. Because of the tabular nature of the dataset, pandas, a Python library
for tabular data analysis,12 is a natural choice for loading and transform-
ing it. To this end, our Jupyter notebook (chap4_multiclass_logistic_regression) demonstrates the sequence of steps required to handle the data, as well
as model training and evaluation. First, we show how to load the CSV,
add column names, and inspect the result: class index . 0  3 
 . 1  3 
 . 2  3 
 . 3  3 
 . 4  3 
 ... ... . 119995  1 
 . 119996  2 
 . 119997  2 
 . 119998  2 
 . 119999  2 
 title Wall St. Bears Claw Back Into the Black (Reuters) Carlyle Looks Toward Commercial Aerospace (Reu... Oil and Economy Cloud Stocks' Outlook (Reuters) Iraq Halts Oil Exports from Main Southern Pipe... Oil prices soar to all-time record, posing new... ... Pakistan's Musharraf Says Won't Quit as Army C... Renteria signing a top-shelf deal Saban not going to Dolphins yet Today's NFL games Nets get Carter from Raptors description Reuters - Short-sellers, Wall Street's dwindli... Reuters - Private investment firm Carlyle Grou... Reuters - Soaring crude prices plus worries\ab... Reuters - Authorities have halted oil export\f... AFP - Tearaway world oil prices, toppling reco... ... KARACHI (Reuters) - Pakistani President Perve... Red Sox general manager Theo Epstein acknowled... The Miami Dolphins will put their courtship of... PITTSBURGH at NY GIANTS Time: 1:30 p.m. Line: ... INDIANAPOLIS -- All-Star Vince Carter was trad... 120000 rows × 3 columns Since the class labels themselves are in a separate file, we manually add them to the pandas data structure (called dataframe in pandas’ terminology) to increase the interpretability of the data. We use the class index column as a starting point, and use its map method to create a new column with the corresponding labels (technically a new Series object) that is added to the dataframe using its insert method, which allows us to insert the column in a specific position. Note that the label indices are one-based, so we subtract one to align them with their labels. 12 https://pandas.pydata.org 66 Implementing Text Classification Using Perceptron and LR class index . 0  3 
[dataframe preview: 120,000 rows × 4 columns (class index, class, title, description)]

Next we will preprocess the text. First we lowercase the title and description, and then we concatenate them into a single string. Then we remove some spurious backslashes from the text. Once this is done, the preprocessed text is added to the dataframe as a new column. Note that pandas allows these steps to be applied to all rows simultaneously.
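A sketch of this preprocessing, again assuming train_df from the previous steps:

# lowercase, concatenate title and description, and drop the spurious backslashes
title = train_df['title'].str.lower()
descr = train_df['description'].str.lower()
train_df['text'] = (title + ' ' + descr).str.replace('\\', ' ', regex=False)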
[dataframe preview: 120,000 rows × 5 columns (class index, class, title, description, text)]

At this point, the text is ready to be tokenized. For this purpose we will use NLTK's word_tokenize function. This function can be applied to the whole column at once using the pandas map function, which returns a new column that we add to the dataframe. However, here we actually use the progress_map function, which provides a visual progress bar. This visual feedback is especially helpful for tasks that take more time to complete.
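A sketch of the tokenization step; it assumes NLTK's punkt tokenizer resources are installed and train_df from the previous steps:

from nltk.tokenize import word_tokenize
from tqdm.notebook import tqdm

tqdm.pandas()   # registers progress_map, which is map plus a progress bar
train_df['tokens'] = train_df['text'].progress_map(word_tokenize)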
[dataframe preview: 120,000 rows × 6 columns (class index, class, title, description, text, tokens)]

From the tokens we just created, we then create a vocabulary for our corpus. Here, we only keep the words that occur at least 10 times, decreasing the memory needed and reducing the likelihood that our vocabulary contains noisy tokens. Note that each row in the tokens column contains a list of tokens. In order to create the vocabulary, we will need to convert the Series of lists of tokens into a Series of tokens using the explode() pandas method. Then we will use the value_counts() method to create a Series object in which the index are the tokens and the values are the number of times they appear in the corpus. The next step is removing the tokens with a count lower than our chosen threshold. Finally, we create a list with the remaining tokens, as well as a dictionary that maps tokens to token ids (i.e., the index of the token in the list). We include in the vocabulary a special token [UNK] that will be used as a placeholder for tokens that do not appear in our vocabulary after the frequency pruning.

Using this vocabulary, we construct a feature vector for each news article in the corpus. This feature vector will be encoded as a dictionary, with keys corresponding to token ids, and values corresponding to the number of times the token appears in the article. As above, the feature vectors will be stored as a new column in the dataframe.
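A condensed sketch of the vocabulary and feature-vector construction described above, mirroring the accompanying notebook (variable names follow the notebook; progress_map assumes tqdm.pandas() was called as in the previous snippet):

from collections import defaultdict

# build the vocabulary from token counts, keeping only frequent tokens
threshold = 10
counts = train_df['tokens'].explode().value_counts()
counts = counts[counts > threshold]
id_to_token = ['[UNK]'] + counts.index.tolist()
token_to_id = {w: i for i, w in enumerate(id_to_token)}

def make_feature_vector(tokens, unk_id=0):
    # sparse bag of words: token id -> number of occurrences in the article
    vector = defaultdict(int)
    for t in tokens:
        vector[token_to_id.get(t, unk_id)] += 1
    return vector

train_df['features'] = train_df['tokens'].progress_map(make_feature_vector)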
[dataframe preview: 120,000 rows × 7 columns (class index, class, title, description, text, tokens, features)]

The final preprocessing step is converting the features and the class indices into PyTorch tensors. Recall that we need to subtract one from the class indices to make them zero-based. At this point, the data is fully processed and we are ready to begin training.

4.2.3 Multiclass Logistic Regression Using PyTorch

The model itself is a single linear layer whose input size corresponds to the size of our vocabulary, and its output size corresponds to the number of classes in our corpus. PyTorch's Linear layer includes a bias by default, so there is no need to handle that manually the way we did for our perceptron example. The code for training this model (which implements Algorithm 6) is almost identical to that of the binary logistic regression. However, since we have to calculate a score for each of the four different classes, we need to replace the previous BCEWithLogitsLoss with CrossEntropyLoss, which applies a softmax over the scores to obtain probabilities for each class. For each example, the model predicts four scores, one for each label. The label with the highest score is selected using the argmax function. We evaluate the predictions of our model for each class using scikit-learn's classification_report, which handles the results of multiclass classification.
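A condensed sketch of these final steps, assuming train_df, id_to_token, token_to_id, and labels from the previous snippets; the accompanying notebook additionally moves tensors to a GPU when available, trains for several epochs, and evaluates on the held-out test split with scikit-learn's classification_report:

import numpy as np
import torch
from torch import nn, optim

vocabulary_size = len(id_to_token)                # vocabulary built in the previous snippet

def make_dense(feats):
    # expand the sparse dictionary of counts into a dense vector
    x = np.zeros(vocabulary_size)
    for k, v in feats.items():
        x[k] = v
    return x

X_train = torch.tensor(np.stack(train_df['features'].map(make_dense)), dtype=torch.float32)
y_train = torch.tensor(train_df['class index'].to_numpy() - 1)   # one-based indices become zero-based

model = nn.Linear(vocabulary_size, len(labels))   # one score per class
loss_func = nn.CrossEntropyLoss()                 # softmax + cross-entropy over the class scores
optimizer = optim.SGD(model.parameters(), lr=1.0)

for i in np.random.permutation(len(y_train)):     # one epoch of stochastic gradient descent
    model.zero_grad()
    y_scores = model(X_train[i].unsqueeze(0))
    loss = loss_func(y_scores, y_train[i].unsqueeze(0))
    loss.backward()
    optimizer.step()

y_pred = torch.argmax(model(X_train), dim=1)      # the label with the highest score wins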
4.3 Summary

In this chapter, we used movie review and news article classification to illustrate the implementation of the previously described algorithms for the binary perceptron, binary logistic regression, and multiclass logistic regression. For the binary logistic regression, we made a direct comparison between the lower-level NumPy implementation and a higher-level version that made use of PyTorch. We hope that through this series of exercises the reader has noted several key takeaways. First, data preparation is important and should be done thoughtfully. Certain tasks (e.g., text normalization or sentence splitting) are going to be frequently needed if you continue with NLP, so using or creating generic functions can be very helpful. However, what works for one dataset and one language may not be suitable for another scenario. For example, in our case, we selected different tokenizers for each of our tasks to account for the different registers of English, as well as removing diacritics during normalization. Second, when it comes to implementing machine learning algorithms, it is often easier to use a higher-level library such as PyTorch instead of NumPy. For example, with the former, the gradients are calculated by the library, whereas in NumPy we have to code them ourselves. This becomes cumbersome quickly. For example, even the derivative of the softmax is non-trivial. Third, PyTorch imposes a training structure that remains largely the same, regardless of what models are being trained. That is, at a high level, the same steps are always required: clearing the current gradients, predicting output scores for the provided inputs, calculating the loss, and optimizing. These features make PyTorch a very powerful and convenient deep learning library; we will continue to use it throughout the remainder of the book to implement more complex neural architectures.
25,628
25,780
#!/usr/bin/env python
# coding: utf-8

# # Multiclass Text Classification with
# # Logistic Regression Implemented with PyTorch and CE Loss

# First, we will do some initialization.

# In[1]:

import random
import torch
import numpy as np
import pandas as pd
from tqdm.notebook import tqdm

# enable tqdm in pandas
tqdm.pandas()

# set to True to use the gpu (if there is one available)
use_gpu = True

# select device
device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu')
print(f'device: {device.type}')

# random seed
seed = 1234

# set random seed
if seed is not None:
    print(f'random seed: {seed}')
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)

# We will be using the AG's News Topic Classification Dataset.
# It is stored in two CSV files: `train.csv` and `test.csv`, as well as a `classes.txt` that stores the labels of the classes to predict.
#
# First, we will load the training dataset using [pandas](https://pandas.pydata.org/) and take a quick look at the data.

# In[2]:

train_df = pd.read_csv('data/ag_news_csv/train.csv', header=None)
train_df.columns = ['class index', 'title', 'description']
train_df

# The dataset consists of 120,000 examples, each consisting of a class index, a title, and a description.
# The class labels are distributed in a separate file. We will add the labels to the dataset so that we can interpret the data more easily. Note that the label indexes are one-based, so we need to subtract one to retrieve them from the list.

# In[3]:

labels = open('data/ag_news_csv/classes.txt').read().splitlines()
classes = train_df['class index'].map(lambda i: labels[i-1])
train_df.insert(1, 'class', classes)
train_df

# Let's inspect how balanced our examples are by using a bar plot.

# In[4]:

pd.value_counts(train_df['class']).plot.bar()

# The classes are evenly distributed. That's great!
#
# However, the text contains some spurious backslashes in some parts of the text.
# They are meant to represent newlines in the original text.
# An example can be seen below, between the words "dwindling" and "band".

# In[5]:

print(train_df.loc[0, 'description'])

# We will replace the backslashes with spaces on the whole column using pandas replace method.

# In[6]:

title = train_df['title'].str.lower()
descr = train_df['description'].str.lower()
text = title + " " + descr
train_df['text'] = text.str.replace('\\', ' ', regex=False)
train_df

# Now we will proceed to tokenize the title and description columns using NLTK's word_tokenize().
# We will add a new column to our dataframe with the list of tokens.

# In[7]:

from nltk.tokenize import word_tokenize

train_df['tokens'] = train_df['text'].progress_map(word_tokenize)
train_df

# Now we will create a vocabulary from the training data. We will only keep the terms that repeat beyond some threshold established below.

# In[8]:

threshold = 10
tokens = train_df['tokens'].explode().value_counts()
tokens = tokens[tokens > threshold]
id_to_token = ['[UNK]'] + tokens.index.tolist()
token_to_id = {w: i for i, w in enumerate(id_to_token)}
vocabulary_size = len(id_to_token)
print(f'vocabulary size: {vocabulary_size:,}')

# In[9]:

from collections import defaultdict

def make_feature_vector(tokens, unk_id=0):
    vector = defaultdict(int)
    for t in tokens:
        i = token_to_id.get(t, unk_id)
        vector[i] += 1
    return vector

train_df['features'] = train_df['tokens'].progress_map(make_feature_vector)
train_df

# In[10]:

def make_dense(feats):
    x = np.zeros(vocabulary_size)
    for k, v in feats.items():
        x[k] = v
    return x

X_train = np.stack(train_df['features'].progress_map(make_dense))
y_train = train_df['class index'].to_numpy() - 1
X_train = torch.tensor(X_train, dtype=torch.float32)
y_train = torch.tensor(y_train)

# In[11]:

from torch import nn
from torch import optim

# hyperparameters
lr = 1.0
n_epochs = 5

n_examples = X_train.shape[0]
n_feats = X_train.shape[1]
n_classes = len(labels)

# initialize the model, loss function, and optimizer
model = nn.Linear(n_feats, n_classes).to(device)
loss_func = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=lr)

# train the model
indices = np.arange(n_examples)
for epoch in range(n_epochs):
    np.random.shuffle(indices)
    for i in tqdm(indices, desc=f'epoch {epoch+1}'):
        # clear gradients
        model.zero_grad()
        # send datum to right device
        x = X_train[i].unsqueeze(0).to(device)
        y_true = y_train[i].unsqueeze(0).to(device)
        # predict label scores
        y_pred = model(x)
        # compute loss
        loss = loss_func(y_pred, y_true)
        # backpropagate
        loss.backward()
        # optimize model parameters
        optimizer.step()

# Next, we evaluate on the test dataset

# In[12]:

# repeat all preprocessing done above, this time on the test set
test_df = pd.read_csv('data/ag_news_csv/test.csv', header=None)
test_df.columns = ['class index', 'title', 'description']
test_df['text'] = test_df['title'].str.lower() + " " + test_df['description'].str.lower()
test_df['text'] = test_df['text'].str.replace('\\', ' ', regex=False)
test_df['tokens'] = test_df['text'].progress_map(word_tokenize)
test_df['features'] = test_df['tokens'].progress_map(make_feature_vector)
X_test = np.stack(test_df['features'].progress_map(make_dense))
y_test = test_df['class index'].to_numpy() - 1
X_test = torch.tensor(X_test, dtype=torch.float32)
y_test = torch.tensor(y_test)

# In[13]:

from sklearn.metrics import classification_report

# set model to evaluation mode
model.eval()

# don't store gradients
with torch.no_grad():
    X_test = X_test.to(device)
    y_pred = torch.argmax(model(X_test), dim=1)
    y_pred = y_pred.cpu().numpy()
    print(classification_report(y_test, y_pred, target_names=labels))
2,284
2,453
33
chap04-34
chap04-34
Each word id will be used as the index in a list, so the ids must start at zero and grow by one for each word in the vocabulary. For example, one possible vocabulary that encodes the previous reviews is:

{'would': 0, 'hated': 1, 'my': 2, 'liked': 3, 'not': 4, 'it': 5, 'movie': 6, 'recommend': 7, 'the': 8, 'I': 9, 'too': 10, 'friend': 11}

Using this mapping, we can encode the two reviews as follows:

Review 1: [0, 0, 1, 2, 0, 1, 1, 0, 1, 1, 1, 1]
Review 2: [1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0]

Note that the word liked (fourth position) in the first review has a value of two. This is because this word appears twice in that review. This is a small example with a vocabulary of only 12 terms. Of course, the same process needs to be implemented for our whole training dataset. For this purpose we will use scikit-learn's CountVectorizer class.6 Using the CountVectorizer class simplifies things, allowing us to get started quickly with a bag-of-words approach. However, note that it makes several simplifying assumptions (e.g., text is lowercased, and punctuation and single character tokens are removed). Some of these may not be adequate to other tasks. First, we need to obtain the filenames for the reviews in the training set:
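A sketch of this step, with the directory layout used by the accompanying chap4_perceptron notebook:

from glob import glob

# filenames of the positive and negative training reviews
pos_files = glob('data/aclImdb/train/pos/*.txt')
neg_files = glob('data/aclImdb/train/neg/*.txt')
print('number of positive reviews:', len(pos_files))
print('number of negative reviews:', len(neg_files))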
Once we have acquired the filenames for the training reviews, we need to read them using the CountVectorizer. In order for the CountVectorizer to open and read the files for us, we make use of the input='filename' constructor parameter (otherwise it would expect the string content directly). The CountVectorizer provides three methods that will be useful for us: a method called fit() that is used to acquire the vocabulary, a method transform() that converts the text into the bag-of-words representation, and a method fit_transform() that conveniently acquires the vocabulary and transforms the data in a single step. The resulting object is referred to as a document-term matrix, where each row corresponds to a document, and each column corresponds to a term in the vocabulary.

The resulting matrix has 25,000 rows (one for each review), and 74,849 columns (one for each term). You may also note that this matrix is sparse, with 3,445,861 stored elements. A regular matrix of shape 25,000×74,849 would have 1,871,225,000 elements. However, most of the elements in a document-term matrix are zeros because only a few words from the vocabulary appear in each document. A sparse matrix takes advantage of this fact by storing only the non-zero cells in order to reduce the memory required to store it. Thus, sparse matrices are convenient, especially when dealing with lots of data. Nevertheless, to simplify the downstream code in this example, we will convert it into a dense matrix, i.e., a regular two-dimensional NumPy array.

Finally, we also need the labels of the reviews. We assign a label of one to positive reviews, and a label of zero to negative ones. Note that the first half of the reviews are positive and the second half are negative. The label at the ith position of the y_train array corresponds to the review encoded in the ith row of the X_train matrix.
6 https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html

4.1.3 Perceptron

Now that we have defined our task and the data processing pipeline, we will implement a perceptron classifier that classifies the movie reviews as positive or negative. The entire code discussed in this section is available in the chap4_perceptron notebook. Recall from Section 2.4 that the perceptron is composed of a weight vector w and a bias term b. These will be represented as a NumPy array w of the same length as our document vectors, and a variable b for the bias term. Both will be initialized with zeros. The parameters w and b are learned through the following algorithm, which implements Algorithm 2 from Chapter 2:
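A lightly condensed version of the corresponding fragment of the chap4_perceptron notebook; it assumes X_train and y_train were built with CountVectorizer as described above:

import numpy as np
from tqdm.notebook import tqdm

n_examples, n_features = X_train.shape
w = np.zeros(n_features)                 # weight vector
b = 0                                    # bias term
n_epochs = 10

indices = np.arange(n_examples)
for epoch in range(n_epochs):
    n_errors = 0
    np.random.shuffle(indices)           # randomize the order of the training examples
    for i in tqdm(indices, desc=f'epoch {epoch+1}'):
        x, y_true = X_train[i], y_train[i]
        score = x @ w + b                # perceptron decision function
        y_pred = 1 if score > 0 else 0
        if y_true == y_pred:
            continue                     # correct prediction: no update needed
        elif y_true == 1 and y_pred == 0:
            w, b = w + x, b + 1          # false negative: promote
            n_errors += 1
        else:
            w, b = w - x, b - 1          # false positive: demote
            n_errors += 1
    if n_errors == 0:                    # converged: every training example classified correctly
        break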
There are a couple of details to point out. Line 3 of Algorithm 2 indicates that we need to repeat the training loop until convergence. Theoretically, convergence is defined as predicting all training examples correctly. This is an ambitious requirement, which is not always possible in practice, so in this code we also include a stop condition if we reach a maximum number of epochs. Another crucial difference between our implementation here and the theoretical Algorithm 2 is that we randomize the order in which the training examples are seen at the beginning of each epoch. This simple (but highly recommended!) change is necessary to avoid the introduction of spurious biases due to the arbitrary order of the examples in the original training partition.7 We accomplish this by storing the indices corresponding to the X_train matrix rows in a NumPy array, and shuffling these indices at the beginning of each epoch. We shuffle the indices instead of the examples so that we can preserve the mapping between examples and labels.

The training loop aligns closely with Algorithm 2. We start by iterating over each example in our training data, storing the current example in the variable x,8 and its corresponding label in the variable y_true. Next, we compute the perceptron decision function shown in Algorithm 1. Note that NumPy (as well as PyTorch) uses Python's @ operator to indicate vector or matrix multiplication, depending on its operand types. Here we use it to calculate the dot product of the example x and the weights w. To this we add the bias b to obtain the predicted score, whose sign is used to assign a positive or negative predicted label. If the prediction is correct, then no update is needed, and we can move on to the next training example. However, if the prediction is incorrect, then we need to adjust w and b, as described in Algorithm 2.

Sidebar 4.1 The tqdm function
This is our first exposure to the tqdm function. tqdm is a progress bar that "make your loops show a smart progress meter."9 The name tqdm comes from the Arabic word taqaddum which can mean "progress." Using tqdm is as simple as wrapping it around the collection to be traversed.

After training, we evaluate the model's performance on the held-out test partition. The test data is loaded similarly to the training partition, but with one notable difference; we use CountVectorizer's transform() method instead of the fit_transform() method so that the vocabulary is not adjusted for the test data. We won't show here the loading of the test partition since it is so similar to the code already shown, but it is available in the Jupyter notebook that accompanies this section.
7 As an extreme example, consider a dataset where all the positive examples appear first in the training partition. This would cause the perceptron to artificially inflate the weights of the features that occur in these examples, a situation from which the learning algorithm may struggle to recover.
8 We use typewriter font when we discuss variables in the code, to distinguish code from the theoretical discussion in the other chapters.
9 https://github.com/tqdm/tqdm

Using the model to assign labels to all the test data is easily done in one step: we simply multiply the entire test data document-term matrix by the previously learned weights and add the bias. Scores greater than zero indicate a positive review, and those less than zero are negative. At this point we can evaluate the classifier's performance, which we will do using precision, recall, and F1 scores for binary classification (described in Section 2.3). For this purpose, we implement a function called binary_classification_report that computes these metrics and returns them as a dictionary. We call this function to compare the predicted labels to the true labels, and obtain the evaluation scores. Our F1 score here is 86.8%, which is much higher than the approximately 50% F1 of a baseline that assigns labels randomly. This is a good result, especially considering the simplicity of the perceptron! In the next sections and chapters, we will discuss a battery of strategies to considerably improve this performance.

4.1.4 Binary Logistic Regression from Scratch

Using the same task, dataset, and evaluation, we will now implement a logistic regression classifier, as described in Algorithm 5 from Chapter 3. To give the reader hands-on experience with the implementation of the gradient calculations for logistic regression, we start by implementing it from scratch using NumPy. All the code shown in this section is available in the chap4_logistic_regression_numpy notebook. In the perceptron implementation, we represented the weights and the bias as two different variables. Here, however, we will use a different approach that will allow us to unify them into a single vector variable. Specifically, we take advantage of the similarity between the derivative of the cost function with respect to the weights (Equation 3.14) and the derivative of the cost with respect to the bias (Equation 3.15):

$\frac{d}{d w_j} C_i(\mathbf{w}, b) = (\sigma_i - y_i) x_{ij}$  (3.14 revisited)

$\frac{d}{d b} C_i(\mathbf{w}, b) = \sigma_i - y_i$  (3.15 revisited)

Note that the two derivative formulas are identical except that the former has a multiplication by $x_{ij}$, while the latter does not. However, since $\sigma_i - y_i = (\sigma_i - y_i) \cdot 1$, we can multiply the derivative of the cost with respect to the bias by one without changing the semantics. This gives an opportunity for combining the computations, doing them both in a single pass. The idea is that we can treat the bias as a weight corresponding to a feature that always has a value of one.

To implement this, we create a NumPy array of ones of the same length as the number of examples in our training set (i.e., the number of rows in the data matrix), and add this array as a new column to the data matrix, using NumPy's column_stack function. Next, we need to initialize our model. This time we will use a single NumPy array w of the same length as the number of columns in the data matrix. The weight vector w is initialized randomly with values between 0 and 1:
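A minimal sketch of the bias-as-feature trick and the random initialization described above; X_train is assumed to be the dense document-term matrix, and the variable names are illustrative:

import numpy as np

ones = np.ones(X_train.shape[0])            # one extra "feature" that is always 1
X_train = np.column_stack((X_train, ones))  # append it as a new column of the data matrix

# a single parameter vector: one weight per column, the last one acting as the bias
w = np.random.rand(X_train.shape[1])        # random initialization in [0, 1)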
Before implementing the learning algorithm, we need an implementation of the logistic function. Recall that the logistic function is:

$\sigma(x) = \frac{1}{1 + e^{-x}}$  (3.1 revisited)

This function can be easily implemented in NumPy using np.exp. However, a naive implementation may produce an overflow warning during training. The term overflow indicates that the result of evaluating exp(-x) is a number so large that it can't be represented by a float (specifically, we're using float64 numbers). We will avoid this issue by not calling exp with values that will overflow. NumPy provides the function finfo that can be consulted to find the limits of floating point numbers; the log of the largest floating point number is the largest number for which exp() will not overflow, so we will use it as a threshold to filter out problematic values.

We now have everything we need to implement Algorithm 4. The steps to follow for each example are: (1) use the model to make a prediction, (2) calculate the gradient of the loss function with respect to the model parameters, and (3) update the model parameters using the gradient. The size of the update is controlled by the learning rate. Once the model has been trained, we evaluate it on the test dataset using our binary_classification_report function from the previous section. Loading and preprocessing the test dataset follows the same steps as with the previous classifier. We omit the code for brevity. The performance is comparable with that of the perceptron. The difference in F1 scores between the two classifiers (84.9% here vs. 86.8% for the perceptron) is not significant. This classifier parity is probably attributable to the fact that the signal distinguishing the two classes is easy to learn, and the simpler perceptron training algorithm is sufficient in this case. Nevertheless, this task is useful in showing how to implement the logistic regression model from scratch, i.e., by implementing the gradient calculation and parameter updates manually. Next, we will implement the same model again using PyTorch, highlighting how this machine learning library simplifies the process.

4.1.5 Binary Logistic Regression Utilizing PyTorch

While it is fairly straightforward to compute the derivatives for logistic regression and implement them directly in NumPy, this will not scale well to arbitrary neural architectures. Fortunately, there are libraries that automate the computation of the derivatives of the cost function (assuming it is differentiable!) for any neural network, and use the resulting gradients to perform gradient descent or other more sophisticated optimization procedures. To this end, we will use the PyTorch deep learning library.10 The corresponding notebook for this section is chap4_logistic_regression_pytorch_bce. Our model for logistic regression corresponds to PyTorch's Linear layer. When we instantiate this layer, we specify the size of the inputs (the size of our vocabulary) and the size of the output, i.e., the number of output neurons (which is one because we're doing binary classification). The loss function we use is the binary cross-entropy loss (see Chapter 3), which is implemented as BCEWithLogitsLoss in PyTorch. In PyTorch, the gradients obtained from the loss function are applied to the model by an optimizer object, which implements and applies an optimization algorithm. Here we will use the vanilla stochastic gradient descent optimizer; we set its learning rate to 0.1. This is equivalent to the discussion in Section 3.2.
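A minimal sketch of this setup, assuming X_train is the document-term matrix built earlier; the names below are illustrative rather than the book's exact listing:

from torch import nn, optim

n_features = X_train.shape[1]               # size of the vocabulary acquired by CountVectorizer
model = nn.Linear(n_features, 1)            # one output neuron: binary classification
loss_func = nn.BCEWithLogitsLoss()          # binary cross-entropy computed on raw scores (logits)
optimizer = optim.SGD(model.parameters(), lr=0.1)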
10,310
10,409
#!/usr/bin/env python
# coding: utf-8

# # Binary Text Classification with Perceptron

# In[1]:

import random
import numpy as np
from tqdm.notebook import tqdm

# set this variable to a number to be used as the random seed
# or to None if you don't want to set a random seed
seed = 1234

if seed is not None:
    random.seed(seed)
    np.random.seed(seed)

# The dataset is divided in two directories called `train` and `test`.
# These directories contain the training and testing splits of the dataset.

# In[2]:

get_ipython().system('ls -lh data/aclImdb/')

# Both the `train` and `test` directories contain two directories called `pos` and `neg` that contain text files with the positive and negative reviews, respectively.

# In[3]:

get_ipython().system('ls -lh data/aclImdb/train/')

# We will now read the filenames of the positive and negative examples.

# In[4]:

from glob import glob

pos_files = glob('data/aclImdb/train/pos/*.txt')
neg_files = glob('data/aclImdb/train/neg/*.txt')
print('number of positive reviews:', len(pos_files))
print('number of negative reviews:', len(neg_files))

# Now, we will use a [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html) to read the text files, tokenize them, acquire a vocabulary from the training data, and encode it in a document-term matrix in which each row represents a review, and each column represents a term in the vocabulary. Each element $(i,j)$ in the matrix represents the number of times term $j$ appears in example $i$.

# In[5]:

from sklearn.feature_extraction.text import CountVectorizer

# initialize CountVectorizer indicating that we will give it a list of filenames that have to be read
cv = CountVectorizer(input='filename')

# learn vocabulary and return sparse document-term matrix
doc_term_matrix = cv.fit_transform(pos_files + neg_files)
doc_term_matrix

# Note in the message printed above that the matrix is of shape (25000, 74849).
# In other words, it has 1,871,225,000 elements.
# However, only 3,445,861 elements were stored.
# This is because most of the elements in the matrix are zeros.
# The reason is that the reviews are short and most words in the english language don't appear in each review.
# A matrix that only stores non-zero values is called *sparse*.
#
# Now we will convert it to a dense numpy array:

# In[6]:

X_train = doc_term_matrix.toarray()
X_train.shape

# We will also create a numpy array with the binary labels for the reviews.
# One indicates a positive review and zero a negative review.
# The label `y_train[i]` corresponds to the review encoded in row `i` of the `X_train` matrix.

# In[7]:

# training labels
y_pos = np.ones(len(pos_files))
y_neg = np.zeros(len(neg_files))
y_train = np.concatenate([y_pos, y_neg])
y_train

# Now we will initialize our model, in the form of an array of weights `w` of the same size as the number of features in our dataset (i.e., the number of words in the vocabulary acquired by [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html)), and a bias term `b`.
# Both are initialized to zeros.

# In[8]:

# initialize model: the feature vector and bias term are populated with zeros
n_examples, n_features = X_train.shape
w = np.zeros(n_features)
b = 0

# Now we will use the perceptron learning algorithm to learn the values of `w` and `b` from our training data.

# In[9]:

n_epochs = 10
indices = np.arange(n_examples)

for epoch in range(n_epochs):
    n_errors = 0
    # randomize the order in which training examples are seen in this epoch
    np.random.shuffle(indices)
    # traverse the training data
    for i in tqdm(indices, desc=f'epoch {epoch+1}'):
        x = X_train[i]
        y_true = y_train[i]
        # the perceptron decision based on the current model
        score = x @ w + b
        y_pred = 1 if score > 0 else 0
        # update the model if the prediction was incorrect
        if y_true == y_pred:
            continue
        elif y_true == 1 and y_pred == 0:
            w = w + x
            b = b + 1
            n_errors += 1
        elif y_true == 0 and y_pred == 1:
            w = w - x
            b = b - 1
            n_errors += 1
    if n_errors == 0:
        break

# The next step is evaluating the model on the test dataset.
# Note that this time we use the [`transform()`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html#sklearn.feature_extraction.text.CountVectorizer.transform) method of the [`CountVectorizer`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html), instead of the [`fit_transform()`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html#sklearn.feature_extraction.text.CountVectorizer.fit_transform) method that we used above. This is because we want to use the learned vocabulary in the test set, instead of learning a new one.

# In[10]:

pos_files = glob('data/aclImdb/test/pos/*.txt')
neg_files = glob('data/aclImdb/test/neg/*.txt')
doc_term_matrix = cv.transform(pos_files + neg_files)
X_test = doc_term_matrix.toarray()
y_pos = np.ones(len(pos_files))
y_neg = np.zeros(len(neg_files))
y_test = np.concatenate([y_pos, y_neg])

# Using the model is easy: multiply the document-term matrix by the learned weights and add the bias.
# We use Python's `@` operator to perform the matrix-vector multiplication.

# In[11]:

y_pred = (X_test @ w + b) > 0

# Now we print an evaluation of the prediction results using a `binary_classification_report()` function that computes precision, recall, F1, and accuracy.

# In[12]:

def binary_classification_report(y_true, y_pred):
    # count true positives, false positives, true negatives, and false negatives
    tp = fp = tn = fn = 0
    for gold, pred in zip(y_true, y_pred):
        if pred == True:
            if gold == True:
                tp += 1
            else:
                fp += 1
        else:
            if gold == False:
                tn += 1
            else:
                fn += 1
    # calculate precision and recall
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    # calculate f1 score
    fscore = 2 * precision * recall / (precision + recall)
    # calculate accuracy
    accuracy = (tp + tn) / len(y_true)
    # number of positive labels in y_true
    support = sum(y_true)
    return {
        "precision": precision,
        "recall": recall,
        "f1-score": fscore,
        "support": support,
        "accuracy": accuracy,
    }

# In[13]:

binary_classification_report(y_test, y_pred)
Each of these numbers will be used as the index in a list, so they must start at zero and grow by one for each word in the vocabulary. For example, one possible vocabulary that encodes the previous reviews is:

{'would': 0, 'hated': 1, 'my': 2, 'liked': 3, 'not': 4, 'it': 5, 'movie': 6, 'recommend': 7, 'the': 8, 'I': 9, 'too': 10, 'friend': 11}

Using this mapping, we can encode the two reviews as follows:

Review 1: [0, 0, 1, 2, 0, 1, 1, 0, 1, 1, 1, 1]
Review 2: [1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0]

Note that the word liked (fourth position) in the first review has a value of two. This is because this word appears twice in that review. This is a small example with a vocabulary of only 12 terms.
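To make the toy example above concrete, the following minimal sketch builds the vocabulary and the count vectors in plain Python. This is illustrative code only: the naive whitespace tokenization and the variable names are our own, and the resulting word ids may differ from the listing above.

# a minimal sketch of the toy bag-of-words encoding above;
# the tokenization here is deliberately naive (strip final periods, lowercase)
reviews = [
    "I liked the movie. My friend liked it too.",
    "I hated it. Would not recommend.",
]

def tokenize(text):
    return [w.strip('.').lower() for w in text.split()]

# build the vocabulary: one id per unique word, assigned in order of appearance
vocabulary = {}
for review in reviews:
    for word in tokenize(review):
        if word not in vocabulary:
            vocabulary[word] = len(vocabulary)

# encode each review as a vector of word counts
def encode(text):
    counts = [0] * len(vocabulary)
    for word in tokenize(text):
        counts[vocabulary[word]] += 1
    return counts

for review in reviews:
    print(encode(review))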
Of course, the same process needs to be implemented for our whole training dataset. For this purpose we will use scikit-learn's CountVectorizer class.6 Using the CountVectorizer class simplifies things, allowing us to get started quickly with a bag-of-words approach. However, note that it makes several simplifying assumptions (e.g., text is lowercased, and punctuation and single-character tokens are removed). Some of these may not be appropriate for other tasks.

6 https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html

First, we need to obtain the filenames for the reviews in the training set. Once we have acquired the filenames for the training reviews, we need to read them using the CountVectorizer. In order for the CountVectorizer to open and read the files for us, we make use of the input='filename' constructor parameter (otherwise it would expect the string content directly). The CountVectorizer provides three methods that will be useful for us: a method called fit() that is used to acquire the vocabulary, a method transform() that converts the text into the bag-of-words representation, and a method fit_transform() that conveniently acquires the vocabulary and transforms the data in a single step. The resulting object is referred to as a document-term matrix, where each row corresponds to a document, and each column corresponds to a term in the vocabulary.

As the output above indicates, the resulting matrix has 25,000 rows (one for each review), and 74,849 columns (one for each term). Also, you may note that this matrix is sparse, with 3,445,861 stored elements. A regular matrix of shape 25,000 × 74,849 would have 1,871,225,000 elements. However, most of the elements in a document-term matrix are zeros because only a few words from the vocabulary appear in each document. A sparse matrix takes advantage of this fact by storing only the non-zero cells in order to reduce the memory required to store it. Thus, sparse matrices are convenient, especially when dealing with lots of data. Nevertheless, to simplify the downstream code in this example, we will convert it into a dense matrix, i.e., a regular two-dimensional NumPy array.

Finally, we also need the labels of the reviews. We assign a label of one to positive reviews, and a label of zero to negative ones. Note that the first half of the reviews are positive and the second half are negative. The label at the ith position of the y_train array corresponds to the review encoded in the ith row of the X_train matrix.

4.1.3 Perceptron

Now that we have defined our task and the data processing pipeline, we will implement a perceptron classifier that classifies the movie reviews as positive or negative. The entire code discussed in this section is available in the chap4_perceptron notebook. Recall from Section 2.4 that the perceptron is composed of a weight vector w and a bias term b. These will be represented as a NumPy array w of the same length as our document vectors, and a variable b for the bias term. Both will be initialized with zeros. The parameters w and b are learned through the following algorithm, which implements Algorithm 2 from Chapter 2.

There are a couple of details to point out. Line 3 of Algorithm 2 indicates that we need to repeat the training loop until convergence. Theoretically, convergence is defined as predicting all training examples correctly. This is an ambitious requirement, which is not always possible in practice, so in this code we also include a stop condition if we reach a maximum number of epochs. Another crucial difference between our implementation here and the theoretical Algorithm 2 is that we randomize the order in which the training examples are seen at the beginning of each epoch. This simple (but highly recommended!) change is necessary to avoid the introduction of spurious biases due to the arbitrary order of the examples in the original training partition.7 We accomplish this by storing the indices corresponding to the X_train matrix rows in a NumPy array, and shuffling these indices at the beginning of each epoch. We shuffle the indices instead of the examples so that we can preserve the mapping between examples and labels. The training loop aligns closely with Algorithm 2.
We start by iterating over each example in our training data, storing the current example in the variable x,8 and its corresponding label in the variable y_true. Next, we compute the perceptron decision function shown in Algorithm 1. Note that NumPy (as well as PyTorch) uses Python's @ operator to indicate vector or matrix multiplication, depending on its operand types. Here we use it to calculate the dot product of the example x and the weights w. To this we add the bias b to obtain the predicted score, whose sign is used to assign a positive or negative predicted label. If the prediction is correct, then no update is needed, and we can move on to the next training example. However, if the prediction is incorrect, then we need to adjust w and b, as described in Algorithm 2.

Sidebar 4.1 The tqdm function
This is our first exposure to the tqdm function. tqdm is a progress bar that "make[s] your loops show a smart progress meter."9 The name tqdm comes from the Arabic word taqaddum, which can mean "progress." Using tqdm is as simple as wrapping it around the collection to be traversed.

After training, we evaluate the model's performance on the held-out test partition. The test data is loaded similarly to the training partition, but with one notable difference: we use CountVectorizer's transform() method instead of the fit_transform() method so that the vocabulary is not adjusted for the test data. We won't show here the loading of the test partition since it is so similar to the code already shown, but it is available in the Jupyter notebook that accompanies this section.

7 As an extreme example, consider a dataset where all the positive examples appear first in the training partition. This would cause the perceptron to artificially inflate the weights of the features that occur in these examples, a situation from which the learning algorithm may struggle to recover.
8 We use typewriter font when we discuss variables in the code, to distinguish code from the theoretical discussion in the other chapters.
9 https://github.com/tqdm/tqdm

Using the model to assign labels to all the test data is easily done in one step – we simply multiply the entire test data document-term matrix by the previously learned weights and add the bias. Scores greater than zero indicate a positive review, and those less than zero are negative. At this point we can evaluate the classifier's performance, which we will do using precision, recall, and F1 scores for binary classification (described in Section 2.3). For this purpose, we implement a function called binary_classification_report that computes these metrics and returns them as a dictionary. We call this function to compare the predicted labels to the true labels, and obtain the evaluation scores. Our F1 score here is 86.8%, which is much higher than the baseline that assigns labels randomly, which yields an F1 score of about 50%. This is a good result, especially considering the simplicity of the perceptron! In the next sections and chapters, we will discuss a battery of strategies to considerably improve this performance.

4.1.4 Binary Logistic Regression from Scratch

Using the same task, dataset, and evaluation, we will now implement a logistic regression classifier, as described in Algorithm 5 from Chapter 3. To give the reader hands-on experience with the implementation of the gradient calculations for logistic regression, we start by implementing it from scratch using NumPy. All the code shown in this section is available in the chap4_logistic_regression_numpy notebook.

In the perceptron implementation, we represented the weights and the bias as two different variables. Here, however, we will use a different approach that will allow us to unify them into a single vector variable. Specifically, we take advantage of the similarity between the derivative of the cost function with respect to the weights (Equation 3.14) and the derivative of the cost with respect to the bias (Equation 3.15):

dCi(w, b)/dwj = (σi − yi)xij   (3.14 revisited)
dCi(w, b)/db = σi − yi   (3.15 revisited)

Note that the two derivative formulas are identical except that the former has a multiplication by xij, while the latter does not. However, since σi − yi = (σi − yi)1, we can multiply the derivative of the cost with respect to the bias by one without changing the semantics. This gives an opportunity for combining the computations, doing them both in a single pass. The idea is that we can treat the bias as a weight corresponding to a feature that always has a value of one. To implement this, we create a NumPy array of ones of the same length as the number of examples in our training set (i.e., the number of rows in the data matrix). Then we add this array as a new column to the data matrix, using NumPy's column_stack function. Next, we need to initialize our model. This time we will use a single NumPy array w of the same length as the number of columns in the data matrix. The weight vector w is initialized randomly with values between 0 and 1.

Before implementing the learning algorithm, we need an implementation of the logistic function. Recall that the logistic function is

σ(x) = 1 / (1 + e^−x)   (3.1 revisited)

This function can be easily implemented in NumPy in a couple of lines. However, a naive implementation may produce overflow warnings during training. The term overflow indicates that the result of evaluating exp(-x) is a number so large that it can't be represented by a float (specifically, we're using float64 numbers). We will avoid this issue by not calling exp with values that will overflow. NumPy provides the function finfo that can be consulted to find the limits of floating point numbers. The log of the largest floating point number is the largest number for which exp() will not overflow, so we will use it as a threshold to filter out problematic values.

We now have everything we need to implement Algorithm 4. The steps to follow for each example are: (1) use the model to make a prediction, (2) calculate the gradient of the loss function with respect to the model parameters, and (3) update the model parameters using the gradient. The size of the update is controlled by the learning rate.
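The sketch below illustrates these steps. It is not the exact code of the chap4_logistic_regression_numpy notebook: the learning rate, the number of epochs, and the variable names are our own assumptions, and X_train and y_train are assumed to be the NumPy arrays built earlier in the chapter.

import numpy as np

# append a column of ones so the bias can be treated as just another weight
X_train_bias = np.column_stack((X_train, np.ones(X_train.shape[0])))

# initialize the weights (bias included) with random values in [0, 1)
w = np.random.rand(X_train_bias.shape[1])

# the largest exponent that a float64 can handle without overflowing
max_exp = np.log(np.finfo(np.float64).max)

def logistic(x):
    # avoid calling exp() with values that would overflow
    if -x > max_exp:
        return 0.0
    return 1 / (1 + np.exp(-x))

learning_rate = 1e-1   # our choice, not necessarily the notebook's
n_epochs = 10
for epoch in range(n_epochs):
    indices = np.random.permutation(X_train_bias.shape[0])
    for i in indices:
        x, y = X_train_bias[i], y_train[i]
        # (1) predict
        sigma = logistic(x @ w)
        # (2) gradient of the loss for this example: (sigma - y) * x
        gradient = (sigma - y) * x
        # (3) update the parameters
        w = w - learning_rate * gradient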
Once the model has been trained, we evaluate it on the test dataset using our binary_classification_report function from the previous section. Loading and preprocessing the test dataset follows the same steps as with the previous classifier; we omit the code for brevity. The performance is comparable with that of the perceptron. The difference in F1 scores between the two classifiers (84.9% here vs. 86.8% for the perceptron) is not significant. This parity is probably attributable to the fact that the signal distinguishing the two classes is easy to learn, so the simpler perceptron training algorithm is sufficient in this case. Nevertheless, this task is useful in showing how to implement the logistic regression model from scratch, i.e., by implementing the gradient calculation and parameter updates manually. Next, we will implement the same model again using PyTorch, highlighting how this machine learning library simplifies the process.

4.1.5 Binary Logistic Regression Utilizing PyTorch

While it is fairly straightforward to compute the derivatives for logistic regression and implement them directly in NumPy, this will not scale well to arbitrary neural architectures. Fortunately, there are libraries that automate the computation of the derivatives of the cost function (assuming it is differentiable!) for any neural network, and use the resulting gradients to perform gradient descent or other more sophisticated optimization procedures. To this end, we will use the PyTorch deep learning library.10 The corresponding notebook for this section is chap4_logistic_regression_pytorch_bce.

10 https://pytorch.org/

Our model for logistic regression corresponds to PyTorch's Linear layer. When we instantiate this layer, we specify the size of the inputs (the size of our vocabulary) and the size of the output, i.e., the number of output neurons (which is one because we're doing binary classification). The loss function we use is the binary cross-entropy loss (see Chapter 3), which is implemented as BCEWithLogitsLoss in PyTorch. In PyTorch, the gradients obtained from the loss function are applied to the model by an optimizer object, which implements and applies an optimization algorithm. Here we will use the vanilla stochastic gradient descent optimizer; we set its learning rate to 0.1. This is equivalent to the discussion in Section 3.2.
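A minimal sketch of this setup, including the per-example training step detailed next, might look as follows. It is an approximation rather than the exact content of the chap4_logistic_regression_pytorch_bce notebook; the tensor conversions, epoch count, and variable names are our own assumptions, and X_train, y_train, and X_test are the NumPy arrays built earlier in the chapter.

import torch
from torch import nn, optim

n_examples, n_features = X_train.shape

# model, loss, and optimizer as described above
model = nn.Linear(n_features, 1)
loss_func = nn.BCEWithLogitsLoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)

X = torch.tensor(X_train, dtype=torch.float32)
y = torch.tensor(y_train, dtype=torch.float32)

n_epochs = 10
for epoch in range(n_epochs):
    for i in torch.randperm(n_examples):
        x_i = X[i]
        y_i = y[i].unsqueeze(0)           # shape (1,) to match the model output
        model.zero_grad()                 # (1) clear old gradients
        output = model(x_i)               # (2) predict a score
        loss = loss_func(output, y_i)     # (3) compute the loss
        loss.backward()                   # (4) back-propagate
        optimizer.step()                  # (5) update the parameters

# at inference time, a positive score corresponds to a positive label
with torch.no_grad():
    y_pred = model(torch.tensor(X_test, dtype=torch.float32)).squeeze(1) > 0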
Similarly to the manual implementation, the steps required to train the model for a given training example are: (1) ensure the gradients are set to zero, (2) apply the model to obtain a prediction, (3) calculate the loss, (4) compute the gradient of the loss by back-propagation, and (5) update the model parameters.

Recall that in our previous implementation everything was hardcoded: applying the model, computing the gradients, and optimizing the model parameters. Here, however, the implementation of the logistic regression is expressed at a higher level of abstraction. This means that we are describing the logical steps without specifying a particular implementation. Instead, implementation details are the responsibility of the chosen model, loss function, and optimizer. Thus, we could even choose a different model, loss function, and/or optimizer, and use the same training steps with little or no modification. This decoupling of the training logic from the implementation details is one of the main advantages of libraries such as PyTorch.

As shown in the code above, calling the model as a function, with the feature vectors as inputs, produces the predicted scores. Once again, a positive score corresponds to a positive label. When we evaluate this implementation on the test dataset, we obtain results that are in line with our previous models. Writing the perceptron and the logistic regression from scratch is a good exercise, as it exposes us to the fundamentals of implementing machine learning algorithms. However, this becomes cumbersome for more complex neural architectures. For this reason, from this point on, we will use PyTorch for all our coding examples.

4.2 Multiclass Classification

So far, in this chapter we have discussed implementing binary classifiers. Next, we will modify these binary classifiers to perform multiclass classification, following the discussion in Section 3.5.

4.2.1 AG News Dataset

Before explaining the actual training/testing code, we have to choose a new dataset that is suitable for multiclass classification. To this end, we will use the AG News Classification Dataset (Zhang et al., 2015), a subset of the larger AG corpus of news articles collected from thousands of different news sources.11 The classification dataset consists of four classes, and the data is equally balanced across all classes (30,000 articles per class for train, and 1,900 articles per class for testing). The goal of the task is to classify each article as one of the four classes: World, Sports, Business, or Sci/Tech.

11 http://groups.di.unipi.it/~gulli/AG_corpus_of_news_articles.html

4.2.2 Preparing the Dataset

The AG News Dataset is distributed as two CSV files (one for training and one for testing), each containing three columns: the class index, the title, and the description. The dataset also provides a text file that maps the above class indexes to more descriptive class labels.
Because of the tabular nature of the dataset, pandas, a Python library for tabular data analysis,12 is a natural choice for loading and transforming it. To this end, our Jupyter notebook (chap4_multiclass_logistic_regression) demonstrates the sequence of steps required to handle the data, as well as model training and evaluation. First, we show how to load the CSV, add column names, and inspect the result:

12 https://pandas.pydata.org

[Dataframe preview omitted: 120,000 rows × 3 columns, with columns class index, title, and description.]

Since the class labels themselves are in a separate file, we manually add them to the pandas data structure (called dataframe in pandas' terminology) to increase the interpretability of the data. We use the class index column as a starting point, and use its map method to create a new column with the corresponding labels (technically a new Series object) that is added to the dataframe using its insert method, which allows us to insert the column in a specific position. Note that the label indices are one-based, so we subtract one to align them with their labels.
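A sketch of these loading steps is shown below. The file paths and the classes.txt file name are assumptions made for illustration, and may differ from the notebook's actual code.

import pandas as pd

# the file locations below are assumptions; adjust them to wherever
# the AG News CSV files and the class label file are stored
labels = open('data/ag_news_csv/classes.txt').read().splitlines()
train_df = pd.read_csv(
    'data/ag_news_csv/train.csv',
    names=['class index', 'title', 'description'],
)

# class indices are one-based, so subtract one before mapping them to labels,
# then insert the new column right after the class index
classes = train_df['class index'].map(lambda i: labels[i - 1])
train_df.insert(1, 'class', classes)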
[Dataframe preview omitted: 120,000 rows × 4 columns; a class column with the label names (e.g., Business, Sports, World) has been inserted next to class index.]

Next we will preprocess the text. First we lowercase the title and description, and then we concatenate them into a single string. Then we remove some spurious backslashes from the text. Once this is done, the preprocessed text is added to the dataframe as a new column. Note that pandas allows these steps to be applied to all rows simultaneously.
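For illustration, this preprocessing might be sketched as follows; the exact cleanup performed in the notebook (for example, how the backslashes are handled) may differ slightly.

# lowercase title and description, concatenate them, and drop spurious backslashes
title = train_df['title'].str.lower()
descr = train_df['description'].str.lower()
train_df['text'] = (title + ' ' + descr).str.replace('\\', ' ', regex=False)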
[Dataframe preview omitted: 120,000 rows × 5 columns; the new text column contains the lowercased concatenation of each title and description.]

At this point, the text is ready to be tokenized. For this purpose we will use NLTK's word_tokenize function. This function can be applied to the whole column at once using the pandas map function, which returns a new column which we add to the dataframe. However, here we actually use the progress_map function, which provides a visual progress bar. This visual feedback is especially helpful for tasks that take more time to complete.
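A sketch of this step is shown below; it assumes that NLTK's punkt tokenizer models have been downloaded and that train_df is the dataframe built above.

from nltk.tokenize import word_tokenize
from tqdm.notebook import tqdm

# assumes nltk.download('punkt') has been run at least once;
# tqdm.pandas() registers progress_map() on pandas objects
tqdm.pandas()
train_df['tokens'] = train_df['text'].progress_map(word_tokenize)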
[Dataframe preview omitted: 120,000 rows × 6 columns; the new tokens column contains the list of tokens produced for each article.]

From the tokens we just created, we then create a vocabulary for our corpus. Here, we only keep the words that occur at least 10 times, decreasing the memory needed and reducing the likelihood that our vocabulary contains noisy tokens. Note that each row in the tokens column contains a list of tokens. In order to create the vocabulary, we will need to convert the Series of lists of tokens into a Series of tokens using the explode() pandas method. Then we will use the value_counts() method to create a Series object in which the index holds the tokens and the values are the number of times they appear in the corpus. The next step is removing the tokens with a count lower than our chosen threshold. Finally, we create a list with the remaining tokens, as well as a dictionary that maps tokens to token ids (i.e., the index of the token in the list). We include in the vocabulary a special token [UNK] that will be used as a placeholder for tokens that do not appear in our vocabulary after the frequency pruning.

Using this vocabulary, we construct a feature vector for each news article in the corpus. This feature vector will be encoded as a dictionary, with keys corresponding to token ids, and values corresponding to the number of times the token appears in the article. As above, the feature vectors will be stored as a new column in the dataframe.
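The following sketch illustrates the vocabulary construction and the feature dictionaries just described; the position of the [UNK] token, the helper names, and the use of plain map() are our own choices rather than the notebook's exact code.

from collections import Counter

# count token frequencies across the whole corpus and prune rare tokens
threshold = 10
counts = train_df['tokens'].explode().value_counts()
vocabulary = ['[UNK]'] + counts[counts >= threshold].index.tolist()
token_to_id = {token: i for i, token in enumerate(vocabulary)}

def make_features(tokens):
    # map each token to its id (or to [UNK]) and count occurrences
    ids = [token_to_id.get(t, token_to_id['[UNK]']) for t in tokens]
    return dict(Counter(ids))

train_df['features'] = train_df['tokens'].map(make_features)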
[Dataframe preview omitted: 120,000 rows × 7 columns; the new features column maps token ids to their counts for each article, e.g., {427: 2, 563: 1, ...}.]

The final preprocessing step is converting the features and the class indices into PyTorch tensors. Recall that we need to subtract one from the class indices to make them zero-based. At this point, the data is fully processed and we are ready to begin training.

4.2.3 Multiclass Logistic Regression Using PyTorch

The model itself is a single linear layer whose input size corresponds to the size of our vocabulary, and its output size corresponds to the number of classes in our corpus. PyTorch's Linear layer includes a bias by default, so there is no need to handle that manually the way we did for our perceptron example. The code for training this model (which implements Algorithm 6) is almost identical to that of the binary logistic regression. However, since we have to calculate a score for each of the four different classes, we need to replace the previous BCEWithLogitsLoss with CrossEntropyLoss, which applies a softmax over the scores to obtain probabilities for each class. For each example, the model predicts four scores – one for each label. The label with the highest score is selected using the argmax function. We evaluate the predictions of our model for each class using scikit-learn's classification_report, which handles the results of multiclass classification.
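A minimal sketch of this model and of a single training step is shown below, reusing the vocabulary and dataframe from the sketches above. The learning rate, the dense conversion of the feature dictionaries, and the variable names are our own simplifications, not the notebook's exact code.

import torch
from torch import nn, optim

n_classes = 4
model = nn.Linear(len(vocabulary), n_classes)
loss_func = nn.CrossEntropyLoss()     # applies the softmax internally
optimizer = optim.SGD(model.parameters(), lr=0.1)

def features_to_tensor(features):
    # expand the {token id: count} dictionary into a dense vector
    x = torch.zeros(len(vocabulary))
    for token_id, count in features.items():
        x[token_id] = count
    return x

# one training step for a single example
x = features_to_tensor(train_df['features'][0])
y = torch.tensor(train_df['class index'][0] - 1)   # zero-based class index
model.zero_grad()
output = model(x)
loss = loss_func(output.unsqueeze(0), y.unsqueeze(0))
loss.backward()
optimizer.step()

# prediction: pick the class with the highest score
predicted_class = output.argmax().item()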
4.3 Summary

In this chapter, we used movie review and news article classification to illustrate the implementation of the previously described algorithms for the binary perceptron, binary logistic regression, and multiclass logistic regression. For the binary logistic regression, we made a direct comparison between the lower-level NumPy implementation and a higher-level version that made use of PyTorch. We hope that through this series of exercises the reader has noted several key takeaways.

First, data preparation is important and should be done thoughtfully. Certain tasks (e.g., text normalization or sentence splitting) are going to be frequently needed if you continue with NLP, so using or creating generic functions can be very helpful. However, what works for one dataset and one language may not be suitable for another scenario. For example, in our case, we selected different tokenizers for each of our tasks to account for the different registers of English, as well as removing diacritics during normalization.

Second, when it comes to implementing machine learning algorithms, it is often easier to use a higher-level library such as PyTorch instead of NumPy. For example, with the former, the gradients are calculated by the library, whereas in NumPy we have to code them ourselves. This becomes cumbersome quickly; for example, even the derivative of the softmax is non-trivial.

Third, PyTorch imposes a training structure that remains largely the same, regardless of what models are being trained. That is, at a high level, the same steps are always required: clearing the current gradients, predicting output scores for the provided inputs, calculating the loss, and optimizing. These features make PyTorch a very powerful and convenient deep learning library; we will continue to use it throughout the remainder of the book to implement more complex neural architectures.
13 Using Transformers with the Hugging Face Library

One of the key advantages of transformer networks is the ability to take a model that was pre-trained over vast quantities of text and fine-tune it for the task at hand. Intuitively, this strategy allows transformer networks to achieve higher performance on smaller datasets by relying on statistics acquired at scale in an unsupervised way (e.g., through the masked language model training objective). To this end, in this chapter we will use the Hugging Face library,1 which has a rich repository of datasets and pre-trained models, as well as helper methods and classes that make it easy to target downstream tasks. Using pre-trained transformer encoders, we will implement the two tasks that served as use cases in the previous chapters: text classification and part-of-speech tagging.

1 https://huggingface.co/docs/transformers/main/en/index

13.1 Tokenization

As discussed in Section 12.2, transformers rely on sub-word tokens. This strategy provides an elegant way to handle unknown and low-frequency words by splitting them into more frequent sub-word parts. At the same time, these tokenization algorithms maintain frequently-occurring words as standalone tokens, so the signal for these common words is preserved. To make this more concrete, we show below how tokenizers are employed in the Hugging Face library. First, we load the tokenizer that corresponds to the transformer we intend to use. This is important for two reasons: (a) different transformers rely on different tokenization algorithms, and (b) even for the ones that use the same algorithm, their tokenizer vocabularies are likely to be different if they were pre-trained on different corpora. Next, we tokenize some example text and display some of the resulting attributes with pandas:

tokens:    [CLS] I am the wa ##l ##rus . [SEP]
word_ids:  None 0 1 2 3 3 3 4 None
input_ids: 101 146 1821 1103 20049 1233 6208 119 102

As shown above, the tokenizer splits the text into tokens, and adds two special tokens: the [CLS] token at the beginning of the token sequence, and the [SEP] token at the end. Also, note that the ## characters at the beginning of some tokens indicate that they are not standalone words, but rather sub-words that continue a word previously started. For example, the output above shows that the word walrus was split into three sub-words. Note, however, that this is specific to this particular tokenization algorithm, and other tokenizers may indicate word continuation in different ways. A better way to detect word continuations is using the word_ids() method of the tokenizer output, which assigns the same id to all tokens part of the same word. For example, all fragments of the word walrus share the word id 3. Lastly, the input_ids attribute provides the token ids used internally by the transformer to map tokens to embeddings. To briefly demonstrate how different tokenizers produce different outputs, here is the same text tokenized with the tokenizer corresponding to xlm-roberta-base:

tokens:    <s> ▁I ▁am ▁the ▁wal rus . </s>
word_ids:  None 0 1 2 3 3 3 None
input_ids: 0 87 444 70 32973 6563 5 2

Note how the [CLS] and [SEP] special tokens have been replaced with <s> and </s> respectively. Also, spaces have been replaced with the Unicode character ▁ (U+2581, LOWER ONE EIGHTH BLOCK). Tokens that start with that character are considered word beginnings and the rest are word continuations, as can be confirmed by looking at the word ids. This illustrates the importance of using the tokenizer that corresponds to the transformer you intend to use.
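The following sketch shows how such a tokenizer can be loaded and inspected. The example sentence is inferred from the tokens shown above, and the pandas display is only one convenient way to line up the attributes; it is not necessarily the book's exact code.

from transformers import AutoTokenizer
import pandas as pd

tokenizer = AutoTokenizer.from_pretrained('bert-base-cased')
output = tokenizer('I am the walrus.')

# line up tokens, word ids, and input ids, as in the listing above
df = pd.DataFrame({
    'tokens': output.tokens(),
    'word_ids': output.word_ids(),
    'input_ids': output['input_ids'],
})
print(df)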
13.2 Text Classification

For our text classification example, we will continue using the AG News dataset from previous chapters. We will load, preprocess, and split the dataset into pandas dataframes in the same way as before. Now however, rather than continuing with pandas, we will create a Hugging Face dataset from the dataframes. Hugging Face datasets are convenient because of their built-in support of batching, efficient data transformations, and caching. In particular, we convert each dataframe into a Hugging Face dataset. The various datasets are managed with a DatasetDict. Note that this is the same data structure seen when downloading a Hugging Face dataset from their hub.2 The keys in this dictionary are usually train, validation, and test.3

2 https://huggingface.co/datasets
3 These correspond to the more common terms train, development, and test we have used throughout the book so far. In this chapter we use the Hugging Face naming conventions for consistency.

Once our dataset is loaded, we load a tokenizer. Different pre-trained models are tokenized differently, and it is important to select the tokenizer that corresponds to the model we will use so that the inputs are consistent with model expectations. In our example, we will use the bert-base-cased pre-trained model and tokenizer.

Datasets have a map() method that transforms the dataset by applying a function to each example. The method returns a new dataset with the transformation applied. We use the map() method to tokenize our dataset. To this end, we define a function that tokenizes an example using the tokenizer we loaded previously. Note that tokenizers support many options that you may need depending on your situation. However, since this is a simple scenario, all we need to do is provide the text to tokenize and specify how to handle texts that exceed the maximum number of tokens permitted by the pre-trained model. Here we have our tokenizer truncate any inputs that are too long by specifying the truncation=True parameter. The output of this function will be added to the new dataset as extra columns. Further, we also want to remove some of the columns that are no longer needed, simplifying subsequent steps. For this, we use the remove_columns argument, listing the columns that we want to discard. Additionally, the dataset's map() method can batch the dataset; we enable this option with the batched=True argument.
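The steps described in this section might be sketched as follows. The dataframe and column names are assumptions (a text column and a zero-based label column), not the exact code of the book's notebook.

from datasets import Dataset, DatasetDict
from transformers import AutoTokenizer

# assumes train_df and test_df are pandas dataframes with 'text' and 'label' columns
ds = DatasetDict({
    'train': Dataset.from_pandas(train_df),
    'test': Dataset.from_pandas(test_df),
})

tokenizer = AutoTokenizer.from_pretrained('bert-base-cased')

def tokenize(examples):
    # truncate inputs that exceed the model's maximum sequence length
    return tokenizer(examples['text'], truncation=True)

train_ds = ds['train'].map(
    tokenize,
    batched=True,
    remove_columns=['title', 'description', 'text'],  # assumed column names
)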
[Tokenized dataset preview omitted: 108,000 rows × 4 columns — label, input_ids, token_type_ids, and attention_mask.]

Next, we implement a classifier for our task. Hugging Face provides a
variety of models corresponding to several types of downstream tasks. However, for pedagogical purposes, we implement one from scratch. In particular, our model class inherits from BertPreTrainedModel, which
provides several useful methods such as init_weights() and from_pretrained(), which we will use later. The model constructor takes a configuration object as its only parameter. Configuration objects contain all the hyper-parameters used by the corresponding pre-trained models. We will show later how the configuration object is retrieved and customized. Models that implement specific downstream tasks are usually composed of a pre-trained model (sometimes referred to as the body), and one or more task-specific layers (usually referred to as the head). Here, we initialize a BertModel using the provided configuration, as well as a dropout layer and a task-specific linear layer used for classifying the Bert output. Each of these layers is initialized by calling the init_weights() method inherited from BertPreTrainedModel.

The forward() method, which implements the task-specific forward pass, takes as arguments the outputs of the tokenizer, and, optionally, the gold labels corresponding to the input data points. Our implementation of the forward pass sends the input tokens to the Bert model to produce the contextualized representations for all tokens. This output has several components, including the last_hidden_state, which contains the final hidden-state embedding for each token. For our task, we will represent the whole sequence using the embedding for the [CLS] token that occurs at the start of each example. We retrieve it by selecting the first element of each output sequence in the batch (i.e., last_hidden_state[:, 0, :]). As in the previous chapters, we apply dropout to our sequence representation, and then pass it through our linear classification layer. If gold labels are provided (i.e., we are training), we now compute the loss using the cross-entropy loss. The output of the forward pass is wrapped in a Hugging Face SequenceClassifierOutput object4 and returned.

Next we load the configuration of the pre-trained model and instantiate our model. The AutoConfig class can load the configuration for any pre-trained model, retrieving it from Hugging Face if needed. Then we use the configuration to instantiate our model using the from_pretrained() method. With this call, the pre-trained model will be loaded, which includes downloading if necessary.

Hugging Face provides a Trainer class that greatly simplifies the training process. This class not only implements the training loop we have been using in the previous chapters, but also handles other useful steps such as saving checkpoints (i.e., intermediate models after a number of mini-batches have been processed during training), and tracking custom measures about model performance. In order to create a Trainer, we first need to specify its configuration in a TrainingArguments object. In ours, we specify certain hyper-parameters such as batch size, weight decay, and number of epochs, as well as where to store model checkpoints. The TrainingArguments class provides a wide variety of arguments that we have not shown.5 These arguments usually have appropriate default values, so it is often fine to omit them. For example, we did not use the label_names argument, which specifies the key that corresponds to the training labels. When omitted, it defaults to keys such as label, labels, and label_ids.6 In this chapter we used label.

4 Hugging Face utilizes a set of output objects to standardize model output for a given task. These objects typically include additional information, e.g., attention weights, which can be used for visualizing or debugging model behavior.
5 https://huggingface.co/docs/transformers/main/en/main_classes/trainer#transformers.TrainingArguments
6 In the case of extractive question answering (see Chapter 16), the start_positions and end_positions store the start/end positions of the correct answers.

Note that we also specify how often we would like to see the performance of the current model (at the end of each epoch) with evaluation_strategy='epoch'. This means that after each epoch we print the current loss on the training
partition and on the evaluation dataset, if one is available. Additionally,
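The fragment below is a minimal sketch of the pieces just described: a BertModel body with a dropout layer and a linear classification head over the [CLS] embedding, the configuration loaded with AutoConfig, and a TrainingArguments object. The checkpoint name and the labels list are assumptions made for illustration, and the hyperparameter values mirror those used in the accompanying notebook rather than prescribed settings.

from torch import nn
from transformers import AutoConfig, TrainingArguments, BertModel, BertPreTrainedModel
from transformers.modeling_outputs import SequenceClassifierOutput

class BertForSequenceClassification(BertPreTrainedModel):
    def __init__(self, config):
        super().__init__(config)
        # pre-trained body plus task-specific head
        self.bert = BertModel(config)
        self.dropout = nn.Dropout(config.hidden_dropout_prob)
        self.classifier = nn.Linear(config.hidden_size, config.num_labels)
        self.init_weights()

    def forward(self, input_ids=None, attention_mask=None, token_type_ids=None, labels=None, **kwargs):
        outputs = self.bert(input_ids, attention_mask=attention_mask,
                            token_type_ids=token_type_ids, **kwargs)
        # embedding of the [CLS] token, i.e., the first token of each sequence
        cls_outputs = outputs.last_hidden_state[:, 0, :]
        cls_outputs = self.dropout(cls_outputs)
        logits = self.classifier(cls_outputs)
        loss = None
        if labels is not None:
            loss_fn = nn.CrossEntropyLoss()
            loss = loss_fn(logits, labels)
        return SequenceClassifierOutput(loss=loss, logits=logits,
                                        hidden_states=outputs.hidden_states,
                                        attentions=outputs.attentions)

# assumed checkpoint name; `labels` is the list of class names prepared earlier
transformer_name = 'bert-base-cased'
config = AutoConfig.from_pretrained(transformer_name, num_labels=len(labels))
model = BertForSequenceClassification.from_pretrained(transformer_name, config=config)

# illustrative training configuration
training_args = TrainingArguments(
    output_dir=f'{transformer_name}-sequence-classification',
    num_train_epochs=2,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=24,
    evaluation_strategy='epoch',
    weight_decay=0.01,
)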
Additionally, we can report custom metrics at this time. For this purpose, we use the compute_metrics parameter of the Trainer, which expects a function that receives a transformers.EvalPrediction object containing the label ids and the predicted logits. The expected return type is a dictionary whose keys correspond to different metrics, each of which will be displayed as a separate result column.

Using the above TrainingArguments and compute_metrics function, we create our Trainer. Note that when you provide a tokenizer, the trainer will automatically pad the sequences in each batch. Also, the trainer will automatically use any GPU that is available, unless this is specifically disabled in the TrainingArguments. Training our model takes a single call to the train() method of the Trainer object. As specified in our instance of TrainingArguments, the training and validation losses, as well as the accuracy, are reported at the end of every epoch:

Epoch   Training Loss   Validation Loss   Accuracy
1       0.187800        0.172629          0.941667
2       0.104000        0.183001          0.946250

As in the other chapters, we can write custom code to obtain the model's predictions on the test data. However, the Trainer class provides a predict() method that drastically simplifies this. As shown in the table above, this model achieves an accuracy of 95%, which is the highest performance we have achieved so far on this dataset.

13.3 Part-of-speech Tagging

To showcase part-of-speech tagging using transformers, we continue with the Spanish section of the AnCora corpus introduced in Chapter 11. Recall that the dataset is stored in the CoNLL-U format. We load this format in the same way as before, but then we convert the loaded dataset into a Hugging Face DatasetDict. Importantly, because the CoNLL-U dataset is already tokenized, we use the is_split_into_words=True tokenizer argument to ensure that the tokenizer respects the existing word boundaries during its sub-word tokenization. Further, while we want to predict one POS tag per word, any given word may be split into smaller pieces by our tokenizer. Thus, we need to align the tokenizer output to the CoNLL-U words. The original BERT paper (Devlin et al., 2018) addresses this by only using the embedding corresponding to the first sub-token of each word. We follow the same approach for consistency. For the sub-words that do not correspond to the beginning of a word, we use a special value that indicates that we are not interested in their predictions. The CrossEntropyLoss has a parameter called ignore_index for this purpose. The default value for this parameter is −100, which we use as the label for the sub-words we wish to ignore during training. Next, we use this alignment function to preprocess the train and validation folds in our DatasetDict:

[Output: the preprocessed training fold, a table of 14,305 rows × 5 columns (words, tags, input_ids, attention_mask, labels); sub-word positions that do not begin a word are labeled −100.]
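The alignment function referenced above can be sketched as follows. This is a condensed version of the one in the accompanying notebook; it assumes that tokenizer, tag_to_index, and the DatasetDict ds have already been created as described earlier.

# -100 marks sub-word positions whose predictions we do not want to score;
# it matches the default value of CrossEntropyLoss's ignore_index parameter
ignore_index = -100

def tokenize_and_align_labels(batch):
    tokenized_inputs = tokenizer(batch['words'], truncation=True, is_split_into_words=True)
    labels = []
    for i, tags in enumerate(batch['tags']):
        label_ids, previous_word_id = [], None
        for word_id in tokenized_inputs.word_ids(batch_index=i):
            if word_id is None or word_id == previous_word_id:
                # special token or word continuation: ignore during training
                label_ids.append(ignore_index)
            else:
                # first sub-word of a word: assign the word's POS tag id
                label_ids.append(tag_to_index[tags[word_id]])
            previous_word_id = word_id
        labels.append(label_ids)
    tokenized_inputs['labels'] = labels
    return tokenized_inputs

train_ds = ds['train'].map(tokenize_and_align_labels, batched=True)
eval_ds = ds['validation'].map(tokenize_and_align_labels, batched=True)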
Next, we implement our model class that uses a transformer encoder as a transducer. Because our downstream task consists of POS tagging for Spanish, we need a transformer model that was pre-trained on Spanish texts. Here, we chose XLM-RoBERTa (Conneau et al., 2019) as our base model. XLM-RoBERTa is a RoBERTa model (Liu et al., 2019) that has been pre-trained on 100 different languages, including Spanish. Of note, XLM-RoBERTa does not require us to specify which language we are working on. Similar to BERT, it only requires the input_ids.

We discussed in the text classification section that Hugging Face provides implementations for text classification models. This is also true for token classification problems that require transducers. In particular, the XLMRobertaForTokenClassification model provided by Hugging Face does everything needed for this task. However, as before, here we implement it ourselves for pedagogical purposes.

The model architecture is similar to our text classification example. It consists of a transformer, a dropout layer, and a linear layer used for classification. The number of labels, which determines the output dimension of the linear layer, is equal to the number of POS tags. The primary difference between the text classification example and this token classification model is that the former produced one label for each text document, while here we produce one label for each token in the input text. Specifically, in our text classification model the output shape was two-dimensional: (batch_size, num_labels). Here, our output is three-dimensional: (batch_size, sequence_size, num_labels).

So, while much of the forward method is familiar to us, when we are required to compute the loss we need to reshape the logits and the labels before passing them to the CrossEntropyLoss, since it expects two-dimensional inputs and one-dimensional labels. For this purpose, we use the view() method to reshape the tensors. This method is efficient because it does not copy the tensor data; instead, it provides a new view of the same data that behaves like a tensor with a different shape.7 As mentioned before, the number of arguments passed to this method determines the number of dimensions in the output tensor. Here, for our logits, we pass two arguments, so our new view will have two dimensions: the second will be of size self.num_labels, while the first (because we pass -1) will be inferred from the original tensor shape. For our labels, on the other hand, we only provide one argument, so the new view will have one dimension, inferred from the original shape:

7 Similar to NumPy, PyTorch tensors are represented internally by a block of memory storing the data and some metadata that describes how the data should be read, e.g., type, shape, and stride. The view() method returns a new tensor with new metadata but pointing to the same memory block.

Next, we instantiate our model using the XLM-RoBERTa configuration:
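A condensed version of this token classification model, matching the implementation in the accompanying notebook, is sketched below; index_to_tag is assumed to be the tag mapping built during preprocessing.

from torch import nn
from transformers import AutoConfig
from transformers.modeling_outputs import TokenClassifierOutput
from transformers.models.roberta.modeling_roberta import RobertaModel, RobertaPreTrainedModel

class XLMRobertaForTokenClassification(RobertaPreTrainedModel):
    def __init__(self, config):
        super().__init__(config)
        self.num_labels = config.num_labels
        self.roberta = RobertaModel(config, add_pooling_layer=False)
        self.dropout = nn.Dropout(config.hidden_dropout_prob)
        self.classifier = nn.Linear(config.hidden_size, config.num_labels)
        self.init_weights()

    def forward(self, input_ids=None, attention_mask=None, token_type_ids=None, labels=None, **kwargs):
        outputs = self.roberta(input_ids, attention_mask=attention_mask,
                               token_type_ids=token_type_ids, **kwargs)
        sequence_output = self.dropout(outputs[0])
        # one score per label for every token: (batch_size, sequence_size, num_labels)
        logits = self.classifier(sequence_output)
        loss = None
        if labels is not None:
            loss_fn = nn.CrossEntropyLoss()
            # flatten logits to 2D and labels to 1D before computing the loss;
            # positions labeled -100 are skipped by the default ignore_index
            loss = loss_fn(logits.view(-1, self.num_labels), labels.view(-1))
        return TokenClassifierOutput(loss=loss, logits=logits,
                                     hidden_states=outputs.hidden_states,
                                     attentions=outputs.attentions)

transformer_name = 'xlm-roberta-base'
config = AutoConfig.from_pretrained(transformer_name, num_labels=len(index_to_tag))
model = XLMRobertaForTokenClassification.from_pretrained(transformer_name, config=config)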
As before, we create a TrainingArguments object and define a compute_metrics function in order to customize a Trainer. While the TrainingArguments code has no substantial changes, we need to adjust the compute_metrics function to account for the fact that our model uses sub-word tokens rather than complete words. Recall that only the first sub-word token per word was assigned a POS tag. This function discards the labels corresponding to the ignored sub-word tokens and evaluates the rest, returning the accuracy score:

The last component required for the Trainer is a collator. Since this time we are batching sequences of tokens, we need a collator that can pad them dynamically when constructing the batches. The transformers library includes a DataCollatorForTokenClassification specifically for this purpose. Once we have our collator and our trainer object, we can train our model:

Next, we evaluate our newly trained model on the test dataset. For this purpose, we preprocess the data in the same way we did for the train and validation partitions. Then, for convenience, we use the trainer's predict() method to generate the predicted logits using our model:

As before, we use scikit-learn's classification_report() function to display the results of the evaluation. This function expects two one-dimensional lists of labels, so we need to follow a similar approach to the one we employed for text classification. Note that output.label_ids and output.predictions are NumPy arrays rather than PyTorch tensors. This time we use NumPy's reshape() method to reshape the arrays. This method is similar to PyTorch's view() method that we used previously, except that reshape() may copy the array's data in some situations. We discard the labels corresponding to ignored sub-word tokens, and then we print the classification report:

Our model based on XLM-RoBERTa achieves 99% accuracy. This is considerably better than the LSTM-based model developed in Chapter 11. In order to understand the differences between the two methods, we produce below a confusion matrix for the results of each model. Rows in the confusion matrix represent the true labels and columns represent the predicted labels. In the confusion matrices shown below, each cell x_ij corresponds to the proportion of values with label i that were assigned the label j.8 For a perfect model, all cells in the diagonal would have value 1 and all other cells would have value 0. The confusion matrices for the LSTM and the transformer are shown in Figure 13.1 and Figure 13.2, respectively, and the code used to generate them is shown below.

8 This is the case because we used the normalize='true' parameter of the confusion_matrix() function.

[Figure 13.1: Confusion matrix corresponding to the LSTM-based part-of-speech tagger developed in Chapter 11.]
[Figure 13.2: Confusion matrix corresponding to the transformer-based part-of-speech tagger.]

The two confusion matrices highlight a couple of important observations. First, the transformer model is considerably better at predicting POS tags with infrequent support in the dataset. For example, the accuracy for predicting the SYM POS tag increased from 38% in the LSTM model to 95% in the transformer model! Equally impressive, the transformer improved the performance of tags that are extremely common and, thus, provide plenty of opportunity for both approaches to learn a good model. For example, the accuracy of tagging NOUN, the second most common POS tag in the dataset, increased from 96% in the LSTM model to 99% in the transformer model.
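The confusion matrices are generated with scikit-learn, as in the accompanying notebook; y_true, y_pred, and target_names are assumed to be the flattened gold labels, predictions, and tag names computed in the evaluation step above.

import matplotlib.pyplot as plt
from sklearn.metrics import ConfusionMatrixDisplay, confusion_matrix

# normalize='true' divides each row by its support, so cell (i, j) is the
# proportion of tokens with true label i that were predicted as label j
cm = confusion_matrix(y_true, y_pred, normalize='true')
disp = ConfusionMatrixDisplay(confusion_matrix=cm, display_labels=target_names)
fig, ax = plt.subplots(figsize=(10, 10))
disp.plot(cmap='Blues', values_format='.2f', colorbar=False, ax=ax, xticks_rotation=45)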
13.4 Summary

In this chapter we presented two applications driven by the encoder component of a transformer network. First, we used the transformer encoder as an acceptor and implemented a text classification application for English news. Second, we used the encoder as a transducer to develop a Spanish part-of-speech tagger. Both tasks were implemented using pre-trained transformer models from the Hugging Face library. For both applications, the transformer-based methods considerably outperform all approaches introduced in the previous chapters, highlighting the value of the transformer architecture.
10,884
10,987
#!/usr/bin/env python
# coding: utf-8

# # Text Classification Using Transformer Networks (DistilBERT)

# Some initialization:

# In[1]:
import random
import torch
import numpy as np
import pandas as pd
from tqdm.notebook import tqdm

# enable tqdm in pandas
tqdm.pandas()

# set to True to use the gpu (if there is one available)
use_gpu = True

# select device
device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu')
print(f'device: {device.type}')

# random seed
seed = 1234

# set random seed
if seed is not None:
    print(f'random seed: {seed}')
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)

# Read the train/dev/test datasets and create a HuggingFace `Dataset` object:

# In[2]:
def read_data(filename):
    # read csv file
    df = pd.read_csv(filename, header=None)
    # add column names
    df.columns = ['label', 'title', 'description']
    # make labels zero-based
    df['label'] -= 1
    # concatenate title and description, and remove backslashes
    df['text'] = df['title'] + " " + df['description']
    df['text'] = df['text'].str.replace('\\', ' ', regex=False)
    return df

# In[3]:
labels = open('data/ag_news_csv/classes.txt').read().splitlines()
train_df = read_data('data/ag_news_csv/train.csv')
test_df = read_data('data/ag_news_csv/test.csv')
train_df

# In[4]:
from sklearn.model_selection import train_test_split

train_df, eval_df = train_test_split(train_df, train_size=0.9)
train_df.reset_index(inplace=True, drop=True)
eval_df.reset_index(inplace=True, drop=True)
print(f'train rows: {len(train_df.index):,}')
print(f'eval rows: {len(eval_df.index):,}')
print(f'test rows: {len(test_df.index):,}')

# In[5]:
from datasets import Dataset, DatasetDict

ds = DatasetDict()
ds['train'] = Dataset.from_pandas(train_df)
ds['validation'] = Dataset.from_pandas(eval_df)
ds['test'] = Dataset.from_pandas(test_df)
ds

# Tokenize the texts:

# In[6]:
from transformers import AutoTokenizer

transformer_name = 'distilbert-base-cased'
tokenizer = AutoTokenizer.from_pretrained(transformer_name)

# In[7]:
def tokenize(examples):
    return tokenizer(examples['text'], truncation=True)

train_ds = ds['train'].map(tokenize, batched=True, remove_columns=['title', 'description', 'text'])
eval_ds = ds['validation'].map(tokenize, batched=True, remove_columns=['title', 'description', 'text'])
train_ds.to_pandas()

# Create the transformer model:

# In[8]:
from transformers import AutoConfig

config = AutoConfig.from_pretrained(transformer_name, num_labels=len(labels))

# In[9]:
from transformers.models.distilbert.modeling_distilbert import DistilBertForSequenceClassification

model = (
    DistilBertForSequenceClassification
    .from_pretrained(transformer_name, config=config)
)

# Create the trainer object and train:

# In[10]:
from transformers import TrainingArguments

num_epochs = 2
batch_size = 24
logging_steps = len(ds['train']) // batch_size
model_name = f'{transformer_name}-sequence-classification'

training_args = TrainingArguments(
    output_dir=model_name,
    log_level='error',
    num_train_epochs=num_epochs,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    evaluation_strategy='epoch',
    weight_decay=0.01,
    disable_tqdm=False,
    logging_steps=logging_steps,
)

# In[11]:
from sklearn.metrics import accuracy_score

def compute_metrics(eval_pred):
    y_true = eval_pred.label_ids
    y_pred = np.argmax(eval_pred.predictions, axis=-1)
    return {'accuracy': accuracy_score(y_true, y_pred)}

# In[12]:
from transformers import Trainer

trainer = Trainer(
    model=model,
    args=training_args,
    compute_metrics=compute_metrics,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    tokenizer=tokenizer,
)

# In[13]:
trainer.train()

# Evaluate on the test partition:

# In[14]:
test_ds = ds['test'].map(tokenize, batched=True, remove_columns=['title', 'description', 'text'])
test_ds.to_pandas()

# In[15]:
output = trainer.predict(test_ds)
output

# In[16]:
from sklearn.metrics import classification_report

y_true = output.label_ids
y_pred = np.argmax(output.predictions, axis=-1)
target_names = labels
print(classification_report(y_true, y_pred, target_names=target_names))

# In[ ]:
3,026
3,346
0
chap13-1
chap13-1
13 Using Transformers with the Hugging Face Library

One of the key advantages of transformer networks is the ability to take a model that was pre-trained over vast quantities of text and fine-tune it for the task at hand. Intuitively, this strategy allows transformer networks to achieve higher performance on smaller datasets by relying on statistics acquired at scale in an unsupervised way (e.g., through the masked language model training objective). To this end, in this chapter we will use the Hugging Face library,1 which has a rich repository of datasets and pre-trained models, as well as helper methods and classes that make it easy to target downstream tasks. Using pre-trained transformer encoders, we will implement the two tasks that served as use cases in the previous chapters: text classification and part-of-speech tagging.

1 https://huggingface.co/docs/transformers/main/en/index

13.1 Tokenization

As discussed in Section 12.2, transformers rely on sub-word tokens. This strategy provides an elegant way to handle unknown and low-frequency words by splitting them into more frequent sub-word parts. At the same time, these tokenization algorithms maintain frequently-occurring words as standalone tokens, so the signal for these common words is preserved. To make this more concrete, we show below how tokenizers are employed in the Hugging Face library. First, we load the tokenizer that corresponds to the transformer we intend to use. This is important for two reasons: (a) different transformers rely on different tokenization algorithms, and (b) even for the ones that use the same algorithm, their tokenizer vocabularies are likely to be different if they were pre-trained on different corpora. Next, we tokenize some example text and display some of the resulting attributes with pandas:

            0      1    2     3     4      5     6      7    8
tokens      [CLS]  I    am    the   wa     ##l   ##rus  .    [SEP]
word_ids    None   0    1     2     3      3     3      4    None
input_ids   101    146  1821  1103  20049  1233  6208   119  102

As shown above, the tokenizer splits the text into tokens, and adds two special tokens: the [CLS] token at the beginning of the token sequence, and the [SEP] token at the end. Also, note that the ## characters at the beginning of some tokens indicate that they are not standalone words, but rather sub-words that continue a word previously started. For example, the output above shows that the word walrus was split into three sub-words. Note, however, that this is specific to this particular tokenization algorithm, and other tokenizers may indicate word continuation in different ways. A better way to detect word continuations is using the word_ids() method of the tokenizer output, which assigns the same id to all tokens that are part of the same word. For example, all fragments of the word walrus share the word id 3. Lastly, the input_ids attribute provides the token ids used internally by the transformer to map tokens to embeddings. To briefly demonstrate how different tokenizers produce different outputs, here is the same text tokenized with the tokenizer corresponding to xlm-roberta-base:

            0     1    2    3     4      5     6   7
tokens      <s>   ▁I   ▁am  ▁the  ▁wal   rus   .   </s>
word_ids    None  0    1    2     3      3     3   None
input_ids   0     87   444  70    32973  6563  5   2

Note how the [CLS] and [SEP] special tokens have been replaced with <s> and </s>, respectively. Also, spaces have been replaced with the Unicode character ▁ (U+2581, LOWER ONE EIGHTH BLOCK). Tokens that start with that character are considered word beginnings and the rest are word continuations, as can be confirmed by looking at the word ids. This illustrates the importance of using the tokenizer that corresponds to the transformer you intend to use.
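The two outputs above can be reproduced with a few lines of code. The sketch below assumes the example sentence "I am the walrus." (inferred from the tables) and uses the bert-base-cased tokenizer employed later in this chapter; replacing the checkpoint name with xlm-roberta-base yields the second table.

import pandas as pd
from transformers import AutoTokenizer

text = 'I am the walrus.'
# load the tokenizer that matches the transformer we intend to use
tokenizer = AutoTokenizer.from_pretrained('bert-base-cased')
output = tokenizer(text)
tokens = tokenizer.convert_ids_to_tokens(output['input_ids'])
# display tokens, word ids, and input ids side by side
df = pd.DataFrame(
    [tokens, output.word_ids(), output['input_ids']],
    index=['tokens', 'word_ids', 'input_ids'],
)
print(df)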
13.2 Text Classification

For our text classification example, we will continue using the AG News dataset from previous chapters. We will load, preprocess, and split the dataset into pandas dataframes in the same way as before. Now, however, rather than continuing with pandas, we will create a Hugging Face dataset from the dataframes. Hugging Face datasets are convenient because of their built-in support for batching, efficient data transformations, and caching. In particular, we convert each dataframe into a Hugging Face dataset. The various datasets are managed with a DatasetDict. Note that this is the same data structure seen when downloading a Hugging Face dataset from their hub.2 The keys in this dictionary are usually train, validation, and test:3

Once our dataset is loaded, we load a tokenizer. Different pre-trained models are tokenized differently, and it is important to select the tokenizer that corresponds to the model we will use so that the inputs are consistent with model expectations. In our example, we will use the bert-base-cased pre-trained model and tokenizer:

Datasets have a map() method that transforms the dataset by applying a function to each example. The method returns a new dataset with the transformation applied. We use the map() method to tokenize our dataset. To this end, we define a function that tokenizes an example using the tokenizer we loaded previously. Note that tokenizers support many options that you may need depending on your situation. However, since this is a simple scenario, all we need to do is provide the text to tokenize and specify how to handle texts that exceed the maximum number of tokens permitted by the pre-trained model. Here we have our tokenizer truncate any inputs that are too long by specifying the truncation=True parameter. The output of this function will be added to the new dataset as extra columns. Further, we also want to remove some of the columns that are no longer needed, simplifying subsequent steps. For this, we use the remove_columns argument, listing the columns that we want to discard. Additionally, the dataset's map() method can batch the dataset; we enable this option with the batched=True argument:

2 https://huggingface.co/datasets
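A condensed sketch of these steps, following the accompanying notebook, is shown below. Here train_df, eval_df, and test_df are assumed to be the pandas dataframes prepared earlier, and the checkpoint name follows the text (the notebook variant uses distilbert-base-cased instead).

from datasets import Dataset, DatasetDict
from transformers import AutoTokenizer

# wrap the pandas dataframes in a Hugging Face DatasetDict
ds = DatasetDict()
ds['train'] = Dataset.from_pandas(train_df)
ds['validation'] = Dataset.from_pandas(eval_df)
ds['test'] = Dataset.from_pandas(test_df)

# load the tokenizer that matches the pre-trained model
tokenizer = AutoTokenizer.from_pretrained('bert-base-cased')

def tokenize(examples):
    # truncate inputs that exceed the model's maximum length
    return tokenizer(examples['text'], truncation=True)

# tokenize in batches and drop the columns that are no longer needed
train_ds = ds['train'].map(tokenize, batched=True,
                           remove_columns=['title', 'description', 'text'])
eval_ds = ds['validation'].map(tokenize, batched=True,
                               remove_columns=['title', 'description', 'text'])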
3 These correspond to the more common terms train, development, and test we have used throughout the book so far. In this chapter we use the Hugging Face naming conventions for consistency.

[Output: the tokenized training partition, a table of 108,000 rows × 4 columns (label, input_ids, token_type_ids, attention_mask).]
21,067
21,185
#!/usr/bin/env python
# coding: utf-8

# # Part-of-speech Tagging with Transformer Networks

# Some initialization:

# In[1]:
import random
import torch
import numpy as np
import pandas as pd
from tqdm.notebook import tqdm

# enable tqdm in pandas
tqdm.pandas()

# set to True to use the gpu (if there is one available)
use_gpu = True

# select device
device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu')
print(f'device: {device.type}')

# random seed
seed = 1234

# set random seed
if seed is not None:
    print(f'random seed: {seed}')
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)

# Read the words and POS tags from the Spanish dataset:

# In[2]:
from conllu import parse_incr

def read_tags(filename):
    data = {'words': [], 'tags': []}
    with open(filename) as f:
        for sent in tqdm(parse_incr(f)):
            words = [tok['form'] for tok in sent]
            tags = [tok['upos'] for tok in sent]
            data['words'].append(words)
            data['tags'].append(tags)
    return pd.DataFrame(data)

# In[3]:
train_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-train.conllup')
valid_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-dev.conllup')
test_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-test.conllup')

# In[4]:
tags = train_df['tags'].explode().unique()
index_to_tag = {i:t for i,t in enumerate(tags)}
tag_to_index = {t:i for i,t in enumerate(tags)}

# Create a HuggingFace `DatasetDict` object:

# In[5]:
from datasets import Dataset, DatasetDict

ds = DatasetDict()
ds['train'] = Dataset.from_pandas(train_df)
ds['validation'] = Dataset.from_pandas(valid_df)
ds['test'] = Dataset.from_pandas(test_df)
ds

# In[6]:
ds['train'].to_pandas()

# Now tokenize the texts and assign POS labels to the first token in each word:

# In[7]:
from transformers import AutoTokenizer

transformer_name = 'xlm-roberta-base'
tokenizer = AutoTokenizer.from_pretrained(transformer_name)

# In[8]:
x = ds['train'][0]
tokenized_input = tokenizer(x['words'], is_split_into_words=True)
tokens = tokenizer.convert_ids_to_tokens(tokenized_input['input_ids'])
word_ids = tokenized_input.word_ids()
pd.DataFrame([tokens, word_ids], index=['tokens', 'word ids'])

# In[9]:
# https://arxiv.org/pdf/1810.04805.pdf
# Section 5.3
# We use the representation of the first sub-token as the input to the
# token-level classifier over the NER label set.

# default value for CrossEntropyLoss ignore_index parameter
ignore_index = -100

def tokenize_and_align_labels(batch):
    labels = []
    # tokenize batch
    tokenized_inputs = tokenizer(
        batch['words'],
        truncation=True,
        is_split_into_words=True,
    )
    # iterate over batch elements
    for i, tags in enumerate(batch['tags']):
        label_ids = []
        previous_word_id = None
        # get word ids for current batch element
        word_ids = tokenized_inputs.word_ids(batch_index=i)
        # iterate over tokens in batch element
        for word_id in word_ids:
            if word_id is None or word_id == previous_word_id:
                # ignore if not a word or word id has already been seen
                label_ids.append(ignore_index)
            else:
                # get tag id for corresponding word
                tag_id = tag_to_index[tags[word_id]]
                label_ids.append(tag_id)
            # remember this word id
            previous_word_id = word_id
        # save label ids for current batch element
        labels.append(label_ids)
    # store labels together with the tokenizer output
    tokenized_inputs['labels'] = labels
    return tokenized_inputs

# In[10]:
train_ds = ds['train'].map(tokenize_and_align_labels, batched=True)
eval_ds = ds['validation'].map(tokenize_and_align_labels, batched=True)
train_ds.to_pandas()

# Create our transformer model:

# In[11]:
from torch import nn
from transformers.modeling_outputs import TokenClassifierOutput
from transformers.models.roberta.modeling_roberta import RobertaModel, RobertaPreTrainedModel

# https://github.com/huggingface/transformers/blob/65659a29cf5a079842e61a63d57fa24474288998/src/transformers/models/roberta/modeling_roberta.py#L1346
class XLMRobertaForTokenClassification(RobertaPreTrainedModel):
    def __init__(self, config):
        super().__init__(config)
        self.num_labels = config.num_labels
        self.roberta = RobertaModel(config, add_pooling_layer=False)
        self.dropout = nn.Dropout(config.hidden_dropout_prob)
        self.classifier = nn.Linear(config.hidden_size, config.num_labels)
        self.init_weights()

    def forward(self, input_ids=None, attention_mask=None, token_type_ids=None, labels=None, **kwargs):
        outputs = self.roberta(
            input_ids,
            attention_mask=attention_mask,
            token_type_ids=token_type_ids,
            **kwargs,
        )
        sequence_output = self.dropout(outputs[0])
        logits = self.classifier(sequence_output)
        loss = None
        if labels is not None:
            loss_fn = nn.CrossEntropyLoss()
            inputs = logits.view(-1, self.num_labels)
            targets = labels.view(-1)
            loss = loss_fn(inputs, targets)
        return TokenClassifierOutput(
            loss=loss,
            logits=logits,
            hidden_states=outputs.hidden_states,
            attentions=outputs.attentions,
        )

# In[12]:
from transformers import AutoConfig

config = AutoConfig.from_pretrained(
    transformer_name,
    num_labels=len(index_to_tag),
)
model = (
    XLMRobertaForTokenClassification
    .from_pretrained(transformer_name, config=config)
)

# Create the `Trainer` object and train:

# In[13]:
from transformers import TrainingArguments

num_epochs = 2
batch_size = 24
weight_decay = 0.01
model_name = f'{transformer_name}-finetuned-pos-es'

training_args = TrainingArguments(
    output_dir=model_name,
    log_level='error',
    num_train_epochs=num_epochs,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    evaluation_strategy='epoch',
    weight_decay=weight_decay,
)

# In[14]:
from sklearn.metrics import accuracy_score

def compute_metrics(eval_pred):
    # gold labels
    label_ids = eval_pred.label_ids
    # predictions
    pred_ids = np.argmax(eval_pred.predictions, axis=-1)
    # collect gold and predicted labels, ignoring ignore_index label
    y_true, y_pred = [], []
    batch_size, seq_len = pred_ids.shape
    for i in range(batch_size):
        for j in range(seq_len):
            if label_ids[i, j] != ignore_index:
                y_true.append(index_to_tag[label_ids[i][j]])
                y_pred.append(index_to_tag[pred_ids[i][j]])
    # return computed metrics
    return {'accuracy': accuracy_score(y_true, y_pred)}

# In[15]:
from transformers import Trainer
from transformers import DataCollatorForTokenClassification

data_collator = DataCollatorForTokenClassification(tokenizer)

trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=data_collator,
    compute_metrics=compute_metrics,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    tokenizer=tokenizer,
)
trainer.train()

# Evaluate on the test partition:

# In[16]:
test_ds = ds['test'].map(
    tokenize_and_align_labels,
    batched=True,
)
output = trainer.predict(test_ds)

# In[17]:
from sklearn.metrics import classification_report

num_labels = model.num_labels
label_ids = output.label_ids.reshape(-1)
predictions = output.predictions.reshape(-1, num_labels)
predictions = np.argmax(predictions, axis=-1)

mask = label_ids != ignore_index
y_true = label_ids[mask]
y_pred = predictions[mask]
target_names = tags[:-1]

report = classification_report(
    y_true, y_pred, target_names=target_names
)
print(report)

# In[18]:
import matplotlib.pyplot as plt
from sklearn.metrics import ConfusionMatrixDisplay, confusion_matrix

cm = confusion_matrix(y_true, y_pred, normalize='true')
disp = ConfusionMatrixDisplay(
    confusion_matrix=cm,
    display_labels=target_names,
)
fig, ax = plt.subplots(figsize=(10,10))
disp.plot(
    cmap='Blues',
    values_format='.2f',
    colorbar=False,
    ax=ax,
    xticks_rotation=45,
)
5,906
6,177
1
chap13-2
chap13-2
13 Using Transformers with the Hugging Face Library One of the key advantages of transformer networks is the ability to take a model that was pre-trained over vast quantities of text and fine-tune it for the task at hand. Intuitively, this strategy allows transformer networks to achieve higher performance on smaller datasets by relying on statistics acquired at scale in an unsupervised way (e.g., through the masked language model training objective). To this end, in this chapter we will use the Hugging Face library,1 which has a rich repository of datasets and pre-trained models, as well as helper methods and classes that make it easy to target downstream tasks. Using pre-trained transformer encoders, we will implement the two tasks that served as use cases in the previous chapters: text classification and part-of-speech tagging. 13.1 Tokenization As discussed in Section 12.2, transformers rely on sub-word tokens. This strategy provides an elegant way to handle unknown and low-frequency words by splitting them into more frequent sub-word parts. At the same time, these tokenization algorithms maintain frequently-occurring words as standalone tokens, so the signal for these common words is preserved. To make this more concrete, we show below how tokenizers are employed in the Hugging Face library. First, we load the tokenizer that corresponds to the transformer we intend to use. This is important for two reasons: (a) different transformers rely on different tokenization algorithms, and (b) even for the ones that use the same algorithm, their tokenizer vocabularies are likely to be different if they were pre-trained 1 https://huggingface.co/docs/transformers/main/en/index 186 13.1 Tokenization 187 on different corpora. Next, we tokenize some example text and display some of the resulting attributes with pandas: As shown above, the tokenizer splits the text into tokens, and adds two special tokens: the [CLS] token at the beginning of the token sequence, and the [SEP] token at the end. Also, note that the ## characters at the beginning of some tokens indicate that they are not standalone words, but rather sub-words that continue a word previously started. For example, the output above shows that the word walrus was split into three sub-words. Note, however, that this is specific to this particular tokenization algorithm, and other tokenizers may indicate word continuation in different ways. A better way to detect word continuations is using the word_ids() method of the tokenizer output, which assigns the same id to all tokens part of the same word. For example, all fragments of the word walrus share the word id 3. Lastly, the input_ids attribute provides the token ids used internally by the transformer to map tokens to embeddings. To briefly demonstrate how different tokenizers produce different outputs, here is the same text tokenized with the tokenizer corresponding to xlm-roberta-base: Note how the [CLS] and [SEP] special tokens have been replaced with <s> and </s> respectively. Also, spaces have been replaced with the Unicode character (U+2581, LOWER ONE EIGHTH BLOCK). Tokens that start with that character are considered word beginnings and the rest are word continuations, as can be confirmed by looking at the word ids. This illustrates the importance of using the tokenizer that corresponds to the transformer you intend to use. 012345678 tokens [CLS] I am the wa ##l ##rus . 
[SEP] word_ids None 0 1 2 3 3 3 4 None input_ids 101 146 1821 1103 20049 1233 6208 119 102 01234567 tokens <s> ▁I ▁am ▁the ▁wal rus . </s > None 0 1 2 3 3 3 None 0 87 444 70 32973 6563 5 2 word_ids input_ids 188 Using Transformers with the Hugging Face Library 13.2 Text Classification For our text classification example, we will continue using the AG News dataset from previous chapters. We will load, preprocess, and split the dataset into pandas dataframes in the same way as before. Now however, rather than continuing with pandas, we will create a Hugging Face dataset from the dataframes. Hugging Face datasets are convenient because of their built-in support of batching, efficient data transformations, and caching. In particular, we convert each dataframe into a Hugging Face dataset. The various datasets are managed with a DatasetDict. Note that this is the same data structure seen when downloading a Hugging Face dataset from their hub.2 The keys in this dictionary are usually train, validation, and test:3 Once our dataset is loaded, we load a tokenizer. Different pre-trained models are tokenized differently, and it is important to select the tokenizer that corresponds to the model we will use so that the inputs are consistent with model expectations. In our example, we will use the bert-base-cased pre-trained model and tokenizer: Datasets have a map() method that transforms the dataset by applying a function to each example. The method returns a new dataset with the transformation applied. We use the map() method to tokenize our dataset. To this end, we define a function that tokenizes an example using the tokenizer we loaded previously. Note that tokenizers support many options that you may need depending on your situation. However, since this is a simple scenario, all we need to do is provide the text to tokenize and specify how to handle texts that exceed the maximum number of tokens permitted by the pre-trained model. Here we have our tokenizer truncate any inputs that are too long by specifying the truncation=True parameter. The output of this function will be added to the new dataset as extra columns. Further, we also want to remove some of the columns that are no longer needed, simplifying subsequent steps. For this, we use the remove_columns argument, listing the columns that we want to discard. Additionally, the dataset’s map() method can batch the dataset; we enable this option with the batched=True argument: 2 https://huggingface.co/datasets
3 These correspond to the more common terms train, development, and test we have used throughout the book so far. In this chapter we use the Hugging Face naming conventions for consistency. 13.2 Text Classification 189 label 03 10 20 32 40 ... ... . 107995  0 
 . 107996  0 
 . 107997  0 
 . 107998  0 
 . 107999  3 
 input_ids [101, 3270, 11906, 1522, 1146, 7106, 1111, 251... [101, 158, 119, 156, 119, 12068, 5084, 1116, 9... [101, 7270, 118, 2733, 1383, 1111, 12448, 7430... [101, 6096, 117, 10378, 3969, 5977, 1111, 8988... [101, 19569, 5480, 10582, 2087, 1867, 158, 119... [101, 1130, 139, 24683, 131, 21107, 2050, 1739... token_type_ids attention_mask [101, 22087, 8223, 1611, 1106, 4417, [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5572, 324... 0, 0, 0, ... [0,0,0,0,0,0,0,0, [0,0,0,0,0,0,0,0, [0,0,0,0,0,0,0,0, [0,0,0,0,0,0,0,0, [0,0,0,0,0,0,0,0, [0,0,0,0,0,0,0,0, 0,0,0,0, 0,0,0,... 0,0,0,0, 0,0,0,... 0,0,0,0, 0,0,0,... 0,0,0,0, 0,0,0,... 0,0,0,0, 0,0,0,... 0,0,0,0, 0,0,0,... [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... ... ... [101, 16409, 118, 16587, 159, 4064, [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1106, 1564... 0, 0, 0, ... 108000 rows × 4 columns [101, 4222, 11404, 1174, 117, 1476, [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1130, 2696... 0, 0, 0, ... [101, 11560, 3881, 108, 3614, 132, [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3498, 2944,... 0, 0, 0, ... [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... ... Next, we implement a classifier for our task. Hugging Face provides a
variety of models corresponding to several types of downstream tasks. However, for pedagogical purposes, we implement one from scratch. In particular, our model class inherits from BertPreTrainedModel, which
provides several useful methods such as init_weights() and from_pretrained(), which we will use later. The model constructor takes a configuration object as its only parameter. Configuration objects contain all the hyper-parameters used by the corresponding pre-trained models. We will show later how this configuration is retrieved and customized. Models that implement specific downstream tasks are usually composed of a pre-trained model (sometimes referred to as the body), and one or more task-specific layers (usually referred to as the head). Here, we initialize a BertModel using the provided configuration, as well as a dropout layer and a task-specific linear layer used for classifying the BERT output. Each of these layers is initialized by calling the init_weights() method inherited from BertPreTrainedModel.

The forward() method, which implements the task-specific forward pass, takes as arguments the outputs of the tokenizer, and, optionally, the gold labels corresponding to the input data points. Our implementation of the forward pass sends the input tokens to the BERT model to produce the contextualized representations for all tokens. This output has several components, including the last_hidden_state, which contains the final hidden-state embedding for each token. For our task, we will represent the whole sequence using the embedding for the [CLS] token that occurs at the start of each example. We retrieve it by selecting the first element of each output sequence in the batch (i.e., last_hidden_state[:, 0, :]). As in the previous chapters, we apply dropout to our sequence representation, and then pass it through our linear classification layer. If gold labels are provided (i.e., we are training), we compute the loss using the cross-entropy loss. The output of the forward pass is wrapped in a Hugging Face SequenceClassifierOutput object4 and returned.

4 Hugging Face utilizes a set of output objects to standardize model output for a given task. These objects typically include additional information, e.g., attention weights, which can be used for visualizing or debugging model behavior.

Next we load the configuration of the pre-trained model and instantiate our model. The AutoConfig class can load the configuration for any pre-trained model, retrieving it from Hugging Face if needed. Then we use the configuration to instantiate our model using the from_pretrained() method. With this call, the pre-trained model will be loaded, which includes downloading if necessary.

Hugging Face provides a Trainer class that greatly simplifies the training process. This class not only implements the training loop we have been using in the previous chapters, but also handles other useful steps such as saving checkpoints (i.e., intermediate models after a number of mini-batches have been processed during training), and tracking custom measures about model performance. In order to create a Trainer, we first need to specify its configuration in a TrainingArguments object. In ours, we specify certain hyper-parameters such as batch size, weight decay, and number of epochs, as well as where to store model checkpoints.

The TrainingArguments class provides a wide variety of arguments that we have not shown.5 These arguments usually have appropriate default values, so it is often fine to omit them. For example, we did not use the label_names argument, which specifies the key that corresponds to the training labels. When omitted, it defaults to keys such as label, labels and label_ids.6 In this chapter we used label.

5 https://huggingface.co/docs/transformers/main/en/main_classes/trainer#transformers.TrainingArguments
6 In the case of extractive question answering (see Chapter 16), the start_positions and end_positions keys store the start/end positions of the correct answers.

Note that we also specify how often we would like to see the performance of the current model (at the end of each epoch) with evaluation_strategy='epoch'. This means that after each epoch we print the current loss on the training partition and on the evaluation dataset, if one is available.
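A minimal sketch of such a TrainingArguments configuration, mirroring the hyper-parameter values used in the notebook at the end of this chapter (two epochs, batch size 24, weight decay 0.01); the output directory name is only an example:

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir='bert-base-cased-sequence-classification',  # where checkpoints are stored
    num_train_epochs=2,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=24,
    weight_decay=0.01,
    evaluation_strategy='epoch',  # report losses and metrics after every epoch
    log_level='error',
)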
Additionally, we can report custom metrics at this time. For this purpose, we use the compute_metrics parameter of the Trainer, which expects a function that receives a transformers.EvalPrediction object containing the label ids and the predicted logits. The expected return type is a dictionary whose keys correspond to different metrics, each of which will be displayed as a separate result column.

Using the above TrainingArguments and compute_metrics function, we create our Trainer. Note that when you provide a tokenizer, the trainer will automatically pad the sequences in each batch. Also, the trainer will automatically use any GPU that is available, unless specifically disabled in the TrainingArguments.

Training our model takes a single call to the train() method of the Trainer object. As specified in our instance of TrainingArguments, the training and validation losses, as well as the accuracy, are reported every epoch:

Epoch  Training Loss  Validation Loss  Accuracy
1      0.187800       0.172629         0.941667
2      0.104000       0.183001         0.946250

As in the other chapters, we can write custom code to obtain the model's predictions on the test data. However, the Trainer class provides a predict() method that drastically simplifies this:

[Output not shown: the classification report on the test partition.]

As shown in the table above, this model achieves an accuracy of 95%, which is the highest performance we have achieved so far on this dataset.

13.3 Part-of-speech Tagging

To showcase part-of-speech tagging using transformers, we continue with the Spanish section of the AnCora corpus introduced in Chapter 11. Recall that the dataset is stored in the CoNLL-U format. We load this format in the same way as before, but then we convert the loaded dataset into a Hugging Face DatasetDict.

Importantly, because the CoNLL-U dataset is already tokenized, we use the is_split_into_words=True tokenizer argument to ensure that the tokenizer respects the existing word boundaries during its sub-word tokenization. Further, while we want to predict one POS tag per word, any given word may be split into smaller pieces by our tokenizer. Thus, we need to align the tokenizer output to the CoNLL-U words. The original BERT paper (Devlin et al., 2018) addresses this by only using the embedding corresponding to the first sub-token for each word. We follow the same approach for consistency. For the sub-words that do not correspond to the beginning of a word, we use a special value that indicates that we are not interested in their predictions. The CrossEntropyLoss has a parameter called ignore_index for this purpose. The default value for this parameter is −100, which we use as the label for the sub-words we wish to ignore during training.

Next, we use this function to preprocess the train and validation folds in our DatasetDict:

[Output not shown: the preprocessed training split as a pandas dataframe with 14,305 rows and the columns words, tags, input_ids, attention_mask, and labels.]

Next, we implement our model class that uses a transformer encoder as a transducer. Because our downstream task consists of POS tagging for Spanish, we need a transformer model that was pre-trained on Spanish texts. Here, we chose XLM-RoBERTa (Conneau et al., 2019) as our base model. XLM-RoBERTa is a RoBERTa model (Liu et al., 2019) that has been pre-trained on 100 different languages, including Spanish. Of note, XLM-RoBERTa does not require us to specify what language we are working on. Similar to BERT, it only requires the input_ids.

We discussed in the text classification section that Hugging Face provides implementations for text classification models. This is also true for token classification problems that require transducers. In particular, the XLMRobertaForTokenClassification model provided by Hugging Face does everything needed for this task. However, as before, here we implement it ourselves for pedagogical purposes. The model architecture is similar to our text classification example. It consists of a transformer, a dropout layer, and a linear layer used for classification. The number of labels, which determines the output dimension of the linear layer, is equal to the number of POS tags.

The primary difference between the text classification example and this token classification model is that with the former we produced one label for each text document, while here we produce one label for each token in the input text. Specifically, in our text classification model the output shape was two-dimensional: (batch_size, num_labels). Here, our output is three-dimensional: (batch_size, sequence_size, num_labels). So, while much of the forward method is familiar to us, when we are required to compute the loss, we need to reshape the logits and the labels before passing them to the CrossEntropyLoss, since it expects two-dimensional input and one-dimensional labels. For this purpose, we use the view() method to reshape the tensors. This method is efficient because it does not copy the tensor data. Instead it provides a new view of the same data that behaves like a tensor with a different shape.7 As mentioned before, the number of arguments passed to this method determines the number of dimensions in the output tensor. Here, for our logits, we pass two arguments and so our new view will have two dimensions. The second will be the size of self.num_labels, while the first (because we pass -1) will be inferred based on the original tensor shape. For our labels, on the other hand, we only provide one argument and so the new view will have one dimension, inferred by the original shape.

7 Similar to NumPy, PyTorch tensors are represented internally by a block of memory storing the data and some metadata that describes how the data should be read, e.g., type, shape, and stride. The view() method returns a new tensor with new metadata but pointing to the same memory block.

Next, we instantiate our model using the XLM-RoBERTa configuration.

As before, we create a TrainingArguments object and define a compute_metrics function in order to customize a Trainer. While the TrainingArguments code has no substantial changes, we need to adjust the compute_metrics function to account for the fact that our model uses sub-word tokens rather than complete words. Recall that only the first sub-word token per word was assigned a POS tag.
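A sketch of this adjusted compute_metrics function, consistent with the notebook at the end of the chapter; it assumes the ignore_index = -100 convention and the index_to_tag mapping built when the data was loaded:

import numpy as np
from sklearn.metrics import accuracy_score

ignore_index = -100  # label assigned to sub-words we do not want to evaluate

def compute_metrics(eval_pred):
    # gold labels and predicted tag ids
    label_ids = eval_pred.label_ids
    pred_ids = np.argmax(eval_pred.predictions, axis=-1)
    y_true, y_pred = [], []
    batch_size, seq_len = pred_ids.shape
    for i in range(batch_size):
        for j in range(seq_len):
            # skip positions labeled with ignore_index
            if label_ids[i, j] != ignore_index:
                y_true.append(index_to_tag[label_ids[i, j]])
                y_pred.append(index_to_tag[pred_ids[i, j]])
    return {'accuracy': accuracy_score(y_true, y_pred)}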
This function discards the labels corresponding to the ignored sub-word tokens and evaluates the rest, returning the accuracy score.

The last component required for the Trainer is a collator. Since this time we are batching sequences of tokens, we need a collator that can pad them dynamically when constructing the batches. The transformers library includes a DataCollatorForTokenClassification specifically for this purpose. Once we have our collator and our trainer object, we can train our model.

Next, we evaluate our newly trained model on the test dataset. For this purpose, we preprocess the data in the same way we did for the train and validation partitions. Then, for convenience, we use the trainer's predict() method to generate the predicted logits using our model.

As before, we use scikit-learn's classification_report() function to display the results of the evaluation. This function expects two one-dimensional lists of labels, so we need to follow a similar approach to the one we employed for text classification. Note that output.label_ids and output.predictions are NumPy arrays rather than PyTorch tensors. This time we use NumPy's reshape() method to reshape the arrays. This method is similar to PyTorch's view() method that we used previously, except that reshape() may copy the array's data in some situations. We discard the labels corresponding to ignored sub-word tokens, and then we print the classification report.

Our model based on XLM-RoBERTa achieves 99% accuracy. This is considerably better than the LSTM-based model developed in Chapter 11. In order to understand the differences between the two methods, we produce below a confusion matrix for the results of each model. Rows in the confusion matrix represent the true labels and columns represent the predicted labels. In the confusion matrices shown below, each cell x_ij corresponds to the proportion of values with label i that were assigned the label j.8 For a perfect model, all cells in the diagonal would have value 1 and all other cells would have value 0. The code used to generate the confusion matrix is shown below. The confusion matrices for the LSTM and transformer are shown in Figure 13.1 and Figure 13.2, respectively.

8 This is the case because we used the normalize='true' parameter of the confusion_matrix() function.

[Figure 13.1: Confusion matrix corresponding to the LSTM-based part-of-speech tagger developed in Chapter 11.]
[Figure 13.2: Confusion matrix corresponding to the transformer-based part-of-speech tagger.]

The two confusion matrices highlight a couple of important observations. First, the transformer model is considerably better at predicting POS tags with infrequent support in the dataset. For example, the accuracy for predicting the SYM POS tag increased from 38% in the LSTM model to 95% in the transformer model! Equally impressive, the transformer improved the performance of tags that are extremely common and, thus, provide plenty of opportunity for both approaches to learn a good model. For example, the accuracy of tagging NOUN, the second most common POS tag in the dataset, increased from 96% in the LSTM model to 99% in the transformer model.

13.4 Summary

In this chapter we presented two applications driven by the encoder component of a transformer network. First, we used the transformer encoder as an acceptor and implemented a text classification application for English news.
Second, we used the encoder as a transducer to develop a Spanish part-of-speech tagger. Both tasks were implemented using pre-trained transformer models from the Hugging Face library. For both applications, the transformer-based methods considerably outperform all approaches introduced in the previous chapters, highlighting the value of the transformer architecture.
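The complete notebooks for the two applications described in this chapter follow: first the part-of-speech tagger, then the text classifier.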
#!/usr/bin/env python
# coding: utf-8

# # Part-of-speech Tagging with Transformer Networks

# Some initialization:

# In[1]:

import random
import torch
import numpy as np
import pandas as pd
from tqdm.notebook import tqdm

# enable tqdm in pandas
tqdm.pandas()

# set to True to use the gpu (if there is one available)
use_gpu = True

# select device
device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu')
print(f'device: {device.type}')

# random seed
seed = 1234

# set random seed
if seed is not None:
    print(f'random seed: {seed}')
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)

# Read the words and POS tags from the Spanish dataset:

# In[2]:

from conllu import parse_incr

def read_tags(filename):
    data = {'words': [], 'tags': []}
    with open(filename) as f:
        for sent in tqdm(parse_incr(f)):
            words = [tok['form'] for tok in sent]
            tags = [tok['upos'] for tok in sent]
            data['words'].append(words)
            data['tags'].append(tags)
    return pd.DataFrame(data)

# In[3]:

train_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-train.conllup')
valid_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-dev.conllup')
test_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-test.conllup')

# In[4]:

tags = train_df['tags'].explode().unique()
index_to_tag = {i: t for i, t in enumerate(tags)}
tag_to_index = {t: i for i, t in enumerate(tags)}

# Create a HuggingFace `DatasetDict` object:

# In[5]:

from datasets import Dataset, DatasetDict

ds = DatasetDict()
ds['train'] = Dataset.from_pandas(train_df)
ds['validation'] = Dataset.from_pandas(valid_df)
ds['test'] = Dataset.from_pandas(test_df)
ds

# In[6]:

ds['train'].to_pandas()

# Now tokenize the texts and assign POS labels to the first token in each word:

# In[7]:

from transformers import AutoTokenizer

transformer_name = 'xlm-roberta-base'
tokenizer = AutoTokenizer.from_pretrained(transformer_name)

# In[8]:

x = ds['train'][0]
tokenized_input = tokenizer(x['words'], is_split_into_words=True)
tokens = tokenizer.convert_ids_to_tokens(tokenized_input['input_ids'])
word_ids = tokenized_input.word_ids()
pd.DataFrame([tokens, word_ids], index=['tokens', 'word ids'])

# In[9]:

# https://arxiv.org/pdf/1810.04805.pdf
# Section 5.3
# "We use the representation of the first sub-token as the input to the
# token-level classifier over the NER label set."

# default value for CrossEntropyLoss ignore_index parameter
ignore_index = -100

def tokenize_and_align_labels(batch):
    labels = []
    # tokenize batch
    tokenized_inputs = tokenizer(
        batch['words'],
        truncation=True,
        is_split_into_words=True,
    )
    # iterate over batch elements
    for i, tags in enumerate(batch['tags']):
        label_ids = []
        previous_word_id = None
        # get word ids for current batch element
        word_ids = tokenized_inputs.word_ids(batch_index=i)
        # iterate over tokens in batch element
        for word_id in word_ids:
            if word_id is None or word_id == previous_word_id:
                # ignore if not a word or word id has already been seen
                label_ids.append(ignore_index)
            else:
                # get tag id for corresponding word
                tag_id = tag_to_index[tags[word_id]]
                label_ids.append(tag_id)
            # remember this word id
            previous_word_id = word_id
        # save label ids for current batch element
        labels.append(label_ids)
    # store labels together with the tokenizer output
    tokenized_inputs['labels'] = labels
    return tokenized_inputs

# In[10]:

train_ds = ds['train'].map(tokenize_and_align_labels, batched=True)
eval_ds = ds['validation'].map(tokenize_and_align_labels, batched=True)
train_ds.to_pandas()

# Create our transformer model:

# In[11]:

from torch import nn
from transformers.modeling_outputs import TokenClassifierOutput
from transformers.models.roberta.modeling_roberta import RobertaModel, RobertaPreTrainedModel

# https://github.com/huggingface/transformers/blob/65659a29cf5a079842e61a63d57fa24474288998/src/transformers/models/roberta/modeling_roberta.py#L1346

class XLMRobertaForTokenClassification(RobertaPreTrainedModel):
    def __init__(self, config):
        super().__init__(config)
        self.num_labels = config.num_labels
        self.roberta = RobertaModel(config, add_pooling_layer=False)
        self.dropout = nn.Dropout(config.hidden_dropout_prob)
        self.classifier = nn.Linear(config.hidden_size, config.num_labels)
        self.init_weights()

    def forward(self, input_ids=None, attention_mask=None,
                token_type_ids=None, labels=None, **kwargs):
        outputs = self.roberta(
            input_ids,
            attention_mask=attention_mask,
            token_type_ids=token_type_ids,
            **kwargs,
        )
        sequence_output = self.dropout(outputs[0])
        logits = self.classifier(sequence_output)
        loss = None
        if labels is not None:
            loss_fn = nn.CrossEntropyLoss()
            inputs = logits.view(-1, self.num_labels)
            targets = labels.view(-1)
            loss = loss_fn(inputs, targets)
        return TokenClassifierOutput(
            loss=loss,
            logits=logits,
            hidden_states=outputs.hidden_states,
            attentions=outputs.attentions,
        )

# In[12]:

from transformers import AutoConfig

config = AutoConfig.from_pretrained(
    transformer_name,
    num_labels=len(index_to_tag),
)

model = (
    XLMRobertaForTokenClassification
    .from_pretrained(transformer_name, config=config)
)

# Create the `Trainer` object and train:

# In[13]:

from transformers import TrainingArguments

num_epochs = 2
batch_size = 24
weight_decay = 0.01
model_name = f'{transformer_name}-finetuned-pos-es'

training_args = TrainingArguments(
    output_dir=model_name,
    log_level='error',
    num_train_epochs=num_epochs,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    evaluation_strategy='epoch',
    weight_decay=weight_decay,
)

# In[14]:

from sklearn.metrics import accuracy_score

def compute_metrics(eval_pred):
    # gold labels
    label_ids = eval_pred.label_ids
    # predictions
    pred_ids = np.argmax(eval_pred.predictions, axis=-1)
    # collect gold and predicted labels, ignoring ignore_index label
    y_true, y_pred = [], []
    batch_size, seq_len = pred_ids.shape
    for i in range(batch_size):
        for j in range(seq_len):
            if label_ids[i, j] != ignore_index:
                y_true.append(index_to_tag[label_ids[i][j]])
                y_pred.append(index_to_tag[pred_ids[i][j]])
    # return computed metrics
    return {'accuracy': accuracy_score(y_true, y_pred)}

# In[15]:

from transformers import Trainer
from transformers import DataCollatorForTokenClassification

data_collator = DataCollatorForTokenClassification(tokenizer)

trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=data_collator,
    compute_metrics=compute_metrics,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    tokenizer=tokenizer,
)

trainer.train()

# Evaluate on the test partition:

# In[16]:

test_ds = ds['test'].map(
    tokenize_and_align_labels,
    batched=True,
)
output = trainer.predict(test_ds)

# In[17]:

from sklearn.metrics import classification_report

num_labels = model.num_labels
label_ids = output.label_ids.reshape(-1)
predictions = output.predictions.reshape(-1, num_labels)
predictions = np.argmax(predictions, axis=-1)
mask = label_ids != ignore_index
y_true = label_ids[mask]
y_pred = predictions[mask]
target_names = tags[:-1]
report = classification_report(
    y_true, y_pred, target_names=target_names
)
print(report)

# In[18]:

import matplotlib.pyplot as plt
from sklearn.metrics import ConfusionMatrixDisplay, confusion_matrix

cm = confusion_matrix(y_true, y_pred, normalize='true')
disp = ConfusionMatrixDisplay(
    confusion_matrix=cm,
    display_labels=target_names,
)
fig, ax = plt.subplots(figsize=(10, 10))
disp.plot(
    cmap='Blues',
    values_format='.2f',
    colorbar=False,
    ax=ax,
    xticks_rotation=45,
)
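As a usage note, the fine-tuned tagger can be applied to a new sentence along the following lines. This is a sketch that assumes the model, tokenizer, and index_to_tag objects defined in the notebook above; the example sentence is arbitrary:

# tag a new (pre-tokenized) Spanish sentence with the fine-tuned model
sentence = ['La', 'casa', 'es', 'azul']
inputs = tokenizer(sentence, is_split_into_words=True, return_tensors='pt').to(model.device)
model.eval()
with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = logits.argmax(dim=-1)[0].tolist()
# keep one prediction per word: the first sub-token of each word
predicted_tags = []
previous_word_id = None
for word_id, pred_id in zip(inputs.word_ids(), pred_ids):
    if word_id is not None and word_id != previous_word_id:
        predicted_tags.append(index_to_tag[pred_id])
    previous_word_id = word_id
print(list(zip(sentence, predicted_tags)))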
#!/usr/bin/env python
# coding: utf-8

# # Text Classification Using Transformer Networks (BERT)

# Some initialization:

# In[1]:

import random
import torch
import numpy as np
import pandas as pd
from tqdm.notebook import tqdm

# enable tqdm in pandas
tqdm.pandas()

# set to True to use the gpu (if there is one available)
use_gpu = True

# select device
device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu')
print(f'device: {device.type}')

# random seed
seed = 1234

# set random seed
if seed is not None:
    print(f'random seed: {seed}')
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)

# Read the train/dev/test datasets and create a HuggingFace `Dataset` object:

# In[2]:

def read_data(filename):
    # read csv file
    df = pd.read_csv(filename, header=None)
    # add column names
    df.columns = ['label', 'title', 'description']
    # make labels zero-based
    df['label'] -= 1
    # concatenate title and description, and remove backslashes
    df['text'] = df['title'] + " " + df['description']
    df['text'] = df['text'].str.replace('\\', ' ', regex=False)
    return df

# In[3]:

labels = open('data/ag_news_csv/classes.txt').read().splitlines()
train_df = read_data('data/ag_news_csv/train.csv')
test_df = read_data('data/ag_news_csv/test.csv')
train_df

# In[4]:

from sklearn.model_selection import train_test_split

train_df, eval_df = train_test_split(train_df, train_size=0.9)
train_df.reset_index(inplace=True, drop=True)
eval_df.reset_index(inplace=True, drop=True)
print(f'train rows: {len(train_df.index):,}')
print(f'eval rows: {len(eval_df.index):,}')
print(f'test rows: {len(test_df.index):,}')

# In[5]:

from datasets import Dataset, DatasetDict

ds = DatasetDict()
ds['train'] = Dataset.from_pandas(train_df)
ds['validation'] = Dataset.from_pandas(eval_df)
ds['test'] = Dataset.from_pandas(test_df)
ds

# Tokenize the texts:

# In[6]:

from transformers import AutoTokenizer

transformer_name = 'bert-base-cased'
tokenizer = AutoTokenizer.from_pretrained(transformer_name)

# In[7]:

def tokenize(examples):
    return tokenizer(examples['text'], truncation=True)

train_ds = ds['train'].map(
    tokenize,
    batched=True,
    remove_columns=['title', 'description', 'text'],
)
eval_ds = ds['validation'].map(
    tokenize,
    batched=True,
    remove_columns=['title', 'description', 'text'],
)
train_ds.to_pandas()

# Create the transformer model:

# In[8]:

from torch import nn
from transformers.modeling_outputs import SequenceClassifierOutput
from transformers.models.bert.modeling_bert import BertModel, BertPreTrainedModel

# https://github.com/huggingface/transformers/blob/65659a29cf5a079842e61a63d57fa24474288998/src/transformers/models/bert/modeling_bert.py#L1486

class BertForSequenceClassification(BertPreTrainedModel):
    def __init__(self, config):
        super().__init__(config)
        self.num_labels = config.num_labels
        self.bert = BertModel(config)
        self.dropout = nn.Dropout(config.hidden_dropout_prob)
        self.classifier = nn.Linear(config.hidden_size, config.num_labels)
        self.init_weights()

    def forward(self, input_ids=None, attention_mask=None,
                token_type_ids=None, labels=None, **kwargs):
        outputs = self.bert(
            input_ids,
            attention_mask=attention_mask,
            token_type_ids=token_type_ids,
            **kwargs,
        )
        cls_outputs = outputs.last_hidden_state[:, 0, :]
        cls_outputs = self.dropout(cls_outputs)
        logits = self.classifier(cls_outputs)
        loss = None
        if labels is not None:
            loss_fn = nn.CrossEntropyLoss()
            loss = loss_fn(logits, labels)
        return SequenceClassifierOutput(
            loss=loss,
            logits=logits,
            hidden_states=outputs.hidden_states,
            attentions=outputs.attentions,
        )

# In[9]:

from transformers import AutoConfig

config = AutoConfig.from_pretrained(
    transformer_name,
    num_labels=len(labels),
)

model = (
    BertForSequenceClassification
    .from_pretrained(transformer_name, config=config)
)

# Create the trainer object and train:

# In[10]:

from transformers import TrainingArguments

num_epochs = 2
batch_size = 24
weight_decay = 0.01
model_name = f'{transformer_name}-sequence-classification'

training_args = TrainingArguments(
    output_dir=model_name,
    log_level='error',
    num_train_epochs=num_epochs,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    evaluation_strategy='epoch',
    weight_decay=weight_decay,
)

# In[11]:

from sklearn.metrics import accuracy_score

def compute_metrics(eval_pred):
    y_true = eval_pred.label_ids
    y_pred = np.argmax(eval_pred.predictions, axis=-1)
    return {'accuracy': accuracy_score(y_true, y_pred)}

# In[12]:

from transformers import Trainer

trainer = Trainer(
    model=model,
    args=training_args,
    compute_metrics=compute_metrics,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    tokenizer=tokenizer,
)

# In[13]:

trainer.train()

# Evaluate on the test partition:

# In[14]:

test_ds = ds['test'].map(
    tokenize,
    batched=True,
    remove_columns=['title', 'description', 'text'],
)
test_ds.to_pandas()

# In[15]:

output = trainer.predict(test_ds)
output

# In[16]:

from sklearn.metrics import classification_report

y_true = output.label_ids
y_pred = np.argmax(output.predictions, axis=-1)
target_names = labels
print(classification_report(y_true, y_pred, target_names=target_names))
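Similarly, a sketch of how the fine-tuned classifier could be applied to a new headline; it assumes the model, tokenizer, and labels objects defined in the notebook above, and the example text is arbitrary:

# classify a new headline with the fine-tuned model
text = "Stocks rally as tech earnings beat expectations"
inputs = tokenizer(text, truncation=True, return_tensors='pt').to(model.device)
model.eval()
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(labels[pred])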
2,992
3,054
3
chap13-4
chap13-4
13 Using Transformers with the Hugging Face Library One of the key advantages of transformer networks is the ability to take a model that was pre-trained over vast quantities of text and fine-tune it for the task at hand. Intuitively, this strategy allows transformer networks to achieve higher performance on smaller datasets by relying on statistics acquired at scale in an unsupervised way (e.g., through the masked language model training objective). To this end, in this chapter we will use the Hugging Face library,1 which has a rich repository of datasets and pre-trained models, as well as helper methods and classes that make it easy to target downstream tasks. Using pre-trained transformer encoders, we will implement the two tasks that served as use cases in the previous chapters: text classification and part-of-speech tagging. 13.1 Tokenization As discussed in Section 12.2, transformers rely on sub-word tokens. This strategy provides an elegant way to handle unknown and low-frequency words by splitting them into more frequent sub-word parts. At the same time, these tokenization algorithms maintain frequently-occurring words as standalone tokens, so the signal for these common words is preserved. To make this more concrete, we show below how tokenizers are employed in the Hugging Face library. First, we load the tokenizer that corresponds to the transformer we intend to use. This is important for two reasons: (a) different transformers rely on different tokenization algorithms, and (b) even for the ones that use the same algorithm, their tokenizer vocabularies are likely to be different if they were pre-trained 1 https://huggingface.co/docs/transformers/main/en/index 186 13.1 Tokenization 187 on different corpora. Next, we tokenize some example text and display some of the resulting attributes with pandas: As shown above, the tokenizer splits the text into tokens, and adds two special tokens: the [CLS] token at the beginning of the token sequence, and the [SEP] token at the end. Also, note that the ## characters at the beginning of some tokens indicate that they are not standalone words, but rather sub-words that continue a word previously started. For example, the output above shows that the word walrus was split into three sub-words. Note, however, that this is specific to this particular tokenization algorithm, and other tokenizers may indicate word continuation in different ways. A better way to detect word continuations is using the word_ids() method of the tokenizer output, which assigns the same id to all tokens part of the same word. For example, all fragments of the word walrus share the word id 3. Lastly, the input_ids attribute provides the token ids used internally by the transformer to map tokens to embeddings. To briefly demonstrate how different tokenizers produce different outputs, here is the same text tokenized with the tokenizer corresponding to xlm-roberta-base: Note how the [CLS] and [SEP] special tokens have been replaced with <s> and </s> respectively. Also, spaces have been replaced with the Unicode character (U+2581, LOWER ONE EIGHTH BLOCK). Tokens that start with that character are considered word beginnings and the rest are word continuations, as can be confirmed by looking at the word ids. This illustrates the importance of using the tokenizer that corresponds to the transformer you intend to use. 012345678 tokens [CLS] I am the wa ##l ##rus . 
13.2 Text Classification

For our text classification example, we will continue using the AG News dataset from previous chapters. We will load, preprocess, and split the dataset into pandas dataframes in the same way as before. Now however, rather than continuing with pandas, we will create a Hugging Face dataset from the dataframes. Hugging Face datasets are convenient because of their built-in support of batching, efficient data transformations, and caching. In particular, we convert each dataframe into a Hugging Face dataset. The various datasets are managed with a DatasetDict. Note that this is the same data structure seen when downloading a Hugging Face dataset from their hub.2 The keys in this dictionary are usually train, validation, and test:3

Once our dataset is loaded, we load a tokenizer. Different pre-trained models are tokenized differently, and it is important to select the tokenizer that corresponds to the model we will use so that the inputs are consistent with model expectations. In our example, we will use the bert-base-cased pre-trained model and tokenizer:

Datasets have a map() method that transforms the dataset by applying a function to each example. The method returns a new dataset with the transformation applied. We use the map() method to tokenize our dataset. To this end, we define a function that tokenizes an example using the tokenizer we loaded previously. Note that tokenizers support many options that you may need depending on your situation. However, since this is a simple scenario, all we need to do is provide the text to tokenize and specify how to handle texts that exceed the maximum number of tokens permitted by the pre-trained model. Here we have our tokenizer truncate any inputs that are too long by specifying the truncation=True parameter. The output of this function will be added to the new dataset as extra columns. Further, we also want to remove some of the columns that are no longer needed, simplifying subsequent steps. For this, we use the remove_columns argument, listing the columns that we want to discard. Additionally, the dataset's map() method can batch the dataset; we enable this option with the batched=True argument:

2 https://huggingface.co/datasets
3 These correspond to the more common terms train, development, and test we have used throughout the book so far. In this chapter we use the Hugging Face naming conventions for consistency.
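These steps can be sketched as follows. The snippet mirrors the notebook code included with this chapter and assumes the train_df, eval_df, and test_df dataframes produced by the earlier preprocessing:

# Sketch: build the DatasetDict and tokenize it with map().
# Assumes train_df, eval_df, and test_df pandas dataframes already exist.
from datasets import Dataset, DatasetDict
from transformers import AutoTokenizer

ds = DatasetDict()
ds['train'] = Dataset.from_pandas(train_df)
ds['validation'] = Dataset.from_pandas(eval_df)
ds['test'] = Dataset.from_pandas(test_df)

tokenizer = AutoTokenizer.from_pretrained('bert-base-cased')

def tokenize(examples):
    # truncate inputs that exceed the model's maximum length
    return tokenizer(examples['text'], truncation=True)

train_ds = ds['train'].map(
    tokenize,
    batched=True,
    remove_columns=['title', 'description', 'text'],
)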
[The resulting tokenized dataset, displayed as a pandas dataframe, has 108,000 rows and four columns: label, input_ids, token_type_ids, and attention_mask.]

Next, we implement a classifier for our task. Hugging Face provides a
variety of models corresponding to several types of downstream tasks. However, for pedagogical purposes, we implement one from scratch. In particular, our model class inherits from BertPreTrainedModel, which
provides several useful methods, such as init_weights() and from_pretrained(), which we will use later. The model constructor takes a configuration object as its only parameter. Configuration objects contain all the hyper-parameters used by the corresponding pre-trained models. We will show later how the configuration object is retrieved and customized. Models that implement specific downstream tasks are usually composed of a pre-trained model (sometimes referred to as the body), and one or more task-specific layers (usually referred to as the head). Here, we initialize a BertModel using the provided configuration, as well as a dropout layer and a task-specific linear layer used for classifying the Bert output. Each of these layers is initialized by calling the init_weights() method inherited from BertPreTrainedModel.

The forward() method, which implements the task-specific forward pass, takes as arguments the outputs of the tokenizer, and, optionally, the gold labels corresponding to the input data points. Our implementation of the forward pass sends the input tokens to the Bert model to produce the contextualized representations for all tokens. This output has several components, including the last_hidden_state, which contains the final hidden-state embedding for each token. For our task, we will represent the whole sequence using the embedding for the [CLS] token that occurs at the start of each example. We retrieve it by selecting the first element of each output sequence in the batch (i.e., last_hidden_state[:, 0, :]). As in the previous chapters, we apply dropout to our sequence representation, and then pass it through our linear classification layer. If gold labels are provided (i.e., we are training), we now compute the loss using the cross-entropy loss. The output of the forward pass is wrapped in a Hugging Face SequenceClassifierOutput object4 and returned:

4 Hugging Face utilizes a set of output objects to standardize model output for a given task. These objects typically include additional information, e.g., attention weights, which can be used for visualizing or debugging model behavior.

Next we load the configuration of the pre-trained model and instantiate our model. The AutoConfig class can load the configuration for any pre-trained model, retrieving it from Hugging Face if needed. Then we use the configuration to instantiate our model using the from_pretrained() method. With this call, the pre-trained model will be loaded, which includes downloading if necessary:

Hugging Face provides a Trainer class that greatly simplifies the training process. This class not only implements the training loop we have been using in the previous chapters, but also handles other useful steps such as saving checkpoints (i.e., intermediate models after a number of mini-batches have been processed during training), and tracking custom measures about model performance. In order to create a Trainer, we first need to specify its configuration in a TrainingArguments object. In ours, we specify certain hyper-parameters such as batch size, weight decay, and number of epochs, as well as where to store model checkpoints:

The TrainingArguments class provides a wide variety of arguments that we have not shown.5 These arguments usually have appropriate default values, so it is often fine to omit them. For example, we did not use the label_names argument, which specifies the key that corresponds to the training labels. When omitted, it defaults to keys such as label, labels, and label_ids.6 In this chapter we used label. Note that we also specify how often we would like to see the performance of the current model (at the end of each epoch) with evaluation_strategy='epoch'. This means that after each epoch we print the current loss on the training partition and on the evaluation dataset, if one is available.

5 https://huggingface.co/docs/transformers/main/en/main_classes/trainer#transformers.TrainingArguments
6 In the case of extractive question answering (see Chapter 16), the start_positions and end_positions store the start/end positions of the correct answers.
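A condensed sketch of these steps, mirroring the notebook code included with this chapter, is shown below; BertForSequenceClassification is the custom class described above, and labels holds the AG News class names loaded during preprocessing.

# Sketch: load the configuration, instantiate the model, and configure training.
# Assumes BertForSequenceClassification and labels are defined as above.
from transformers import AutoConfig, TrainingArguments

transformer_name = 'bert-base-cased'
config = AutoConfig.from_pretrained(transformer_name, num_labels=len(labels))
model = BertForSequenceClassification.from_pretrained(transformer_name, config=config)

training_args = TrainingArguments(
    output_dir=f'{transformer_name}-sequence-classification',
    num_train_epochs=2,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=24,
    evaluation_strategy='epoch',   # evaluate at the end of each epoch
    weight_decay=0.01,
)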
Additionally, we can report custom metrics at this time. For this purpose, we use the compute_metrics parameter of the Trainer, which expects a function that receives a transformers.EvalPrediction object containing the label ids and the predicted logits. The expected return type is a dictionary whose keys correspond to different metrics, each of which will be displayed as a separate result column.

Using the above TrainingArguments and compute_metrics function, we create our Trainer. Note that when you provide a tokenizer, the trainer will automatically pad the sequences in each batch. Also, the trainer will automatically use any GPU that is available, unless specifically disabled in the TrainingArguments. Training our model takes a single call to the train() method of the Trainer object. As specified in our instance of TrainingArguments, the training and validation losses, as well as the accuracy, are reported every epoch:

Epoch   Training Loss   Validation Loss   Accuracy
1       0.187800        0.172629          0.941667
2       0.104000        0.183001          0.946250

As in the other chapters, we can write custom code to obtain the model's predictions on the test data. However, the Trainer class provides a predict() method that drastically simplifies this: As shown in the table above, this model achieves an accuracy of 95%, which is the highest performance we have achieved so far on this dataset.

13.3 Part-of-speech Tagging

To showcase part-of-speech tagging using transformers, we continue with the Spanish section of the AnCora corpus introduced in Chapter 11. Recall that the dataset is stored in the CoNLL-U format. We load this format in the same way as before, but then we convert the loaded dataset into a Hugging Face DatasetDict.

Importantly, because the CoNLL-U dataset is already tokenized, we use the is_split_into_words=True tokenizer argument to ensure that the tokenizer respects the existing word boundaries during its sub-word tokenization. Further, while we want to predict one POS tag per word, any given word may be split into smaller pieces by our tokenizer. Thus, we need to align the tokenizer output to the CoNLL-U words. The original BERT paper (Devlin et al., 2018) addresses this by only using the embedding corresponding to the first sub-token for each word. We follow the same approach for consistency. For the sub-words that do not correspond to the beginning of a word, we use a special value that indicates that we are not interested in their predictions. The CrossEntropyLoss has a parameter called ignore_index for this purpose. The default value for this parameter is −100, which we use as the label for the sub-words we wish to ignore during training:
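A sketch of such an alignment function is shown below. The function name and some details are illustrative rather than the exact listing used in this chapter; the sketch assumes a fast tokenizer, so that word_ids() is available, and assumes the POS tags have already been converted to integer ids.

# Illustrative sketch: tokenize pre-split words and align one label per word,
# assigning -100 to sub-words that do not start a word (ignored by the loss).
def tokenize_and_align_labels(examples):
    tokenized = tokenizer(
        examples['words'],
        is_split_into_words=True,
        truncation=True,
    )
    all_labels = []
    for i, tags in enumerate(examples['tags']):
        word_ids = tokenized.word_ids(batch_index=i)
        labels, previous = [], None
        for word_id in word_ids:
            if word_id is None or word_id == previous:
                labels.append(-100)           # special tokens and word continuations
            else:
                labels.append(tags[word_id])  # first sub-word keeps the POS tag id
            previous = word_id
        all_labels.append(labels)
    tokenized['labels'] = all_labels
    return tokenized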
Next, we use this function to preprocess the train and validation folds in our DatasetDict. [The resulting dataset, displayed as a pandas dataframe, has 14,305 rows and five columns: words, tags, input_ids, attention_mask, and labels.]

Next, we implement our model class that uses a transformer encoder as a transducer. Because our downstream task consists of POS tagging for Spanish, we need a transformer model that was pre-trained on Spanish texts. Here, we chose XLM-RoBERTa (Conneau et al., 2019) as our base model. XLM-RoBERTa is a RoBERTa model (Liu et al., 2019) that has been pre-trained on 100 different languages, including Spanish. Of note, XLM-RoBERTa does not require us to specify what language we are working on. Similar to BERT, it only requires the input_ids.

We discussed in the text classification section that Hugging Face provides implementations for text classification models. This is also true for token classification problems that require transducers. In particular, the XLMRobertaForTokenClassification model provided by Hugging Face does everything needed for this task. However, as before, here we implement it ourselves for pedagogical purposes. The model architecture is similar to our text classification example. It consists of a transformer, a dropout layer, and a linear layer used for classification. The number of labels, which determines the output dimension of the linear layer, is equal to the number of POS tags. The primary difference between the text classification example and this token classification model is that with the former we produced one label for each text document, while here we produce one label for each token in the input text. Specifically, in our text classification model the output shape was two-dimensional: (batch_size, num_labels). Here, our output is three-dimensional: (batch_size, sequence_size, num_labels). So, while much of the forward method is familiar to us, when we are required to compute the loss, we need to reshape the logits and the labels before passing them to the CrossEntropyLoss, since it expects two-dimensional input and one-dimensional labels. For this purpose, we use the view() method to reshape the tensors. This method is efficient because it does not copy the tensor data. Instead it provides a new view of the same data that behaves like a tensor with a different shape.7 As mentioned before, the number of arguments passed to this method determines the number of dimensions in the output tensor. Here, for our logits, we pass two arguments and so our new view will have two dimensions. The second will be the size of self.num_labels, while the first (because we pass -1) will be inferred based on the original tensor shape. For our labels, on the other hand, we only provide one argument and so the new view will have one dimension, inferred by the original shape:

7 Similar to NumPy, PyTorch tensors are represented internally by a block of memory storing the data and some metadata that describes how the data should be read, e.g., type, shape, and stride. The view() method returns a new tensor with new metadata but pointing to the same memory block.

Next, we instantiate our model using the XLM-RoBERTa configuration: As before, we create a TrainingArguments object and define a compute_metrics function in order to customize a Trainer: While the TrainingArguments code has no substantial changes, we need to adjust the compute_metrics function to account for the fact that our model uses sub-word tokens rather than complete words. Recall that only the first sub-word token per word was assigned a POS tag. This function discards the labels corresponding to the ignored sub-word tokens and evaluates the rest, returning the accuracy score:
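An illustrative sketch of such a compute_metrics function, not necessarily the exact listing used in this chapter, is shown below.

# Illustrative sketch of compute_metrics for token classification: flatten
# predictions and labels, drop positions labeled -100, and report accuracy.
import numpy as np
from sklearn.metrics import accuracy_score

def compute_metrics(eval_pred):
    y_true = eval_pred.label_ids.reshape(-1)
    y_pred = np.argmax(eval_pred.predictions, axis=-1).reshape(-1)
    mask = y_true != -100   # keep only the first sub-word of each word
    return {'accuracy': accuracy_score(y_true[mask], y_pred[mask])}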
The last component required for the Trainer is a collator. Since this time we are batching sequences of tokens, we need a collator that can pad them dynamically when constructing the batches. The transformers library includes a DataCollatorForTokenClassification specifically for this purpose. Once we have our collator and our trainer object, we can train our model:

Next, we evaluate our newly trained model on the test dataset. For this purpose, we preprocess the data in the same way we did for the train and validation partitions. Then, for convenience, we use the trainer's predict() method to generate the predicted logits using our model:

As before, we use scikit-learn's classification_report() function to display the results of the evaluation. This function expects two one-dimensional lists of labels, so we need to follow a similar approach to the one we employed for text classification. Note that output.label_ids and output.predictions are NumPy arrays rather than PyTorch tensors. This time we use NumPy's reshape() method to reshape the arrays. This method is similar to PyTorch's view() method that we used previously, except that reshape() may copy the array's data in some situations. We discard the labels corresponding to ignored sub-word tokens, and then we print the classification report:

Our model based on XLM-RoBERTa achieves 99% accuracy. This is considerably better than the LSTM-based model developed in Chapter 11. In order to understand the differences between the two methods, we produce below a confusion matrix for the results of each model. Rows in the confusion matrix represent the true labels and columns represent the predicted labels. In the confusion matrices shown below, each cell (i, j) corresponds to the proportion of values with label i that were assigned the label j.8 For a perfect model, all cells in the diagonal would have value 1 and all other cells would have value 0. The code used to generate the confusion matrix is shown below. The confusion matrices for the LSTM and transformer are shown in Figure 13.1 and Figure 13.2, respectively.

8 This is the case because we used the normalize='true' parameter of the confusion_matrix() function.

[Figure 13.1 Confusion matrix corresponding to the LSTM-based part-of-speech tagger developed in Chapter 11.]
[Figure 13.2 Confusion matrix corresponding to the transformer-based part-of-speech tagger.]

The two confusion matrices highlight a couple of important observations. First, the transformer model is considerably better at predicting POS tags with infrequent support in the dataset. For example, the accuracy for predicting the SYM POS tag increased from 38% in the LSTM model to 95% in the transformer model! Equally as impressive, the transformer improved the performance of tags that are extremely common, and, thus, provide plenty of opportunity for both approaches to learn a good model. For example, the accuracy of tagging NOUN, the second most common POS tag in the dataset, increased from 96% in the LSTM model to 99% in the transformer model.
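A minimal sketch of the confusion matrix code referenced above is shown next; it assumes y_true, y_pred, and target_names hold the flattened gold labels, predictions, and tag names from the evaluation step.

# Illustrative sketch: plot a row-normalized confusion matrix with scikit-learn.
# Assumes y_true, y_pred, and target_names come from the evaluation step above.
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay

cm = confusion_matrix(y_true, y_pred, normalize='true')
disp = ConfusionMatrixDisplay(confusion_matrix=cm, display_labels=target_names)
disp.plot(cmap='Blues', xticks_rotation='vertical')
plt.show()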
13.4 Summary

In this chapter we presented two applications driven by the encoder component of a transformer network. First, we used the transformer encoder as an acceptor and implemented a text classification application for English news. Second, we used the encoder as a transducer to develop a Spanish part-of-speech tagger. Both tasks were implemented using pre-trained transformer models from the Hugging Face library. For both applications, the transformer-based methods considerably outperform all approaches introduced in the previous chapters, highlighting the value of the transformer architecture.
9,752
9,887
#!/usr/bin/env python # coding: utf-8 # # Text Classification Using Transformer Networks (BERT) # Some initialization: # In[1]: import random import torch import numpy as np import pandas as pd from tqdm.notebook import tqdm # enable tqdm in pandas tqdm.pandas() # set to True to use the gpu (if there is one available) use_gpu = True # select device device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu') print(f'device: {device.type}') # random seed seed = 1234 # set random seed if seed is not None: print(f'random seed: {seed}') random.seed(seed) np.random.seed(seed) torch.manual_seed(seed) # Read the train/dev/test datasets and create a HuggingFace `Dataset` object: # In[2]: def read_data(filename): # read csv file df = pd.read_csv(filename, header=None) # add column names df.columns = ['label', 'title', 'description'] # make labels zero-based df['label'] -= 1 # concatenate title and description, and remove backslashes df['text'] = df['title'] + " " + df['description'] df['text'] = df['text'].str.replace('\\', ' ', regex=False) return df # In[3]: labels = open('data/ag_news_csv/classes.txt').read().splitlines() train_df = read_data('data/ag_news_csv/train.csv') test_df = read_data('data/ag_news_csv/test.csv') train_df # In[4]: from sklearn.model_selection import train_test_split train_df, eval_df = train_test_split(train_df, train_size=0.9) train_df.reset_index(inplace=True, drop=True) eval_df.reset_index(inplace=True, drop=True) print(f'train rows: {len(train_df.index):,}') print(f'eval rows: {len(eval_df.index):,}') print(f'test rows: {len(test_df.index):,}') # In[5]: from datasets import Dataset, DatasetDict ds = DatasetDict() ds['train'] = Dataset.from_pandas(train_df) ds['validation'] = Dataset.from_pandas(eval_df) ds['test'] = Dataset.from_pandas(test_df) ds # Tokenize the texts: # In[6]: from transformers import AutoTokenizer transformer_name = 'bert-base-cased' tokenizer = AutoTokenizer.from_pretrained(transformer_name) # In[7]: def tokenize(examples): return tokenizer(examples['text'], truncation=True) train_ds = ds['train'].map( tokenize, batched=True, remove_columns=['title', 'description', 'text'], ) eval_ds = ds['validation'].map( tokenize, batched=True, remove_columns=['title', 'description', 'text'], ) train_ds.to_pandas() # Create the transformer model: # In[8]: from torch import nn from transformers.modeling_outputs import SequenceClassifierOutput from transformers.models.bert.modeling_bert import BertModel, BertPreTrainedModel # https://github.com/huggingface/transformers/blob/65659a29cf5a079842e61a63d57fa24474288998/src/transformers/models/bert/modeling_bert.py#L1486 class BertForSequenceClassification(BertPreTrainedModel): def __init__(self, config): super().__init__(config) self.num_labels = config.num_labels self.bert = BertModel(config) self.dropout = nn.Dropout(config.hidden_dropout_prob) self.classifier = nn.Linear(config.hidden_size, config.num_labels) self.init_weights() def forward(self, input_ids=None, attention_mask=None, token_type_ids=None, labels=None, **kwargs): outputs = self.bert( input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, **kwargs, ) cls_outputs = outputs.last_hidden_state[:, 0, :] cls_outputs = self.dropout(cls_outputs) logits = self.classifier(cls_outputs) loss = None if labels is not None: loss_fn = nn.CrossEntropyLoss() loss = loss_fn(logits, labels) return SequenceClassifierOutput( loss=loss, logits=logits, hidden_states=outputs.hidden_states, attentions=outputs.attentions, ) # In[9]: from 
transformers import AutoConfig config = AutoConfig.from_pretrained( transformer_name, num_labels=len(labels), ) model = ( BertForSequenceClassification .from_pretrained(transformer_name, config=config) ) # Create the trainer object and train: # In[10]: from transformers import TrainingArguments num_epochs = 2 batch_size = 24 weight_decay = 0.01 model_name = f'{transformer_name}-sequence-classification' training_args = TrainingArguments( output_dir=model_name, log_level='error', num_train_epochs=num_epochs, per_device_train_batch_size=batch_size, per_device_eval_batch_size=batch_size, evaluation_strategy='epoch', weight_decay=weight_decay, ) # In[11]: from sklearn.metrics import accuracy_score def compute_metrics(eval_pred): y_true = eval_pred.label_ids y_pred = np.argmax(eval_pred.predictions, axis=-1) return {'accuracy': accuracy_score(y_true, y_pred)} # In[12]: from transformers import Trainer trainer = Trainer( model=model, args=training_args, compute_metrics=compute_metrics, train_dataset=train_ds, eval_dataset=eval_ds, tokenizer=tokenizer, ) # In[13]: trainer.train() # Evaluate on the test partition: # In[14]: test_ds = ds['test'].map( tokenize, batched=True, remove_columns=['title', 'description', 'text'], ) test_ds.to_pandas() # In[15]: output = trainer.predict(test_ds) output # In[16]: from sklearn.metrics import classification_report y_true = output.label_ids y_pred = np.argmax(output.predictions, axis=-1) target_names = labels print(classification_report(y_true, y_pred, target_names=target_names))
3,497
3,591
4
chap13-5
chap13-5
5,001
5,151
#!/usr/bin/env python # coding: utf-8 # # Text Classification Using Transformer Networks (DistilBERT) # Some initialization: # In[1]: import random import torch import numpy as np import pandas as pd from tqdm.notebook import tqdm # enable tqdm in pandas tqdm.pandas() # set to True to use the gpu (if there is one available) use_gpu = True # select device device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu') print(f'device: {device.type}') # random seed seed = 1234 # set random seed if seed is not None: print(f'random seed: {seed}') random.seed(seed) np.random.seed(seed) torch.manual_seed(seed) # Read the train/dev/test datasets and create a HuggingFace `Dataset` object: # In[2]: def read_data(filename): # read csv file df = pd.read_csv(filename, header=None) # add column names df.columns = ['label', 'title', 'description'] # make labels zero-based df['label'] -= 1 # concatenate title and description, and remove backslashes df['text'] = df['title'] + " " + df['description'] df['text'] = df['text'].str.replace('\\', ' ', regex=False) return df # In[3]: labels = open('data/ag_news_csv/classes.txt').read().splitlines() train_df = read_data('data/ag_news_csv/train.csv') test_df = read_data('data/ag_news_csv/test.csv') train_df # In[4]: from sklearn.model_selection import train_test_split train_df, eval_df = train_test_split(train_df, train_size=0.9) train_df.reset_index(inplace=True, drop=True) eval_df.reset_index(inplace=True, drop=True) print(f'train rows: {len(train_df.index):,}') print(f'eval rows: {len(eval_df.index):,}') print(f'test rows: {len(test_df.index):,}') # In[5]: from datasets import Dataset, DatasetDict ds = DatasetDict() ds['train'] = Dataset.from_pandas(train_df) ds['validation'] = Dataset.from_pandas(eval_df) ds['test'] = Dataset.from_pandas(test_df) ds # Tokenize the texts: # In[6]: from transformers import AutoTokenizer transformer_name = 'distilbert-base-cased' tokenizer = AutoTokenizer.from_pretrained(transformer_name) # In[7]: def tokenize(examples): return tokenizer(examples['text'], truncation=True) train_ds = ds['train'].map(tokenize, batched=True, remove_columns=['title', 'description', 'text']) eval_ds = ds['validation'].map(tokenize, batched=True, remove_columns=['title', 'description', 'text']) train_ds.to_pandas() # Create the transformer model: # In[8]: from transformers import AutoConfig config = AutoConfig.from_pretrained(transformer_name, num_labels=len(labels)) # In[9]: from transformers.models.distilbert.modeling_distilbert import DistilBertForSequenceClassification model = ( DistilBertForSequenceClassification .from_pretrained(transformer_name, config=config) ) # Create the trainer object and train: # In[10]: from transformers import TrainingArguments num_epochs = 2 batch_size = 24 logging_steps = len(ds['train']) // batch_size model_name = f'{transformer_name}-sequence-classification' training_args = TrainingArguments( output_dir=model_name, log_level='error', num_train_epochs=num_epochs, per_device_train_batch_size=batch_size, per_device_eval_batch_size=batch_size, evaluation_strategy='epoch', weight_decay=0.01, disable_tqdm=False, logging_steps=logging_steps, ) # In[11]: from sklearn.metrics import accuracy_score def compute_metrics(eval_pred): y_true = eval_pred.label_ids y_pred = np.argmax(eval_pred.predictions, axis=-1) return {'accuracy': accuracy_score(y_true, y_pred)} # In[12]: from transformers import Trainer trainer = Trainer( model=model, args=training_args, compute_metrics=compute_metrics, train_dataset=train_ds, 
eval_dataset=eval_ds, tokenizer=tokenizer, ) # In[13]: trainer.train() # Evaluate on the test partition: # In[14]: test_ds = ds['test'].map(tokenize, batched=True, remove_columns=['title', 'description', 'text']) test_ds.to_pandas() # In[15]: output = trainer.predict(test_ds) output # In[16]: from sklearn.metrics import classification_report y_true = output.label_ids y_pred = np.argmax(output.predictions, axis=-1) target_names = labels print(classification_report(y_true, y_pred, target_names=target_names)) # In[ ]:
2,186
2,390
5
chap13-6
chap13-6
13 Using Transformers with the Hugging Face Library One of the key advantages of transformer networks is the ability to take a model that was pre-trained over vast quantities of text and fine-tune it for the task at hand. Intuitively, this strategy allows transformer networks to achieve higher performance on smaller datasets by relying on statistics acquired at scale in an unsupervised way (e.g., through the masked language model training objective). To this end, in this chapter we will use the Hugging Face library,1 which has a rich repository of datasets and pre-trained models, as well as helper methods and classes that make it easy to target downstream tasks. Using pre-trained transformer encoders, we will implement the two tasks that served as use cases in the previous chapters: text classification and part-of-speech tagging. 13.1 Tokenization As discussed in Section 12.2, transformers rely on sub-word tokens. This strategy provides an elegant way to handle unknown and low-frequency words by splitting them into more frequent sub-word parts. At the same time, these tokenization algorithms maintain frequently-occurring words as standalone tokens, so the signal for these common words is preserved. To make this more concrete, we show below how tokenizers are employed in the Hugging Face library. First, we load the tokenizer that corresponds to the transformer we intend to use. This is important for two reasons: (a) different transformers rely on different tokenization algorithms, and (b) even for the ones that use the same algorithm, their tokenizer vocabularies are likely to be different if they were pre-trained 1 https://huggingface.co/docs/transformers/main/en/index 186 13.1 Tokenization 187 on different corpora. Next, we tokenize some example text and display some of the resulting attributes with pandas: As shown above, the tokenizer splits the text into tokens, and adds two special tokens: the [CLS] token at the beginning of the token sequence, and the [SEP] token at the end. Also, note that the ## characters at the beginning of some tokens indicate that they are not standalone words, but rather sub-words that continue a word previously started. For example, the output above shows that the word walrus was split into three sub-words. Note, however, that this is specific to this particular tokenization algorithm, and other tokenizers may indicate word continuation in different ways. A better way to detect word continuations is using the word_ids() method of the tokenizer output, which assigns the same id to all tokens part of the same word. For example, all fragments of the word walrus share the word id 3. Lastly, the input_ids attribute provides the token ids used internally by the transformer to map tokens to embeddings. To briefly demonstrate how different tokenizers produce different outputs, here is the same text tokenized with the tokenizer corresponding to xlm-roberta-base: Note how the [CLS] and [SEP] special tokens have been replaced with <s> and </s> respectively. Also, spaces have been replaced with the Unicode character (U+2581, LOWER ONE EIGHTH BLOCK). Tokens that start with that character are considered word beginnings and the rest are word continuations, as can be confirmed by looking at the word ids. This illustrates the importance of using the tokenizer that corresponds to the transformer you intend to use. 012345678 tokens [CLS] I am the wa ##l ##rus . 
[SEP] word_ids None 0 1 2 3 3 3 4 None input_ids 101 146 1821 1103 20049 1233 6208 119 102 01234567 tokens <s> ▁I ▁am ▁the ▁wal rus . </s > None 0 1 2 3 3 3 None 0 87 444 70 32973 6563 5 2 word_ids input_ids 188 Using Transformers with the Hugging Face Library 13.2 Text Classification For our text classification example, we will continue using the AG News dataset from previous chapters. We will load, preprocess, and split the dataset into pandas dataframes in the same way as before. Now however, rather than continuing with pandas, we will create a Hugging Face dataset from the dataframes. Hugging Face datasets are convenient because of their built-in support of batching, efficient data transformations, and caching. In particular, we convert each dataframe into a Hugging Face dataset. The various datasets are managed with a DatasetDict. Note that this is the same data structure seen when downloading a Hugging Face dataset from their hub.2 The keys in this dictionary are usually train, validation, and test:3 Once our dataset is loaded, we load a tokenizer. Different pre-trained models are tokenized differently, and it is important to select the tokenizer that corresponds to the model we will use so that the inputs are consistent with model expectations. In our example, we will use the bert-base-cased pre-trained model and tokenizer: Datasets have a map() method that transforms the dataset by applying a function to each example. The method returns a new dataset with the transformation applied. We use the map() method to tokenize our dataset. To this end, we define a function that tokenizes an example using the tokenizer we loaded previously. Note that tokenizers support many options that you may need depending on your situation. However, since this is a simple scenario, all we need to do is provide the text to tokenize and specify how to handle texts that exceed the maximum number of tokens permitted by the pre-trained model. Here we have our tokenizer truncate any inputs that are too long by specifying the truncation=True parameter. The output of this function will be added to the new dataset as extra columns. Further, we also want to remove some of the columns that are no longer needed, simplifying subsequent steps. For this, we use the remove_columns argument, listing the columns that we want to discard. Additionally, the dataset’s map() method can batch the dataset; we enable this option with the batched=True argument: 2 https://huggingface.co/datasets
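To make these steps concrete, here is a minimal sketch (not the book's own listing) of the tokenizer inspection and dataset preparation just described. It assumes that the AG News dataframes train_df, valid_df, and test_df from the earlier chapters are already in memory and contain the title, description, and text columns referenced later in this chapter; everything else is standard Hugging Face usage.

from datasets import Dataset, DatasetDict
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('bert-base-cased')

# inspect the sub-word tokens and word ids for an example sentence
output = tokenizer('I am the walrus.')
print(tokenizer.convert_ids_to_tokens(output['input_ids']))
print(output.word_ids())

# wrap the existing pandas dataframes in a Hugging Face DatasetDict
ds = DatasetDict()
ds['train'] = Dataset.from_pandas(train_df)
ds['validation'] = Dataset.from_pandas(valid_df)
ds['test'] = Dataset.from_pandas(test_df)

# tokenize each batch of examples, truncating texts that are too long,
# and drop the raw text columns that are no longer needed
def tokenize(examples):
    return tokenizer(examples['text'], truncation=True)

train_ds = ds['train'].map(
    tokenize,
    batched=True,
    remove_columns=['title', 'description', 'text'],
)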
3 These correspond to the more common terms train, development, and test we have used throughout the book so far. In this chapter we use the Hugging Face naming conventions for consistency. [Dataframe view of the tokenized AG News training set: 108,000 rows × 4 columns (label, input_ids, token_type_ids, attention_mask).] Next, we implement a classifier for our task. Hugging Face provides a
variety of models corresponding to several types of downstream tasks. However, for pedagogical purposes, we implement one from scratch. In particular, our model class inherits from BertPreTrainedModel, which
provides several useful methods such as init_weights() and from_pretrained() methods, which we will use later. The model constructor takes a config- uration object as its only parameter. Configuration objects contain all the hyper-parameters used by the corresponding pre-trained models. We will show later how the configuration model is retrieved and customized. Models that implement specific downstream tasks are usually composed of a pre-trained model (sometimes referred as the body), and one or more task-specific layers (usually referred as the head). Here, we initialize a BertModel using the provided configuration, as well as a dropout layer and a task-specific linear layer used for classifying the Bert output. Each of these layers is initialized by calling the init_weights() method inherited from BertPreTrainedModel. The forward() method, which implements the task-specific forward pass, takes as arguments the outputs of the tokenizer, and, optionally, the gold labels corresponding to the input data points. Our implementation of the forward pass sends the input tokens to the Bert model to produce the contextualized representations for all tokens. This output has several components, including the last_hidden_state which con- 190 Using Transformers with the Hugging Face Library tains the final hidden-state embedding for each token. For our task, we will represent the whole sequence using the embedding for the [CLS] token that occurs at the start of each example. We retrieve it by selecting the first element of each output sequence in the batch (i.e., last_hidden_state[:, 0, :]). As in the previous chapters, we apply dropout to our sequence representation, and then pass it through our linear classification layer. If gold labels are provided (i.e., we are training), we now compute the loss using the cross-entropy loss. The output of the forward pass is wrapped in a Hugging Face SequenceClassifierOutput object4 and returned: Next we load the configuration of the pre-trained model and instantiate our model. The AutoConfig class can load the configuration for any pre-trained model, retrieving it from Hugging Face if needed. Then we use the configuration to instantiate our model using the from_pretrained() method. With this call, the pre-trained model will be loaded, which includes downloading if necessary: Hugging Face provides a Trainer class that greatly simplifies the training process. This class not only implements the training loop we have been using in the previous chapters, but also handles other useful steps such as saving checkpoints (i.e., intermediate models after a number of mini-batches have been processed during training), and tracking custom measures about model performance. In order to create a Trainer, we first need to specify its configuration in a TrainingArguments object. In ours, we specify certain hyper parameters such as batch size, weight decay, and number of epochs, as well as where to store model checkpoints: The TrainingArguments class provides a wide variety of arguments that we have not shown.5 These arguments usually have appropriate default values, so it is often fine to omit them. For example, we did not use the label_names argument, which specifies the key that corresponds to the training labels. When omitted, it defaults to keys such as label, labels and label_ids.6 In this chapter we used label. Note that we also specify how often we would like to see the perfor- . 4  Hugging Face utilizes a set of output objects to standardize model output for a given task. 
These objects typically include additional information, e.g., attention weights, which can be used for visualizing or debugging model behavior. 
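To make the description above concrete, a classifier along these lines could be sketched as follows. This is a sketch rather than the book's exact listing: the class name BertForTextClassification is a placeholder, and the code simply follows the architecture described in the text (a BertModel body, dropout, and a linear head over the [CLS] embedding, with an optional cross-entropy loss).

from torch import nn
from transformers import BertModel, BertPreTrainedModel
from transformers.modeling_outputs import SequenceClassifierOutput

class BertForTextClassification(BertPreTrainedModel):
    def __init__(self, config):
        super().__init__(config)
        self.num_labels = config.num_labels
        # body: pre-trained BERT encoder
        self.bert = BertModel(config)
        # head: dropout followed by a task-specific linear classifier
        self.dropout = nn.Dropout(config.hidden_dropout_prob)
        self.classifier = nn.Linear(config.hidden_size, config.num_labels)
        self.init_weights()

    def forward(self, input_ids=None, attention_mask=None,
                token_type_ids=None, labels=None, **kwargs):
        outputs = self.bert(
            input_ids,
            attention_mask=attention_mask,
            token_type_ids=token_type_ids,
            **kwargs,
        )
        # represent the whole sequence with the [CLS] token embedding
        cls_output = outputs.last_hidden_state[:, 0, :]
        cls_output = self.dropout(cls_output)
        logits = self.classifier(cls_output)
        loss = None
        if labels is not None:
            # compute the loss only when gold labels are provided
            loss_fn = nn.CrossEntropyLoss()
            loss = loss_fn(logits, labels)
        return SequenceClassifierOutput(
            loss=loss,
            logits=logits,
            hidden_states=outputs.hidden_states,
            attentions=outputs.attentions,
        )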
5 https://huggingface.co/docs/transformers/main/en/main_classes/trainer#transformers.TrainingArguments
 6 In the case of extractive question answering (see Chapter 16), the start_positions and end_positions store the start/end positions of the correct answers. 13.3 Part-of-speech Tagging 191 mance of the current model (at the end of each epoch) with evaluation_strategy='epoch'. This means that after each epoch we print the current loss on the training
partition and on the evaluation dataset, if one is available. Additionally,
we can report custom metrics at this time. For this purpose, we use the compute_metrics parameter of the Trainer, which expects a function that receives a transformers.EvalPrediction object containing the label ids and the predicted logits. The expected return type is a dictionary whose keys correspond to different metrics, each of which will be displayed as a separate result column. Using the above TrainingArguments and compute_metrics function, we create our Trainer. Note that when you provide a tokenizer, the trainer will automatically pad the sequences in each batch. Also, the trainer will automatically use any GPU that is available, unless specifically disabled in the TrainingArguments. Training our model takes a single call to the train() method of the Trainer object. As specified in our instance of TrainingArguments, the training and validation losses, as well as the accuracy, are reported every epoch:

Epoch  Training Loss  Validation Loss  Accuracy
1      0.187800       0.172629        0.941667
2      0.104000       0.183001        0.946250

As in the other chapters, we can write custom code to obtain the model's predictions on the test data. However, the Trainer class provides a predict() method that drastically simplifies this: As shown in the table above, this model achieves an accuracy of 95%, which is the highest performance we have achieved so far on this dataset. 13.3 Part-of-speech Tagging To showcase part-of-speech tagging using transformers, we continue with the Spanish section of the AnCora corpus introduced in Chapter 11. Recall that the dataset is stored in the CoNLL-U format. We load this format in the same way as before, but then we convert the loaded dataset into a Hugging Face DatasetDict: Importantly, because the CoNLL-U dataset is already tokenized, we use the is_split_into_words=True tokenizer argument to ensure that the tokenizer respects the existing word boundaries during its sub-word tokenization. Further, while we want to predict one POS tag per word, any given word may be split into smaller pieces by our tokenizer. Thus, we need to align the tokenizer output to the CoNLL-U words. The original BERT paper (Devlin et al., 2018) addresses this by only using the embedding corresponding to the first sub-token for each word. We follow the same approach for consistency. For the sub-words that do not correspond to the beginning of a word, we use a special value that indicates that we are not interested in their predictions. The CrossEntropyLoss has a parameter called ignore_index for this purpose. The default value for this parameter is −100, which we use as the label for the sub-words we wish to ignore during training: Next, we use this function to preprocess the train and validation folds in our DatasetDict: [Dataframe view of the preprocessed AnCora training set: 14,305 rows × 5 columns (words, tags, input_ids, attention_mask, labels), where sub-word positions that do not begin a word receive the label -100.] Next, we implement our model class that uses a transformer encoder as a transducer. Because our downstream task consists of POS tagging for Spanish, we need a transformer model that was pre-trained on Spanish texts. Here, we chose XLM-RoBERTa (Conneau et al., 2019) as our base model. XLM-RoBERTa is a RoBERTa model (Liu et al., 2019) that has been pre-trained on 100 different languages, including Spanish. Of note, XLM-RoBERTa does not require us to specify what language we are working on. Similar to BERT, it only requires the input_ids. We discussed in the text classification section that Hugging Face provides implementations for text classification models. This is also true for token classification problems that require transducers. In particular, the XLMRobertaForTokenClassification model provided by Hugging Face does everything needed for this task. However, as before, here we implement it ourselves for pedagogical purposes. The model architecture is similar to our text classification example. It consists of a transformer, a dropout layer, and a linear layer used for classification. The number of labels, which determines the output dimension of the linear layer, is equal to the number of POS tags. The primary difference between the text classification example and this token classification model is that with the former we produced one label for each text document, while here we produce one label for each token in the input text. Specifically, in our text classification model the output shape was two-dimensional: (batch_size, num_labels). Here, our output is three-dimensional: (batch_size, sequence_size, num_labels). So, while much of the forward method is familiar to us, when we are required to compute the loss, we need to reshape the logits and the labels before passing them to the CrossEntropyLoss, since it expects two-dimensional input and one-dimensional labels. For this purpose, we use the view() method to reshape the tensors. This method is efficient because it does not copy the tensor data. Instead it provides a new view of the same data that behaves like a tensor with a different shape.7 As mentioned before, the number of arguments passed to this method determines the number of dimensions in the output tensor. Here, for our logits, we pass two arguments and so our new view will have two dimensions. The second will be the size of self.num_labels, while the first (because we pass -1) will be inferred based on the original tensor shape. For our labels, on the other hand, we only provide one argument and so the new view will have one dimension, inferred by the original shape: Next, we instantiate our model using the XLM-RoBERTa configuration: 7 Similar to NumPy, PyTorch tensors are represented internally by a block of memory storing the data and some metadata that describes how the data should be read, e.g., type, shape, and stride. The view() method returns a new tensor with new metadata but pointing to the same memory block. As before, we create a TrainingArguments object and define a compute_metrics function in order to customize a Trainer: While the TrainingArguments code has no substantial changes, we need to adjust the compute_metrics function to account for the fact that our model uses sub-word tokens rather than complete words. Recall that only the first sub-word token per word was assigned a POS tag.
This function discards the labels corresponding to the ignored sub-word tokens and evaluates the rest, returning the accuracy score: The last component required for the Trainer is a collator. Since this time we are batching sequences of tokens, we need a collator that can pad them dynamically when constructing the batches. The transformers library includes a DataCollatorForTokenClassification specifically for this purpose. Once we have our collator and our trainer object, we can train our model: Next, we evaluate our newly trained model on the test dataset. For this purpose, we preprocess the data in the same way we did for the train and validation partitions. Then, for convenience, we use the trainer's predict() method to generate the predicted logits using our model: As before, we use scikit-learn's classification_report() function to display the results of the evaluation. This function expects two one-dimensional lists of labels, so we need to follow a similar approach to the one we employed for text classification. Note that output.label_ids and output.predictions are NumPy arrays rather than PyTorch tensors. This time we use NumPy's reshape() method to reshape the arrays. This method is similar to PyTorch's view() method that we used previously, except that reshape() may copy the array's data in some situations. We discard the labels corresponding to ignored sub-word tokens, and then we print the classification report: Our model based on XLM-RoBERTa achieves 99% accuracy. This is considerably better than the LSTM-based model developed in Chapter 11. In order to understand the differences between the two methods, we produce below a confusion matrix for the results of each model. Rows in the confusion matrix represent the true labels and columns represent the predicted labels. In the confusion matrices shown below, each cell x_ij corresponds to the proportion of values with label i that were assigned the label j.8 For a perfect model, all cells in the diagonal would have value 1 and all other cells would have value 0. The code used to generate the confusion matrix is shown below. The confusion matrices for the LSTM and transformer are shown in Figure 13.1 and Figure 13.2, respectively. 8 This is the case because we used the normalize='true' parameter of the confusion_matrix() function. [Figure 13.1: Confusion matrix corresponding to the LSTM-based part-of-speech tagger developed in Chapter 11.] [Figure 13.2: Confusion matrix corresponding to the transformer-based part-of-speech tagger.] The two confusion matrices highlight a couple of important observations. First, the transformer model is considerably better at predicting POS tags with infrequent support in the dataset. For example, the accuracy for predicting the SYM POS tag increased from 38% in the LSTM model to 95% in the transformer model! Equally impressive, the transformer improved the performance of tags that are extremely common, and, thus, provide plenty of opportunity for both approaches to learn a good model. For example, the accuracy of tagging NOUN, the second most common POS tag in the dataset, increased from 96% in the LSTM model to 99% in the transformer model. 13.4 Summary In this chapter we presented two applications driven by the encoder component of a transformer network. First, we used the transformer encoder as an acceptor and implemented a text classification application for English news. Second, we used the encoder as a transducer to develop a Spanish part-of-speech tagger. Both tasks were implemented using pre-trained transformer models from the Hugging Face library. For both applications, the transformer-based methods considerably outperform all approaches introduced in the previous chapters, highlighting the value of the transformer architecture.
22,245
22,352
#!/usr/bin/env python # coding: utf-8 # # Part-of-speech Tagging with Transformer Networks # Some initialization: # In[1]: import random import torch import numpy as np import pandas as pd from tqdm.notebook import tqdm # enable tqdm in pandas tqdm.pandas() # set to True to use the gpu (if there is one available) use_gpu = True # select device device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu') print(f'device: {device.type}') # random seed seed = 1234 # set random seed if seed is not None: print(f'random seed: {seed}') random.seed(seed) np.random.seed(seed) torch.manual_seed(seed) # Read the words and POS tags from the Spanish dataset: # In[2]: from conllu import parse_incr def read_tags(filename): data = {'words': [], 'tags': []} with open(filename) as f: for sent in tqdm(parse_incr(f)): words = [tok['form'] for tok in sent] tags = [tok['upos'] for tok in sent] data['words'].append(words) data['tags'].append(tags) return pd.DataFrame(data) # In[3]: train_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-train.conllup') valid_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-dev.conllup') test_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-test.conllup') # In[4]: tags = train_df['tags'].explode().unique() index_to_tag = {i:t for i,t in enumerate(tags)} tag_to_index = {t:i for i,t in enumerate(tags)} # Create a HuggingFace `DatasetDict` object: # In[5]: from datasets import Dataset, DatasetDict ds = DatasetDict() ds['train'] = Dataset.from_pandas(train_df) ds['validation'] = Dataset.from_pandas(valid_df) ds['test'] = Dataset.from_pandas(test_df) ds # In[6]: ds['train'].to_pandas() # Now tokenize the texts and assign POS labels to the first token in each word: # In[7]: from transformers import AutoTokenizer transformer_name = 'xlm-roberta-base' tokenizer = AutoTokenizer.from_pretrained(transformer_name) # In[8]: x = ds['train'][0] tokenized_input = tokenizer(x['words'], is_split_into_words=True) tokens = tokenizer.convert_ids_to_tokens(tokenized_input['input_ids']) word_ids = tokenized_input.word_ids() pd.DataFrame([tokens, word_ids], index=['tokens', 'word ids']) # In[9]: # https://arxiv.org/pdf/1810.04805.pdf # Section 5.3 # We use the representation of the first sub-token as the input to the token-level classifier over the NER label set. 
# default value for CrossEntropyLoss ignore_index parameter ignore_index = -100 def tokenize_and_align_labels(batch): labels = [] # tokenize batch tokenized_inputs = tokenizer( batch['words'], truncation=True, is_split_into_words=True, ) # iterate over batch elements for i, tags in enumerate(batch['tags']): label_ids = [] previous_word_id = None # get word ids for current batch element word_ids = tokenized_inputs.word_ids(batch_index=i) # iterate over tokens in batch element for word_id in word_ids: if word_id is None or word_id == previous_word_id: # ignore if not a word or word id has already been seen label_ids.append(ignore_index) else: # get tag id for corresponding word tag_id = tag_to_index[tags[word_id]] label_ids.append(tag_id) # remember this word id previous_word_id = word_id # save label ids for current batch element labels.append(label_ids) # store labels together with the tokenizer output tokenized_inputs['labels'] = labels return tokenized_inputs # In[10]: train_ds = ds['train'].map(tokenize_and_align_labels, batched=True) eval_ds = ds['validation'].map(tokenize_and_align_labels, batched=True) train_ds.to_pandas() # Create our transformer model: # In[11]: from torch import nn from transformers.modeling_outputs import TokenClassifierOutput from transformers.models.roberta.modeling_roberta import RobertaModel, RobertaPreTrainedModel # https://github.com/huggingface/transformers/blob/65659a29cf5a079842e61a63d57fa24474288998/src/transformers/models/roberta/modeling_roberta.py#L1346 class XLMRobertaForTokenClassification(RobertaPreTrainedModel): def __init__(self, config): super().__init__(config) self.num_labels = config.num_labels self.roberta = RobertaModel(config, add_pooling_layer=False) self.dropout = nn.Dropout(config.hidden_dropout_prob) self.classifier = nn.Linear(config.hidden_size, config.num_labels) self.init_weights() def forward(self, input_ids=None, attention_mask=None, token_type_ids=None, labels=None, **kwargs): outputs = self.roberta( input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, **kwargs, ) sequence_output = self.dropout(outputs[0]) logits = self.classifier(sequence_output) loss = None if labels is not None: loss_fn = nn.CrossEntropyLoss() inputs = logits.view(-1, self.num_labels) targets = labels.view(-1) loss = loss_fn(inputs, targets) return TokenClassifierOutput( loss=loss, logits=logits, hidden_states=outputs.hidden_states, attentions=outputs.attentions, ) # In[12]: from transformers import AutoConfig config = AutoConfig.from_pretrained( transformer_name, num_labels=len(index_to_tag), ) model = ( XLMRobertaForTokenClassification .from_pretrained(transformer_name, config=config) ) # Create the `Trainer` object and train: # In[13]: from transformers import TrainingArguments num_epochs = 2 batch_size = 24 weight_decay = 0.01 model_name = f'{transformer_name}-finetuned-pos-es' training_args = TrainingArguments( output_dir=model_name, log_level='error', num_train_epochs=num_epochs, per_device_train_batch_size=batch_size, per_device_eval_batch_size=batch_size, evaluation_strategy='epoch', weight_decay=weight_decay, ) # In[14]: from sklearn.metrics import accuracy_score def compute_metrics(eval_pred): # gold labels label_ids = eval_pred.label_ids # predictions pred_ids = np.argmax(eval_pred.predictions, axis=-1) # collect gold and predicted labels, ignoring ignore_index label y_true, y_pred = [], [] batch_size, seq_len = pred_ids.shape for i in range(batch_size): for j in range(seq_len): if label_ids[i, j] != ignore_index: 
y_true.append(index_to_tag[label_ids[i][j]]) y_pred.append(index_to_tag[pred_ids[i][j]]) # return computed metrics return {'accuracy': accuracy_score(y_true, y_pred)} # In[15]: from transformers import Trainer from transformers import DataCollatorForTokenClassification data_collator = DataCollatorForTokenClassification(tokenizer) trainer = Trainer( model=model, args=training_args, data_collator=data_collator, compute_metrics=compute_metrics, train_dataset=train_ds, eval_dataset=eval_ds, tokenizer=tokenizer, ) trainer.train() # Evaluate on the test partition: # In[16]: test_ds = ds['test'].map( tokenize_and_align_labels, batched=True, ) output = trainer.predict(test_ds) # In[17]: from sklearn.metrics import classification_report num_labels = model.num_labels label_ids = output.label_ids.reshape(-1) predictions = output.predictions.reshape(-1, num_labels) predictions = np.argmax(predictions, axis=-1) mask = label_ids != ignore_index y_true = label_ids[mask] y_pred = predictions[mask] target_names = tags[:-1] report = classification_report( y_true, y_pred, target_names=target_names ) print(report) # In[18]: import matplotlib.pyplot as plt from sklearn.metrics import ConfusionMatrixDisplay, confusion_matrix cm = confusion_matrix(y_true, y_pred, normalize='true') disp = ConfusionMatrixDisplay( confusion_matrix=cm, display_labels=target_names, ) fig, ax = plt.subplots(figsize=(10,10)) disp.plot( cmap='Blues', values_format='.2f', colorbar=False, ax=ax, xticks_rotation=45, )
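As a usage example, the fine-tuned tagger can also be applied to a new, already-tokenized Spanish sentence. The sketch below is not part of the original notebook: it assumes that the model, tokenizer, and index_to_tag objects defined above are still in memory, and the example sentence is invented. Following the alignment strategy used during training, only the prediction for the first sub-word token of each word is kept.

import torch

words = ['La', 'niña', 'come', 'una', 'manzana', '.']  # invented example, pre-split into words
model.eval()  # disable dropout for inference
inputs = tokenizer(words, is_split_into_words=True, return_tensors='pt').to(model.device)
with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = logits.argmax(dim=-1)[0].tolist()
word_ids = inputs.word_ids()
# keep the prediction of the first sub-word token of each word
tags = []
previous_word_id = None
for token_index, word_id in enumerate(word_ids):
    if word_id is not None and word_id != previous_word_id:
        tags.append(index_to_tag[pred_ids[token_index]])
    previous_word_id = word_id
print(list(zip(words, tags)))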
7,764
7,862
6
chap13-7
chap13-7
15,154
15,245
13 Using Transformers with the Hugging Face Library

One of the key advantages of transformer networks is the ability to take a model that was pre-trained over vast quantities of text and fine-tune it for the task at hand. Intuitively, this strategy allows transformer networks to achieve higher performance on smaller datasets by relying on statistics acquired at scale in an unsupervised way (e.g., through the masked language model training objective). To this end, in this chapter we will use the Hugging Face library,1 which has a rich repository of datasets and pre-trained models, as well as helper methods and classes that make it easy to target downstream tasks. Using pre-trained transformer encoders, we will implement the two tasks that served as use cases in the previous chapters: text classification and part-of-speech tagging.

1 https://huggingface.co/docs/transformers/main/en/index

13.1 Tokenization

As discussed in Section 12.2, transformers rely on sub-word tokens. This strategy provides an elegant way to handle unknown and low-frequency words by splitting them into more frequent sub-word parts. At the same time, these tokenization algorithms maintain frequently-occurring words as standalone tokens, so the signal for these common words is preserved. To make this more concrete, we show below how tokenizers are employed in the Hugging Face library. First, we load the tokenizer that corresponds to the transformer we intend to use. This is important for two reasons: (a) different transformers rely on different tokenization algorithms, and (b) even for the ones that use the same algorithm, their tokenizer vocabularies are likely to be different if they were pre-trained on different corpora. Next, we tokenize some example text and display some of the resulting attributes with pandas:
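A minimal sketch of these two steps, assuming the bert-base-cased checkpoint whose output is shown in the table below; the example sentence and the pandas display are illustrative choices, not fixed by the library:

from transformers import AutoTokenizer
import pandas as pd

# load the tokenizer that matches the pre-trained model we plan to use
tokenizer = AutoTokenizer.from_pretrained('bert-base-cased')

# tokenize an example sentence
encoding = tokenizer('I am the walrus.')
tokens = tokenizer.convert_ids_to_tokens(encoding['input_ids'])

# display tokens, word ids, and input ids side by side
pd.DataFrame(
    [tokens, encoding.word_ids(), encoding['input_ids']],
    index=['tokens', 'word_ids', 'input_ids'],
)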
tokens     [CLS]  I    am    the   wa     ##l   ##rus  .    [SEP]
word_ids   None   0    1     2     3      3     3      4    None
input_ids  101    146  1821  1103  20049  1233  6208   119  102

As shown above, the tokenizer splits the text into tokens, and adds two special tokens: the [CLS] token at the beginning of the token sequence, and the [SEP] token at the end. Also, note that the ## characters at the beginning of some tokens indicate that they are not standalone words, but rather sub-words that continue a word previously started. For example, the output above shows that the word walrus was split into three sub-words. Note, however, that this is specific to this particular tokenization algorithm, and other tokenizers may indicate word continuation in different ways. A better way to detect word continuations is using the word_ids() method of the tokenizer output, which assigns the same id to all tokens part of the same word. For example, all fragments of the word walrus share the word id 3. Lastly, the input_ids attribute provides the token ids used internally by the transformer to map tokens to embeddings.

To briefly demonstrate how different tokenizers produce different outputs, here is the same text tokenized with the tokenizer corresponding to xlm-roberta-base:

tokens     <s>   ▁I   ▁am  ▁the  ▁wal   rus   .   </s>
word_ids   None  0    1    2     3      3     3   None
input_ids  0     87   444  70    32973  6563  5   2

Note how the [CLS] and [SEP] special tokens have been replaced with <s> and </s> respectively. Also, spaces have been replaced with the Unicode character ▁ (U+2581, LOWER ONE EIGHTH BLOCK). Tokens that start with that character are considered word beginnings and the rest are word continuations, as can be confirmed by looking at the word ids. This illustrates the importance of using the tokenizer that corresponds to the transformer you intend to use.

13.2 Text Classification

For our text classification example, we will continue using the AG News dataset from previous chapters. We will load, preprocess, and split the dataset into pandas dataframes in the same way as before. Now, however, rather than continuing with pandas, we will create a Hugging Face dataset from the dataframes. Hugging Face datasets are convenient because of their built-in support of batching, efficient data transformations, and caching. In particular, we convert each dataframe into a Hugging Face dataset. The various datasets are managed with a DatasetDict. Note that this is the same data structure seen when downloading a Hugging Face dataset from their hub.2 The keys in this dictionary are usually train, validation, and test:3

2 https://huggingface.co/datasets
3 These correspond to the more common terms train, development, and test we have used throughout the book so far. In this chapter we use the Hugging Face naming conventions for consistency.
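A sketch of this conversion, following the accompanying notebook code, where train_df, eval_df, and test_df are assumed to be the pandas dataframes produced by the earlier preprocessing:

from datasets import Dataset, DatasetDict

# wrap each dataframe into a Hugging Face dataset and group them in a DatasetDict
ds = DatasetDict()
ds['train'] = Dataset.from_pandas(train_df)
ds['validation'] = Dataset.from_pandas(eval_df)
ds['test'] = Dataset.from_pandas(test_df)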
Once our dataset is loaded, we load a tokenizer. Different pre-trained models are tokenized differently, and it is important to select the tokenizer that corresponds to the model we will use so that the inputs are consistent with model expectations. In our example, we will use the bert-base-cased pre-trained model and tokenizer.

Datasets have a map() method that transforms the dataset by applying a function to each example. The method returns a new dataset with the transformation applied. We use the map() method to tokenize our dataset. To this end, we define a function that tokenizes an example using the tokenizer we loaded previously. Note that tokenizers support many options that you may need depending on your situation. However, since this is a simple scenario, all we need to do is provide the text to tokenize and specify how to handle texts that exceed the maximum number of tokens permitted by the pre-trained model. Here we have our tokenizer truncate any inputs that are too long by specifying the truncation=True parameter. The output of this function will be added to the new dataset as extra columns. Further, we also want to remove some of the columns that are no longer needed, simplifying subsequent steps. For this, we use the remove_columns argument, listing the columns that we want to discard. Additionally, the dataset's map() method can batch the dataset; we enable this option with the batched=True argument:
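A sketch of the tokenization step, mirroring the accompanying notebook; the removed columns (title, description, text) are assumed to be the ones created during the earlier AG News preprocessing:

from transformers import AutoTokenizer

transformer_name = 'bert-base-cased'
tokenizer = AutoTokenizer.from_pretrained(transformer_name)

def tokenize(examples):
    # truncate any text that exceeds the model's maximum input length
    return tokenizer(examples['text'], truncation=True)

# tokenize in batches and drop the columns we no longer need
train_ds = ds['train'].map(
    tokenize,
    batched=True,
    remove_columns=['title', 'description', 'text'],
)
eval_ds = ds['validation'].map(
    tokenize,
    batched=True,
    remove_columns=['title', 'description', 'text'],
)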
The result is a tokenized dataset with 108,000 training rows and four columns: label, input_ids, token_type_ids, and attention_mask (the original title, description, and text columns have been removed).

Next, we implement a classifier for our task. Hugging Face provides a
variety of models corresponding to several types of downstream tasks. However, for pedagogical purposes, we implement one from scratch. In particular, our model class inherits from BertPreTrainedModel, which
provides several useful methods such as init_weights() and from_pretrained(), which we will use later. The model constructor takes a configuration object as its only parameter. Configuration objects contain all the hyper-parameters used by the corresponding pre-trained models. We will show later how the configuration object is retrieved and customized. Models that implement specific downstream tasks are usually composed of a pre-trained model (sometimes referred to as the body), and one or more task-specific layers (usually referred to as the head). Here, we initialize a BertModel using the provided configuration, as well as a dropout layer and a task-specific linear layer used for classifying the Bert output. Each of these layers is initialized by calling the init_weights() method inherited from BertPreTrainedModel.

The forward() method, which implements the task-specific forward pass, takes as arguments the outputs of the tokenizer, and, optionally, the gold labels corresponding to the input data points. Our implementation of the forward pass sends the input tokens to the Bert model to produce the contextualized representations for all tokens. This output has several components, including the last_hidden_state, which contains the final hidden-state embedding for each token. For our task, we will represent the whole sequence using the embedding for the [CLS] token that occurs at the start of each example. We retrieve it by selecting the first element of each output sequence in the batch (i.e., last_hidden_state[:, 0, :]). As in the previous chapters, we apply dropout to our sequence representation, and then pass it through our linear classification layer. If gold labels are provided (i.e., we are training), we now compute the loss using the cross-entropy loss. The output of the forward pass is wrapped in a Hugging Face SequenceClassifierOutput object4 and returned:

4 Hugging Face utilizes a set of output objects to standardize model output for a given task. These objects typically include additional information, e.g., attention weights, which can be used for visualizing or debugging model behavior.
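The model class itself, essentially as it appears in the accompanying notebook (shown here as a sketch; optional arguments beyond those used in this chapter are omitted):

from torch import nn
from transformers.modeling_outputs import SequenceClassifierOutput
from transformers.models.bert.modeling_bert import BertModel, BertPreTrainedModel

class BertForSequenceClassification(BertPreTrainedModel):
    def __init__(self, config):
        super().__init__(config)
        self.num_labels = config.num_labels
        # body: pre-trained BERT encoder
        self.bert = BertModel(config)
        # head: dropout + linear classifier over the [CLS] embedding
        self.dropout = nn.Dropout(config.hidden_dropout_prob)
        self.classifier = nn.Linear(config.hidden_size, config.num_labels)
        self.init_weights()

    def forward(self, input_ids=None, attention_mask=None,
                token_type_ids=None, labels=None, **kwargs):
        outputs = self.bert(
            input_ids,
            attention_mask=attention_mask,
            token_type_ids=token_type_ids,
            **kwargs,
        )
        # embedding of the [CLS] token for each example in the batch
        cls_outputs = outputs.last_hidden_state[:, 0, :]
        cls_outputs = self.dropout(cls_outputs)
        logits = self.classifier(cls_outputs)
        loss = None
        if labels is not None:
            loss_fn = nn.CrossEntropyLoss()
            loss = loss_fn(logits, labels)
        return SequenceClassifierOutput(
            loss=loss,
            logits=logits,
            hidden_states=outputs.hidden_states,
            attentions=outputs.attentions,
        )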
Next we load the configuration of the pre-trained model and instantiate our model. The AutoConfig class can load the configuration for any pre-trained model, retrieving it from Hugging Face if needed. Then we use the configuration to instantiate our model using the from_pretrained() method. With this call, the pre-trained model will be loaded, which includes downloading if necessary:
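A sketch of this step, assuming transformer_name = 'bert-base-cased' and labels holding the list of AG News class names, as in the accompanying notebook:

from transformers import AutoConfig

config = AutoConfig.from_pretrained(
    transformer_name,
    num_labels=len(labels),
)
model = (
    BertForSequenceClassification
    .from_pretrained(transformer_name, config=config)
)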
Hugging Face provides a Trainer class that greatly simplifies the training process. This class not only implements the training loop we have been using in the previous chapters, but also handles other useful steps such as saving checkpoints (i.e., intermediate models after a number of mini-batches have been processed during training), and tracking custom measures about model performance. In order to create a Trainer, we first need to specify its configuration in a TrainingArguments object. In ours, we specify certain hyper-parameters such as batch size, weight decay, and number of epochs, as well as where to store model checkpoints:
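The corresponding configuration, following the accompanying notebook; the specific values for the number of epochs, batch size, and weight decay are the ones used there and can of course be tuned:

from transformers import TrainingArguments

num_epochs = 2
batch_size = 24
weight_decay = 0.01
model_name = f'{transformer_name}-sequence-classification'

training_args = TrainingArguments(
    output_dir=model_name,
    log_level='error',
    num_train_epochs=num_epochs,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    evaluation_strategy='epoch',
    weight_decay=weight_decay,
)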
The TrainingArguments class provides a wide variety of arguments that we have not shown.5 These arguments usually have appropriate default values, so it is often fine to omit them. For example, we did not use the label_names argument, which specifies the key that corresponds to the training labels. When omitted, it defaults to keys such as label, labels and label_ids.6 In this chapter we used label.

5 https://huggingface.co/docs/transformers/main/en/main_classes/trainer#transformers.TrainingArguments
6 In the case of extractive question answering (see Chapter 16), the start_positions and end_positions keys store the start/end positions of the correct answers.

Note that we also specify how often we would like to see the performance of the current model (at the end of each epoch) with evaluation_strategy='epoch'. This means that after each epoch we print the current loss on the training
partition and on the evaluation dataset, if one is available. Additionally,
we can report custom metrics at this time. For this purpose, we use the compute_metrics parameter of the Trainer, which expects a function that receives a transformers.EvalPrediction object containing the label ids and the predicted logits. The expected return type is a dictionary whose keys correspond to different metrics, each of which will be displayed as a separate result column.

Using the above TrainingArguments and compute_metrics function, we create our Trainer. Note that when you provide a tokenizer, the trainer will automatically pad the sequences in each batch. Also, the trainer will automatically use any GPU that is available, unless specifically disabled in the TrainingArguments. Training our model takes a single call to the train() method of the Trainer object. As specified in our instance of TrainingArguments, the training and validation losses, as well as the accuracy, are reported every epoch:

Epoch  Training Loss  Validation Loss  Accuracy
1      0.187800       0.172629         0.941667
2      0.104000       0.183001         0.946250

As in the other chapters, we can write custom code to obtain the model's predictions on the test data. However, the Trainer class provides a predict() method that drastically simplifies this. On the test partition, this model achieves an accuracy of 95%, which is the highest performance we have achieved so far on this dataset.

13.3 Part-of-speech Tagging

To showcase part-of-speech tagging using transformers, we continue with the Spanish section of the AnCora corpus introduced in Chapter 11. Recall that the dataset is stored in the CoNLL-U format. We load this format in the same way as before, but then we convert the loaded dataset into a Hugging Face DatasetDict. Importantly, because the CoNLL-U dataset is already tokenized, we use the is_split_into_words=True tokenizer argument to ensure that the tokenizer respects the existing word boundaries during its sub-word tokenization. Further, while we want to predict one POS tag per word, any given word may be split into smaller pieces by our tokenizer. Thus, we need to align the tokenizer output to the CoNLL-U words. The original BERT paper (Devlin et al., 2018) addresses this by only using the embedding corresponding to the first sub-token for each word. We follow the same approach for consistency. For the sub-words that do not correspond to the beginning of a word, we use a special value that indicates that we are not interested in their predictions. The CrossEntropyLoss has a parameter called ignore_index for this purpose. The default value for this parameter is −100, which we use as the label for the sub-words we wish to ignore during training:
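A condensed version of the alignment function from the accompanying notebook; it assumes tokenizer is the xlm-roberta-base tokenizer and tag_to_index maps POS tags to integer ids:

# default value of the CrossEntropyLoss ignore_index parameter
ignore_index = -100

def tokenize_and_align_labels(batch):
    # tokenize the batch, respecting the existing word boundaries
    tokenized = tokenizer(
        batch['words'], truncation=True, is_split_into_words=True,
    )
    labels = []
    for i, tags in enumerate(batch['tags']):
        word_ids = tokenized.word_ids(batch_index=i)
        label_ids, previous_word_id = [], None
        for word_id in word_ids:
            if word_id is None or word_id == previous_word_id:
                # special token or continuation sub-word: ignore in the loss
                label_ids.append(ignore_index)
            else:
                # first sub-word of a word: use that word's tag id
                label_ids.append(tag_to_index[tags[word_id]])
            previous_word_id = word_id
        labels.append(label_ids)
    tokenized['labels'] = labels
    return tokenized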
Next, we use this function to preprocess the train and validation folds in our DatasetDict:
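As in the accompanying notebook, this is a single map() call per fold:

train_ds = ds['train'].map(tokenize_and_align_labels, batched=True)
eval_ds = ds['validation'].map(tokenize_and_align_labels, batched=True)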
The preprocessed training fold is a dataset with 14,305 rows and five columns: words, tags, input_ids, attention_mask, and labels, where labels holds one tag id for the first sub-word of each word and −100 everywhere else.

Next, we implement our model class that uses a transformer encoder as a transducer. Because our downstream task consists of POS tagging for Spanish, we need a transformer model that was pre-trained on Spanish texts. Here, we chose XLM-RoBERTa (Conneau et al., 2019) as our base model. XLM-RoBERTa is a RoBERTa model (Liu et al., 2019) that has been pre-trained on 100 different languages, including Spanish. Of note, XLM-RoBERTa does not require us to specify what language we are working on. Similar to BERT, it only requires the input_ids.

We discussed in the text classification section that Hugging Face provides implementations for text classification models. This is also true for token classification problems that require transducers. In particular, the XLMRobertaForTokenClassification model provided by Hugging Face does everything needed for this task. However, as before, here we implement it ourselves for pedagogical purposes.

The model architecture is similar to our text classification example. It consists of a transformer, a dropout layer, and a linear layer used for classification. The number of labels, which determines the output dimension of the linear layer, is equal to the number of POS tags. The primary difference between the text classification example and this token classification model is that with the former we produced one label for each text document, while here we produce one label for each token in the input text. Specifically, in our text classification model the output shape was two-dimensional: (batch_size, num_labels). Here, our output is three-dimensional: (batch_size, sequence_size, num_labels). So, while much of the forward method is familiar to us, when we are required to compute the loss, we need to reshape the logits and the labels before passing them to the CrossEntropyLoss, since it expects two-dimensional input and one-dimensional labels. For this purpose, we use the view() method to reshape the tensors. This method is efficient because it does not copy the tensor data. Instead it provides a new view of the same data that behaves like a tensor with a different shape.7 As mentioned before, the number of arguments passed to this method determines the number of dimensions in the output tensor. Here, for our logits, we pass two arguments and so our new view will have two dimensions. The second will be the size of self.num_labels, while the first (because we pass -1) will be inferred based on the original tensor shape. For our labels, on the other hand, we only provide one argument and so the new view will have one dimension, inferred by the original shape:

7 Similar to NumPy, PyTorch tensors are represented internally by a block of memory storing the data and some metadata that describes how the data should be read, e.g., type, shape, and stride. The view() method returns a new tensor with new metadata but pointing to the same memory block.
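The full model class appears in the notebook code earlier in this chapter; the snippet below is only a toy illustration of the reshaping step, with made-up sizes (2 sequences, 5 tokens, 17 tags):

import torch
from torch import nn

# toy shapes: (batch_size, sequence_size, num_labels) and (batch_size, sequence_size)
logits = torch.randn(2, 5, 17)
labels = torch.randint(0, 17, (2, 5))

loss_fn = nn.CrossEntropyLoss()
# flatten to (batch_size * sequence_size, num_labels) and (batch_size * sequence_size,)
loss = loss_fn(logits.view(-1, 17), labels.view(-1))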
Next, we instantiate our model using the XLM-RoBERTa configuration.

As before, we create a TrainingArguments object and define a compute_metrics function in order to customize a Trainer. While the TrainingArguments code has no substantial changes, we need to adjust the compute_metrics function to account for the fact that our model uses sub-word tokens rather than complete words. Recall that only the first sub-word token per word was assigned a POS tag. This function discards the labels corresponding to the ignored sub-word tokens and evaluates the rest, returning the accuracy score.

The last component required for the Trainer is a collator. Since this time we are batching sequences of tokens, we need a collator that can pad them dynamically when constructing the batches. The transformers library includes a DataCollatorForTokenClassification specifically for this purpose. Once we have our collator and our trainer object, we can train our model.

Next, we evaluate our newly trained model on the test dataset. For this purpose, we preprocess the data in the same way we did for the train and validation partitions. Then, for convenience, we use the trainer's predict() method to generate the predicted logits using our model.

As before, we use scikit-learn's classification_report() function to display the results of the evaluation. This function expects two one-dimensional lists of labels, so we need to follow a similar approach to the one we employed for text classification. Note that output.label_ids and output.predictions are NumPy arrays rather than PyTorch tensors. This time we use NumPy's reshape() method to reshape the arrays. This method is similar to PyTorch's view() method that we used previously, except that reshape() may copy the array's data in some situations. We discard the labels corresponding to ignored sub-word tokens, and then we print the classification report:
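A sketch of this evaluation, following the accompanying notebook; output is assumed to be the value returned by trainer.predict(), ignore_index is −100 as above, and target_names=tags[:-1] mirrors the notebook:

from sklearn.metrics import classification_report
import numpy as np

# flatten gold labels and logits, then drop the ignored sub-word positions
label_ids = output.label_ids.reshape(-1)
predictions = output.predictions.reshape(-1, model.num_labels)
predictions = np.argmax(predictions, axis=-1)

mask = label_ids != ignore_index
y_true = label_ids[mask]
y_pred = predictions[mask]

print(classification_report(y_true, y_pred, target_names=tags[:-1]))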
Our model based on XLM-RoBERTa achieves 99% accuracy. This is considerably better than the LSTM-based model developed in Chapter 11. In order to understand the differences between the two methods, we produce below a confusion matrix for the results of each model. Rows in the confusion matrix represent the true labels and columns represent the predicted labels. In the confusion matrices shown below, each cell x_ij corresponds to the proportion of values with label i that were assigned the label j.8 For a perfect model, all cells in the diagonal would have value 1 and all other cells would have value 0. The code used to generate the confusion matrix is shown in the accompanying notebook. The confusion matrices for the LSTM and transformer are shown in Figure 13.1 and Figure 13.2, respectively.

8 This is the case because we used the normalize='true' parameter of the confusion_matrix() function.

Figure 13.1 Confusion matrix corresponding to the LSTM-based part-of-speech tagger developed in Chapter 11.
Figure 13.2 Confusion matrix corresponding to the transformer-based part-of-speech tagger.

The two confusion matrices highlight a couple of important observations. First, the transformer model is considerably better at predicting POS tags with infrequent support in the dataset. For example, the accuracy for predicting the SYM POS tag increased from 38% in the LSTM model to 95% in the transformer model! Equally as impressive, the transformer improved the performance of tags that are extremely common and, thus, provide plenty of opportunity for both approaches to learn a good model. For example, the accuracy of tagging NOUN, the second most common POS tag in the dataset, increased from 96% in the LSTM model to 99% in the transformer model.

13.4 Summary

In this chapter we presented two applications driven by the encoder component of a transformer network. First, we used the transformer encoder as an acceptor and implemented a text classification application for English news. Second, we used the encoder as a transducer to develop a Spanish part-of-speech tagger. Both tasks were implemented using pre-trained transformer models from the Hugging Face library. For both applications, the transformer-based methods considerably outperform all approaches introduced in the previous chapters, highlighting the value of the transformer architecture.
#!/usr/bin/env python
# coding: utf-8

# # Text Classification Using Transformer Networks (BERT)

# Some initialization:

# In[1]:
import random
import torch
import numpy as np
import pandas as pd
from tqdm.notebook import tqdm

# enable tqdm in pandas
tqdm.pandas()

# set to True to use the gpu (if there is one available)
use_gpu = True

# select device
device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu')
print(f'device: {device.type}')

# random seed
seed = 1234

# set random seed
if seed is not None:
    print(f'random seed: {seed}')
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)

# Read the train/dev/test datasets and create a HuggingFace `Dataset` object:

# In[2]:
def read_data(filename):
    # read csv file
    df = pd.read_csv(filename, header=None)
    # add column names
    df.columns = ['label', 'title', 'description']
    # make labels zero-based
    df['label'] -= 1
    # concatenate title and description, and remove backslashes
    df['text'] = df['title'] + " " + df['description']
    df['text'] = df['text'].str.replace('\\', ' ', regex=False)
    return df

# In[3]:
labels = open('data/ag_news_csv/classes.txt').read().splitlines()
train_df = read_data('data/ag_news_csv/train.csv')
test_df = read_data('data/ag_news_csv/test.csv')
train_df

# In[4]:
from sklearn.model_selection import train_test_split

train_df, eval_df = train_test_split(train_df, train_size=0.9)
train_df.reset_index(inplace=True, drop=True)
eval_df.reset_index(inplace=True, drop=True)
print(f'train rows: {len(train_df.index):,}')
print(f'eval rows: {len(eval_df.index):,}')
print(f'test rows: {len(test_df.index):,}')

# In[5]:
from datasets import Dataset, DatasetDict

ds = DatasetDict()
ds['train'] = Dataset.from_pandas(train_df)
ds['validation'] = Dataset.from_pandas(eval_df)
ds['test'] = Dataset.from_pandas(test_df)
ds

# Tokenize the texts:

# In[6]:
from transformers import AutoTokenizer

transformer_name = 'bert-base-cased'
tokenizer = AutoTokenizer.from_pretrained(transformer_name)

# In[7]:
def tokenize(examples):
    return tokenizer(examples['text'], truncation=True)

train_ds = ds['train'].map(
    tokenize,
    batched=True,
    remove_columns=['title', 'description', 'text'],
)
eval_ds = ds['validation'].map(
    tokenize,
    batched=True,
    remove_columns=['title', 'description', 'text'],
)
train_ds.to_pandas()

# Create the transformer model:

# In[8]:
from torch import nn
from transformers.modeling_outputs import SequenceClassifierOutput
from transformers.models.bert.modeling_bert import BertModel, BertPreTrainedModel

# https://github.com/huggingface/transformers/blob/65659a29cf5a079842e61a63d57fa24474288998/src/transformers/models/bert/modeling_bert.py#L1486
class BertForSequenceClassification(BertPreTrainedModel):
    def __init__(self, config):
        super().__init__(config)
        self.num_labels = config.num_labels
        self.bert = BertModel(config)
        self.dropout = nn.Dropout(config.hidden_dropout_prob)
        self.classifier = nn.Linear(config.hidden_size, config.num_labels)
        self.init_weights()

    def forward(self, input_ids=None, attention_mask=None, token_type_ids=None, labels=None, **kwargs):
        outputs = self.bert(
            input_ids,
            attention_mask=attention_mask,
            token_type_ids=token_type_ids,
            **kwargs,
        )
        # use the [CLS] embedding as the representation of the whole sequence
        cls_outputs = outputs.last_hidden_state[:, 0, :]
        cls_outputs = self.dropout(cls_outputs)
        logits = self.classifier(cls_outputs)
        loss = None
        if labels is not None:
            loss_fn = nn.CrossEntropyLoss()
            loss = loss_fn(logits, labels)
        return SequenceClassifierOutput(
            loss=loss,
            logits=logits,
            hidden_states=outputs.hidden_states,
            attentions=outputs.attentions,
        )

# In[9]:
from transformers import AutoConfig

config = AutoConfig.from_pretrained(
    transformer_name,
    num_labels=len(labels),
)
model = (
    BertForSequenceClassification
    .from_pretrained(transformer_name, config=config)
)

# Create the trainer object and train:

# In[10]:
from transformers import TrainingArguments

num_epochs = 2
batch_size = 24
weight_decay = 0.01
model_name = f'{transformer_name}-sequence-classification'

training_args = TrainingArguments(
    output_dir=model_name,
    log_level='error',
    num_train_epochs=num_epochs,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    evaluation_strategy='epoch',
    weight_decay=weight_decay,
)

# In[11]:
from sklearn.metrics import accuracy_score

def compute_metrics(eval_pred):
    y_true = eval_pred.label_ids
    y_pred = np.argmax(eval_pred.predictions, axis=-1)
    return {'accuracy': accuracy_score(y_true, y_pred)}

# In[12]:
from transformers import Trainer

trainer = Trainer(
    model=model,
    args=training_args,
    compute_metrics=compute_metrics,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    tokenizer=tokenizer,
)

# In[13]:
trainer.train()

# Evaluate on the test partition:

# In[14]:
test_ds = ds['test'].map(
    tokenize,
    batched=True,
    remove_columns=['title', 'description', 'text'],
)
test_ds.to_pandas()

# In[15]:
output = trainer.predict(test_ds)
output

# In[16]:
from sklearn.metrics import classification_report

y_true = output.label_ids
y_pred = np.argmax(output.predictions, axis=-1)
target_names = labels
print(classification_report(y_true, y_pred, target_names=target_names))
4,700
4,732
8
chap13-9
chap13-9
13 Using Transformers with the Hugging Face Library One of the key advantages of transformer networks is the ability to take a model that was pre-trained over vast quantities of text and fine-tune it for the task at hand. Intuitively, this strategy allows transformer networks to achieve higher performance on smaller datasets by relying on statistics acquired at scale in an unsupervised way (e.g., through the masked language model training objective). To this end, in this chapter we will use the Hugging Face library,1 which has a rich repository of datasets and pre-trained models, as well as helper methods and classes that make it easy to target downstream tasks. Using pre-trained transformer encoders, we will implement the two tasks that served as use cases in the previous chapters: text classification and part-of-speech tagging. 13.1 Tokenization As discussed in Section 12.2, transformers rely on sub-word tokens. This strategy provides an elegant way to handle unknown and low-frequency words by splitting them into more frequent sub-word parts. At the same time, these tokenization algorithms maintain frequently-occurring words as standalone tokens, so the signal for these common words is preserved. To make this more concrete, we show below how tokenizers are employed in the Hugging Face library. First, we load the tokenizer that corresponds to the transformer we intend to use. This is important for two reasons: (a) different transformers rely on different tokenization algorithms, and (b) even for the ones that use the same algorithm, their tokenizer vocabularies are likely to be different if they were pre-trained 1 https://huggingface.co/docs/transformers/main/en/index 186 13.1 Tokenization 187 on different corpora. Next, we tokenize some example text and display some of the resulting attributes with pandas: As shown above, the tokenizer splits the text into tokens, and adds two special tokens: the [CLS] token at the beginning of the token sequence, and the [SEP] token at the end. Also, note that the ## characters at the beginning of some tokens indicate that they are not standalone words, but rather sub-words that continue a word previously started. For example, the output above shows that the word walrus was split into three sub-words. Note, however, that this is specific to this particular tokenization algorithm, and other tokenizers may indicate word continuation in different ways. A better way to detect word continuations is using the word_ids() method of the tokenizer output, which assigns the same id to all tokens part of the same word. For example, all fragments of the word walrus share the word id 3. Lastly, the input_ids attribute provides the token ids used internally by the transformer to map tokens to embeddings. To briefly demonstrate how different tokenizers produce different outputs, here is the same text tokenized with the tokenizer corresponding to xlm-roberta-base: Note how the [CLS] and [SEP] special tokens have been replaced with <s> and </s> respectively. Also, spaces have been replaced with the Unicode character (U+2581, LOWER ONE EIGHTH BLOCK). Tokens that start with that character are considered word beginnings and the rest are word continuations, as can be confirmed by looking at the word ids. This illustrates the importance of using the tokenizer that corresponds to the transformer you intend to use. 012345678 tokens [CLS] I am the wa ##l ##rus . 
[SEP] word_ids None 0 1 2 3 3 3 4 None input_ids 101 146 1821 1103 20049 1233 6208 119 102 01234567 tokens <s> ▁I ▁am ▁the ▁wal rus . </s > None 0 1 2 3 3 3 None 0 87 444 70 32973 6563 5 2 word_ids input_ids 188 Using Transformers with the Hugging Face Library 13.2 Text Classification For our text classification example, we will continue using the AG News dataset from previous chapters. We will load, preprocess, and split the dataset into pandas dataframes in the same way as before. Now however, rather than continuing with pandas, we will create a Hugging Face dataset from the dataframes. Hugging Face datasets are convenient because of their built-in support of batching, efficient data transformations, and caching. In particular, we convert each dataframe into a Hugging Face dataset. The various datasets are managed with a DatasetDict. Note that this is the same data structure seen when downloading a Hugging Face dataset from their hub.2 The keys in this dictionary are usually train, validation, and test:3 Once our dataset is loaded, we load a tokenizer. Different pre-trained models are tokenized differently, and it is important to select the tokenizer that corresponds to the model we will use so that the inputs are consistent with model expectations. In our example, we will use the bert-base-cased pre-trained model and tokenizer: Datasets have a map() method that transforms the dataset by applying a function to each example. The method returns a new dataset with the transformation applied. We use the map() method to tokenize our dataset. To this end, we define a function that tokenizes an example using the tokenizer we loaded previously. Note that tokenizers support many options that you may need depending on your situation. However, since this is a simple scenario, all we need to do is provide the text to tokenize and specify how to handle texts that exceed the maximum number of tokens permitted by the pre-trained model. Here we have our tokenizer truncate any inputs that are too long by specifying the truncation=True parameter. The output of this function will be added to the new dataset as extra columns. Further, we also want to remove some of the columns that are no longer needed, simplifying subsequent steps. For this, we use the remove_columns argument, listing the columns that we want to discard. Additionally, the dataset’s map() method can batch the dataset; we enable this option with the batched=True argument: 2 https://huggingface.co/datasets
3 These correspond to the more common terms train, development, and test we have used throughout the book so far. In this chapter we use the Hugging Face naming conventions for consistency. 13.2 Text Classification 189 label 03 10 20 32 40 ... ... . 107995  0 
 . 107996  0 
 . 107997  0 
 . 107998  0 
 . 107999  3 
 input_ids [101, 3270, 11906, 1522, 1146, 7106, 1111, 251... [101, 158, 119, 156, 119, 12068, 5084, 1116, 9... [101, 7270, 118, 2733, 1383, 1111, 12448, 7430... [101, 6096, 117, 10378, 3969, 5977, 1111, 8988... [101, 19569, 5480, 10582, 2087, 1867, 158, 119... [101, 1130, 139, 24683, 131, 21107, 2050, 1739... token_type_ids attention_mask [101, 22087, 8223, 1611, 1106, 4417, [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5572, 324... 0, 0, 0, ... [0,0,0,0,0,0,0,0, [0,0,0,0,0,0,0,0, [0,0,0,0,0,0,0,0, [0,0,0,0,0,0,0,0, [0,0,0,0,0,0,0,0, [0,0,0,0,0,0,0,0, 0,0,0,0, 0,0,0,... 0,0,0,0, 0,0,0,... 0,0,0,0, 0,0,0,... 0,0,0,0, 0,0,0,... 0,0,0,0, 0,0,0,... 0,0,0,0, 0,0,0,... [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... ... ... [101, 16409, 118, 16587, 159, 4064, [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1106, 1564... 0, 0, 0, ... 108000 rows × 4 columns [101, 4222, 11404, 1174, 117, 1476, [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1130, 2696... 0, 0, 0, ... [101, 11560, 3881, 108, 3614, 132, [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3498, 2944,... 0, 0, 0, ... [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... ... Next, we implement a classifier for our task. Hugging Face provides a
variety of models corresponding to several types of downstream tasks. However, for pedagogical purposes, we implement one from scratch. In particular, our model class inherits from BertPreTrainedModel, which
provides several useful methods such as init_weights() and from_pretrained() methods, which we will use later. The model constructor takes a config- uration object as its only parameter. Configuration objects contain all the hyper-parameters used by the corresponding pre-trained models. We will show later how the configuration model is retrieved and customized. Models that implement specific downstream tasks are usually composed of a pre-trained model (sometimes referred as the body), and one or more task-specific layers (usually referred as the head). Here, we initialize a BertModel using the provided configuration, as well as a dropout layer and a task-specific linear layer used for classifying the Bert output. Each of these layers is initialized by calling the init_weights() method inherited from BertPreTrainedModel. The forward() method, which implements the task-specific forward pass, takes as arguments the outputs of the tokenizer, and, optionally, the gold labels corresponding to the input data points. Our implementation of the forward pass sends the input tokens to the Bert model to produce the contextualized representations for all tokens. This output has several components, including the last_hidden_state which con- 190 Using Transformers with the Hugging Face Library tains the final hidden-state embedding for each token. For our task, we will represent the whole sequence using the embedding for the [CLS] token that occurs at the start of each example. We retrieve it by selecting the first element of each output sequence in the batch (i.e., last_hidden_state[:, 0, :]). As in the previous chapters, we apply dropout to our sequence representation, and then pass it through our linear classification layer. If gold labels are provided (i.e., we are training), we now compute the loss using the cross-entropy loss. The output of the forward pass is wrapped in a Hugging Face SequenceClassifierOutput object4 and returned: Next we load the configuration of the pre-trained model and instantiate our model. The AutoConfig class can load the configuration for any pre-trained model, retrieving it from Hugging Face if needed. Then we use the configuration to instantiate our model using the from_pretrained() method. With this call, the pre-trained model will be loaded, which includes downloading if necessary: Hugging Face provides a Trainer class that greatly simplifies the training process. This class not only implements the training loop we have been using in the previous chapters, but also handles other useful steps such as saving checkpoints (i.e., intermediate models after a number of mini-batches have been processed during training), and tracking custom measures about model performance. In order to create a Trainer, we first need to specify its configuration in a TrainingArguments object. In ours, we specify certain hyper parameters such as batch size, weight decay, and number of epochs, as well as where to store model checkpoints: The TrainingArguments class provides a wide variety of arguments that we have not shown.5 These arguments usually have appropriate default values, so it is often fine to omit them. For example, we did not use the label_names argument, which specifies the key that corresponds to the training labels. When omitted, it defaults to keys such as label, labels and label_ids.6 In this chapter we used label. Note that we also specify how often we would like to see the perfor- . 4  Hugging Face utilizes a set of output objects to standardize model output for a given task. 
These objects typically include additional information, e.g., attention weights, which can be used for visualizing or debugging model behavior. 
 . 5  https://huggingface.co/docs/transformers/main/en/main_classes/trainer# transformers. TrainingArguments 
 6 In the case of extractive question answering (see Chapter 16), the start_positions and end_positions store the start/end positions of the correct answers. 13.3 Part-of-speech Tagging 191 mance of the current model (at the end of each epoch) with evaluation_strategy='epoch'. This means that after each epoch we print the current loss on the training
partition and on the evaluation dataset, if one is available. Additionally,
we can report custom metrics at this time. For this purpose, we use the compute_metrics parameter of the Trainer, which expects a function that receives a transformers. EvalPredictions object containing the label ids and the predicted logits. The expected return type is a dictionary whose keys correspond to different metrics, each of which will be displayed as a separate result column. Using the above TrainingArguments and compute_metrics function, we create our Trainer. Note that when you provide a tokenizer, the trainer will automatically pad the sequences in each batch. Also, the trainer will automatically use any GPU that is available, unless specifically disabled in the TrainingArguments. Training our model takes a single call to the train() method of the Trainer object. As specified in the our instance of TrainingArguments, the training and validation losses, as well as the accuracy, are reported every epoch. As in the other chapters, we can write custom code to obtain the model’s predictions on the test data. However, the Trainer class provides a predict() method that drastically simplifies this: As shown in the table above, this model achieves an accuracy of 95%, which is the highest performance we have achieved so far on this dataset. 13.3 Part-of-speech Tagging To showcase part-of-speech tagging using transformers, we continue with the Spanish section of the AnCora corpus introduced in Chapter 11. Recall that the dataset is stored in the CoNLL-U format. We load this format in the same way as before, but then we convert the loaded dataset into a Hugging Face DictDataset: Importantly, because the CoNLL-U dataset is already tokenized, we use the is_split_into_words=True tokenizer argument to ensure that the tokenizer respects the existing word boundaries during its sub-word tokenization. Further, while we want to predict one POS tag per word, Epoch Training Loss Validation Loss Accuracy 1 0.187800 0.172629 0.941667 2 0.104000 0.183001 0.946250 192 Using Transformers with the Hugging Face Library any given word may be split into smaller pieces by our tokenizer. Thus, we need to align the tokenizer output to the CoNLL-U words. The original BERT paper (Devlin et al., 2018) addresses this by only using the embedding corresponding to the first sub-token for each word. We follow the same approach for consistency. For the sub-words that do not correspond to the beginning of a word, we use a special value that indicates that we are not interested in their predictions. The CrossEntropyLoss has a parameter called ignore_index for this purpose. The default value for this parameter is −100, which we use as the label for the sub-words we wish to ignore during training: Next, we use this function to preprocess the train and validation folds in our DatasetDict: words [El, presidente, de, el, órgano, regulador, de... [Afirmó, que, sigue, el, criterio, europeo, y,... [Durante, la, presentación, de, el, libro, ", ... [Y, todas, las, miradas, convergen, en, la, lu... [Cambiar, las, formas, parece, de, rigor, ,, p... [Él, llega, a, tirar, la, sobre, la, cama, y, ... tags [DET, NOUN, ADP, DET, NOUN, ADJ, ADP, DET, PRO... [VERB, SCONJ, VERB, DET, NOUN, ADJ, CCONJ, SCO... [ADP, DET, NOUN, ADP, DET, NOUN, PUNCT, DET, P... [CCONJ, DET, DET, NOUN, VERB, ADP, DET, NOUN, ... [VERB, DET, NOUN, VERB, ADP, NOUN, PUNCT, CCON... [PRON, VERB, ADP, VERB, PRON, ADP, DET, NOUN, ... input_ids [0, 540, 9692, 8, 88, 103633, 15913, 1846, 8, ... [0, 62, 38949, 849, 41, 58453, 88, 166220, 620... 
[0, 24292, 21, 43945, 8, 88, 7750, 44, 239, 78... [0, 990, 5136, 576, 100688, 7, 158, 814, 1409,... [0, 313, 61055, 42, 576, 26497, 12295, 8, 7599... [0, 124043, 47612, 10, 61846, 21, 1028, 21, 39... attention_mask labels [-100, 0, 1, 2, 0, 1, 3, -100, 2, 0, 4, -100, ... [-100, 6, -100, -100, 7, 6, 0, 1, 3, 10, 7, 6,... [-100, 2, 0, 1, 2, 0, 1, 8, 0, 4, -100, 2, 4, ... [-100, 10, 0, 0, 1, -100, 6, -100, -100, 2, 0,... [-100, 6, -100, -100, 0, 1, 6, 2, 1, 8, -100, ... [-100, 5, 6, 2, 6, 5, 2, 0, 1, 10, 5, 6, 0, 1,... 0 1 2 3 4 ... 14300 14301 14302 14303 [1,1,1,1,1, 1,1,1,1,1, [1,1,1,1,1, 1,1,1,1,1, [1,1,1,1,1, 1,1,1,1,1, [1,1,1,1,1, 1,1,1,1,1, [1,1,1,1,1, 1,1,1,1,1, [1,1,1,1,1, 1,1,1,1,1, 1, 1, 1, 1, 1, ... 1, 1, 1, 1, 1, ... 1, 1, 1, 1, 1, ... 1, 1, 1, 1, 1, ... 1, 1, 1, 1, 1, ... 1, 1, 1, 1, 1, ... [Sobre, la, oferta, de, interconexión, con, Te... [ADP, DET, NOUN, ADP, NOUN, ADP, PROPN, ADP, D... [0, 44125, 21, 19806, 8, 1940, 2271, 3355, 194... [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... [-100, 2, 0, 1, 2, 1, -100, -100, -100, 2, 4, ... [La, inversión, en, investigación, básica, es,... [DET, NOUN, ADP, NOUN, ADJ, AUX, DET, NOUN, AD... [0, 239, 98649, 22, 31674, 124528, 198, 88, 46... [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... [-100, 0, 1, 2, 1, 3, 9, 0, 1, 2, 0, 1, 10, 0,... [Conviene, que, ahora, ,, en, plena, apoteosis... [VERB, SCONJ, ADV, PUNCT, ADP, ADJ, NOUN, ADP,... [0, 1657, 7772, 13, 41, 18451, 6, 4, 22, 31161... [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... [-100, 6, -100, -100, 7, 11, 8, -100, 2, 3, 1,... [Carlos, y, Fayna, se, enzarzan, en, una, bron... [PROPN, CCONJ, PROPN, PRON, VERB, ADP, DET, NO... [0, 24856, 113, 114162, 76, 40, 22, 6383, 5935... [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... [-100, 4, 10, 4, -100, 5, 6, -100, -100, 2, 0,... 14304
14305 rows × 5 columns ... ... ... ... ... Next, we implement our model class that uses a transformer encoder as a transducer. Because our downstream task consists of POS tagging for Spanish, we need a transformer model that was pre-trained on Spanish texts. Here, we chose XLM-RoBERTa (Conneau et al., 2019) as our base model. XLM-Roberta is a RoBERTa model (Liu et al., 2019) that has 13.3 Part-of-speech Tagging 193 been pre-trained on 100 different languages, including Spanish. Of note, XLM-RoBERTa does not require us to specify what language we are working on. Similar to BERT, it only requires the input_ids. We discussed in the text classification section that Hugging Face provides implementations for text classification models. This is also true for token classification problems that require transducers. In particular, the XLMRobertaForTokenClassification model provided by Hugging Face does everything needed for this task. However, as before, here we implement it ourselves for pedagogical purposes. The model architecture is similar to our text classification example. It consists of a transformer, a dropout layer, and a linear layer used for classification. The number of labels which determines the output dimension of the linear layer is equal to the number of POS tags. The primary difference between the text classification example and this token classification model is that with the former we produced one label for each text document, while here we produce one label for each token in the input text. Specifically, in our text classification model the output shape was two-dimensional: (batch_size, num_labels). Here, our output is three-dimensional: (batch_size, sequence_size, num_labels). So, while much of the forward method is familiar to us, when we are required to compute the loss, we need to reshape the logits and the labels before passing them to the CrossEntropyLoss, since it expects two-dimensional input and one-dimensional labels. For this purpose, we use the view() method to reshape the tensors. This method is efficient because it does not copy the tensor data. Instead it provides a new view of the same data that behaves like a tensor with a different shape.7 As mentioned before, the number of arguments passed to this method determines the number of dimensions in the output tensor. Here, for our logits, we pass two arguments and so our new view will have two dimensions. The second will be the size of self.num_labels, while the first (because we pass -1) will be inferred based on the original tensor shape. For our labels, on the other hand, we only provide one argument and so the new view will have one dimension, inferred by the original shape: Next, we instantiate our model using the XLM-RoBERTa configuration: 7 Similar to NumPy, PyTorch tensors are represented internally by a block of memory storing the data and some metadata that describes how the data should be read, e.g., type, shape, and stride. The view() method returns a new tensor with new metadata but pointing to the same memory block. 194 Using Transformers with the Hugging Face Library As before, we create a TrainingArguments object and define a compute_metrics function in order to customize a Trainer: While the TrainingArguments code has no substantial changes, we need to adjust the compute_metrics function to account for the fact that our model uses sub-word tokens rather than complete words. Recall that only the first sub-word token per word was assigned a POS tag. 
This function discards the labels corresponding to the ignored sub-word tokens and evaluates the rest, returning the accuracy score: The last component required for the Trainer is a collator. Since this time we are batching sequences of tokens, we need a collator that can pad them dynamically when constructing the batches. The transformers library includes a DataCollatorForTokenClassification specifically for this purpose. Once we have our collator and our trainer object, we can train our model: Next, we evaluate our newly trained model on the test dataset. For this purpose, we preprocess the data in the same way we did for the train and validation partitions. Then, for convenience, we use the trainer’s predict() method to generate the predicted logits using our model: As before, we use scikit-learn’s classification_report() function to display the results of the evaluation. This function expects two onedimensional lists of labels, so we need to follow a similar approach to the one we employed for text classification. Note that output.label_ids and output.predictions are NumPy arrays rather than PyTorch tensors. This time we use NumPy’s reshape() method to reshape the arrays. This method is similar to PyTorch’s view() method that we used previously, except that view() may copy the array’s data in some situations. We discard the labels corresponding to ignored sub-word tokens, and then we print the classification report: Our model based on XLM-RoBERTa achieves 99% accuracy. This is considerably better than the LSTM-based model developed in Chapter 11. In order to understand the differences between the two methods, we produce below a confusion matrix for the results of each model. Rows in the confusion matrix represent the true labels and columns represent the predicted labels. In the confusion matrices shown below, each cell xij corresponds to the proportion of values with label i that were assigned the label j.8 For a perfect model, all cells in the diagonal would have value 1 and all other cells would have value 0. The code used to generate the confusion matrix is shown below. The confusion matrices 8 This is the case because we used the normalize='true' parameter of the confusion_matrix() function. 13.3 Part-of-speech Tagging 195 Figure 13.1 Confusion matrix corresponding to the LSTM-based part-ofspeech tagger developed in Chapter 11. for the LSTM and transformer are show in Figure 13.1 and Figure 13.2, respectively. The two confusion matrices highlight a couple of important observations. First, the transformer model is considerably better at predicting POS tags with infrequent support in the dataset. For example, the accuracy for predicting the SYM POS tag increased from 38% in the LSTM model to 95% in the transformer model! Equally as impressive, the transformer improved the performance of tags that are extremely common, and, thus, provide plenty of opportunity to both approaches to learn a good model. For example, the accuracy of tagging NOUN, the second 196 Using Transformers with the Hugging Face Library Figure 13.2 Confusion matrix corresponding to the transformer-based part-of-speech tagger. most common POS tag in the dataset, increased from 96% in the LSTM model to 99% in the transformer model. 13.4 Summary In this chapter we presented two applications driven by the encoder component of a transformer network. First, we used the transformer encoder as an acceptor and implemented a text classification application for English news. 
Second, we used the encoder as a transducer to develop a Spanish part-of-speech tagger. Both tasks were implemented using 13.4 Summary 197 pre-trained transformer models from the Hugging Face library. For both applications, the transformer-based methods outperform considerably all approaches introduced in the previous chapters, highlighting the value of the transformer architecture.
21,653
21,887
#!/usr/bin/env python
# coding: utf-8

# # Part-of-speech Tagging with Transformer Networks

# Some initialization:

# In[1]:

import random
import torch
import numpy as np
import pandas as pd
from tqdm.notebook import tqdm

# enable tqdm in pandas
tqdm.pandas()

# set to True to use the gpu (if there is one available)
use_gpu = True

# select device
device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu')
print(f'device: {device.type}')

# random seed
seed = 1234

# set random seed
if seed is not None:
    print(f'random seed: {seed}')
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)

# Read the words and POS tags from the Spanish dataset:

# In[2]:

from conllu import parse_incr

def read_tags(filename):
    data = {'words': [], 'tags': []}
    with open(filename) as f:
        for sent in tqdm(parse_incr(f)):
            words = [tok['form'] for tok in sent]
            tags = [tok['upos'] for tok in sent]
            data['words'].append(words)
            data['tags'].append(tags)
    return pd.DataFrame(data)

# In[3]:

train_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-train.conllup')
valid_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-dev.conllup')
test_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-test.conllup')

# In[4]:

tags = train_df['tags'].explode().unique()
index_to_tag = {i:t for i,t in enumerate(tags)}
tag_to_index = {t:i for i,t in enumerate(tags)}

# Create a HuggingFace `DatasetDict` object:

# In[5]:

from datasets import Dataset, DatasetDict

ds = DatasetDict()
ds['train'] = Dataset.from_pandas(train_df)
ds['validation'] = Dataset.from_pandas(valid_df)
ds['test'] = Dataset.from_pandas(test_df)
ds

# In[6]:

ds['train'].to_pandas()

# Now tokenize the texts and assign POS labels to the first token in each word:

# In[7]:

from transformers import AutoTokenizer

transformer_name = 'xlm-roberta-base'
tokenizer = AutoTokenizer.from_pretrained(transformer_name)

# In[8]:

x = ds['train'][0]
tokenized_input = tokenizer(x['words'], is_split_into_words=True)
tokens = tokenizer.convert_ids_to_tokens(tokenized_input['input_ids'])
word_ids = tokenized_input.word_ids()
pd.DataFrame([tokens, word_ids], index=['tokens', 'word ids'])

# In[9]:

# https://arxiv.org/pdf/1810.04805.pdf
# Section 5.3
# We use the representation of the first sub-token as the input to the
# token-level classifier over the NER label set.

# default value for CrossEntropyLoss ignore_index parameter
ignore_index = -100

def tokenize_and_align_labels(batch):
    labels = []
    # tokenize batch
    tokenized_inputs = tokenizer(
        batch['words'],
        truncation=True,
        is_split_into_words=True,
    )
    # iterate over batch elements
    for i, tags in enumerate(batch['tags']):
        label_ids = []
        previous_word_id = None
        # get word ids for current batch element
        word_ids = tokenized_inputs.word_ids(batch_index=i)
        # iterate over tokens in batch element
        for word_id in word_ids:
            if word_id is None or word_id == previous_word_id:
                # ignore if not a word or word id has already been seen
                label_ids.append(ignore_index)
            else:
                # get tag id for corresponding word
                tag_id = tag_to_index[tags[word_id]]
                label_ids.append(tag_id)
            # remember this word id
            previous_word_id = word_id
        # save label ids for current batch element
        labels.append(label_ids)
    # store labels together with the tokenizer output
    tokenized_inputs['labels'] = labels
    return tokenized_inputs

# In[10]:

train_ds = ds['train'].map(tokenize_and_align_labels, batched=True)
eval_ds = ds['validation'].map(tokenize_and_align_labels, batched=True)
train_ds.to_pandas()

# Create our transformer model:

# In[11]:

from torch import nn
from transformers.modeling_outputs import TokenClassifierOutput
from transformers.models.roberta.modeling_roberta import RobertaModel, RobertaPreTrainedModel

# https://github.com/huggingface/transformers/blob/65659a29cf5a079842e61a63d57fa24474288998/src/transformers/models/roberta/modeling_roberta.py#L1346

class XLMRobertaForTokenClassification(RobertaPreTrainedModel):
    def __init__(self, config):
        super().__init__(config)
        self.num_labels = config.num_labels
        self.roberta = RobertaModel(config, add_pooling_layer=False)
        self.dropout = nn.Dropout(config.hidden_dropout_prob)
        self.classifier = nn.Linear(config.hidden_size, config.num_labels)
        self.init_weights()

    def forward(self, input_ids=None, attention_mask=None, token_type_ids=None, labels=None, **kwargs):
        outputs = self.roberta(
            input_ids,
            attention_mask=attention_mask,
            token_type_ids=token_type_ids,
            **kwargs,
        )
        sequence_output = self.dropout(outputs[0])
        logits = self.classifier(sequence_output)
        loss = None
        if labels is not None:
            loss_fn = nn.CrossEntropyLoss()
            inputs = logits.view(-1, self.num_labels)
            targets = labels.view(-1)
            loss = loss_fn(inputs, targets)
        return TokenClassifierOutput(
            loss=loss,
            logits=logits,
            hidden_states=outputs.hidden_states,
            attentions=outputs.attentions,
        )

# In[12]:

from transformers import AutoConfig

config = AutoConfig.from_pretrained(
    transformer_name,
    num_labels=len(index_to_tag),
)
model = (
    XLMRobertaForTokenClassification
    .from_pretrained(transformer_name, config=config)
)

# Create the `Trainer` object and train:

# In[13]:

from transformers import TrainingArguments

num_epochs = 2
batch_size = 24
weight_decay = 0.01
model_name = f'{transformer_name}-finetuned-pos-es'

training_args = TrainingArguments(
    output_dir=model_name,
    log_level='error',
    num_train_epochs=num_epochs,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    evaluation_strategy='epoch',
    weight_decay=weight_decay,
)

# In[14]:

from sklearn.metrics import accuracy_score

def compute_metrics(eval_pred):
    # gold labels
    label_ids = eval_pred.label_ids
    # predictions
    pred_ids = np.argmax(eval_pred.predictions, axis=-1)
    # collect gold and predicted labels, ignoring ignore_index label
    y_true, y_pred = [], []
    batch_size, seq_len = pred_ids.shape
    for i in range(batch_size):
        for j in range(seq_len):
            if label_ids[i, j] != ignore_index:
                y_true.append(index_to_tag[label_ids[i][j]])
                y_pred.append(index_to_tag[pred_ids[i][j]])
    # return computed metrics
    return {'accuracy': accuracy_score(y_true, y_pred)}

# In[15]:

from transformers import Trainer
from transformers import DataCollatorForTokenClassification

data_collator = DataCollatorForTokenClassification(tokenizer)

trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=data_collator,
    compute_metrics=compute_metrics,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    tokenizer=tokenizer,
)

trainer.train()

# Evaluate on the test partition:

# In[16]:

test_ds = ds['test'].map(
    tokenize_and_align_labels,
    batched=True,
)
output = trainer.predict(test_ds)

# In[17]:

from sklearn.metrics import classification_report

num_labels = model.num_labels
label_ids = output.label_ids.reshape(-1)
predictions = output.predictions.reshape(-1, num_labels)
predictions = np.argmax(predictions, axis=-1)

mask = label_ids != ignore_index

y_true = label_ids[mask]
y_pred = predictions[mask]
target_names = tags[:-1]

report = classification_report(
    y_true, y_pred, target_names=target_names
)
print(report)

# In[18]:

import matplotlib.pyplot as plt
from sklearn.metrics import ConfusionMatrixDisplay, confusion_matrix

cm = confusion_matrix(y_true, y_pred, normalize='true')
disp = ConfusionMatrixDisplay(
    confusion_matrix=cm,
    display_labels=target_names,
)
fig, ax = plt.subplots(figsize=(10, 10))
disp.plot(
    cmap='Blues',
    values_format='.2f',
    colorbar=False,
    ax=ax,
    xticks_rotation=45,
)
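As a complement to the notebook above (this example is not part of the original book), here is a minimal sketch of how the fine-tuned tagger might be applied to a new, already-tokenized Spanish sentence. It assumes the model, tokenizer, and index_to_tag objects created above are still in memory; the example sentence is invented, and the alignment step mirrors the first-sub-word convention used for the training labels.

import torch

# hypothetical example sentence, already split into words
words = ['La', 'niña', 'come', 'una', 'manzana', 'roja', '.']

# tokenize the pre-split words; word_ids() maps each sub-word back to its word
inputs = tokenizer(words, is_split_into_words=True, return_tensors='pt')
word_ids = inputs.word_ids()
inputs = inputs.to(model.device)

model.eval()
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, num_labels)
pred_ids = logits.argmax(dim=-1)[0].tolist()

# keep only the prediction for the first sub-word of each word
predicted_tags = []
previous_word_id = None
for word_id, pred_id in zip(word_ids, pred_ids):
    if word_id is not None and word_id != previous_word_id:
        predicted_tags.append(index_to_tag[pred_id])
    previous_word_id = word_id

print(list(zip(words, predicted_tags)))

If the model were saved with trainer.save_model() and the tokenizer with tokenizer.save_pretrained(), the same sketch could be run later after reloading both with from_pretrained(); note that the index_to_tag mapping is not stored in the saved configuration here, so it would need to be persisted separately.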
6,962
7,024
9
chap13-10
chap13-10
21,888
21,961
7,025
7,253
10
chap13-11
chap13-11
13 Using Transformers with the Hugging Face Library One of the key advantages of transformer networks is the ability to take a model that was pre-trained over vast quantities of text and fine-tune it for the task at hand. Intuitively, this strategy allows transformer networks to achieve higher performance on smaller datasets by relying on statistics acquired at scale in an unsupervised way (e.g., through the masked language model training objective). To this end, in this chapter we will use the Hugging Face library,1 which has a rich repository of datasets and pre-trained models, as well as helper methods and classes that make it easy to target downstream tasks. Using pre-trained transformer encoders, we will implement the two tasks that served as use cases in the previous chapters: text classification and part-of-speech tagging. 13.1 Tokenization As discussed in Section 12.2, transformers rely on sub-word tokens. This strategy provides an elegant way to handle unknown and low-frequency words by splitting them into more frequent sub-word parts. At the same time, these tokenization algorithms maintain frequently-occurring words as standalone tokens, so the signal for these common words is preserved. To make this more concrete, we show below how tokenizers are employed in the Hugging Face library. First, we load the tokenizer that corresponds to the transformer we intend to use. This is important for two reasons: (a) different transformers rely on different tokenization algorithms, and (b) even for the ones that use the same algorithm, their tokenizer vocabularies are likely to be different if they were pre-trained 1 https://huggingface.co/docs/transformers/main/en/index 186 13.1 Tokenization 187 on different corpora. Next, we tokenize some example text and display some of the resulting attributes with pandas: As shown above, the tokenizer splits the text into tokens, and adds two special tokens: the [CLS] token at the beginning of the token sequence, and the [SEP] token at the end. Also, note that the ## characters at the beginning of some tokens indicate that they are not standalone words, but rather sub-words that continue a word previously started. For example, the output above shows that the word walrus was split into three sub-words. Note, however, that this is specific to this particular tokenization algorithm, and other tokenizers may indicate word continuation in different ways. A better way to detect word continuations is using the word_ids() method of the tokenizer output, which assigns the same id to all tokens part of the same word. For example, all fragments of the word walrus share the word id 3. Lastly, the input_ids attribute provides the token ids used internally by the transformer to map tokens to embeddings. To briefly demonstrate how different tokenizers produce different outputs, here is the same text tokenized with the tokenizer corresponding to xlm-roberta-base: Note how the [CLS] and [SEP] special tokens have been replaced with <s> and </s> respectively. Also, spaces have been replaced with the Unicode character (U+2581, LOWER ONE EIGHTH BLOCK). Tokens that start with that character are considered word beginnings and the rest are word continuations, as can be confirmed by looking at the word ids. This illustrates the importance of using the tokenizer that corresponds to the transformer you intend to use. 012345678 tokens [CLS] I am the wa ##l ##rus . 
[SEP] word_ids None 0 1 2 3 3 3 4 None input_ids 101 146 1821 1103 20049 1233 6208 119 102 01234567 tokens <s> ▁I ▁am ▁the ▁wal rus . </s > None 0 1 2 3 3 3 None 0 87 444 70 32973 6563 5 2 word_ids input_ids 188 Using Transformers with the Hugging Face Library 13.2 Text Classification For our text classification example, we will continue using the AG News dataset from previous chapters. We will load, preprocess, and split the dataset into pandas dataframes in the same way as before. Now however, rather than continuing with pandas, we will create a Hugging Face dataset from the dataframes. Hugging Face datasets are convenient because of their built-in support of batching, efficient data transformations, and caching. In particular, we convert each dataframe into a Hugging Face dataset. The various datasets are managed with a DatasetDict. Note that this is the same data structure seen when downloading a Hugging Face dataset from their hub.2 The keys in this dictionary are usually train, validation, and test:3 Once our dataset is loaded, we load a tokenizer. Different pre-trained models are tokenized differently, and it is important to select the tokenizer that corresponds to the model we will use so that the inputs are consistent with model expectations. In our example, we will use the bert-base-cased pre-trained model and tokenizer: Datasets have a map() method that transforms the dataset by applying a function to each example. The method returns a new dataset with the transformation applied. We use the map() method to tokenize our dataset. To this end, we define a function that tokenizes an example using the tokenizer we loaded previously. Note that tokenizers support many options that you may need depending on your situation. However, since this is a simple scenario, all we need to do is provide the text to tokenize and specify how to handle texts that exceed the maximum number of tokens permitted by the pre-trained model. Here we have our tokenizer truncate any inputs that are too long by specifying the truncation=True parameter. The output of this function will be added to the new dataset as extra columns. Further, we also want to remove some of the columns that are no longer needed, simplifying subsequent steps. For this, we use the remove_columns argument, listing the columns that we want to discard. Additionally, the dataset’s map() method can batch the dataset; we enable this option with the batched=True argument: 2 https://huggingface.co/datasets
3 These correspond to the more common terms train, development, and test we have used throughout the book so far. In this chapter we use the Hugging Face naming conventions for consistency. 13.2 Text Classification 189 label 03 10 20 32 40 ... ... . 107995  0 
 . 107996  0 
 . 107997  0 
 . 107998  0 
 . 107999  3 
 input_ids [101, 3270, 11906, 1522, 1146, 7106, 1111, 251... [101, 158, 119, 156, 119, 12068, 5084, 1116, 9... [101, 7270, 118, 2733, 1383, 1111, 12448, 7430... [101, 6096, 117, 10378, 3969, 5977, 1111, 8988... [101, 19569, 5480, 10582, 2087, 1867, 158, 119... [101, 1130, 139, 24683, 131, 21107, 2050, 1739... token_type_ids attention_mask [101, 22087, 8223, 1611, 1106, 4417, [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5572, 324... 0, 0, 0, ... [0,0,0,0,0,0,0,0, [0,0,0,0,0,0,0,0, [0,0,0,0,0,0,0,0, [0,0,0,0,0,0,0,0, [0,0,0,0,0,0,0,0, [0,0,0,0,0,0,0,0, 0,0,0,0, 0,0,0,... 0,0,0,0, 0,0,0,... 0,0,0,0, 0,0,0,... 0,0,0,0, 0,0,0,... 0,0,0,0, 0,0,0,... 0,0,0,0, 0,0,0,... [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... ... ... [101, 16409, 118, 16587, 159, 4064, [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1106, 1564... 0, 0, 0, ... 108000 rows × 4 columns [101, 4222, 11404, 1174, 117, 1476, [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1130, 2696... 0, 0, 0, ... [101, 11560, 3881, 108, 3614, 132, [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3498, 2944,... 0, 0, 0, ... [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... ... Next, we implement a classifier for our task. Hugging Face provides a
variety of models corresponding to several types of downstream tasks. However, for pedagogical purposes, we implement one from scratch. In particular, our model class inherits from BertPreTrainedModel, which
provides several useful methods such as init_weights() and from_pretrained() methods, which we will use later. The model constructor takes a config- uration object as its only parameter. Configuration objects contain all the hyper-parameters used by the corresponding pre-trained models. We will show later how the configuration model is retrieved and customized. Models that implement specific downstream tasks are usually composed of a pre-trained model (sometimes referred as the body), and one or more task-specific layers (usually referred as the head). Here, we initialize a BertModel using the provided configuration, as well as a dropout layer and a task-specific linear layer used for classifying the Bert output. Each of these layers is initialized by calling the init_weights() method inherited from BertPreTrainedModel. The forward() method, which implements the task-specific forward pass, takes as arguments the outputs of the tokenizer, and, optionally, the gold labels corresponding to the input data points. Our implementation of the forward pass sends the input tokens to the Bert model to produce the contextualized representations for all tokens. This output has several components, including the last_hidden_state which con- 190 Using Transformers with the Hugging Face Library tains the final hidden-state embedding for each token. For our task, we will represent the whole sequence using the embedding for the [CLS] token that occurs at the start of each example. We retrieve it by selecting the first element of each output sequence in the batch (i.e., last_hidden_state[:, 0, :]). As in the previous chapters, we apply dropout to our sequence representation, and then pass it through our linear classification layer. If gold labels are provided (i.e., we are training), we now compute the loss using the cross-entropy loss. The output of the forward pass is wrapped in a Hugging Face SequenceClassifierOutput object4 and returned: Next we load the configuration of the pre-trained model and instantiate our model. The AutoConfig class can load the configuration for any pre-trained model, retrieving it from Hugging Face if needed. Then we use the configuration to instantiate our model using the from_pretrained() method. With this call, the pre-trained model will be loaded, which includes downloading if necessary: Hugging Face provides a Trainer class that greatly simplifies the training process. This class not only implements the training loop we have been using in the previous chapters, but also handles other useful steps such as saving checkpoints (i.e., intermediate models after a number of mini-batches have been processed during training), and tracking custom measures about model performance. In order to create a Trainer, we first need to specify its configuration in a TrainingArguments object. In ours, we specify certain hyper parameters such as batch size, weight decay, and number of epochs, as well as where to store model checkpoints: The TrainingArguments class provides a wide variety of arguments that we have not shown.5 These arguments usually have appropriate default values, so it is often fine to omit them. For example, we did not use the label_names argument, which specifies the key that corresponds to the training labels. When omitted, it defaults to keys such as label, labels and label_ids.6 In this chapter we used label. Note that we also specify how often we would like to see the perfor- . 4  Hugging Face utilizes a set of output objects to standardize model output for a given task. 
These objects typically include additional information, e.g., attention weights, which can be used for visualizing or debugging model behavior. 
 . 5  https://huggingface.co/docs/transformers/main/en/main_classes/trainer# transformers. TrainingArguments 
 6 In the case of extractive question answering (see Chapter 16), the start_positions and end_positions store the start/end positions of the correct answers. 13.3 Part-of-speech Tagging 191 mance of the current model (at the end of each epoch) with evaluation_strategy='epoch'. This means that after each epoch we print the current loss on the training
partition and on the evaluation dataset, if one is available. Additionally,
we can report custom metrics at this time. For this purpose, we use the compute_metrics parameter of the Trainer, which expects a function that receives a transformers. EvalPredictions object containing the label ids and the predicted logits. The expected return type is a dictionary whose keys correspond to different metrics, each of which will be displayed as a separate result column. Using the above TrainingArguments and compute_metrics function, we create our Trainer. Note that when you provide a tokenizer, the trainer will automatically pad the sequences in each batch. Also, the trainer will automatically use any GPU that is available, unless specifically disabled in the TrainingArguments. Training our model takes a single call to the train() method of the Trainer object. As specified in the our instance of TrainingArguments, the training and validation losses, as well as the accuracy, are reported every epoch. As in the other chapters, we can write custom code to obtain the model’s predictions on the test data. However, the Trainer class provides a predict() method that drastically simplifies this: As shown in the table above, this model achieves an accuracy of 95%, which is the highest performance we have achieved so far on this dataset. 13.3 Part-of-speech Tagging To showcase part-of-speech tagging using transformers, we continue with the Spanish section of the AnCora corpus introduced in Chapter 11. Recall that the dataset is stored in the CoNLL-U format. We load this format in the same way as before, but then we convert the loaded dataset into a Hugging Face DictDataset: Importantly, because the CoNLL-U dataset is already tokenized, we use the is_split_into_words=True tokenizer argument to ensure that the tokenizer respects the existing word boundaries during its sub-word tokenization. Further, while we want to predict one POS tag per word, Epoch Training Loss Validation Loss Accuracy 1 0.187800 0.172629 0.941667 2 0.104000 0.183001 0.946250 192 Using Transformers with the Hugging Face Library any given word may be split into smaller pieces by our tokenizer. Thus, we need to align the tokenizer output to the CoNLL-U words. The original BERT paper (Devlin et al., 2018) addresses this by only using the embedding corresponding to the first sub-token for each word. We follow the same approach for consistency. For the sub-words that do not correspond to the beginning of a word, we use a special value that indicates that we are not interested in their predictions. The CrossEntropyLoss has a parameter called ignore_index for this purpose. The default value for this parameter is −100, which we use as the label for the sub-words we wish to ignore during training: Next, we use this function to preprocess the train and validation folds in our DatasetDict: words [El, presidente, de, el, órgano, regulador, de... [Afirmó, que, sigue, el, criterio, europeo, y,... [Durante, la, presentación, de, el, libro, ", ... [Y, todas, las, miradas, convergen, en, la, lu... [Cambiar, las, formas, parece, de, rigor, ,, p... [Él, llega, a, tirar, la, sobre, la, cama, y, ... tags [DET, NOUN, ADP, DET, NOUN, ADJ, ADP, DET, PRO... [VERB, SCONJ, VERB, DET, NOUN, ADJ, CCONJ, SCO... [ADP, DET, NOUN, ADP, DET, NOUN, PUNCT, DET, P... [CCONJ, DET, DET, NOUN, VERB, ADP, DET, NOUN, ... [VERB, DET, NOUN, VERB, ADP, NOUN, PUNCT, CCON... [PRON, VERB, ADP, VERB, PRON, ADP, DET, NOUN, ... input_ids [0, 540, 9692, 8, 88, 103633, 15913, 1846, 8, ... [0, 62, 38949, 849, 41, 58453, 88, 166220, 620... 
[0, 24292, 21, 43945, 8, 88, 7750, 44, 239, 78... [0, 990, 5136, 576, 100688, 7, 158, 814, 1409,... [0, 313, 61055, 42, 576, 26497, 12295, 8, 7599... [0, 124043, 47612, 10, 61846, 21, 1028, 21, 39... attention_mask labels [-100, 0, 1, 2, 0, 1, 3, -100, 2, 0, 4, -100, ... [-100, 6, -100, -100, 7, 6, 0, 1, 3, 10, 7, 6,... [-100, 2, 0, 1, 2, 0, 1, 8, 0, 4, -100, 2, 4, ... [-100, 10, 0, 0, 1, -100, 6, -100, -100, 2, 0,... [-100, 6, -100, -100, 0, 1, 6, 2, 1, 8, -100, ... [-100, 5, 6, 2, 6, 5, 2, 0, 1, 10, 5, 6, 0, 1,... 0 1 2 3 4 ... 14300 14301 14302 14303 [1,1,1,1,1, 1,1,1,1,1, [1,1,1,1,1, 1,1,1,1,1, [1,1,1,1,1, 1,1,1,1,1, [1,1,1,1,1, 1,1,1,1,1, [1,1,1,1,1, 1,1,1,1,1, [1,1,1,1,1, 1,1,1,1,1, 1, 1, 1, 1, 1, ... 1, 1, 1, 1, 1, ... 1, 1, 1, 1, 1, ... 1, 1, 1, 1, 1, ... 1, 1, 1, 1, 1, ... 1, 1, 1, 1, 1, ... [Sobre, la, oferta, de, interconexión, con, Te... [ADP, DET, NOUN, ADP, NOUN, ADP, PROPN, ADP, D... [0, 44125, 21, 19806, 8, 1940, 2271, 3355, 194... [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... [-100, 2, 0, 1, 2, 1, -100, -100, -100, 2, 4, ... [La, inversión, en, investigación, básica, es,... [DET, NOUN, ADP, NOUN, ADJ, AUX, DET, NOUN, AD... [0, 239, 98649, 22, 31674, 124528, 198, 88, 46... [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... [-100, 0, 1, 2, 1, 3, 9, 0, 1, 2, 0, 1, 10, 0,... [Conviene, que, ahora, ,, en, plena, apoteosis... [VERB, SCONJ, ADV, PUNCT, ADP, ADJ, NOUN, ADP,... [0, 1657, 7772, 13, 41, 18451, 6, 4, 22, 31161... [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... [-100, 6, -100, -100, 7, 11, 8, -100, 2, 3, 1,... [Carlos, y, Fayna, se, enzarzan, en, una, bron... [PROPN, CCONJ, PROPN, PRON, VERB, ADP, DET, NO... [0, 24856, 113, 114162, 76, 40, 22, 6383, 5935... [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... [-100, 4, 10, 4, -100, 5, 6, -100, -100, 2, 0,... 14304
14305 rows × 5 columns ... ... ... ... ... Next, we implement our model class that uses a transformer encoder as a transducer. Because our downstream task consists of POS tagging for Spanish, we need a transformer model that was pre-trained on Spanish texts. Here, we chose XLM-RoBERTa (Conneau et al., 2019) as our base model. XLM-Roberta is a RoBERTa model (Liu et al., 2019) that has 13.3 Part-of-speech Tagging 193 been pre-trained on 100 different languages, including Spanish. Of note, XLM-RoBERTa does not require us to specify what language we are working on. Similar to BERT, it only requires the input_ids. We discussed in the text classification section that Hugging Face provides implementations for text classification models. This is also true for token classification problems that require transducers. In particular, the XLMRobertaForTokenClassification model provided by Hugging Face does everything needed for this task. However, as before, here we implement it ourselves for pedagogical purposes. The model architecture is similar to our text classification example. It consists of a transformer, a dropout layer, and a linear layer used for classification. The number of labels which determines the output dimension of the linear layer is equal to the number of POS tags. The primary difference between the text classification example and this token classification model is that with the former we produced one label for each text document, while here we produce one label for each token in the input text. Specifically, in our text classification model the output shape was two-dimensional: (batch_size, num_labels). Here, our output is three-dimensional: (batch_size, sequence_size, num_labels). So, while much of the forward method is familiar to us, when we are required to compute the loss, we need to reshape the logits and the labels before passing them to the CrossEntropyLoss, since it expects two-dimensional input and one-dimensional labels. For this purpose, we use the view() method to reshape the tensors. This method is efficient because it does not copy the tensor data. Instead it provides a new view of the same data that behaves like a tensor with a different shape.7 As mentioned before, the number of arguments passed to this method determines the number of dimensions in the output tensor. Here, for our logits, we pass two arguments and so our new view will have two dimensions. The second will be the size of self.num_labels, while the first (because we pass -1) will be inferred based on the original tensor shape. For our labels, on the other hand, we only provide one argument and so the new view will have one dimension, inferred by the original shape: Next, we instantiate our model using the XLM-RoBERTa configuration: 7 Similar to NumPy, PyTorch tensors are represented internally by a block of memory storing the data and some metadata that describes how the data should be read, e.g., type, shape, and stride. The view() method returns a new tensor with new metadata but pointing to the same memory block. 194 Using Transformers with the Hugging Face Library As before, we create a TrainingArguments object and define a compute_metrics function in order to customize a Trainer: While the TrainingArguments code has no substantial changes, we need to adjust the compute_metrics function to account for the fact that our model uses sub-word tokens rather than complete words. Recall that only the first sub-word token per word was assigned a POS tag. 
The compute_metrics function discards the labels corresponding to the ignored sub-word tokens and evaluates the rest, returning the accuracy score.

The last component required for the Trainer is a collator. Since this time we are batching sequences of tokens, we need a collator that can pad them dynamically when constructing the batches. The transformers library includes a DataCollatorForTokenClassification specifically for this purpose. Once we have our collator and our trainer object, we can train our model.

Next, we evaluate our newly trained model on the test dataset. For this purpose, we preprocess the data in the same way we did for the train and validation partitions. Then, for convenience, we use the trainer's predict() method to generate the predicted logits using our model.

As before, we use scikit-learn's classification_report() function to display the results of the evaluation. This function expects two one-dimensional lists of labels, so we need to follow a similar approach to the one we employed for text classification. Note that output.label_ids and output.predictions are NumPy arrays rather than PyTorch tensors, so this time we use NumPy's reshape() method to reshape the arrays. This method is similar to PyTorch's view() method that we used previously, except that reshape() may copy the array's data in some situations. We discard the labels corresponding to ignored sub-word tokens, and then we print the classification report.

Our model based on XLM-RoBERTa achieves 99% accuracy. This is considerably better than the LSTM-based model developed in Chapter 11. In order to understand the differences between the two methods, we produce below a confusion matrix for the results of each model. Rows in the confusion matrix represent the true labels and columns represent the predicted labels; each cell (i, j) corresponds to the proportion of values with label i that were assigned the label j.8 For a perfect model, all cells on the diagonal would have value 1 and all other cells would have value 0. The code used to generate the confusion matrices is shown below. The confusion matrices for the LSTM and transformer are shown in Figure 13.1 and Figure 13.2, respectively.

8 This is the case because we used the normalize='true' parameter of the confusion_matrix() function.

Figure 13.1 Confusion matrix corresponding to the LSTM-based part-of-speech tagger developed in Chapter 11.

Figure 13.2 Confusion matrix corresponding to the transformer-based part-of-speech tagger.

The two confusion matrices highlight a couple of important observations. First, the transformer model is considerably better at predicting POS tags with infrequent support in the dataset. For example, the accuracy for predicting the SYM POS tag increased from 38% in the LSTM model to 95% in the transformer model! Equally impressive, the transformer improved the performance of tags that are extremely common and thus provide plenty of opportunity for both approaches to learn a good model. For example, the accuracy of tagging NOUN, the second most common POS tag in the dataset, increased from 96% in the LSTM model to 99% in the transformer model.

13.4 Summary

In this chapter we presented two applications driven by the encoder component of a transformer network. First, we used the transformer encoder as an acceptor and implemented a text classification application for English news.
Second, we used the encoder as a transducer to develop a Spanish part-of-speech tagger. Both tasks were implemented using pre-trained transformer models from the Hugging Face library. For both applications, the transformer-based methods considerably outperform all approaches introduced in the previous chapters, highlighting the value of the transformer architecture.
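To make the loss computation described in Section 13.3 concrete, the following minimal, self-contained sketch (with made-up tensor shapes; it is not part of the book's notebook) flattens the three-dimensional logits with view() and relies on CrossEntropyLoss ignoring positions labeled -100 by default:

import torch
from torch import nn

# hypothetical sizes: 2 sentences, 5 sub-word tokens each, 17 POS tags
batch_size, sequence_size, num_labels = 2, 5, 17

logits = torch.randn(batch_size, sequence_size, num_labels)
labels = torch.randint(0, num_labels, (batch_size, sequence_size))
labels[:, 0] = -100  # pretend the first position of each sentence is a special token

loss_fn = nn.CrossEntropyLoss()  # ignore_index defaults to -100
loss = loss_fn(
    logits.view(-1, num_labels),  # (batch_size * sequence_size, num_labels)
    labels.view(-1),              # (batch_size * sequence_size,)
)
print(loss.item())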
#!/usr/bin/env python
# coding: utf-8

# # Part-of-speech Tagging with Transformer Networks

# Some initialization:

# In[1]:

import random
import torch
import numpy as np
import pandas as pd
from tqdm.notebook import tqdm

# enable tqdm in pandas
tqdm.pandas()

# set to True to use the gpu (if there is one available)
use_gpu = True

# select device
device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu')
print(f'device: {device.type}')

# random seed
seed = 1234

# set random seed
if seed is not None:
    print(f'random seed: {seed}')
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)

# Read the words and POS tags from the Spanish dataset:

# In[2]:

from conllu import parse_incr

def read_tags(filename):
    data = {'words': [], 'tags': []}
    with open(filename) as f:
        for sent in tqdm(parse_incr(f)):
            words = [tok['form'] for tok in sent]
            tags = [tok['upos'] for tok in sent]
            data['words'].append(words)
            data['tags'].append(tags)
    return pd.DataFrame(data)

# In[3]:

train_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-train.conllup')
valid_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-dev.conllup')
test_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-test.conllup')

# In[4]:

tags = train_df['tags'].explode().unique()
index_to_tag = {i: t for i, t in enumerate(tags)}
tag_to_index = {t: i for i, t in enumerate(tags)}

# Create a HuggingFace `DatasetDict` object:

# In[5]:

from datasets import Dataset, DatasetDict

ds = DatasetDict()
ds['train'] = Dataset.from_pandas(train_df)
ds['validation'] = Dataset.from_pandas(valid_df)
ds['test'] = Dataset.from_pandas(test_df)
ds

# In[6]:

ds['train'].to_pandas()

# Now tokenize the texts and assign POS labels to the first token in each word:

# In[7]:

from transformers import AutoTokenizer

transformer_name = 'xlm-roberta-base'
tokenizer = AutoTokenizer.from_pretrained(transformer_name)

# In[8]:

x = ds['train'][0]
tokenized_input = tokenizer(x['words'], is_split_into_words=True)
tokens = tokenizer.convert_ids_to_tokens(tokenized_input['input_ids'])
word_ids = tokenized_input.word_ids()
pd.DataFrame([tokens, word_ids], index=['tokens', 'word ids'])

# In[9]:

# https://arxiv.org/pdf/1810.04805.pdf
# Section 5.3
# We use the representation of the first sub-token as the input to the
# token-level classifier over the NER label set.

# default value for CrossEntropyLoss ignore_index parameter
ignore_index = -100

def tokenize_and_align_labels(batch):
    labels = []
    # tokenize batch
    tokenized_inputs = tokenizer(
        batch['words'],
        truncation=True,
        is_split_into_words=True,
    )
    # iterate over batch elements
    for i, tags in enumerate(batch['tags']):
        label_ids = []
        previous_word_id = None
        # get word ids for current batch element
        word_ids = tokenized_inputs.word_ids(batch_index=i)
        # iterate over tokens in batch element
        for word_id in word_ids:
            if word_id is None or word_id == previous_word_id:
                # ignore if not a word or word id has already been seen
                label_ids.append(ignore_index)
            else:
                # get tag id for corresponding word
                tag_id = tag_to_index[tags[word_id]]
                label_ids.append(tag_id)
            # remember this word id
            previous_word_id = word_id
        # save label ids for current batch element
        labels.append(label_ids)
    # store labels together with the tokenizer output
    tokenized_inputs['labels'] = labels
    return tokenized_inputs

# In[10]:

train_ds = ds['train'].map(tokenize_and_align_labels, batched=True)
eval_ds = ds['validation'].map(tokenize_and_align_labels, batched=True)
train_ds.to_pandas()

# Create our transformer model:

# In[11]:

from torch import nn
from transformers.modeling_outputs import TokenClassifierOutput
from transformers.models.roberta.modeling_roberta import RobertaModel, RobertaPreTrainedModel

# https://github.com/huggingface/transformers/blob/65659a29cf5a079842e61a63d57fa24474288998/src/transformers/models/roberta/modeling_roberta.py#L1346

class XLMRobertaForTokenClassification(RobertaPreTrainedModel):
    def __init__(self, config):
        super().__init__(config)
        self.num_labels = config.num_labels
        self.roberta = RobertaModel(config, add_pooling_layer=False)
        self.dropout = nn.Dropout(config.hidden_dropout_prob)
        self.classifier = nn.Linear(config.hidden_size, config.num_labels)
        self.init_weights()

    def forward(self, input_ids=None, attention_mask=None,
                token_type_ids=None, labels=None, **kwargs):
        outputs = self.roberta(
            input_ids,
            attention_mask=attention_mask,
            token_type_ids=token_type_ids,
            **kwargs,
        )
        sequence_output = self.dropout(outputs[0])
        logits = self.classifier(sequence_output)
        loss = None
        if labels is not None:
            loss_fn = nn.CrossEntropyLoss()
            inputs = logits.view(-1, self.num_labels)
            targets = labels.view(-1)
            loss = loss_fn(inputs, targets)
        return TokenClassifierOutput(
            loss=loss,
            logits=logits,
            hidden_states=outputs.hidden_states,
            attentions=outputs.attentions,
        )

# In[12]:

from transformers import AutoConfig

config = AutoConfig.from_pretrained(
    transformer_name,
    num_labels=len(index_to_tag),
)
model = (
    XLMRobertaForTokenClassification
    .from_pretrained(transformer_name, config=config)
)

# Create the `Trainer` object and train:

# In[13]:

from transformers import TrainingArguments

num_epochs = 2
batch_size = 24
weight_decay = 0.01
model_name = f'{transformer_name}-finetuned-pos-es'

training_args = TrainingArguments(
    output_dir=model_name,
    log_level='error',
    num_train_epochs=num_epochs,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    evaluation_strategy='epoch',
    weight_decay=weight_decay,
)

# In[14]:

from sklearn.metrics import accuracy_score

def compute_metrics(eval_pred):
    # gold labels
    label_ids = eval_pred.label_ids
    # predictions
    pred_ids = np.argmax(eval_pred.predictions, axis=-1)
    # collect gold and predicted labels, ignoring ignore_index label
    y_true, y_pred = [], []
    batch_size, seq_len = pred_ids.shape
    for i in range(batch_size):
        for j in range(seq_len):
            if label_ids[i, j] != ignore_index:
                y_true.append(index_to_tag[label_ids[i][j]])
                y_pred.append(index_to_tag[pred_ids[i][j]])
    # return computed metrics
    return {'accuracy': accuracy_score(y_true, y_pred)}

# In[15]:

from transformers import Trainer
from transformers import DataCollatorForTokenClassification

data_collator = DataCollatorForTokenClassification(tokenizer)

trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=data_collator,
    compute_metrics=compute_metrics,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    tokenizer=tokenizer,
)

trainer.train()

# Evaluate on the test partition:

# In[16]:

test_ds = ds['test'].map(
    tokenize_and_align_labels,
    batched=True,
)
output = trainer.predict(test_ds)

# In[17]:

from sklearn.metrics import classification_report

num_labels = model.num_labels
label_ids = output.label_ids.reshape(-1)
predictions = output.predictions.reshape(-1, num_labels)
predictions = np.argmax(predictions, axis=-1)

mask = label_ids != ignore_index

y_true = label_ids[mask]
y_pred = predictions[mask]
target_names = tags[:-1]

report = classification_report(
    y_true, y_pred,
    target_names=target_names,
)
print(report)

# In[18]:

import matplotlib.pyplot as plt
from sklearn.metrics import ConfusionMatrixDisplay, confusion_matrix

cm = confusion_matrix(y_true, y_pred, normalize='true')

disp = ConfusionMatrixDisplay(
    confusion_matrix=cm,
    display_labels=target_names,
)
fig, ax = plt.subplots(figsize=(10, 10))
disp.plot(
    cmap='Blues',
    values_format='.2f',
    colorbar=False,
    ax=ax,
    xticks_rotation=45,
)
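As a usage sketch that is not part of the book's notebook, the fine-tuned tagger could be applied to a new, pre-tokenized Spanish sentence roughly as follows. The sentence is made up, and the snippet assumes that model, tokenizer, index_to_tag, and torch from the code above are still in scope:

model.eval()  # disable dropout for inference
# a made-up, pre-tokenized Spanish sentence
sentence = ['La', 'casa', 'es', 'azul', '.']

enc = tokenizer(sentence, is_split_into_words=True, return_tensors='pt').to(model.device)
with torch.no_grad():
    logits = model(**enc).logits  # shape: (1, num_tokens, num_labels)
pred_ids = logits.argmax(dim=-1)[0].tolist()

# keep only the prediction for the first sub-word token of each word
predicted_tags = []
previous_word_id = None
for word_id, pred_id in zip(enc.word_ids(batch_index=0), pred_ids):
    if word_id is not None and word_id != previous_word_id:
        predicted_tags.append(index_to_tag[pred_id])
    previous_word_id = word_id

print(list(zip(sentence, predicted_tags)))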
#!/usr/bin/env python
# coding: utf-8

# # Text Classification Using Transformer Networks (BERT)

# Some initialization:

# In[1]:

import random
import torch
import numpy as np
import pandas as pd
from tqdm.notebook import tqdm

# enable tqdm in pandas
tqdm.pandas()

# set to True to use the gpu (if there is one available)
use_gpu = True

# select device
device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu')
print(f'device: {device.type}')

# random seed
seed = 1234

# set random seed
if seed is not None:
    print(f'random seed: {seed}')
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)

# Read the train/dev/test datasets and create a HuggingFace `Dataset` object:

# In[2]:

def read_data(filename):
    # read csv file
    df = pd.read_csv(filename, header=None)
    # add column names
    df.columns = ['label', 'title', 'description']
    # make labels zero-based
    df['label'] -= 1
    # concatenate title and description, and remove backslashes
    df['text'] = df['title'] + " " + df['description']
    df['text'] = df['text'].str.replace('\\', ' ', regex=False)
    return df

# In[3]:

labels = open('data/ag_news_csv/classes.txt').read().splitlines()
train_df = read_data('data/ag_news_csv/train.csv')
test_df = read_data('data/ag_news_csv/test.csv')
train_df

# In[4]:

from sklearn.model_selection import train_test_split

train_df, eval_df = train_test_split(train_df, train_size=0.9)
train_df.reset_index(inplace=True, drop=True)
eval_df.reset_index(inplace=True, drop=True)

print(f'train rows: {len(train_df.index):,}')
print(f'eval rows: {len(eval_df.index):,}')
print(f'test rows: {len(test_df.index):,}')

# In[5]:

from datasets import Dataset, DatasetDict

ds = DatasetDict()
ds['train'] = Dataset.from_pandas(train_df)
ds['validation'] = Dataset.from_pandas(eval_df)
ds['test'] = Dataset.from_pandas(test_df)
ds

# Tokenize the texts:

# In[6]:

from transformers import AutoTokenizer

transformer_name = 'bert-base-cased'
tokenizer = AutoTokenizer.from_pretrained(transformer_name)

# In[7]:

def tokenize(examples):
    return tokenizer(examples['text'], truncation=True)

train_ds = ds['train'].map(
    tokenize,
    batched=True,
    remove_columns=['title', 'description', 'text'],
)
eval_ds = ds['validation'].map(
    tokenize,
    batched=True,
    remove_columns=['title', 'description', 'text'],
)
train_ds.to_pandas()

# Create the transformer model:

# In[8]:

from torch import nn
from transformers.modeling_outputs import SequenceClassifierOutput
from transformers.models.bert.modeling_bert import BertModel, BertPreTrainedModel

# https://github.com/huggingface/transformers/blob/65659a29cf5a079842e61a63d57fa24474288998/src/transformers/models/bert/modeling_bert.py#L1486

class BertForSequenceClassification(BertPreTrainedModel):
    def __init__(self, config):
        super().__init__(config)
        self.num_labels = config.num_labels
        self.bert = BertModel(config)
        self.dropout = nn.Dropout(config.hidden_dropout_prob)
        self.classifier = nn.Linear(config.hidden_size, config.num_labels)
        self.init_weights()

    def forward(self, input_ids=None, attention_mask=None,
                token_type_ids=None, labels=None, **kwargs):
        outputs = self.bert(
            input_ids,
            attention_mask=attention_mask,
            token_type_ids=token_type_ids,
            **kwargs,
        )
        cls_outputs = outputs.last_hidden_state[:, 0, :]
        cls_outputs = self.dropout(cls_outputs)
        logits = self.classifier(cls_outputs)
        loss = None
        if labels is not None:
            loss_fn = nn.CrossEntropyLoss()
            loss = loss_fn(logits, labels)
        return SequenceClassifierOutput(
            loss=loss,
            logits=logits,
            hidden_states=outputs.hidden_states,
            attentions=outputs.attentions,
        )

# In[9]:

from transformers import AutoConfig

config = AutoConfig.from_pretrained(
    transformer_name,
    num_labels=len(labels),
)
model = (
    BertForSequenceClassification
    .from_pretrained(transformer_name, config=config)
)

# Create the trainer object and train:

# In[10]:

from transformers import TrainingArguments

num_epochs = 2
batch_size = 24
weight_decay = 0.01
model_name = f'{transformer_name}-sequence-classification'

training_args = TrainingArguments(
    output_dir=model_name,
    log_level='error',
    num_train_epochs=num_epochs,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    evaluation_strategy='epoch',
    weight_decay=weight_decay,
)

# In[11]:

from sklearn.metrics import accuracy_score

def compute_metrics(eval_pred):
    y_true = eval_pred.label_ids
    y_pred = np.argmax(eval_pred.predictions, axis=-1)
    return {'accuracy': accuracy_score(y_true, y_pred)}

# In[12]:

from transformers import Trainer

trainer = Trainer(
    model=model,
    args=training_args,
    compute_metrics=compute_metrics,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    tokenizer=tokenizer,
)

# In[13]:

trainer.train()

# Evaluate on the test partition:

# In[14]:

test_ds = ds['test'].map(
    tokenize,
    batched=True,
    remove_columns=['title', 'description', 'text'],
)
test_ds.to_pandas()

# In[15]:

output = trainer.predict(test_ds)
output

# In[16]:

from sklearn.metrics import classification_report

y_true = output.label_ids
y_pred = np.argmax(output.predictions, axis=-1)
target_names = labels
print(classification_report(y_true, y_pred, target_names=target_names))
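As a usage sketch that is not part of the book's notebook, the fine-tuned classifier could be applied to a new headline roughly as follows. The headline is made up, and the snippet assumes that model, tokenizer, labels, and torch from the code above are still in scope:

model.eval()  # disable dropout for inference
# a made-up news headline
headline = "Stocks rally as quarterly earnings beat expectations"

enc = tokenizer(headline, truncation=True, return_tensors='pt').to(model.device)
with torch.no_grad():
    logits = model(**enc).logits  # shape: (1, num_labels)
pred_id = logits.argmax(dim=-1).item()
print(labels[pred_id])  # one of the AG News class names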
13 Using Transformers with the Hugging Face Library One of the key advantages of transformer networks is the ability to take a model that was pre-trained over vast quantities of text and fine-tune it for the task at hand. Intuitively, this strategy allows transformer networks to achieve higher performance on smaller datasets by relying on statistics acquired at scale in an unsupervised way (e.g., through the masked language model training objective). To this end, in this chapter we will use the Hugging Face library,1 which has a rich repository of datasets and pre-trained models, as well as helper methods and classes that make it easy to target downstream tasks. Using pre-trained transformer encoders, we will implement the two tasks that served as use cases in the previous chapters: text classification and part-of-speech tagging. 13.1 Tokenization As discussed in Section 12.2, transformers rely on sub-word tokens. This strategy provides an elegant way to handle unknown and low-frequency words by splitting them into more frequent sub-word parts. At the same time, these tokenization algorithms maintain frequently-occurring words as standalone tokens, so the signal for these common words is preserved. To make this more concrete, we show below how tokenizers are employed in the Hugging Face library. First, we load the tokenizer that corresponds to the transformer we intend to use. This is important for two reasons: (a) different transformers rely on different tokenization algorithms, and (b) even for the ones that use the same algorithm, their tokenizer vocabularies are likely to be different if they were pre-trained 1 https://huggingface.co/docs/transformers/main/en/index 186 13.1 Tokenization 187 on different corpora. Next, we tokenize some example text and display some of the resulting attributes with pandas: As shown above, the tokenizer splits the text into tokens, and adds two special tokens: the [CLS] token at the beginning of the token sequence, and the [SEP] token at the end. Also, note that the ## characters at the beginning of some tokens indicate that they are not standalone words, but rather sub-words that continue a word previously started. For example, the output above shows that the word walrus was split into three sub-words. Note, however, that this is specific to this particular tokenization algorithm, and other tokenizers may indicate word continuation in different ways. A better way to detect word continuations is using the word_ids() method of the tokenizer output, which assigns the same id to all tokens part of the same word. For example, all fragments of the word walrus share the word id 3. Lastly, the input_ids attribute provides the token ids used internally by the transformer to map tokens to embeddings. To briefly demonstrate how different tokenizers produce different outputs, here is the same text tokenized with the tokenizer corresponding to xlm-roberta-base: Note how the [CLS] and [SEP] special tokens have been replaced with <s> and </s> respectively. Also, spaces have been replaced with the Unicode character (U+2581, LOWER ONE EIGHTH BLOCK). Tokens that start with that character are considered word beginnings and the rest are word continuations, as can be confirmed by looking at the word ids. This illustrates the importance of using the tokenizer that corresponds to the transformer you intend to use. 012345678 tokens [CLS] I am the wa ##l ##rus . 
[SEP] word_ids None 0 1 2 3 3 3 4 None input_ids 101 146 1821 1103 20049 1233 6208 119 102 01234567 tokens <s> ▁I ▁am ▁the ▁wal rus . </s > None 0 1 2 3 3 3 None 0 87 444 70 32973 6563 5 2 word_ids input_ids 188 Using Transformers with the Hugging Face Library 13.2 Text Classification For our text classification example, we will continue using the AG News dataset from previous chapters. We will load, preprocess, and split the dataset into pandas dataframes in the same way as before. Now however, rather than continuing with pandas, we will create a Hugging Face dataset from the dataframes. Hugging Face datasets are convenient because of their built-in support of batching, efficient data transformations, and caching. In particular, we convert each dataframe into a Hugging Face dataset. The various datasets are managed with a DatasetDict. Note that this is the same data structure seen when downloading a Hugging Face dataset from their hub.2 The keys in this dictionary are usually train, validation, and test:3 Once our dataset is loaded, we load a tokenizer. Different pre-trained models are tokenized differently, and it is important to select the tokenizer that corresponds to the model we will use so that the inputs are consistent with model expectations. In our example, we will use the bert-base-cased pre-trained model and tokenizer: Datasets have a map() method that transforms the dataset by applying a function to each example. The method returns a new dataset with the transformation applied. We use the map() method to tokenize our dataset. To this end, we define a function that tokenizes an example using the tokenizer we loaded previously. Note that tokenizers support many options that you may need depending on your situation. However, since this is a simple scenario, all we need to do is provide the text to tokenize and specify how to handle texts that exceed the maximum number of tokens permitted by the pre-trained model. Here we have our tokenizer truncate any inputs that are too long by specifying the truncation=True parameter. The output of this function will be added to the new dataset as extra columns. Further, we also want to remove some of the columns that are no longer needed, simplifying subsequent steps. For this, we use the remove_columns argument, listing the columns that we want to discard. Additionally, the dataset’s map() method can batch the dataset; we enable this option with the batched=True argument: 2 https://huggingface.co/datasets
3 These correspond to the more common terms train, development, and test we have used throughout the book so far. In this chapter we use the Hugging Face naming conventions for consistency. 13.2 Text Classification 189 label 03 10 20 32 40 ... ... . 107995  0 
 . 107996  0 
 . 107997  0 
 . 107998  0 
 . 107999  3 
 input_ids [101, 3270, 11906, 1522, 1146, 7106, 1111, 251... [101, 158, 119, 156, 119, 12068, 5084, 1116, 9... [101, 7270, 118, 2733, 1383, 1111, 12448, 7430... [101, 6096, 117, 10378, 3969, 5977, 1111, 8988... [101, 19569, 5480, 10582, 2087, 1867, 158, 119... [101, 1130, 139, 24683, 131, 21107, 2050, 1739... token_type_ids attention_mask [101, 22087, 8223, 1611, 1106, 4417, [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5572, 324... 0, 0, 0, ... [0,0,0,0,0,0,0,0, [0,0,0,0,0,0,0,0, [0,0,0,0,0,0,0,0, [0,0,0,0,0,0,0,0, [0,0,0,0,0,0,0,0, [0,0,0,0,0,0,0,0, 0,0,0,0, 0,0,0,... 0,0,0,0, 0,0,0,... 0,0,0,0, 0,0,0,... 0,0,0,0, 0,0,0,... 0,0,0,0, 0,0,0,... 0,0,0,0, 0,0,0,... [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... ... ... [101, 16409, 118, 16587, 159, 4064, [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1106, 1564... 0, 0, 0, ... 108000 rows × 4 columns [101, 4222, 11404, 1174, 117, 1476, [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1130, 2696... 0, 0, 0, ... [101, 11560, 3881, 108, 3614, 132, [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3498, 2944,... 0, 0, 0, ... [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... ... Next, we implement a classifier for our task. Hugging Face provides a
variety of models corresponding to several types of downstream tasks. However, for pedagogical purposes, we implement one from scratch. In particular, our model class inherits from BertPreTrainedModel, which
provides several useful methods such as init_weights() and from_pretrained() methods, which we will use later. The model constructor takes a config- uration object as its only parameter. Configuration objects contain all the hyper-parameters used by the corresponding pre-trained models. We will show later how the configuration model is retrieved and customized. Models that implement specific downstream tasks are usually composed of a pre-trained model (sometimes referred as the body), and one or more task-specific layers (usually referred as the head). Here, we initialize a BertModel using the provided configuration, as well as a dropout layer and a task-specific linear layer used for classifying the Bert output. Each of these layers is initialized by calling the init_weights() method inherited from BertPreTrainedModel. The forward() method, which implements the task-specific forward pass, takes as arguments the outputs of the tokenizer, and, optionally, the gold labels corresponding to the input data points. Our implementation of the forward pass sends the input tokens to the Bert model to produce the contextualized representations for all tokens. This output has several components, including the last_hidden_state which con- 190 Using Transformers with the Hugging Face Library tains the final hidden-state embedding for each token. For our task, we will represent the whole sequence using the embedding for the [CLS] token that occurs at the start of each example. We retrieve it by selecting the first element of each output sequence in the batch (i.e., last_hidden_state[:, 0, :]). As in the previous chapters, we apply dropout to our sequence representation, and then pass it through our linear classification layer. If gold labels are provided (i.e., we are training), we now compute the loss using the cross-entropy loss. The output of the forward pass is wrapped in a Hugging Face SequenceClassifierOutput object4 and returned: Next we load the configuration of the pre-trained model and instantiate our model. The AutoConfig class can load the configuration for any pre-trained model, retrieving it from Hugging Face if needed. Then we use the configuration to instantiate our model using the from_pretrained() method. With this call, the pre-trained model will be loaded, which includes downloading if necessary: Hugging Face provides a Trainer class that greatly simplifies the training process. This class not only implements the training loop we have been using in the previous chapters, but also handles other useful steps such as saving checkpoints (i.e., intermediate models after a number of mini-batches have been processed during training), and tracking custom measures about model performance. In order to create a Trainer, we first need to specify its configuration in a TrainingArguments object. In ours, we specify certain hyper parameters such as batch size, weight decay, and number of epochs, as well as where to store model checkpoints: The TrainingArguments class provides a wide variety of arguments that we have not shown.5 These arguments usually have appropriate default values, so it is often fine to omit them. For example, we did not use the label_names argument, which specifies the key that corresponds to the training labels. When omitted, it defaults to keys such as label, labels and label_ids.6 In this chapter we used label. Note that we also specify how often we would like to see the perfor- . 4  Hugging Face utilizes a set of output objects to standardize model output for a given task. 
These objects typically include additional information, e.g., attention weights, which can be used for visualizing or debugging model behavior. 
 . 5  https://huggingface.co/docs/transformers/main/en/main_classes/trainer# transformers. TrainingArguments 
 6 In the case of extractive question answering (see Chapter 16), the start_positions and end_positions store the start/end positions of the correct answers. 13.3 Part-of-speech Tagging 191 mance of the current model (at the end of each epoch) with evaluation_strategy='epoch'. This means that after each epoch we print the current loss on the training
partition and on the evaluation dataset, if one is available. Additionally,
we can report custom metrics at this time. For this purpose, we use the compute_metrics parameter of the Trainer, which expects a function that receives a transformers. EvalPredictions object containing the label ids and the predicted logits. The expected return type is a dictionary whose keys correspond to different metrics, each of which will be displayed as a separate result column. Using the above TrainingArguments and compute_metrics function, we create our Trainer. Note that when you provide a tokenizer, the trainer will automatically pad the sequences in each batch. Also, the trainer will automatically use any GPU that is available, unless specifically disabled in the TrainingArguments. Training our model takes a single call to the train() method of the Trainer object. As specified in the our instance of TrainingArguments, the training and validation losses, as well as the accuracy, are reported every epoch. As in the other chapters, we can write custom code to obtain the model’s predictions on the test data. However, the Trainer class provides a predict() method that drastically simplifies this: As shown in the table above, this model achieves an accuracy of 95%, which is the highest performance we have achieved so far on this dataset. 13.3 Part-of-speech Tagging To showcase part-of-speech tagging using transformers, we continue with the Spanish section of the AnCora corpus introduced in Chapter 11. Recall that the dataset is stored in the CoNLL-U format. We load this format in the same way as before, but then we convert the loaded dataset into a Hugging Face DictDataset: Importantly, because the CoNLL-U dataset is already tokenized, we use the is_split_into_words=True tokenizer argument to ensure that the tokenizer respects the existing word boundaries during its sub-word tokenization. Further, while we want to predict one POS tag per word, Epoch Training Loss Validation Loss Accuracy 1 0.187800 0.172629 0.941667 2 0.104000 0.183001 0.946250 192 Using Transformers with the Hugging Face Library any given word may be split into smaller pieces by our tokenizer. Thus, we need to align the tokenizer output to the CoNLL-U words. The original BERT paper (Devlin et al., 2018) addresses this by only using the embedding corresponding to the first sub-token for each word. We follow the same approach for consistency. For the sub-words that do not correspond to the beginning of a word, we use a special value that indicates that we are not interested in their predictions. The CrossEntropyLoss has a parameter called ignore_index for this purpose. The default value for this parameter is −100, which we use as the label for the sub-words we wish to ignore during training: Next, we use this function to preprocess the train and validation folds in our DatasetDict: words [El, presidente, de, el, órgano, regulador, de... [Afirmó, que, sigue, el, criterio, europeo, y,... [Durante, la, presentación, de, el, libro, ", ... [Y, todas, las, miradas, convergen, en, la, lu... [Cambiar, las, formas, parece, de, rigor, ,, p... [Él, llega, a, tirar, la, sobre, la, cama, y, ... tags [DET, NOUN, ADP, DET, NOUN, ADJ, ADP, DET, PRO... [VERB, SCONJ, VERB, DET, NOUN, ADJ, CCONJ, SCO... [ADP, DET, NOUN, ADP, DET, NOUN, PUNCT, DET, P... [CCONJ, DET, DET, NOUN, VERB, ADP, DET, NOUN, ... [VERB, DET, NOUN, VERB, ADP, NOUN, PUNCT, CCON... [PRON, VERB, ADP, VERB, PRON, ADP, DET, NOUN, ... input_ids [0, 540, 9692, 8, 88, 103633, 15913, 1846, 8, ... [0, 62, 38949, 849, 41, 58453, 88, 166220, 620... 
Next, we use this function to preprocess the train and validation folds in our DatasetDict.
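Building on the sketch above (the fold names train and validation follow the DatasetDict conventions used earlier in the chapter), this step might look as follows:

train_ds = ds['train'].map(
    lambda batch: tokenize_and_align_labels(batch, tag2id),
    batched=True,
)
eval_ds = ds['validation'].map(
    lambda batch: tokenize_and_align_labels(batch, tag2id),
    batched=True,
)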
The resulting dataset contains 14,305 rows and five columns: words, tags, input_ids, attention_mask, and labels, with −100 marking the sub-word positions to be ignored.

Next, we implement our model class that uses a transformer encoder as a transducer. Because our downstream task consists of POS tagging for Spanish, we need a transformer model that was pre-trained on Spanish texts. Here, we chose XLM-RoBERTa (Conneau et al., 2019) as our base model. XLM-RoBERTa is a RoBERTa model (Liu et al., 2019) that has been pre-trained on 100 different languages, including Spanish. Of note, XLM-RoBERTa does not require us to specify what language we are working on. Similar to BERT, it only requires the input_ids. We discussed in the text classification section that Hugging Face provides implementations for text classification models. This is also true for token classification problems that require transducers. In particular, the XLMRobertaForTokenClassification model provided by Hugging Face does everything needed for this task. However, as before, here we implement it ourselves for pedagogical purposes. The model architecture is similar to our text classification example. It consists of a transformer, a dropout layer, and a linear layer used for classification. The number of labels, which determines the output dimension of the linear layer, is equal to the number of POS tags. The primary difference between the text classification example and this token classification model is that with the former we produced one label for each text document, while here we produce one label for each token in the input text. Specifically, in our text classification model the output shape was two-dimensional: (batch_size, num_labels). Here, our output is three-dimensional: (batch_size, sequence_size, num_labels). So, while much of the forward method is familiar to us, when we are required to compute the loss, we need to reshape the logits and the labels before passing them to the CrossEntropyLoss, since it expects two-dimensional input and one-dimensional labels. For this purpose, we use the view() method to reshape the tensors. This method is efficient because it does not copy the tensor data. Instead it provides a new view of the same data that behaves like a tensor with a different shape.7 As mentioned before, the number of arguments passed to this method determines the number of dimensions in the output tensor. Here, for our logits, we pass two arguments, so our new view will have two dimensions: the second will be the size of self.num_labels, while the first (because we pass -1) will be inferred based on the original tensor shape. For our labels, on the other hand, we only provide one argument, so the new view will have one dimension, inferred from the original shape. Next, we instantiate our model using the XLM-RoBERTa configuration.

7 Similar to NumPy, PyTorch tensors are represented internally by a block of memory storing the data and some metadata that describes how the data should be read, e.g., type, shape, and stride. The view() method returns a new tensor with new metadata but pointing to the same memory block.

As before, we create a TrainingArguments object and define a compute_metrics function in order to customize a Trainer. While the TrainingArguments code has no substantial changes, we need to adjust the compute_metrics function to account for the fact that our model uses sub-word tokens rather than complete words. Recall that only the first sub-word token per word was assigned a POS tag.
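A minimal sketch of such a metric function, assuming the predictions and label ids arrive as NumPy arrays (the exact implementation in the chapter may differ), is:

import numpy as np
from sklearn.metrics import accuracy_score

def compute_metrics(eval_pred):
    # predictions: (batch_size, sequence_size, num_labels); label_ids: (batch_size, sequence_size)
    y_pred = np.argmax(eval_pred.predictions, axis=-1).reshape(-1)
    y_true = eval_pred.label_ids.reshape(-1)
    # keep only positions whose label is not the ignore_index (-100)
    mask = y_true != -100
    return {'accuracy': accuracy_score(y_true[mask], y_pred[mask])}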
This function discards the labels corresponding to the ignored sub-word tokens and evaluates the rest, returning the accuracy score. The last component required for the Trainer is a collator. Since this time we are batching sequences of tokens, we need a collator that can pad them dynamically when constructing the batches. The transformers library includes a DataCollatorForTokenClassification specifically for this purpose. Once we have our collator and our trainer object, we can train our model. Next, we evaluate our newly trained model on the test dataset. For this purpose, we preprocess the data in the same way we did for the train and validation partitions. Then, for convenience, we use the trainer's predict() method to generate the predicted logits with our model. As before, we use scikit-learn's classification_report() function to display the results of the evaluation. This function expects two one-dimensional lists of labels, so we need to follow a similar approach to the one we employed for text classification. Note that output.label_ids and output.predictions are NumPy arrays rather than PyTorch tensors. This time we use NumPy's reshape() method to reshape the arrays. This method is similar to PyTorch's view() method that we used previously, except that reshape() may copy the array's data in some situations. We discard the labels corresponding to ignored sub-word tokens, and then we print the classification report.

Our model based on XLM-RoBERTa achieves 99% accuracy. This is considerably better than the LSTM-based model developed in Chapter 11. In order to understand the differences between the two methods, we produce a confusion matrix for the results of each model. Rows in the confusion matrix represent the true labels and columns represent the predicted labels. Each cell (i, j) corresponds to the proportion of values with true label i that were assigned the predicted label j.8 For a perfect model, all cells on the diagonal would have value 1 and all other cells would have value 0. The confusion matrices for the LSTM and the transformer are shown in Figure 13.1 and Figure 13.2, respectively.

8 This is the case because we used the normalize='true' parameter of the confusion_matrix() function.

Figure 13.1 Confusion matrix corresponding to the LSTM-based part-of-speech tagger developed in Chapter 11.

Figure 13.2 Confusion matrix corresponding to the transformer-based part-of-speech tagger.

The two confusion matrices highlight a couple of important observations. First, the transformer model is considerably better at predicting POS tags with infrequent support in the dataset. For example, the accuracy for predicting the SYM POS tag increased from 38% in the LSTM model to 95% in the transformer model! Equally impressive, the transformer also improved performance on tags that are extremely common and thus provide plenty of opportunity for both approaches to learn a good model. For example, the accuracy of tagging NOUN, the second most common POS tag in the dataset, increased from 96% in the LSTM model to 99% in the transformer model.

13.4 Summary

In this chapter we presented two applications driven by the encoder component of a transformer network. First, we used the transformer encoder as an acceptor and implemented a text classification application for English news.
Second, we used the encoder as a transducer to develop a Spanish part-of-speech tagger. Both tasks were implemented using pre-trained transformer models from the Hugging Face library. For both applications, the transformer-based methods considerably outperform all approaches introduced in the previous chapters, highlighting the value of the transformer architecture.
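The notebook accompanying the text classification example above is reproduced below.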
#!/usr/bin/env python
# coding: utf-8

# # Text Classification Using Transformer Networks (BERT)

# Some initialization:

# In[1]:

import random
import torch
import numpy as np
import pandas as pd
from tqdm.notebook import tqdm

# enable tqdm in pandas
tqdm.pandas()

# set to True to use the gpu (if there is one available)
use_gpu = True

# select device
device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu')
print(f'device: {device.type}')

# random seed
seed = 1234

# set random seed
if seed is not None:
    print(f'random seed: {seed}')
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)

# Read the train/dev/test datasets and create a HuggingFace `Dataset` object:

# In[2]:

def read_data(filename):
    # read csv file
    df = pd.read_csv(filename, header=None)
    # add column names
    df.columns = ['label', 'title', 'description']
    # make labels zero-based
    df['label'] -= 1
    # concatenate title and description, and remove backslashes
    df['text'] = df['title'] + " " + df['description']
    df['text'] = df['text'].str.replace('\\', ' ', regex=False)
    return df

# In[3]:

labels = open('data/ag_news_csv/classes.txt').read().splitlines()
train_df = read_data('data/ag_news_csv/train.csv')
test_df = read_data('data/ag_news_csv/test.csv')
train_df

# In[4]:

from sklearn.model_selection import train_test_split

train_df, eval_df = train_test_split(train_df, train_size=0.9)
train_df.reset_index(inplace=True, drop=True)
eval_df.reset_index(inplace=True, drop=True)
print(f'train rows: {len(train_df.index):,}')
print(f'eval rows: {len(eval_df.index):,}')
print(f'test rows: {len(test_df.index):,}')

# In[5]:

from datasets import Dataset, DatasetDict

ds = DatasetDict()
ds['train'] = Dataset.from_pandas(train_df)
ds['validation'] = Dataset.from_pandas(eval_df)
ds['test'] = Dataset.from_pandas(test_df)
ds

# Tokenize the texts:

# In[6]:

from transformers import AutoTokenizer

transformer_name = 'bert-base-cased'
tokenizer = AutoTokenizer.from_pretrained(transformer_name)

# In[7]:

def tokenize(examples):
    return tokenizer(examples['text'], truncation=True)

train_ds = ds['train'].map(
    tokenize,
    batched=True,
    remove_columns=['title', 'description', 'text'],
)
eval_ds = ds['validation'].map(
    tokenize,
    batched=True,
    remove_columns=['title', 'description', 'text'],
)
train_ds.to_pandas()

# Create the transformer model:

# In[8]:

from torch import nn
from transformers.modeling_outputs import SequenceClassifierOutput
from transformers.models.bert.modeling_bert import BertModel, BertPreTrainedModel

# https://github.com/huggingface/transformers/blob/65659a29cf5a079842e61a63d57fa24474288998/src/transformers/models/bert/modeling_bert.py#L1486
class BertForSequenceClassification(BertPreTrainedModel):
    def __init__(self, config):
        super().__init__(config)
        self.num_labels = config.num_labels
        self.bert = BertModel(config)
        self.dropout = nn.Dropout(config.hidden_dropout_prob)
        self.classifier = nn.Linear(config.hidden_size, config.num_labels)
        self.init_weights()

    def forward(self, input_ids=None, attention_mask=None, token_type_ids=None, labels=None, **kwargs):
        outputs = self.bert(
            input_ids,
            attention_mask=attention_mask,
            token_type_ids=token_type_ids,
            **kwargs,
        )
        cls_outputs = outputs.last_hidden_state[:, 0, :]
        cls_outputs = self.dropout(cls_outputs)
        logits = self.classifier(cls_outputs)
        loss = None
        if labels is not None:
            loss_fn = nn.CrossEntropyLoss()
            loss = loss_fn(logits, labels)
        return SequenceClassifierOutput(
            loss=loss,
            logits=logits,
            hidden_states=outputs.hidden_states,
            attentions=outputs.attentions,
        )

# In[9]:

from transformers import AutoConfig

config = AutoConfig.from_pretrained(
    transformer_name,
    num_labels=len(labels),
)
model = (
    BertForSequenceClassification
    .from_pretrained(transformer_name, config=config)
)

# Create the trainer object and train:

# In[10]:

from transformers import TrainingArguments

num_epochs = 2
batch_size = 24
weight_decay = 0.01
model_name = f'{transformer_name}-sequence-classification'

training_args = TrainingArguments(
    output_dir=model_name,
    log_level='error',
    num_train_epochs=num_epochs,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    evaluation_strategy='epoch',
    weight_decay=weight_decay,
)

# In[11]:

from sklearn.metrics import accuracy_score

def compute_metrics(eval_pred):
    y_true = eval_pred.label_ids
    y_pred = np.argmax(eval_pred.predictions, axis=-1)
    return {'accuracy': accuracy_score(y_true, y_pred)}

# In[12]:

from transformers import Trainer

trainer = Trainer(
    model=model,
    args=training_args,
    compute_metrics=compute_metrics,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    tokenizer=tokenizer,
)

# In[13]:

trainer.train()

# Evaluate on the test partition:

# In[14]:

test_ds = ds['test'].map(
    tokenize,
    batched=True,
    remove_columns=['title', 'description', 'text'],
)
test_ds.to_pandas()

# In[15]:

output = trainer.predict(test_ds)
output

# In[16]:

from sklearn.metrics import classification_report

y_true = output.label_ids
y_pred = np.argmax(output.predictions, axis=-1)
target_names = labels
print(classification_report(y_true, y_pred, target_names=target_names))
13 Using Transformers with the Hugging Face Library One of the key advantages of transformer networks is the ability to take a model that was pre-trained over vast quantities of text and fine-tune it for the task at hand. Intuitively, this strategy allows transformer networks to achieve higher performance on smaller datasets by relying on statistics acquired at scale in an unsupervised way (e.g., through the masked language model training objective). To this end, in this chapter we will use the Hugging Face library,1 which has a rich repository of datasets and pre-trained models, as well as helper methods and classes that make it easy to target downstream tasks. Using pre-trained transformer encoders, we will implement the two tasks that served as use cases in the previous chapters: text classification and part-of-speech tagging. 13.1 Tokenization As discussed in Section 12.2, transformers rely on sub-word tokens. This strategy provides an elegant way to handle unknown and low-frequency words by splitting them into more frequent sub-word parts. At the same time, these tokenization algorithms maintain frequently-occurring words as standalone tokens, so the signal for these common words is preserved. To make this more concrete, we show below how tokenizers are employed in the Hugging Face library. First, we load the tokenizer that corresponds to the transformer we intend to use. This is important for two reasons: (a) different transformers rely on different tokenization algorithms, and (b) even for the ones that use the same algorithm, their tokenizer vocabularies are likely to be different if they were pre-trained 1 https://huggingface.co/docs/transformers/main/en/index 186 13.1 Tokenization 187 on different corpora. Next, we tokenize some example text and display some of the resulting attributes with pandas: As shown above, the tokenizer splits the text into tokens, and adds two special tokens: the [CLS] token at the beginning of the token sequence, and the [SEP] token at the end. Also, note that the ## characters at the beginning of some tokens indicate that they are not standalone words, but rather sub-words that continue a word previously started. For example, the output above shows that the word walrus was split into three sub-words. Note, however, that this is specific to this particular tokenization algorithm, and other tokenizers may indicate word continuation in different ways. A better way to detect word continuations is using the word_ids() method of the tokenizer output, which assigns the same id to all tokens part of the same word. For example, all fragments of the word walrus share the word id 3. Lastly, the input_ids attribute provides the token ids used internally by the transformer to map tokens to embeddings. To briefly demonstrate how different tokenizers produce different outputs, here is the same text tokenized with the tokenizer corresponding to xlm-roberta-base: Note how the [CLS] and [SEP] special tokens have been replaced with <s> and </s> respectively. Also, spaces have been replaced with the Unicode character (U+2581, LOWER ONE EIGHTH BLOCK). Tokens that start with that character are considered word beginnings and the rest are word continuations, as can be confirmed by looking at the word ids. This illustrates the importance of using the tokenizer that corresponds to the transformer you intend to use. 012345678 tokens [CLS] I am the wa ##l ##rus . 
[SEP] word_ids None 0 1 2 3 3 3 4 None input_ids 101 146 1821 1103 20049 1233 6208 119 102 01234567 tokens <s> ▁I ▁am ▁the ▁wal rus . </s > None 0 1 2 3 3 3 None 0 87 444 70 32973 6563 5 2 word_ids input_ids 188 Using Transformers with the Hugging Face Library 13.2 Text Classification For our text classification example, we will continue using the AG News dataset from previous chapters. We will load, preprocess, and split the dataset into pandas dataframes in the same way as before. Now however, rather than continuing with pandas, we will create a Hugging Face dataset from the dataframes. Hugging Face datasets are convenient because of their built-in support of batching, efficient data transformations, and caching. In particular, we convert each dataframe into a Hugging Face dataset. The various datasets are managed with a DatasetDict. Note that this is the same data structure seen when downloading a Hugging Face dataset from their hub.2 The keys in this dictionary are usually train, validation, and test:3 Once our dataset is loaded, we load a tokenizer. Different pre-trained models are tokenized differently, and it is important to select the tokenizer that corresponds to the model we will use so that the inputs are consistent with model expectations. In our example, we will use the bert-base-cased pre-trained model and tokenizer: Datasets have a map() method that transforms the dataset by applying a function to each example. The method returns a new dataset with the transformation applied. We use the map() method to tokenize our dataset. To this end, we define a function that tokenizes an example using the tokenizer we loaded previously. Note that tokenizers support many options that you may need depending on your situation. However, since this is a simple scenario, all we need to do is provide the text to tokenize and specify how to handle texts that exceed the maximum number of tokens permitted by the pre-trained model. Here we have our tokenizer truncate any inputs that are too long by specifying the truncation=True parameter. The output of this function will be added to the new dataset as extra columns. Further, we also want to remove some of the columns that are no longer needed, simplifying subsequent steps. For this, we use the remove_columns argument, listing the columns that we want to discard. Additionally, the dataset’s map() method can batch the dataset; we enable this option with the batched=True argument: 2 https://huggingface.co/datasets
3 These correspond to the more common terms train, development, and test we have used throughout the book so far. In this chapter we use the Hugging Face naming conventions for consistency. 13.2 Text Classification 189 label 03 10 20 32 40 ... ... . 107995  0 
 . 107996  0 
 . 107997  0 
 . 107998  0 
 . 107999  3 
 input_ids [101, 3270, 11906, 1522, 1146, 7106, 1111, 251... [101, 158, 119, 156, 119, 12068, 5084, 1116, 9... [101, 7270, 118, 2733, 1383, 1111, 12448, 7430... [101, 6096, 117, 10378, 3969, 5977, 1111, 8988... [101, 19569, 5480, 10582, 2087, 1867, 158, 119... [101, 1130, 139, 24683, 131, 21107, 2050, 1739... token_type_ids attention_mask [101, 22087, 8223, 1611, 1106, 4417, [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5572, 324... 0, 0, 0, ... [0,0,0,0,0,0,0,0, [0,0,0,0,0,0,0,0, [0,0,0,0,0,0,0,0, [0,0,0,0,0,0,0,0, [0,0,0,0,0,0,0,0, [0,0,0,0,0,0,0,0, 0,0,0,0, 0,0,0,... 0,0,0,0, 0,0,0,... 0,0,0,0, 0,0,0,... 0,0,0,0, 0,0,0,... 0,0,0,0, 0,0,0,... 0,0,0,0, 0,0,0,... [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... ... ... [101, 16409, 118, 16587, 159, 4064, [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1106, 1564... 0, 0, 0, ... 108000 rows × 4 columns [101, 4222, 11404, 1174, 117, 1476, [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1130, 2696... 0, 0, 0, ... [101, 11560, 3881, 108, 3614, 132, [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3498, 2944,... 0, 0, 0, ... [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... ... Next, we implement a classifier for our task. Hugging Face provides a
variety of models corresponding to several types of downstream tasks. However, for pedagogical purposes, we implement one from scratch. In particular, our model class inherits from BertPreTrainedModel, which
provides several useful methods such as init_weights() and from_pretrained() methods, which we will use later. The model constructor takes a config- uration object as its only parameter. Configuration objects contain all the hyper-parameters used by the corresponding pre-trained models. We will show later how the configuration model is retrieved and customized. Models that implement specific downstream tasks are usually composed of a pre-trained model (sometimes referred as the body), and one or more task-specific layers (usually referred as the head). Here, we initialize a BertModel using the provided configuration, as well as a dropout layer and a task-specific linear layer used for classifying the Bert output. Each of these layers is initialized by calling the init_weights() method inherited from BertPreTrainedModel. The forward() method, which implements the task-specific forward pass, takes as arguments the outputs of the tokenizer, and, optionally, the gold labels corresponding to the input data points. Our implementation of the forward pass sends the input tokens to the Bert model to produce the contextualized representations for all tokens. This output has several components, including the last_hidden_state which con- 190 Using Transformers with the Hugging Face Library tains the final hidden-state embedding for each token. For our task, we will represent the whole sequence using the embedding for the [CLS] token that occurs at the start of each example. We retrieve it by selecting the first element of each output sequence in the batch (i.e., last_hidden_state[:, 0, :]). As in the previous chapters, we apply dropout to our sequence representation, and then pass it through our linear classification layer. If gold labels are provided (i.e., we are training), we now compute the loss using the cross-entropy loss. The output of the forward pass is wrapped in a Hugging Face SequenceClassifierOutput object4 and returned: Next we load the configuration of the pre-trained model and instantiate our model. The AutoConfig class can load the configuration for any pre-trained model, retrieving it from Hugging Face if needed. Then we use the configuration to instantiate our model using the from_pretrained() method. With this call, the pre-trained model will be loaded, which includes downloading if necessary: Hugging Face provides a Trainer class that greatly simplifies the training process. This class not only implements the training loop we have been using in the previous chapters, but also handles other useful steps such as saving checkpoints (i.e., intermediate models after a number of mini-batches have been processed during training), and tracking custom measures about model performance. In order to create a Trainer, we first need to specify its configuration in a TrainingArguments object. In ours, we specify certain hyper parameters such as batch size, weight decay, and number of epochs, as well as where to store model checkpoints: The TrainingArguments class provides a wide variety of arguments that we have not shown.5 These arguments usually have appropriate default values, so it is often fine to omit them. For example, we did not use the label_names argument, which specifies the key that corresponds to the training labels. When omitted, it defaults to keys such as label, labels and label_ids.6 In this chapter we used label. Note that we also specify how often we would like to see the perfor- . 4  Hugging Face utilizes a set of output objects to standardize model output for a given task. 
These objects typically include additional information, e.g., attention weights, which can be used for visualizing or debugging model behavior. 
 . 5  https://huggingface.co/docs/transformers/main/en/main_classes/trainer# transformers. TrainingArguments 
 6 In the case of extractive question answering (see Chapter 16), the start_positions and end_positions store the start/end positions of the correct answers. 13.3 Part-of-speech Tagging 191 mance of the current model (at the end of each epoch) with evaluation_strategy='epoch'. This means that after each epoch we print the current loss on the training
partition and on the evaluation dataset, if one is available. Additionally,
we can report custom metrics at this time. For this purpose, we use the compute_metrics parameter of the Trainer, which expects a function that receives a transformers. EvalPredictions object containing the label ids and the predicted logits. The expected return type is a dictionary whose keys correspond to different metrics, each of which will be displayed as a separate result column. Using the above TrainingArguments and compute_metrics function, we create our Trainer. Note that when you provide a tokenizer, the trainer will automatically pad the sequences in each batch. Also, the trainer will automatically use any GPU that is available, unless specifically disabled in the TrainingArguments. Training our model takes a single call to the train() method of the Trainer object. As specified in the our instance of TrainingArguments, the training and validation losses, as well as the accuracy, are reported every epoch. As in the other chapters, we can write custom code to obtain the model’s predictions on the test data. However, the Trainer class provides a predict() method that drastically simplifies this: As shown in the table above, this model achieves an accuracy of 95%, which is the highest performance we have achieved so far on this dataset. 13.3 Part-of-speech Tagging To showcase part-of-speech tagging using transformers, we continue with the Spanish section of the AnCora corpus introduced in Chapter 11. Recall that the dataset is stored in the CoNLL-U format. We load this format in the same way as before, but then we convert the loaded dataset into a Hugging Face DictDataset: Importantly, because the CoNLL-U dataset is already tokenized, we use the is_split_into_words=True tokenizer argument to ensure that the tokenizer respects the existing word boundaries during its sub-word tokenization. Further, while we want to predict one POS tag per word, Epoch Training Loss Validation Loss Accuracy 1 0.187800 0.172629 0.941667 2 0.104000 0.183001 0.946250 192 Using Transformers with the Hugging Face Library any given word may be split into smaller pieces by our tokenizer. Thus, we need to align the tokenizer output to the CoNLL-U words. The original BERT paper (Devlin et al., 2018) addresses this by only using the embedding corresponding to the first sub-token for each word. We follow the same approach for consistency. For the sub-words that do not correspond to the beginning of a word, we use a special value that indicates that we are not interested in their predictions. The CrossEntropyLoss has a parameter called ignore_index for this purpose. The default value for this parameter is −100, which we use as the label for the sub-words we wish to ignore during training: Next, we use this function to preprocess the train and validation folds in our DatasetDict: words [El, presidente, de, el, órgano, regulador, de... [Afirmó, que, sigue, el, criterio, europeo, y,... [Durante, la, presentación, de, el, libro, ", ... [Y, todas, las, miradas, convergen, en, la, lu... [Cambiar, las, formas, parece, de, rigor, ,, p... [Él, llega, a, tirar, la, sobre, la, cama, y, ... tags [DET, NOUN, ADP, DET, NOUN, ADJ, ADP, DET, PRO... [VERB, SCONJ, VERB, DET, NOUN, ADJ, CCONJ, SCO... [ADP, DET, NOUN, ADP, DET, NOUN, PUNCT, DET, P... [CCONJ, DET, DET, NOUN, VERB, ADP, DET, NOUN, ... [VERB, DET, NOUN, VERB, ADP, NOUN, PUNCT, CCON... [PRON, VERB, ADP, VERB, PRON, ADP, DET, NOUN, ... input_ids [0, 540, 9692, 8, 88, 103633, 15913, 1846, 8, ... [0, 62, 38949, 849, 41, 58453, 88, 166220, 620... 
Next, we use this function to preprocess the train and validation folds in our DatasetDict.

[Viewed as a dataframe, the preprocessed training set has 14,305 rows with columns words, tags, input_ids, attention_mask, and labels; positions that do not start a new word carry the label −100.]

Next, we implement our model class that uses a transformer encoder as a transducer. Because our downstream task consists of POS tagging for Spanish, we need a transformer model that was pre-trained on Spanish texts. Here, we chose XLM-RoBERTa (Conneau et al., 2019) as our base model. XLM-RoBERTa is a RoBERTa model (Liu et al., 2019) that has been pre-trained on 100 different languages, including Spanish. Of note, XLM-RoBERTa does not require us to specify what language we are working on. Similar to BERT, it only requires the input_ids.

We discussed in the text classification section that Hugging Face provides implementations for text classification models. This is also true for token classification problems that require transducers. In particular, the XLMRobertaForTokenClassification model provided by Hugging Face does everything needed for this task. However, as before, here we implement it ourselves for pedagogical purposes. The model architecture is similar to our text classification example. It consists of a transformer, a dropout layer, and a linear layer used for classification. The number of labels, which determines the output dimension of the linear layer, is equal to the number of POS tags. The primary difference between the text classification example and this token classification model is that with the former we produced one label for each text document, while here we produce one label for each token in the input text. Specifically, in our text classification model the output shape was two-dimensional: (batch_size, num_labels). Here, our output is three-dimensional: (batch_size, sequence_size, num_labels). So, while much of the forward method is familiar to us, when we are required to compute the loss, we need to reshape the logits and the labels before passing them to the CrossEntropyLoss, since it expects two-dimensional input and one-dimensional labels. For this purpose, we use the view() method to reshape the tensors. This method is efficient because it does not copy the tensor data. Instead it provides a new view of the same data that behaves like a tensor with a different shape.7 As mentioned before, the number of arguments passed to this method determines the number of dimensions in the output tensor. Here, for our logits, we pass two arguments and so our new view will have two dimensions. The second will be the size of self.num_labels, while the first (because we pass -1) will be inferred based on the original tensor shape. For our labels, on the other hand, we only provide one argument and so the new view will have one dimension, inferred by the original shape. Next, we instantiate our model using the XLM-RoBERTa configuration.

7 Similar to NumPy, PyTorch tensors are represented internally by a block of memory storing the data and some metadata that describes how the data should be read, e.g., type, shape, and stride. The view() method returns a new tensor with new metadata but pointing to the same memory block.

As before, we create a TrainingArguments object and define a compute_metrics function in order to customize a Trainer. While the TrainingArguments code has no substantial changes, we need to adjust the compute_metrics function to account for the fact that our model uses sub-word tokens rather than complete words. Recall that only the first sub-word token per word was assigned a POS tag.
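One way to write this metric function, closely following the accompanying notebook, is sketched below; it relies on the ignore_index value and the index_to_tag mapping defined when the data was prepared.

import numpy as np
from sklearn.metrics import accuracy_score

def compute_metrics(eval_pred):
    # gold label ids and predicted tag ids, one per sub-word token
    label_ids = eval_pred.label_ids
    pred_ids = np.argmax(eval_pred.predictions, axis=-1)
    y_true, y_pred = [], []
    batch_size, seq_len = pred_ids.shape
    for i in range(batch_size):
        for j in range(seq_len):
            # skip positions labeled with ignore_index
            # (padding and continuation sub-words)
            if label_ids[i, j] != ignore_index:
                y_true.append(index_to_tag[label_ids[i][j]])
                y_pred.append(index_to_tag[pred_ids[i][j]])
    return {'accuracy': accuracy_score(y_true, y_pred)}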
This function discards the labels corresponding to the ignored sub-word tokens and evaluates the rest, returning the accuracy score. The last component required for the Trainer is a collator. Since this time we are batching sequences of tokens, we need a collator that can pad them dynamically when constructing the batches. The transformers library includes a DataCollatorForTokenClassification specifically for this purpose. Once we have our collator and our trainer object, we can train our model. Next, we evaluate our newly trained model on the test dataset. For this purpose, we preprocess the data in the same way we did for the train and validation partitions. Then, for convenience, we use the trainer's predict() method to generate the predicted logits using our model. As before, we use scikit-learn's classification_report() function to display the results of the evaluation. This function expects two one-dimensional lists of labels, so we need to follow a similar approach to the one we employed for text classification. Note that output.label_ids and output.predictions are NumPy arrays rather than PyTorch tensors. This time we use NumPy's reshape() method to reshape the arrays. This method is similar to PyTorch's view() method that we used previously, except that reshape() may copy the array's data in some situations, whereas view() never does. We discard the labels corresponding to ignored sub-word tokens, and then we print the classification report. Our model based on XLM-RoBERTa achieves 99% accuracy. This is considerably better than the LSTM-based model developed in Chapter 11.

In order to understand the differences between the two methods, we produce below a confusion matrix for the results of each model. Rows in the confusion matrix represent the true labels and columns represent the predicted labels. In the confusion matrices shown below, each cell x_ij corresponds to the proportion of values with label i that were assigned the label j.8 For a perfect model, all cells in the diagonal would have value 1 and all other cells would have value 0. The code used to generate the confusion matrix is shown below. The confusion matrices for the LSTM and transformer are shown in Figure 13.1 and Figure 13.2, respectively.

8 This is the case because we used the normalize='true' parameter of the confusion_matrix() function.

Figure 13.1 Confusion matrix corresponding to the LSTM-based part-of-speech tagger developed in Chapter 11.
Figure 13.2 Confusion matrix corresponding to the transformer-based part-of-speech tagger.

The two confusion matrices highlight a couple of important observations. First, the transformer model is considerably better at predicting POS tags with infrequent support in the dataset. For example, the accuracy for predicting the SYM POS tag increased from 38% in the LSTM model to 95% in the transformer model! Equally impressive, the transformer improved the performance of tags that are extremely common, and, thus, provide plenty of opportunity for both approaches to learn a good model. For example, the accuracy of tagging NOUN, the second most common POS tag in the dataset, increased from 96% in the LSTM model to 99% in the transformer model.

13.4 Summary

In this chapter we presented two applications driven by the encoder component of a transformer network. First, we used the transformer encoder as an acceptor and implemented a text classification application for English news.
Second, we used the encoder as a transducer to develop a Spanish part-of-speech tagger. Both tasks were implemented using pre-trained transformer models from the Hugging Face library. For both applications, the transformer-based methods considerably outperform all approaches introduced in the previous chapters, highlighting the value of the transformer architecture.
13,115
13,198
#!/usr/bin/env python # coding: utf-8 # # Text Classification Using Transformer Networks (BERT) # Some initialization: # In[1]: import random import torch import numpy as np import pandas as pd from tqdm.notebook import tqdm # enable tqdm in pandas tqdm.pandas() # set to True to use the gpu (if there is one available) use_gpu = True # select device device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu') print(f'device: {device.type}') # random seed seed = 1234 # set random seed if seed is not None: print(f'random seed: {seed}') random.seed(seed) np.random.seed(seed) torch.manual_seed(seed) # Read the train/dev/test datasets and create a HuggingFace `Dataset` object: # In[2]: def read_data(filename): # read csv file df = pd.read_csv(filename, header=None) # add column names df.columns = ['label', 'title', 'description'] # make labels zero-based df['label'] -= 1 # concatenate title and description, and remove backslashes df['text'] = df['title'] + " " + df['description'] df['text'] = df['text'].str.replace('\\', ' ', regex=False) return df # In[3]: labels = open('data/ag_news_csv/classes.txt').read().splitlines() train_df = read_data('data/ag_news_csv/train.csv') test_df = read_data('data/ag_news_csv/test.csv') train_df # In[4]: from sklearn.model_selection import train_test_split train_df, eval_df = train_test_split(train_df, train_size=0.9) train_df.reset_index(inplace=True, drop=True) eval_df.reset_index(inplace=True, drop=True) print(f'train rows: {len(train_df.index):,}') print(f'eval rows: {len(eval_df.index):,}') print(f'test rows: {len(test_df.index):,}') # In[5]: from datasets import Dataset, DatasetDict ds = DatasetDict() ds['train'] = Dataset.from_pandas(train_df) ds['validation'] = Dataset.from_pandas(eval_df) ds['test'] = Dataset.from_pandas(test_df) ds # Tokenize the texts: # In[6]: from transformers import AutoTokenizer transformer_name = 'bert-base-cased' tokenizer = AutoTokenizer.from_pretrained(transformer_name) # In[7]: def tokenize(examples): return tokenizer(examples['text'], truncation=True) train_ds = ds['train'].map( tokenize, batched=True, remove_columns=['title', 'description', 'text'], ) eval_ds = ds['validation'].map( tokenize, batched=True, remove_columns=['title', 'description', 'text'], ) train_ds.to_pandas() # Create the transformer model: # In[8]: from torch import nn from transformers.modeling_outputs import SequenceClassifierOutput from transformers.models.bert.modeling_bert import BertModel, BertPreTrainedModel # https://github.com/huggingface/transformers/blob/65659a29cf5a079842e61a63d57fa24474288998/src/transformers/models/bert/modeling_bert.py#L1486 class BertForSequenceClassification(BertPreTrainedModel): def __init__(self, config): super().__init__(config) self.num_labels = config.num_labels self.bert = BertModel(config) self.dropout = nn.Dropout(config.hidden_dropout_prob) self.classifier = nn.Linear(config.hidden_size, config.num_labels) self.init_weights() def forward(self, input_ids=None, attention_mask=None, token_type_ids=None, labels=None, **kwargs): outputs = self.bert( input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, **kwargs, ) cls_outputs = outputs.last_hidden_state[:, 0, :] cls_outputs = self.dropout(cls_outputs) logits = self.classifier(cls_outputs) loss = None if labels is not None: loss_fn = nn.CrossEntropyLoss() loss = loss_fn(logits, labels) return SequenceClassifierOutput( loss=loss, logits=logits, hidden_states=outputs.hidden_states, attentions=outputs.attentions, ) # In[9]: from 
transformers import AutoConfig config = AutoConfig.from_pretrained( transformer_name, num_labels=len(labels), ) model = ( BertForSequenceClassification .from_pretrained(transformer_name, config=config) ) # Create the trainer object and train: # In[10]: from transformers import TrainingArguments num_epochs = 2 batch_size = 24 weight_decay = 0.01 model_name = f'{transformer_name}-sequence-classification' training_args = TrainingArguments( output_dir=model_name, log_level='error', num_train_epochs=num_epochs, per_device_train_batch_size=batch_size, per_device_eval_batch_size=batch_size, evaluation_strategy='epoch', weight_decay=weight_decay, ) # In[11]: from sklearn.metrics import accuracy_score def compute_metrics(eval_pred): y_true = eval_pred.label_ids y_pred = np.argmax(eval_pred.predictions, axis=-1) return {'accuracy': accuracy_score(y_true, y_pred)} # In[12]: from transformers import Trainer trainer = Trainer( model=model, args=training_args, compute_metrics=compute_metrics, train_dataset=train_ds, eval_dataset=eval_ds, tokenizer=tokenizer, ) # In[13]: trainer.train() # Evaluate on the test partition: # In[14]: test_ds = ds['test'].map( tokenize, batched=True, remove_columns=['title', 'description', 'text'], ) test_ds.to_pandas() # In[15]: output = trainer.predict(test_ds) output # In[16]: from sklearn.metrics import classification_report y_true = output.label_ids y_pred = np.argmax(output.predictions, axis=-1) target_names = labels print(classification_report(y_true, y_pred, target_names=target_names))
5,116
5,132
15
chap13-16
chap13-16
13 Using Transformers with the Hugging Face Library One of the key advantages of transformer networks is the ability to take a model that was pre-trained over vast quantities of text and fine-tune it for the task at hand. Intuitively, this strategy allows transformer networks to achieve higher performance on smaller datasets by relying on statistics acquired at scale in an unsupervised way (e.g., through the masked language model training objective). To this end, in this chapter we will use the Hugging Face library,1 which has a rich repository of datasets and pre-trained models, as well as helper methods and classes that make it easy to target downstream tasks. Using pre-trained transformer encoders, we will implement the two tasks that served as use cases in the previous chapters: text classification and part-of-speech tagging. 13.1 Tokenization As discussed in Section 12.2, transformers rely on sub-word tokens. This strategy provides an elegant way to handle unknown and low-frequency words by splitting them into more frequent sub-word parts. At the same time, these tokenization algorithms maintain frequently-occurring words as standalone tokens, so the signal for these common words is preserved. To make this more concrete, we show below how tokenizers are employed in the Hugging Face library. First, we load the tokenizer that corresponds to the transformer we intend to use. This is important for two reasons: (a) different transformers rely on different tokenization algorithms, and (b) even for the ones that use the same algorithm, their tokenizer vocabularies are likely to be different if they were pre-trained 1 https://huggingface.co/docs/transformers/main/en/index 186 13.1 Tokenization 187 on different corpora. Next, we tokenize some example text and display some of the resulting attributes with pandas: As shown above, the tokenizer splits the text into tokens, and adds two special tokens: the [CLS] token at the beginning of the token sequence, and the [SEP] token at the end. Also, note that the ## characters at the beginning of some tokens indicate that they are not standalone words, but rather sub-words that continue a word previously started. For example, the output above shows that the word walrus was split into three sub-words. Note, however, that this is specific to this particular tokenization algorithm, and other tokenizers may indicate word continuation in different ways. A better way to detect word continuations is using the word_ids() method of the tokenizer output, which assigns the same id to all tokens part of the same word. For example, all fragments of the word walrus share the word id 3. Lastly, the input_ids attribute provides the token ids used internally by the transformer to map tokens to embeddings. To briefly demonstrate how different tokenizers produce different outputs, here is the same text tokenized with the tokenizer corresponding to xlm-roberta-base: Note how the [CLS] and [SEP] special tokens have been replaced with <s> and </s> respectively. Also, spaces have been replaced with the Unicode character (U+2581, LOWER ONE EIGHTH BLOCK). Tokens that start with that character are considered word beginnings and the rest are word continuations, as can be confirmed by looking at the word ids. This illustrates the importance of using the tokenizer that corresponds to the transformer you intend to use. 012345678 tokens [CLS] I am the wa ##l ##rus . 
[SEP] word_ids None 0 1 2 3 3 3 4 None input_ids 101 146 1821 1103 20049 1233 6208 119 102 01234567 tokens <s> ▁I ▁am ▁the ▁wal rus . </s > None 0 1 2 3 3 3 None 0 87 444 70 32973 6563 5 2 word_ids input_ids 188 Using Transformers with the Hugging Face Library 13.2 Text Classification For our text classification example, we will continue using the AG News dataset from previous chapters. We will load, preprocess, and split the dataset into pandas dataframes in the same way as before. Now however, rather than continuing with pandas, we will create a Hugging Face dataset from the dataframes. Hugging Face datasets are convenient because of their built-in support of batching, efficient data transformations, and caching. In particular, we convert each dataframe into a Hugging Face dataset. The various datasets are managed with a DatasetDict. Note that this is the same data structure seen when downloading a Hugging Face dataset from their hub.2 The keys in this dictionary are usually train, validation, and test:3 Once our dataset is loaded, we load a tokenizer. Different pre-trained models are tokenized differently, and it is important to select the tokenizer that corresponds to the model we will use so that the inputs are consistent with model expectations. In our example, we will use the bert-base-cased pre-trained model and tokenizer: Datasets have a map() method that transforms the dataset by applying a function to each example. The method returns a new dataset with the transformation applied. We use the map() method to tokenize our dataset. To this end, we define a function that tokenizes an example using the tokenizer we loaded previously. Note that tokenizers support many options that you may need depending on your situation. However, since this is a simple scenario, all we need to do is provide the text to tokenize and specify how to handle texts that exceed the maximum number of tokens permitted by the pre-trained model. Here we have our tokenizer truncate any inputs that are too long by specifying the truncation=True parameter. The output of this function will be added to the new dataset as extra columns. Further, we also want to remove some of the columns that are no longer needed, simplifying subsequent steps. For this, we use the remove_columns argument, listing the columns that we want to discard. Additionally, the dataset’s map() method can batch the dataset; we enable this option with the batched=True argument: 2 https://huggingface.co/datasets
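Putting these steps together, the dataset creation and tokenization described above might look roughly as follows. This is a minimal sketch based on the accompanying notebook; it assumes the train_df, eval_df, and test_df dataframes prepared earlier, with title, description, and text columns as in the AG News CSV layout used in previous chapters.

from datasets import Dataset, DatasetDict
from transformers import AutoTokenizer

# build a DatasetDict from the pandas dataframes prepared earlier
ds = DatasetDict()
ds['train'] = Dataset.from_pandas(train_df)
ds['validation'] = Dataset.from_pandas(eval_df)
ds['test'] = Dataset.from_pandas(test_df)

# load the tokenizer that matches the pre-trained model
tokenizer = AutoTokenizer.from_pretrained('bert-base-cased')

def tokenize(examples):
    # truncate any text longer than the model's maximum input length
    return tokenizer(examples['text'], truncation=True)

# tokenize in batches and drop the raw text columns we no longer need
train_ds = ds['train'].map(
    tokenize,
    batched=True,
    remove_columns=['title', 'description', 'text'],
)
eval_ds = ds['validation'].map(
    tokenize,
    batched=True,
    remove_columns=['title', 'description', 'text'],
)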
3 These correspond to the more common terms train, development, and test we have used throughout the book so far. In this chapter we use the Hugging Face naming conventions for consistency.

[Viewed as a dataframe, the tokenized training set has 108,000 rows with columns label, input_ids, token_type_ids, and attention_mask.]

Next, we implement a classifier for our task. Hugging Face provides a
variety of models corresponding to several types of downstream tasks. However, for pedagogical purposes, we implement one from scratch. In particular, our model class inherits from BertPreTrainedModel, which
provides several useful methods such as init_weights() and from_pretrained() methods, which we will use later. The model constructor takes a config- uration object as its only parameter. Configuration objects contain all the hyper-parameters used by the corresponding pre-trained models. We will show later how the configuration model is retrieved and customized. Models that implement specific downstream tasks are usually composed of a pre-trained model (sometimes referred as the body), and one or more task-specific layers (usually referred as the head). Here, we initialize a BertModel using the provided configuration, as well as a dropout layer and a task-specific linear layer used for classifying the Bert output. Each of these layers is initialized by calling the init_weights() method inherited from BertPreTrainedModel. The forward() method, which implements the task-specific forward pass, takes as arguments the outputs of the tokenizer, and, optionally, the gold labels corresponding to the input data points. Our implementation of the forward pass sends the input tokens to the Bert model to produce the contextualized representations for all tokens. This output has several components, including the last_hidden_state which con- 190 Using Transformers with the Hugging Face Library tains the final hidden-state embedding for each token. For our task, we will represent the whole sequence using the embedding for the [CLS] token that occurs at the start of each example. We retrieve it by selecting the first element of each output sequence in the batch (i.e., last_hidden_state[:, 0, :]). As in the previous chapters, we apply dropout to our sequence representation, and then pass it through our linear classification layer. If gold labels are provided (i.e., we are training), we now compute the loss using the cross-entropy loss. The output of the forward pass is wrapped in a Hugging Face SequenceClassifierOutput object4 and returned: Next we load the configuration of the pre-trained model and instantiate our model. The AutoConfig class can load the configuration for any pre-trained model, retrieving it from Hugging Face if needed. Then we use the configuration to instantiate our model using the from_pretrained() method. With this call, the pre-trained model will be loaded, which includes downloading if necessary: Hugging Face provides a Trainer class that greatly simplifies the training process. This class not only implements the training loop we have been using in the previous chapters, but also handles other useful steps such as saving checkpoints (i.e., intermediate models after a number of mini-batches have been processed during training), and tracking custom measures about model performance. In order to create a Trainer, we first need to specify its configuration in a TrainingArguments object. In ours, we specify certain hyper parameters such as batch size, weight decay, and number of epochs, as well as where to store model checkpoints: The TrainingArguments class provides a wide variety of arguments that we have not shown.5 These arguments usually have appropriate default values, so it is often fine to omit them. For example, we did not use the label_names argument, which specifies the key that corresponds to the training labels. When omitted, it defaults to keys such as label, labels and label_ids.6 In this chapter we used label. Note that we also specify how often we would like to see the perfor- . 4  Hugging Face utilizes a set of output objects to standardize model output for a given task. 
These objects typically include additional information, e.g., attention weights, which can be used for visualizing or debugging model behavior. 
 . 5  https://huggingface.co/docs/transformers/main/en/main_classes/trainer# transformers. TrainingArguments 
 6 In the case of extractive question answering (see Chapter 16), the start_positions and end_positions store the start/end positions of the correct answers. 13.3 Part-of-speech Tagging 191 mance of the current model (at the end of each epoch) with evaluation_strategy='epoch'. This means that after each epoch we print the current loss on the training
partition and on the evaluation dataset, if one is available. Additionally,
we can report custom metrics at this time. For this purpose, we use the compute_metrics parameter of the Trainer, which expects a function that receives a transformers. EvalPredictions object containing the label ids and the predicted logits. The expected return type is a dictionary whose keys correspond to different metrics, each of which will be displayed as a separate result column. Using the above TrainingArguments and compute_metrics function, we create our Trainer. Note that when you provide a tokenizer, the trainer will automatically pad the sequences in each batch. Also, the trainer will automatically use any GPU that is available, unless specifically disabled in the TrainingArguments. Training our model takes a single call to the train() method of the Trainer object. As specified in the our instance of TrainingArguments, the training and validation losses, as well as the accuracy, are reported every epoch. As in the other chapters, we can write custom code to obtain the model’s predictions on the test data. However, the Trainer class provides a predict() method that drastically simplifies this: As shown in the table above, this model achieves an accuracy of 95%, which is the highest performance we have achieved so far on this dataset. 13.3 Part-of-speech Tagging To showcase part-of-speech tagging using transformers, we continue with the Spanish section of the AnCora corpus introduced in Chapter 11. Recall that the dataset is stored in the CoNLL-U format. We load this format in the same way as before, but then we convert the loaded dataset into a Hugging Face DictDataset: Importantly, because the CoNLL-U dataset is already tokenized, we use the is_split_into_words=True tokenizer argument to ensure that the tokenizer respects the existing word boundaries during its sub-word tokenization. Further, while we want to predict one POS tag per word, Epoch Training Loss Validation Loss Accuracy 1 0.187800 0.172629 0.941667 2 0.104000 0.183001 0.946250 192 Using Transformers with the Hugging Face Library any given word may be split into smaller pieces by our tokenizer. Thus, we need to align the tokenizer output to the CoNLL-U words. The original BERT paper (Devlin et al., 2018) addresses this by only using the embedding corresponding to the first sub-token for each word. We follow the same approach for consistency. For the sub-words that do not correspond to the beginning of a word, we use a special value that indicates that we are not interested in their predictions. The CrossEntropyLoss has a parameter called ignore_index for this purpose. The default value for this parameter is −100, which we use as the label for the sub-words we wish to ignore during training: Next, we use this function to preprocess the train and validation folds in our DatasetDict: words [El, presidente, de, el, órgano, regulador, de... [Afirmó, que, sigue, el, criterio, europeo, y,... [Durante, la, presentación, de, el, libro, ", ... [Y, todas, las, miradas, convergen, en, la, lu... [Cambiar, las, formas, parece, de, rigor, ,, p... [Él, llega, a, tirar, la, sobre, la, cama, y, ... tags [DET, NOUN, ADP, DET, NOUN, ADJ, ADP, DET, PRO... [VERB, SCONJ, VERB, DET, NOUN, ADJ, CCONJ, SCO... [ADP, DET, NOUN, ADP, DET, NOUN, PUNCT, DET, P... [CCONJ, DET, DET, NOUN, VERB, ADP, DET, NOUN, ... [VERB, DET, NOUN, VERB, ADP, NOUN, PUNCT, CCON... [PRON, VERB, ADP, VERB, PRON, ADP, DET, NOUN, ... input_ids [0, 540, 9692, 8, 88, 103633, 15913, 1846, 8, ... [0, 62, 38949, 849, 41, 58453, 88, 166220, 620... 
14305 rows × 5 columns ... ... ... ... ... Next, we implement our model class that uses a transformer encoder as a transducer. Because our downstream task consists of POS tagging for Spanish, we need a transformer model that was pre-trained on Spanish texts. Here, we chose XLM-RoBERTa (Conneau et al., 2019) as our base model. XLM-Roberta is a RoBERTa model (Liu et al., 2019) that has 13.3 Part-of-speech Tagging 193 been pre-trained on 100 different languages, including Spanish. Of note, XLM-RoBERTa does not require us to specify what language we are working on. Similar to BERT, it only requires the input_ids. We discussed in the text classification section that Hugging Face provides implementations for text classification models. This is also true for token classification problems that require transducers. In particular, the XLMRobertaForTokenClassification model provided by Hugging Face does everything needed for this task. However, as before, here we implement it ourselves for pedagogical purposes. The model architecture is similar to our text classification example. It consists of a transformer, a dropout layer, and a linear layer used for classification. The number of labels which determines the output dimension of the linear layer is equal to the number of POS tags. The primary difference between the text classification example and this token classification model is that with the former we produced one label for each text document, while here we produce one label for each token in the input text. Specifically, in our text classification model the output shape was two-dimensional: (batch_size, num_labels). Here, our output is three-dimensional: (batch_size, sequence_size, num_labels). So, while much of the forward method is familiar to us, when we are required to compute the loss, we need to reshape the logits and the labels before passing them to the CrossEntropyLoss, since it expects two-dimensional input and one-dimensional labels. For this purpose, we use the view() method to reshape the tensors. This method is efficient because it does not copy the tensor data. Instead it provides a new view of the same data that behaves like a tensor with a different shape.7 As mentioned before, the number of arguments passed to this method determines the number of dimensions in the output tensor. Here, for our logits, we pass two arguments and so our new view will have two dimensions. The second will be the size of self.num_labels, while the first (because we pass -1) will be inferred based on the original tensor shape. For our labels, on the other hand, we only provide one argument and so the new view will have one dimension, inferred by the original shape: Next, we instantiate our model using the XLM-RoBERTa configuration: 7 Similar to NumPy, PyTorch tensors are represented internally by a block of memory storing the data and some metadata that describes how the data should be read, e.g., type, shape, and stride. The view() method returns a new tensor with new metadata but pointing to the same memory block. 194 Using Transformers with the Hugging Face Library As before, we create a TrainingArguments object and define a compute_metrics function in order to customize a Trainer: While the TrainingArguments code has no substantial changes, we need to adjust the compute_metrics function to account for the fact that our model uses sub-word tokens rather than complete words. Recall that only the first sub-word token per word was assigned a POS tag. 
This function discards the labels corresponding to the ignored sub-word tokens and evaluates the rest, returning the accuracy score: The last component required for the Trainer is a collator. Since this time we are batching sequences of tokens, we need a collator that can pad them dynamically when constructing the batches. The transformers library includes a DataCollatorForTokenClassification specifically for this purpose. Once we have our collator and our trainer object, we can train our model: Next, we evaluate our newly trained model on the test dataset. For this purpose, we preprocess the data in the same way we did for the train and validation partitions. Then, for convenience, we use the trainer’s predict() method to generate the predicted logits using our model: As before, we use scikit-learn’s classification_report() function to display the results of the evaluation. This function expects two onedimensional lists of labels, so we need to follow a similar approach to the one we employed for text classification. Note that output.label_ids and output.predictions are NumPy arrays rather than PyTorch tensors. This time we use NumPy’s reshape() method to reshape the arrays. This method is similar to PyTorch’s view() method that we used previously, except that view() may copy the array’s data in some situations. We discard the labels corresponding to ignored sub-word tokens, and then we print the classification report: Our model based on XLM-RoBERTa achieves 99% accuracy. This is considerably better than the LSTM-based model developed in Chapter 11. In order to understand the differences between the two methods, we produce below a confusion matrix for the results of each model. Rows in the confusion matrix represent the true labels and columns represent the predicted labels. In the confusion matrices shown below, each cell xij corresponds to the proportion of values with label i that were assigned the label j.8 For a perfect model, all cells in the diagonal would have value 1 and all other cells would have value 0. The code used to generate the confusion matrix is shown below. The confusion matrices 8 This is the case because we used the normalize='true' parameter of the confusion_matrix() function. 13.3 Part-of-speech Tagging 195 Figure 13.1 Confusion matrix corresponding to the LSTM-based part-ofspeech tagger developed in Chapter 11. for the LSTM and transformer are show in Figure 13.1 and Figure 13.2, respectively. The two confusion matrices highlight a couple of important observations. First, the transformer model is considerably better at predicting POS tags with infrequent support in the dataset. For example, the accuracy for predicting the SYM POS tag increased from 38% in the LSTM model to 95% in the transformer model! Equally as impressive, the transformer improved the performance of tags that are extremely common, and, thus, provide plenty of opportunity to both approaches to learn a good model. For example, the accuracy of tagging NOUN, the second 196 Using Transformers with the Hugging Face Library Figure 13.2 Confusion matrix corresponding to the transformer-based part-of-speech tagger. most common POS tag in the dataset, increased from 96% in the LSTM model to 99% in the transformer model. 13.4 Summary In this chapter we presented two applications driven by the encoder component of a transformer network. First, we used the transformer encoder as an acceptor and implemented a text classification application for English news. 
Second, we used the encoder as a transducer to develop a Spanish part-of-speech tagger. Both tasks were implemented using 13.4 Summary 197 pre-trained transformer models from the Hugging Face library. For both applications, the transformer-based methods outperform considerably all approaches introduced in the previous chapters, highlighting the value of the transformer architecture.
21,188
21,383
#!/usr/bin/env python # coding: utf-8 # # Part-of-speech Tagging with Transformer Networks # Some initialization: # In[1]: import random import torch import numpy as np import pandas as pd from tqdm.notebook import tqdm # enable tqdm in pandas tqdm.pandas() # set to True to use the gpu (if there is one available) use_gpu = True # select device device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu') print(f'device: {device.type}') # random seed seed = 1234 # set random seed if seed is not None: print(f'random seed: {seed}') random.seed(seed) np.random.seed(seed) torch.manual_seed(seed) # Read the words and POS tags from the Spanish dataset: # In[2]: from conllu import parse_incr def read_tags(filename): data = {'words': [], 'tags': []} with open(filename) as f: for sent in tqdm(parse_incr(f)): words = [tok['form'] for tok in sent] tags = [tok['upos'] for tok in sent] data['words'].append(words) data['tags'].append(tags) return pd.DataFrame(data) # In[3]: train_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-train.conllup') valid_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-dev.conllup') test_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-test.conllup') # In[4]: tags = train_df['tags'].explode().unique() index_to_tag = {i:t for i,t in enumerate(tags)} tag_to_index = {t:i for i,t in enumerate(tags)} # Create a HuggingFace `DatasetDict` object: # In[5]: from datasets import Dataset, DatasetDict ds = DatasetDict() ds['train'] = Dataset.from_pandas(train_df) ds['validation'] = Dataset.from_pandas(valid_df) ds['test'] = Dataset.from_pandas(test_df) ds # In[6]: ds['train'].to_pandas() # Now tokenize the texts and assign POS labels to the first token in each word: # In[7]: from transformers import AutoTokenizer transformer_name = 'xlm-roberta-base' tokenizer = AutoTokenizer.from_pretrained(transformer_name) # In[8]: x = ds['train'][0] tokenized_input = tokenizer(x['words'], is_split_into_words=True) tokens = tokenizer.convert_ids_to_tokens(tokenized_input['input_ids']) word_ids = tokenized_input.word_ids() pd.DataFrame([tokens, word_ids], index=['tokens', 'word ids']) # In[9]: # https://arxiv.org/pdf/1810.04805.pdf # Section 5.3 # We use the representation of the first sub-token as the input to the token-level classifier over the NER label set. 
# default value for CrossEntropyLoss ignore_index parameter ignore_index = -100 def tokenize_and_align_labels(batch): labels = [] # tokenize batch tokenized_inputs = tokenizer( batch['words'], truncation=True, is_split_into_words=True, ) # iterate over batch elements for i, tags in enumerate(batch['tags']): label_ids = [] previous_word_id = None # get word ids for current batch element word_ids = tokenized_inputs.word_ids(batch_index=i) # iterate over tokens in batch element for word_id in word_ids: if word_id is None or word_id == previous_word_id: # ignore if not a word or word id has already been seen label_ids.append(ignore_index) else: # get tag id for corresponding word tag_id = tag_to_index[tags[word_id]] label_ids.append(tag_id) # remember this word id previous_word_id = word_id # save label ids for current batch element labels.append(label_ids) # store labels together with the tokenizer output tokenized_inputs['labels'] = labels return tokenized_inputs # In[10]: train_ds = ds['train'].map(tokenize_and_align_labels, batched=True) eval_ds = ds['validation'].map(tokenize_and_align_labels, batched=True) train_ds.to_pandas() # Create our transformer model: # In[11]: from torch import nn from transformers.modeling_outputs import TokenClassifierOutput from transformers.models.roberta.modeling_roberta import RobertaModel, RobertaPreTrainedModel # https://github.com/huggingface/transformers/blob/65659a29cf5a079842e61a63d57fa24474288998/src/transformers/models/roberta/modeling_roberta.py#L1346 class XLMRobertaForTokenClassification(RobertaPreTrainedModel): def __init__(self, config): super().__init__(config) self.num_labels = config.num_labels self.roberta = RobertaModel(config, add_pooling_layer=False) self.dropout = nn.Dropout(config.hidden_dropout_prob) self.classifier = nn.Linear(config.hidden_size, config.num_labels) self.init_weights() def forward(self, input_ids=None, attention_mask=None, token_type_ids=None, labels=None, **kwargs): outputs = self.roberta( input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, **kwargs, ) sequence_output = self.dropout(outputs[0]) logits = self.classifier(sequence_output) loss = None if labels is not None: loss_fn = nn.CrossEntropyLoss() inputs = logits.view(-1, self.num_labels) targets = labels.view(-1) loss = loss_fn(inputs, targets) return TokenClassifierOutput( loss=loss, logits=logits, hidden_states=outputs.hidden_states, attentions=outputs.attentions, ) # In[12]: from transformers import AutoConfig config = AutoConfig.from_pretrained( transformer_name, num_labels=len(index_to_tag), ) model = ( XLMRobertaForTokenClassification .from_pretrained(transformer_name, config=config) ) # Create the `Trainer` object and train: # In[13]: from transformers import TrainingArguments num_epochs = 2 batch_size = 24 weight_decay = 0.01 model_name = f'{transformer_name}-finetuned-pos-es' training_args = TrainingArguments( output_dir=model_name, log_level='error', num_train_epochs=num_epochs, per_device_train_batch_size=batch_size, per_device_eval_batch_size=batch_size, evaluation_strategy='epoch', weight_decay=weight_decay, ) # In[14]: from sklearn.metrics import accuracy_score def compute_metrics(eval_pred): # gold labels label_ids = eval_pred.label_ids # predictions pred_ids = np.argmax(eval_pred.predictions, axis=-1) # collect gold and predicted labels, ignoring ignore_index label y_true, y_pred = [], [] batch_size, seq_len = pred_ids.shape for i in range(batch_size): for j in range(seq_len): if label_ids[i, j] != ignore_index: 
y_true.append(index_to_tag[label_ids[i][j]]) y_pred.append(index_to_tag[pred_ids[i][j]]) # return computed metrics return {'accuracy': accuracy_score(y_true, y_pred)} # In[15]: from transformers import Trainer from transformers import DataCollatorForTokenClassification data_collator = DataCollatorForTokenClassification(tokenizer) trainer = Trainer( model=model, args=training_args, data_collator=data_collator, compute_metrics=compute_metrics, train_dataset=train_ds, eval_dataset=eval_ds, tokenizer=tokenizer, ) trainer.train() # Evaluate on the test partition: # In[16]: test_ds = ds['test'].map( tokenize_and_align_labels, batched=True, ) output = trainer.predict(test_ds) # In[17]: from sklearn.metrics import classification_report num_labels = model.num_labels label_ids = output.label_ids.reshape(-1) predictions = output.predictions.reshape(-1, num_labels) predictions = np.argmax(predictions, axis=-1) mask = label_ids != ignore_index y_true = label_ids[mask] y_pred = predictions[mask] target_names = tags[:-1] report = classification_report( y_true, y_pred, target_names=target_names ) print(report) # In[18]: import matplotlib.pyplot as plt from sklearn.metrics import ConfusionMatrixDisplay, confusion_matrix cm = confusion_matrix(y_true, y_pred, normalize='true') disp = ConfusionMatrixDisplay( confusion_matrix=cm, display_labels=target_names, ) fig, ax = plt.subplots(figsize=(10,10)) disp.plot( cmap='Blues', values_format='.2f', colorbar=False, ax=ax, xticks_rotation=45, )
6,235
6,267
16
chap13-17
chap13-17
13 Using Transformers with the Hugging Face Library One of the key advantages of transformer networks is the ability to take a model that was pre-trained over vast quantities of text and fine-tune it for the task at hand. Intuitively, this strategy allows transformer networks to achieve higher performance on smaller datasets by relying on statistics acquired at scale in an unsupervised way (e.g., through the masked language model training objective). To this end, in this chapter we will use the Hugging Face library,1 which has a rich repository of datasets and pre-trained models, as well as helper methods and classes that make it easy to target downstream tasks. Using pre-trained transformer encoders, we will implement the two tasks that served as use cases in the previous chapters: text classification and part-of-speech tagging. 13.1 Tokenization As discussed in Section 12.2, transformers rely on sub-word tokens. This strategy provides an elegant way to handle unknown and low-frequency words by splitting them into more frequent sub-word parts. At the same time, these tokenization algorithms maintain frequently-occurring words as standalone tokens, so the signal for these common words is preserved. To make this more concrete, we show below how tokenizers are employed in the Hugging Face library. First, we load the tokenizer that corresponds to the transformer we intend to use. This is important for two reasons: (a) different transformers rely on different tokenization algorithms, and (b) even for the ones that use the same algorithm, their tokenizer vocabularies are likely to be different if they were pre-trained 1 https://huggingface.co/docs/transformers/main/en/index 186 13.1 Tokenization 187 on different corpora. Next, we tokenize some example text and display some of the resulting attributes with pandas: As shown above, the tokenizer splits the text into tokens, and adds two special tokens: the [CLS] token at the beginning of the token sequence, and the [SEP] token at the end. Also, note that the ## characters at the beginning of some tokens indicate that they are not standalone words, but rather sub-words that continue a word previously started. For example, the output above shows that the word walrus was split into three sub-words. Note, however, that this is specific to this particular tokenization algorithm, and other tokenizers may indicate word continuation in different ways. A better way to detect word continuations is using the word_ids() method of the tokenizer output, which assigns the same id to all tokens part of the same word. For example, all fragments of the word walrus share the word id 3. Lastly, the input_ids attribute provides the token ids used internally by the transformer to map tokens to embeddings. To briefly demonstrate how different tokenizers produce different outputs, here is the same text tokenized with the tokenizer corresponding to xlm-roberta-base: Note how the [CLS] and [SEP] special tokens have been replaced with <s> and </s> respectively. Also, spaces have been replaced with the Unicode character (U+2581, LOWER ONE EIGHTH BLOCK). Tokens that start with that character are considered word beginnings and the rest are word continuations, as can be confirmed by looking at the word ids. This illustrates the importance of using the tokenizer that corresponds to the transformer you intend to use. 012345678 tokens [CLS] I am the wa ##l ##rus . 
[SEP] word_ids None 0 1 2 3 3 3 4 None input_ids 101 146 1821 1103 20049 1233 6208 119 102 01234567 tokens <s> ▁I ▁am ▁the ▁wal rus . </s > None 0 1 2 3 3 3 None 0 87 444 70 32973 6563 5 2 word_ids input_ids 188 Using Transformers with the Hugging Face Library 13.2 Text Classification For our text classification example, we will continue using the AG News dataset from previous chapters. We will load, preprocess, and split the dataset into pandas dataframes in the same way as before. Now however, rather than continuing with pandas, we will create a Hugging Face dataset from the dataframes. Hugging Face datasets are convenient because of their built-in support of batching, efficient data transformations, and caching. In particular, we convert each dataframe into a Hugging Face dataset. The various datasets are managed with a DatasetDict. Note that this is the same data structure seen when downloading a Hugging Face dataset from their hub.2 The keys in this dictionary are usually train, validation, and test:3 Once our dataset is loaded, we load a tokenizer. Different pre-trained models are tokenized differently, and it is important to select the tokenizer that corresponds to the model we will use so that the inputs are consistent with model expectations. In our example, we will use the bert-base-cased pre-trained model and tokenizer: Datasets have a map() method that transforms the dataset by applying a function to each example. The method returns a new dataset with the transformation applied. We use the map() method to tokenize our dataset. To this end, we define a function that tokenizes an example using the tokenizer we loaded previously. Note that tokenizers support many options that you may need depending on your situation. However, since this is a simple scenario, all we need to do is provide the text to tokenize and specify how to handle texts that exceed the maximum number of tokens permitted by the pre-trained model. Here we have our tokenizer truncate any inputs that are too long by specifying the truncation=True parameter. The output of this function will be added to the new dataset as extra columns. Further, we also want to remove some of the columns that are no longer needed, simplifying subsequent steps. For this, we use the remove_columns argument, listing the columns that we want to discard. Additionally, the dataset’s map() method can batch the dataset; we enable this option with the batched=True argument: 2 https://huggingface.co/datasets
3 These correspond to the more common terms train, development, and test we have used throughout the book so far. In this chapter we use the Hugging Face naming conventions for consistency.

[Viewed as a dataframe, the tokenized training set has 108,000 rows with columns label, input_ids, token_type_ids, and attention_mask.]

Next, we implement a classifier for our task. Hugging Face provides a
variety of models corresponding to several types of downstream tasks. However, for pedagogical purposes, we implement one from scratch. In particular, our model class inherits from BertPreTrainedModel, which
provides several useful methods such as init_weights() and from_pretrained() methods, which we will use later. The model constructor takes a config- uration object as its only parameter. Configuration objects contain all the hyper-parameters used by the corresponding pre-trained models. We will show later how the configuration model is retrieved and customized. Models that implement specific downstream tasks are usually composed of a pre-trained model (sometimes referred as the body), and one or more task-specific layers (usually referred as the head). Here, we initialize a BertModel using the provided configuration, as well as a dropout layer and a task-specific linear layer used for classifying the Bert output. Each of these layers is initialized by calling the init_weights() method inherited from BertPreTrainedModel. The forward() method, which implements the task-specific forward pass, takes as arguments the outputs of the tokenizer, and, optionally, the gold labels corresponding to the input data points. Our implementation of the forward pass sends the input tokens to the Bert model to produce the contextualized representations for all tokens. This output has several components, including the last_hidden_state which con- 190 Using Transformers with the Hugging Face Library tains the final hidden-state embedding for each token. For our task, we will represent the whole sequence using the embedding for the [CLS] token that occurs at the start of each example. We retrieve it by selecting the first element of each output sequence in the batch (i.e., last_hidden_state[:, 0, :]). As in the previous chapters, we apply dropout to our sequence representation, and then pass it through our linear classification layer. If gold labels are provided (i.e., we are training), we now compute the loss using the cross-entropy loss. The output of the forward pass is wrapped in a Hugging Face SequenceClassifierOutput object4 and returned: Next we load the configuration of the pre-trained model and instantiate our model. The AutoConfig class can load the configuration for any pre-trained model, retrieving it from Hugging Face if needed. Then we use the configuration to instantiate our model using the from_pretrained() method. With this call, the pre-trained model will be loaded, which includes downloading if necessary: Hugging Face provides a Trainer class that greatly simplifies the training process. This class not only implements the training loop we have been using in the previous chapters, but also handles other useful steps such as saving checkpoints (i.e., intermediate models after a number of mini-batches have been processed during training), and tracking custom measures about model performance. In order to create a Trainer, we first need to specify its configuration in a TrainingArguments object. In ours, we specify certain hyper parameters such as batch size, weight decay, and number of epochs, as well as where to store model checkpoints: The TrainingArguments class provides a wide variety of arguments that we have not shown.5 These arguments usually have appropriate default values, so it is often fine to omit them. For example, we did not use the label_names argument, which specifies the key that corresponds to the training labels. When omitted, it defaults to keys such as label, labels and label_ids.6 In this chapter we used label. Note that we also specify how often we would like to see the perfor- . 4  Hugging Face utilizes a set of output objects to standardize model output for a given task. 
Hugging Face provides a Trainer class that greatly simplifies the training process. This class not only implements the training loop we have been using in the previous chapters, but also handles other useful steps, such as saving checkpoints (i.e., intermediate models after a number of mini-batches have been processed during training) and tracking custom measures about model performance. In order to create a Trainer, we first need to specify its configuration in a TrainingArguments object. In ours, we specify certain hyper-parameters such as batch size, weight decay, and number of epochs, as well as where to store model checkpoints:
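This is the TrainingArguments configuration used in the chapter's notebook:

from transformers import TrainingArguments

num_epochs = 2
batch_size = 24
weight_decay = 0.01
model_name = f'{transformer_name}-sequence-classification'

training_args = TrainingArguments(
    output_dir=model_name,
    log_level='error',
    num_train_epochs=num_epochs,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    evaluation_strategy='epoch',
    weight_decay=weight_decay,
)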
The TrainingArguments class provides a wide variety of arguments that we have not shown (see https://huggingface.co/docs/transformers/main/en/main_classes/trainer#transformers.TrainingArguments). These arguments usually have appropriate default values, so it is often fine to omit them. For example, we did not use the label_names argument, which specifies the key that corresponds to the training labels. When omitted, it defaults to keys such as label, labels, and label_ids. (In the case of extractive question answering, discussed in Chapter 16, start_positions and end_positions store the start/end positions of the correct answers.) In this chapter we used label. Note that we also specify how often we would like to see the performance of the current model (at the end of each epoch) with evaluation_strategy='epoch'. This means that after each epoch we print the current loss on the training partition and on the evaluation dataset, if one is available. Additionally, we can report custom metrics at this time. For this purpose, we use the compute_metrics parameter of the Trainer, which expects a function that receives a transformers.EvalPrediction object containing the label ids and the predicted logits. The expected return type is a dictionary whose keys correspond to different metrics, each of which will be displayed as a separate result column:
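Here we use accuracy as our custom metric; this is the compute_metrics function from the notebook (it assumes numpy is imported as np):

from sklearn.metrics import accuracy_score

def compute_metrics(eval_pred):
    y_true = eval_pred.label_ids
    y_pred = np.argmax(eval_pred.predictions, axis=-1)
    return {'accuracy': accuracy_score(y_true, y_pred)}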
Using the above TrainingArguments and compute_metrics function, we create our Trainer. Note that when you provide a tokenizer, the trainer will automatically pad the sequences in each batch. Also, the trainer will automatically use any GPU that is available, unless specifically disabled in the TrainingArguments. Training our model then takes a single call to the train() method of the Trainer object. As specified in our instance of TrainingArguments, the training and validation losses, as well as the accuracy, are reported every epoch:

Epoch  Training Loss  Validation Loss  Accuracy
1      0.187800       0.172629         0.941667
2      0.104000       0.183001         0.946250

As in the other chapters, we can write custom code to obtain the model's predictions on the test data. However, the Trainer class provides a predict() method that drastically simplifies this:
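The fragment below, from the same notebook, creates the trainer, fine-tunes the model, and then evaluates it on the test partition; it assumes the train_ds, eval_ds, ds, tokenize, and labels objects defined in the earlier notebook cells:

from transformers import Trainer
from sklearn.metrics import classification_report

trainer = Trainer(
    model=model,
    args=training_args,
    compute_metrics=compute_metrics,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    tokenizer=tokenizer,
)
trainer.train()

# tokenize the test partition and obtain predictions
test_ds = ds['test'].map(
    tokenize,
    batched=True,
    remove_columns=['title', 'description', 'text'],
)
output = trainer.predict(test_ds)

y_true = output.label_ids
y_pred = np.argmax(output.predictions, axis=-1)
print(classification_report(y_true, y_pred, target_names=labels))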
The resulting classification report shows that this model achieves an accuracy of 95%, which is the highest performance we have achieved so far on this dataset.

13.3 Part-of-speech Tagging

To showcase part-of-speech tagging using transformers, we continue with the Spanish section of the AnCora corpus introduced in Chapter 11. Recall that the dataset is stored in the CoNLL-U format. We load this format in the same way as before, but then we convert the loaded dataset into a Hugging Face DatasetDict:
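A lightly condensed version of the corresponding notebook cells is shown below (the full notebook, reproduced at the end of the chapter, additionally wraps parse_incr() in a tqdm progress bar); the file paths are the ones used in the notebook:

import pandas as pd
from conllu import parse_incr
from datasets import Dataset, DatasetDict

def read_tags(filename):
    data = {'words': [], 'tags': []}
    with open(filename) as f:
        for sent in parse_incr(f):
            data['words'].append([tok['form'] for tok in sent])
            data['tags'].append([tok['upos'] for tok in sent])
    return pd.DataFrame(data)

train_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-train.conllup')
valid_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-dev.conllup')
test_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-test.conllup')

# POS tag vocabulary and id mappings
tags = train_df['tags'].explode().unique()
index_to_tag = {i: t for i, t in enumerate(tags)}
tag_to_index = {t: i for i, t in enumerate(tags)}

ds = DatasetDict()
ds['train'] = Dataset.from_pandas(train_df)
ds['validation'] = Dataset.from_pandas(valid_df)
ds['test'] = Dataset.from_pandas(test_df)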
Importantly, because the CoNLL-U dataset is already tokenized, we use the is_split_into_words=True tokenizer argument to ensure that the tokenizer respects the existing word boundaries during its sub-word tokenization. Further, while we want to predict one POS tag per word, any given word may be split into smaller pieces by our tokenizer. Thus, we need to align the tokenizer output to the CoNLL-U words. The original BERT paper (Devlin et al., 2018) addresses this by only using the embedding corresponding to the first sub-token for each word. We follow the same approach for consistency. For the sub-words that do not correspond to the beginning of a word, we use a special value that indicates that we are not interested in their predictions. The CrossEntropyLoss has a parameter called ignore_index for this purpose. The default value for this parameter is −100, which we use as the label for the sub-words we wish to ignore during training:
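This is the alignment function from the notebook; it assumes the tag_to_index mapping defined above:

from transformers import AutoTokenizer

transformer_name = 'xlm-roberta-base'
tokenizer = AutoTokenizer.from_pretrained(transformer_name)

# default value for the CrossEntropyLoss ignore_index parameter
ignore_index = -100

def tokenize_and_align_labels(batch):
    labels = []
    # tokenize the batch, respecting the existing word boundaries
    tokenized_inputs = tokenizer(
        batch['words'],
        truncation=True,
        is_split_into_words=True,
    )
    for i, tags in enumerate(batch['tags']):
        label_ids = []
        previous_word_id = None
        word_ids = tokenized_inputs.word_ids(batch_index=i)
        for word_id in word_ids:
            if word_id is None or word_id == previous_word_id:
                # special token or word continuation: ignore during training
                label_ids.append(ignore_index)
            else:
                # first sub-word of a word: use the word's tag id
                label_ids.append(tag_to_index[tags[word_id]])
            previous_word_id = word_id
        labels.append(label_ids)
    # store labels together with the tokenizer output
    tokenized_inputs['labels'] = labels
    return tokenized_inputs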
Next, we use this function to preprocess the train and validation folds in our DatasetDict. The preprocessed training fold contains 14,305 rows and five columns: the original words and tags, plus the input_ids, attention_mask, and labels produced by the tokenizer, where the label −100 marks the sub-word positions to be ignored.

Next, we implement our model class that uses a transformer encoder as a transducer. Because our downstream task consists of POS tagging for Spanish, we need a transformer model that was pre-trained on Spanish texts. Here, we chose XLM-RoBERTa (Conneau et al., 2019) as our base model. XLM-RoBERTa is a RoBERTa model (Liu et al., 2019) that has been pre-trained on 100 different languages, including Spanish. Of note, XLM-RoBERTa does not require us to specify which language we are working on. Similar to BERT, it only requires the input_ids. We discussed in the text classification section that Hugging Face provides implementations for text classification models. This is also true for token classification problems that require transducers. In particular, the XLMRobertaForTokenClassification model provided by Hugging Face does everything needed for this task. However, as before, here we implement it ourselves for pedagogical purposes. The model architecture is similar to our text classification example. It consists of a transformer, a dropout layer, and a linear layer used for classification. The number of labels, which determines the output dimension of the linear layer, is equal to the number of POS tags. The primary difference between the text classification example and this token classification model is that with the former we produced one label for each text document, while here we produce one label for each token in the input text. Specifically, in our text classification model the output shape was two-dimensional: (batch_size, num_labels). Here, our output is three-dimensional: (batch_size, sequence_size, num_labels). So, while much of the forward method is familiar to us, when we are required to compute the loss, we need to reshape the logits and the labels before passing them to the CrossEntropyLoss, since it expects two-dimensional input and one-dimensional labels. For this purpose, we use the view() method to reshape the tensors. This method is efficient because it does not copy the tensor data. Instead, it provides a new view of the same data that behaves like a tensor with a different shape. (Similar to NumPy, PyTorch tensors are represented internally by a block of memory storing the data and some metadata that describes how the data should be read, e.g., type, shape, and stride. The view() method returns a new tensor with new metadata but pointing to the same memory block.) As mentioned before, the number of arguments passed to this method determines the number of dimensions in the output tensor. Here, for our logits, we pass two arguments, and so our new view will have two dimensions. The second will be the size of self.num_labels, while the first (because we pass -1) will be inferred based on the original tensor shape. For our labels, on the other hand, we only provide one argument, and so the new view will have one dimension, inferred by the original shape. Next, we instantiate our model using the XLM-RoBERTa configuration:
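The model class and its instantiation, from the notebook, are shown below; transformer_name here is 'xlm-roberta-base' and index_to_tag is the tag vocabulary built earlier:

from torch import nn
from transformers import AutoConfig
from transformers.modeling_outputs import TokenClassifierOutput
from transformers.models.roberta.modeling_roberta import RobertaModel, RobertaPreTrainedModel

class XLMRobertaForTokenClassification(RobertaPreTrainedModel):
    def __init__(self, config):
        super().__init__(config)
        self.num_labels = config.num_labels
        self.roberta = RobertaModel(config, add_pooling_layer=False)
        self.dropout = nn.Dropout(config.hidden_dropout_prob)
        self.classifier = nn.Linear(config.hidden_size, config.num_labels)
        self.init_weights()

    def forward(self, input_ids=None, attention_mask=None,
                token_type_ids=None, labels=None, **kwargs):
        outputs = self.roberta(
            input_ids,
            attention_mask=attention_mask,
            token_type_ids=token_type_ids,
            **kwargs,
        )
        sequence_output = self.dropout(outputs[0])
        # one set of logits per token: (batch_size, sequence_size, num_labels)
        logits = self.classifier(sequence_output)
        loss = None
        if labels is not None:
            loss_fn = nn.CrossEntropyLoss()
            # flatten logits to 2D and labels to 1D before computing the loss
            inputs = logits.view(-1, self.num_labels)
            targets = labels.view(-1)
            loss = loss_fn(inputs, targets)
        return TokenClassifierOutput(
            loss=loss,
            logits=logits,
            hidden_states=outputs.hidden_states,
            attentions=outputs.attentions,
        )

config = AutoConfig.from_pretrained(
    transformer_name,
    num_labels=len(index_to_tag),
)
model = (
    XLMRobertaForTokenClassification
    .from_pretrained(transformer_name, config=config)
)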
As before, we create a TrainingArguments object and define a compute_metrics function in order to customize a Trainer. While the TrainingArguments code has no substantial changes, we need to adjust the compute_metrics function to account for the fact that our model uses sub-word tokens rather than complete words. Recall that only the first sub-word token per word was assigned a POS tag. The function below discards the labels corresponding to the ignored sub-word tokens and evaluates the rest, returning the accuracy score. The last component required for the Trainer is a collator. Since this time we are batching sequences of tokens, we need a collator that can pad them dynamically when constructing the batches. The transformers library includes a DataCollatorForTokenClassification specifically for this purpose. Once we have our collator and our trainer object, we can train our model:
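Both pieces, taken from the notebook, are shown below; the hyper-parameters are inlined here rather than assigned to variables, and the snippet assumes numpy as np plus the objects defined in the previous fragments (tokenizer, model, ignore_index, index_to_tag, train_ds, eval_ds):

from sklearn.metrics import accuracy_score
from transformers import Trainer, TrainingArguments
from transformers import DataCollatorForTokenClassification

training_args = TrainingArguments(
    output_dir=f'{transformer_name}-finetuned-pos-es',
    log_level='error',
    num_train_epochs=2,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=24,
    evaluation_strategy='epoch',
    weight_decay=0.01,
)

def compute_metrics(eval_pred):
    label_ids = eval_pred.label_ids
    pred_ids = np.argmax(eval_pred.predictions, axis=-1)
    # collect gold and predicted tags, skipping ignored sub-word positions
    y_true, y_pred = [], []
    batch_size, seq_len = pred_ids.shape
    for i in range(batch_size):
        for j in range(seq_len):
            if label_ids[i, j] != ignore_index:
                y_true.append(index_to_tag[label_ids[i][j]])
                y_pred.append(index_to_tag[pred_ids[i][j]])
    return {'accuracy': accuracy_score(y_true, y_pred)}

# pads token sequences and their labels dynamically within each batch
data_collator = DataCollatorForTokenClassification(tokenizer)

trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=data_collator,
    compute_metrics=compute_metrics,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    tokenizer=tokenizer,
)
trainer.train()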
Next, we evaluate our newly trained model on the test dataset. For this purpose, we preprocess the data in the same way we did for the train and validation partitions. Then, for convenience, we use the trainer's predict() method to generate the predicted logits using our model. As before, we use scikit-learn's classification_report() function to display the results of the evaluation. This function expects two one-dimensional lists of labels, so we need to follow a similar approach to the one we employed for text classification. Note that output.label_ids and output.predictions are NumPy arrays rather than PyTorch tensors. This time we use NumPy's reshape() method to reshape the arrays. This method is similar to PyTorch's view() method that we used previously, except that reshape() may copy the array's data in some situations. We discard the labels corresponding to ignored sub-word tokens, and then we print the classification report:
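The evaluation fragment from the notebook follows:

from sklearn.metrics import classification_report

test_ds = ds['test'].map(tokenize_and_align_labels, batched=True)
output = trainer.predict(test_ds)

num_labels = model.num_labels
# flatten labels to 1D and predictions to 2D, then take the argmax per token
label_ids = output.label_ids.reshape(-1)
predictions = output.predictions.reshape(-1, num_labels)
predictions = np.argmax(predictions, axis=-1)

# discard positions labeled with ignore_index (-100)
mask = label_ids != ignore_index
y_true = label_ids[mask]
y_pred = predictions[mask]

target_names = tags[:-1]
print(classification_report(y_true, y_pred, target_names=target_names))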
Our model based on XLM-RoBERTa achieves 99% accuracy. This is considerably better than the LSTM-based model developed in Chapter 11. In order to understand the differences between the two methods, we produce below a confusion matrix for the results of each model. Rows in the confusion matrix represent the true labels and columns represent the predicted labels. In the confusion matrices shown below, each cell (i, j) corresponds to the proportion of values with label i that were assigned the label j. (This is the case because we used the normalize='true' parameter of the confusion_matrix() function.) For a perfect model, all cells on the diagonal would have value 1 and all other cells would have value 0. The code used to generate the confusion matrix is shown below; the resulting confusion matrices for the LSTM and the transformer are shown in Figure 13.1 and Figure 13.2, respectively.
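This is the plotting code from the notebook; y_true, y_pred, and target_names are the arrays computed in the previous fragment:

import matplotlib.pyplot as plt
from sklearn.metrics import ConfusionMatrixDisplay, confusion_matrix

# row-normalized confusion matrix: cell (i, j) is the proportion of
# gold label i that was predicted as label j
cm = confusion_matrix(y_true, y_pred, normalize='true')
disp = ConfusionMatrixDisplay(
    confusion_matrix=cm,
    display_labels=target_names,
)
fig, ax = plt.subplots(figsize=(10, 10))
disp.plot(
    cmap='Blues',
    values_format='.2f',
    colorbar=False,
    ax=ax,
    xticks_rotation=45,
)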
Figure 13.1 Confusion matrix corresponding to the LSTM-based part-of-speech tagger developed in Chapter 11.

Figure 13.2 Confusion matrix corresponding to the transformer-based part-of-speech tagger.

The two confusion matrices highlight a couple of important observations. First, the transformer model is considerably better at predicting POS tags with infrequent support in the dataset. For example, the accuracy for predicting the SYM POS tag increased from 38% in the LSTM model to 95% in the transformer model! Equally as impressive, the transformer improved the performance of tags that are extremely common and, thus, provide plenty of opportunity for both approaches to learn a good model. For example, the accuracy of tagging NOUN, the second most common POS tag in the dataset, increased from 96% in the LSTM model to 99% in the transformer model.

13.4 Summary

In this chapter we presented two applications driven by the encoder component of a transformer network. First, we used the transformer encoder as an acceptor and implemented a text classification application for English news. Second, we used the encoder as a transducer to develop a Spanish part-of-speech tagger. Both tasks were implemented using pre-trained transformer models from the Hugging Face library. For both applications, the transformer-based methods considerably outperform all approaches introduced in the previous chapters, highlighting the value of the transformer architecture. The complete notebooks for both applications are reproduced below.
#!/usr/bin/env python
# coding: utf-8

# # Text Classification Using Transformer Networks (BERT)

# Some initialization:

# In[1]:

import random
import torch
import numpy as np
import pandas as pd
from tqdm.notebook import tqdm

# enable tqdm in pandas
tqdm.pandas()

# set to True to use the gpu (if there is one available)
use_gpu = True

# select device
device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu')
print(f'device: {device.type}')

# random seed
seed = 1234

# set random seed
if seed is not None:
    print(f'random seed: {seed}')
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)

# Read the train/dev/test datasets and create a HuggingFace `Dataset` object:

# In[2]:

def read_data(filename):
    # read csv file
    df = pd.read_csv(filename, header=None)
    # add column names
    df.columns = ['label', 'title', 'description']
    # make labels zero-based
    df['label'] -= 1
    # concatenate title and description, and remove backslashes
    df['text'] = df['title'] + " " + df['description']
    df['text'] = df['text'].str.replace('\\', ' ', regex=False)
    return df

# In[3]:

labels = open('data/ag_news_csv/classes.txt').read().splitlines()
train_df = read_data('data/ag_news_csv/train.csv')
test_df = read_data('data/ag_news_csv/test.csv')
train_df

# In[4]:

from sklearn.model_selection import train_test_split

train_df, eval_df = train_test_split(train_df, train_size=0.9)
train_df.reset_index(inplace=True, drop=True)
eval_df.reset_index(inplace=True, drop=True)

print(f'train rows: {len(train_df.index):,}')
print(f'eval rows: {len(eval_df.index):,}')
print(f'test rows: {len(test_df.index):,}')

# In[5]:

from datasets import Dataset, DatasetDict

ds = DatasetDict()
ds['train'] = Dataset.from_pandas(train_df)
ds['validation'] = Dataset.from_pandas(eval_df)
ds['test'] = Dataset.from_pandas(test_df)
ds

# Tokenize the texts:

# In[6]:

from transformers import AutoTokenizer

transformer_name = 'bert-base-cased'
tokenizer = AutoTokenizer.from_pretrained(transformer_name)

# In[7]:

def tokenize(examples):
    return tokenizer(examples['text'], truncation=True)

train_ds = ds['train'].map(
    tokenize,
    batched=True,
    remove_columns=['title', 'description', 'text'],
)
eval_ds = ds['validation'].map(
    tokenize,
    batched=True,
    remove_columns=['title', 'description', 'text'],
)
train_ds.to_pandas()

# Create the transformer model:

# In[8]:

from torch import nn
from transformers.modeling_outputs import SequenceClassifierOutput
from transformers.models.bert.modeling_bert import BertModel, BertPreTrainedModel

# https://github.com/huggingface/transformers/blob/65659a29cf5a079842e61a63d57fa24474288998/src/transformers/models/bert/modeling_bert.py#L1486
class BertForSequenceClassification(BertPreTrainedModel):
    def __init__(self, config):
        super().__init__(config)
        self.num_labels = config.num_labels
        self.bert = BertModel(config)
        self.dropout = nn.Dropout(config.hidden_dropout_prob)
        self.classifier = nn.Linear(config.hidden_size, config.num_labels)
        self.init_weights()

    def forward(self, input_ids=None, attention_mask=None, token_type_ids=None, labels=None, **kwargs):
        outputs = self.bert(
            input_ids,
            attention_mask=attention_mask,
            token_type_ids=token_type_ids,
            **kwargs,
        )
        cls_outputs = outputs.last_hidden_state[:, 0, :]
        cls_outputs = self.dropout(cls_outputs)
        logits = self.classifier(cls_outputs)
        loss = None
        if labels is not None:
            loss_fn = nn.CrossEntropyLoss()
            loss = loss_fn(logits, labels)
        return SequenceClassifierOutput(
            loss=loss,
            logits=logits,
            hidden_states=outputs.hidden_states,
            attentions=outputs.attentions,
        )

# In[9]:

from transformers import AutoConfig

config = AutoConfig.from_pretrained(
    transformer_name,
    num_labels=len(labels),
)
model = (
    BertForSequenceClassification
    .from_pretrained(transformer_name, config=config)
)

# Create the trainer object and train:

# In[10]:

from transformers import TrainingArguments

num_epochs = 2
batch_size = 24
weight_decay = 0.01
model_name = f'{transformer_name}-sequence-classification'

training_args = TrainingArguments(
    output_dir=model_name,
    log_level='error',
    num_train_epochs=num_epochs,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    evaluation_strategy='epoch',
    weight_decay=weight_decay,
)

# In[11]:

from sklearn.metrics import accuracy_score

def compute_metrics(eval_pred):
    y_true = eval_pred.label_ids
    y_pred = np.argmax(eval_pred.predictions, axis=-1)
    return {'accuracy': accuracy_score(y_true, y_pred)}

# In[12]:

from transformers import Trainer

trainer = Trainer(
    model=model,
    args=training_args,
    compute_metrics=compute_metrics,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    tokenizer=tokenizer,
)

# In[13]:

trainer.train()

# Evaluate on the test partition:

# In[14]:

test_ds = ds['test'].map(
    tokenize,
    batched=True,
    remove_columns=['title', 'description', 'text'],
)
test_ds.to_pandas()

# In[15]:

output = trainer.predict(test_ds)
output

# In[16]:

from sklearn.metrics import classification_report

y_true = output.label_ids
y_pred = np.argmax(output.predictions, axis=-1)
target_names = labels
print(classification_report(y_true, y_pred, target_names=target_names))
#!/usr/bin/env python
# coding: utf-8

# # Part-of-speech Tagging with Transformer Networks

# Some initialization:

# In[1]:

import random
import torch
import numpy as np
import pandas as pd
from tqdm.notebook import tqdm

# enable tqdm in pandas
tqdm.pandas()

# set to True to use the gpu (if there is one available)
use_gpu = True

# select device
device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu')
print(f'device: {device.type}')

# random seed
seed = 1234

# set random seed
if seed is not None:
    print(f'random seed: {seed}')
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)

# Read the words and POS tags from the Spanish dataset:

# In[2]:

from conllu import parse_incr

def read_tags(filename):
    data = {'words': [], 'tags': []}
    with open(filename) as f:
        for sent in tqdm(parse_incr(f)):
            words = [tok['form'] for tok in sent]
            tags = [tok['upos'] for tok in sent]
            data['words'].append(words)
            data['tags'].append(tags)
    return pd.DataFrame(data)

# In[3]:

train_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-train.conllup')
valid_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-dev.conllup')
test_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-test.conllup')

# In[4]:

tags = train_df['tags'].explode().unique()
index_to_tag = {i:t for i,t in enumerate(tags)}
tag_to_index = {t:i for i,t in enumerate(tags)}

# Create a HuggingFace `DatasetDict` object:

# In[5]:

from datasets import Dataset, DatasetDict

ds = DatasetDict()
ds['train'] = Dataset.from_pandas(train_df)
ds['validation'] = Dataset.from_pandas(valid_df)
ds['test'] = Dataset.from_pandas(test_df)
ds

# In[6]:

ds['train'].to_pandas()

# Now tokenize the texts and assign POS labels to the first token in each word:

# In[7]:

from transformers import AutoTokenizer

transformer_name = 'xlm-roberta-base'
tokenizer = AutoTokenizer.from_pretrained(transformer_name)

# In[8]:

x = ds['train'][0]
tokenized_input = tokenizer(x['words'], is_split_into_words=True)
tokens = tokenizer.convert_ids_to_tokens(tokenized_input['input_ids'])
word_ids = tokenized_input.word_ids()
pd.DataFrame([tokens, word_ids], index=['tokens', 'word ids'])

# In[9]:

# https://arxiv.org/pdf/1810.04805.pdf
# Section 5.3
# We use the representation of the first sub-token as the input to the
# token-level classifier over the NER label set.

# default value for CrossEntropyLoss ignore_index parameter
ignore_index = -100

def tokenize_and_align_labels(batch):
    labels = []
    # tokenize batch
    tokenized_inputs = tokenizer(
        batch['words'],
        truncation=True,
        is_split_into_words=True,
    )
    # iterate over batch elements
    for i, tags in enumerate(batch['tags']):
        label_ids = []
        previous_word_id = None
        # get word ids for current batch element
        word_ids = tokenized_inputs.word_ids(batch_index=i)
        # iterate over tokens in batch element
        for word_id in word_ids:
            if word_id is None or word_id == previous_word_id:
                # ignore if not a word or word id has already been seen
                label_ids.append(ignore_index)
            else:
                # get tag id for corresponding word
                tag_id = tag_to_index[tags[word_id]]
                label_ids.append(tag_id)
            # remember this word id
            previous_word_id = word_id
        # save label ids for current batch element
        labels.append(label_ids)
    # store labels together with the tokenizer output
    tokenized_inputs['labels'] = labels
    return tokenized_inputs

# In[10]:

train_ds = ds['train'].map(tokenize_and_align_labels, batched=True)
eval_ds = ds['validation'].map(tokenize_and_align_labels, batched=True)
train_ds.to_pandas()

# Create our transformer model:

# In[11]:

from torch import nn
from transformers.modeling_outputs import TokenClassifierOutput
from transformers.models.roberta.modeling_roberta import RobertaModel, RobertaPreTrainedModel

# https://github.com/huggingface/transformers/blob/65659a29cf5a079842e61a63d57fa24474288998/src/transformers/models/roberta/modeling_roberta.py#L1346
class XLMRobertaForTokenClassification(RobertaPreTrainedModel):
    def __init__(self, config):
        super().__init__(config)
        self.num_labels = config.num_labels
        self.roberta = RobertaModel(config, add_pooling_layer=False)
        self.dropout = nn.Dropout(config.hidden_dropout_prob)
        self.classifier = nn.Linear(config.hidden_size, config.num_labels)
        self.init_weights()

    def forward(self, input_ids=None, attention_mask=None, token_type_ids=None, labels=None, **kwargs):
        outputs = self.roberta(
            input_ids,
            attention_mask=attention_mask,
            token_type_ids=token_type_ids,
            **kwargs,
        )
        sequence_output = self.dropout(outputs[0])
        logits = self.classifier(sequence_output)
        loss = None
        if labels is not None:
            loss_fn = nn.CrossEntropyLoss()
            inputs = logits.view(-1, self.num_labels)
            targets = labels.view(-1)
            loss = loss_fn(inputs, targets)
        return TokenClassifierOutput(
            loss=loss,
            logits=logits,
            hidden_states=outputs.hidden_states,
            attentions=outputs.attentions,
        )

# In[12]:

from transformers import AutoConfig

config = AutoConfig.from_pretrained(
    transformer_name,
    num_labels=len(index_to_tag),
)
model = (
    XLMRobertaForTokenClassification
    .from_pretrained(transformer_name, config=config)
)

# Create the `Trainer` object and train:

# In[13]:

from transformers import TrainingArguments

num_epochs = 2
batch_size = 24
weight_decay = 0.01
model_name = f'{transformer_name}-finetuned-pos-es'

training_args = TrainingArguments(
    output_dir=model_name,
    log_level='error',
    num_train_epochs=num_epochs,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    evaluation_strategy='epoch',
    weight_decay=weight_decay,
)

# In[14]:

from sklearn.metrics import accuracy_score

def compute_metrics(eval_pred):
    # gold labels
    label_ids = eval_pred.label_ids
    # predictions
    pred_ids = np.argmax(eval_pred.predictions, axis=-1)
    # collect gold and predicted labels, ignoring ignore_index label
    y_true, y_pred = [], []
    batch_size, seq_len = pred_ids.shape
    for i in range(batch_size):
        for j in range(seq_len):
            if label_ids[i, j] != ignore_index:
                y_true.append(index_to_tag[label_ids[i][j]])
                y_pred.append(index_to_tag[pred_ids[i][j]])
    # return computed metrics
    return {'accuracy': accuracy_score(y_true, y_pred)}

# In[15]:

from transformers import Trainer
from transformers import DataCollatorForTokenClassification

data_collator = DataCollatorForTokenClassification(tokenizer)

trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=data_collator,
    compute_metrics=compute_metrics,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    tokenizer=tokenizer,
)
trainer.train()

# Evaluate on the test partition:

# In[16]:

test_ds = ds['test'].map(
    tokenize_and_align_labels,
    batched=True,
)
output = trainer.predict(test_ds)

# In[17]:

from sklearn.metrics import classification_report

num_labels = model.num_labels
label_ids = output.label_ids.reshape(-1)
predictions = output.predictions.reshape(-1, num_labels)
predictions = np.argmax(predictions, axis=-1)
mask = label_ids != ignore_index
y_true = label_ids[mask]
y_pred = predictions[mask]
target_names = tags[:-1]
report = classification_report(
    y_true, y_pred,
    target_names=target_names
)
print(report)

# In[18]:

import matplotlib.pyplot as plt
from sklearn.metrics import ConfusionMatrixDisplay, confusion_matrix

cm = confusion_matrix(y_true, y_pred, normalize='true')
disp = ConfusionMatrixDisplay(
    confusion_matrix=cm,
    display_labels=target_names,
)
fig, ax = plt.subplots(figsize=(10,10))
disp.plot(
    cmap='Blues',
    values_format='.2f',
    colorbar=False,
    ax=ax,
    xticks_rotation=45,
)
2,532
2,570
18
chap13-19
chap13-19
13 Using Transformers with the Hugging Face Library One of the key advantages of transformer networks is the ability to take a model that was pre-trained over vast quantities of text and fine-tune it for the task at hand. Intuitively, this strategy allows transformer networks to achieve higher performance on smaller datasets by relying on statistics acquired at scale in an unsupervised way (e.g., through the masked language model training objective). To this end, in this chapter we will use the Hugging Face library,1 which has a rich repository of datasets and pre-trained models, as well as helper methods and classes that make it easy to target downstream tasks. Using pre-trained transformer encoders, we will implement the two tasks that served as use cases in the previous chapters: text classification and part-of-speech tagging. 13.1 Tokenization As discussed in Section 12.2, transformers rely on sub-word tokens. This strategy provides an elegant way to handle unknown and low-frequency words by splitting them into more frequent sub-word parts. At the same time, these tokenization algorithms maintain frequently-occurring words as standalone tokens, so the signal for these common words is preserved. To make this more concrete, we show below how tokenizers are employed in the Hugging Face library. First, we load the tokenizer that corresponds to the transformer we intend to use. This is important for two reasons: (a) different transformers rely on different tokenization algorithms, and (b) even for the ones that use the same algorithm, their tokenizer vocabularies are likely to be different if they were pre-trained 1 https://huggingface.co/docs/transformers/main/en/index 186 13.1 Tokenization 187 on different corpora. Next, we tokenize some example text and display some of the resulting attributes with pandas: As shown above, the tokenizer splits the text into tokens, and adds two special tokens: the [CLS] token at the beginning of the token sequence, and the [SEP] token at the end. Also, note that the ## characters at the beginning of some tokens indicate that they are not standalone words, but rather sub-words that continue a word previously started. For example, the output above shows that the word walrus was split into three sub-words. Note, however, that this is specific to this particular tokenization algorithm, and other tokenizers may indicate word continuation in different ways. A better way to detect word continuations is using the word_ids() method of the tokenizer output, which assigns the same id to all tokens part of the same word. For example, all fragments of the word walrus share the word id 3. Lastly, the input_ids attribute provides the token ids used internally by the transformer to map tokens to embeddings. To briefly demonstrate how different tokenizers produce different outputs, here is the same text tokenized with the tokenizer corresponding to xlm-roberta-base: Note how the [CLS] and [SEP] special tokens have been replaced with <s> and </s> respectively. Also, spaces have been replaced with the Unicode character (U+2581, LOWER ONE EIGHTH BLOCK). Tokens that start with that character are considered word beginnings and the rest are word continuations, as can be confirmed by looking at the word ids. This illustrates the importance of using the tokenizer that corresponds to the transformer you intend to use. 012345678 tokens [CLS] I am the wa ##l ##rus . 
[SEP] word_ids None 0 1 2 3 3 3 4 None input_ids 101 146 1821 1103 20049 1233 6208 119 102 01234567 tokens <s> ▁I ▁am ▁the ▁wal rus . </s > None 0 1 2 3 3 3 None 0 87 444 70 32973 6563 5 2 word_ids input_ids 188 Using Transformers with the Hugging Face Library 13.2 Text Classification For our text classification example, we will continue using the AG News dataset from previous chapters. We will load, preprocess, and split the dataset into pandas dataframes in the same way as before. Now however, rather than continuing with pandas, we will create a Hugging Face dataset from the dataframes. Hugging Face datasets are convenient because of their built-in support of batching, efficient data transformations, and caching. In particular, we convert each dataframe into a Hugging Face dataset. The various datasets are managed with a DatasetDict. Note that this is the same data structure seen when downloading a Hugging Face dataset from their hub.2 The keys in this dictionary are usually train, validation, and test:3 Once our dataset is loaded, we load a tokenizer. Different pre-trained models are tokenized differently, and it is important to select the tokenizer that corresponds to the model we will use so that the inputs are consistent with model expectations. In our example, we will use the bert-base-cased pre-trained model and tokenizer: Datasets have a map() method that transforms the dataset by applying a function to each example. The method returns a new dataset with the transformation applied. We use the map() method to tokenize our dataset. To this end, we define a function that tokenizes an example using the tokenizer we loaded previously. Note that tokenizers support many options that you may need depending on your situation. However, since this is a simple scenario, all we need to do is provide the text to tokenize and specify how to handle texts that exceed the maximum number of tokens permitted by the pre-trained model. Here we have our tokenizer truncate any inputs that are too long by specifying the truncation=True parameter. The output of this function will be added to the new dataset as extra columns. Further, we also want to remove some of the columns that are no longer needed, simplifying subsequent steps. For this, we use the remove_columns argument, listing the columns that we want to discard. Additionally, the dataset’s map() method can batch the dataset; we enable this option with the batched=True argument: 2 https://huggingface.co/datasets
3 These correspond to the more common terms train, development, and test we have used throughout the book so far. In this chapter we use the Hugging Face naming conventions for consistency. 13.2 Text Classification 189
[Tokenized dataset preview: 108,000 rows × 4 columns, listing the label, input_ids, token_type_ids, and attention_mask of each training example.]
Next, we implement a classifier for our task. Hugging Face provides a
variety of models corresponding to several types of downstream tasks. However, for pedagogical purposes, we implement one from scratch. In particular, our model class inherits from BertPreTrainedModel, which
provides several useful methods such as init_weights() and from_pretrained() methods, which we will use later. The model constructor takes a config- uration object as its only parameter. Configuration objects contain all the hyper-parameters used by the corresponding pre-trained models. We will show later how the configuration model is retrieved and customized. Models that implement specific downstream tasks are usually composed of a pre-trained model (sometimes referred as the body), and one or more task-specific layers (usually referred as the head). Here, we initialize a BertModel using the provided configuration, as well as a dropout layer and a task-specific linear layer used for classifying the Bert output. Each of these layers is initialized by calling the init_weights() method inherited from BertPreTrainedModel. The forward() method, which implements the task-specific forward pass, takes as arguments the outputs of the tokenizer, and, optionally, the gold labels corresponding to the input data points. Our implementation of the forward pass sends the input tokens to the Bert model to produce the contextualized representations for all tokens. This output has several components, including the last_hidden_state which con- 190 Using Transformers with the Hugging Face Library tains the final hidden-state embedding for each token. For our task, we will represent the whole sequence using the embedding for the [CLS] token that occurs at the start of each example. We retrieve it by selecting the first element of each output sequence in the batch (i.e., last_hidden_state[:, 0, :]). As in the previous chapters, we apply dropout to our sequence representation, and then pass it through our linear classification layer. If gold labels are provided (i.e., we are training), we now compute the loss using the cross-entropy loss. The output of the forward pass is wrapped in a Hugging Face SequenceClassifierOutput object4 and returned: Next we load the configuration of the pre-trained model and instantiate our model. The AutoConfig class can load the configuration for any pre-trained model, retrieving it from Hugging Face if needed. Then we use the configuration to instantiate our model using the from_pretrained() method. With this call, the pre-trained model will be loaded, which includes downloading if necessary: Hugging Face provides a Trainer class that greatly simplifies the training process. This class not only implements the training loop we have been using in the previous chapters, but also handles other useful steps such as saving checkpoints (i.e., intermediate models after a number of mini-batches have been processed during training), and tracking custom measures about model performance. In order to create a Trainer, we first need to specify its configuration in a TrainingArguments object. In ours, we specify certain hyper parameters such as batch size, weight decay, and number of epochs, as well as where to store model checkpoints: The TrainingArguments class provides a wide variety of arguments that we have not shown.5 These arguments usually have appropriate default values, so it is often fine to omit them. For example, we did not use the label_names argument, which specifies the key that corresponds to the training labels. When omitted, it defaults to keys such as label, labels and label_ids.6 In this chapter we used label. Note that we also specify how often we would like to see the perfor- . 4  Hugging Face utilizes a set of output objects to standardize model output for a given task. 
These objects typically include additional information, e.g., attention weights, which can be used for visualizing or debugging model behavior. 
5 https://huggingface.co/docs/transformers/main/en/main_classes/trainer#transformers.TrainingArguments
 6 In the case of extractive question answering (see Chapter 16), the start_positions and end_positions store the start/end positions of the correct answers. 13.3 Part-of-speech Tagging 191 mance of the current model (at the end of each epoch) with evaluation_strategy='epoch'. This means that after each epoch we print the current loss on the training
partition and on the evaluation dataset, if one is available. Additionally,
we can report custom metrics at this time. For this purpose, we use the compute_metrics parameter of the Trainer, which expects a function that receives a transformers. EvalPredictions object containing the label ids and the predicted logits. The expected return type is a dictionary whose keys correspond to different metrics, each of which will be displayed as a separate result column. Using the above TrainingArguments and compute_metrics function, we create our Trainer. Note that when you provide a tokenizer, the trainer will automatically pad the sequences in each batch. Also, the trainer will automatically use any GPU that is available, unless specifically disabled in the TrainingArguments. Training our model takes a single call to the train() method of the Trainer object. As specified in the our instance of TrainingArguments, the training and validation losses, as well as the accuracy, are reported every epoch. As in the other chapters, we can write custom code to obtain the model’s predictions on the test data. However, the Trainer class provides a predict() method that drastically simplifies this: As shown in the table above, this model achieves an accuracy of 95%, which is the highest performance we have achieved so far on this dataset. 13.3 Part-of-speech Tagging To showcase part-of-speech tagging using transformers, we continue with the Spanish section of the AnCora corpus introduced in Chapter 11. Recall that the dataset is stored in the CoNLL-U format. We load this format in the same way as before, but then we convert the loaded dataset into a Hugging Face DictDataset: Importantly, because the CoNLL-U dataset is already tokenized, we use the is_split_into_words=True tokenizer argument to ensure that the tokenizer respects the existing word boundaries during its sub-word tokenization. Further, while we want to predict one POS tag per word, Epoch Training Loss Validation Loss Accuracy 1 0.187800 0.172629 0.941667 2 0.104000 0.183001 0.946250 192 Using Transformers with the Hugging Face Library any given word may be split into smaller pieces by our tokenizer. Thus, we need to align the tokenizer output to the CoNLL-U words. The original BERT paper (Devlin et al., 2018) addresses this by only using the embedding corresponding to the first sub-token for each word. We follow the same approach for consistency. For the sub-words that do not correspond to the beginning of a word, we use a special value that indicates that we are not interested in their predictions. The CrossEntropyLoss has a parameter called ignore_index for this purpose. The default value for this parameter is −100, which we use as the label for the sub-words we wish to ignore during training: Next, we use this function to preprocess the train and validation folds in our DatasetDict: words [El, presidente, de, el, órgano, regulador, de... [Afirmó, que, sigue, el, criterio, europeo, y,... [Durante, la, presentación, de, el, libro, ", ... [Y, todas, las, miradas, convergen, en, la, lu... [Cambiar, las, formas, parece, de, rigor, ,, p... [Él, llega, a, tirar, la, sobre, la, cama, y, ... tags [DET, NOUN, ADP, DET, NOUN, ADJ, ADP, DET, PRO... [VERB, SCONJ, VERB, DET, NOUN, ADJ, CCONJ, SCO... [ADP, DET, NOUN, ADP, DET, NOUN, PUNCT, DET, P... [CCONJ, DET, DET, NOUN, VERB, ADP, DET, NOUN, ... [VERB, DET, NOUN, VERB, ADP, NOUN, PUNCT, CCON... [PRON, VERB, ADP, VERB, PRON, ADP, DET, NOUN, ... input_ids [0, 540, 9692, 8, 88, 103633, 15913, 1846, 8, ... [0, 62, 38949, 849, 41, 58453, 88, 166220, 620... 
14305 rows × 5 columns ... ... ... ... ... Next, we implement our model class that uses a transformer encoder as a transducer. Because our downstream task consists of POS tagging for Spanish, we need a transformer model that was pre-trained on Spanish texts. Here, we chose XLM-RoBERTa (Conneau et al., 2019) as our base model. XLM-Roberta is a RoBERTa model (Liu et al., 2019) that has 13.3 Part-of-speech Tagging 193 been pre-trained on 100 different languages, including Spanish. Of note, XLM-RoBERTa does not require us to specify what language we are working on. Similar to BERT, it only requires the input_ids. We discussed in the text classification section that Hugging Face provides implementations for text classification models. This is also true for token classification problems that require transducers. In particular, the XLMRobertaForTokenClassification model provided by Hugging Face does everything needed for this task. However, as before, here we implement it ourselves for pedagogical purposes. The model architecture is similar to our text classification example. It consists of a transformer, a dropout layer, and a linear layer used for classification. The number of labels which determines the output dimension of the linear layer is equal to the number of POS tags. The primary difference between the text classification example and this token classification model is that with the former we produced one label for each text document, while here we produce one label for each token in the input text. Specifically, in our text classification model the output shape was two-dimensional: (batch_size, num_labels). Here, our output is three-dimensional: (batch_size, sequence_size, num_labels). So, while much of the forward method is familiar to us, when we are required to compute the loss, we need to reshape the logits and the labels before passing them to the CrossEntropyLoss, since it expects two-dimensional input and one-dimensional labels. For this purpose, we use the view() method to reshape the tensors. This method is efficient because it does not copy the tensor data. Instead it provides a new view of the same data that behaves like a tensor with a different shape.7 As mentioned before, the number of arguments passed to this method determines the number of dimensions in the output tensor. Here, for our logits, we pass two arguments and so our new view will have two dimensions. The second will be the size of self.num_labels, while the first (because we pass -1) will be inferred based on the original tensor shape. For our labels, on the other hand, we only provide one argument and so the new view will have one dimension, inferred by the original shape: Next, we instantiate our model using the XLM-RoBERTa configuration: 7 Similar to NumPy, PyTorch tensors are represented internally by a block of memory storing the data and some metadata that describes how the data should be read, e.g., type, shape, and stride. The view() method returns a new tensor with new metadata but pointing to the same memory block. 194 Using Transformers with the Hugging Face Library As before, we create a TrainingArguments object and define a compute_metrics function in order to customize a Trainer: While the TrainingArguments code has no substantial changes, we need to adjust the compute_metrics function to account for the fact that our model uses sub-word tokens rather than complete words. Recall that only the first sub-word token per word was assigned a POS tag. 
This function discards the labels corresponding to the ignored sub-word tokens and evaluates the rest, returning the accuracy score: The last component required for the Trainer is a collator. Since this time we are batching sequences of tokens, we need a collator that can pad them dynamically when constructing the batches. The transformers library includes a DataCollatorForTokenClassification specifically for this purpose. Once we have our collator and our trainer object, we can train our model: Next, we evaluate our newly trained model on the test dataset. For this purpose, we preprocess the data in the same way we did for the train and validation partitions. Then, for convenience, we use the trainer’s predict() method to generate the predicted logits using our model: As before, we use scikit-learn’s classification_report() function to display the results of the evaluation. This function expects two onedimensional lists of labels, so we need to follow a similar approach to the one we employed for text classification. Note that output.label_ids and output.predictions are NumPy arrays rather than PyTorch tensors. This time we use NumPy’s reshape() method to reshape the arrays. This method is similar to PyTorch’s view() method that we used previously, except that view() may copy the array’s data in some situations. We discard the labels corresponding to ignored sub-word tokens, and then we print the classification report: Our model based on XLM-RoBERTa achieves 99% accuracy. This is considerably better than the LSTM-based model developed in Chapter 11. In order to understand the differences between the two methods, we produce below a confusion matrix for the results of each model. Rows in the confusion matrix represent the true labels and columns represent the predicted labels. In the confusion matrices shown below, each cell xij corresponds to the proportion of values with label i that were assigned the label j.8 For a perfect model, all cells in the diagonal would have value 1 and all other cells would have value 0. The code used to generate the confusion matrix is shown below. The confusion matrices 8 This is the case because we used the normalize='true' parameter of the confusion_matrix() function. 13.3 Part-of-speech Tagging 195 Figure 13.1 Confusion matrix corresponding to the LSTM-based part-ofspeech tagger developed in Chapter 11. for the LSTM and transformer are show in Figure 13.1 and Figure 13.2, respectively. The two confusion matrices highlight a couple of important observations. First, the transformer model is considerably better at predicting POS tags with infrequent support in the dataset. For example, the accuracy for predicting the SYM POS tag increased from 38% in the LSTM model to 95% in the transformer model! Equally as impressive, the transformer improved the performance of tags that are extremely common, and, thus, provide plenty of opportunity to both approaches to learn a good model. For example, the accuracy of tagging NOUN, the second 196 Using Transformers with the Hugging Face Library Figure 13.2 Confusion matrix corresponding to the transformer-based part-of-speech tagger. most common POS tag in the dataset, increased from 96% in the LSTM model to 99% in the transformer model. 13.4 Summary In this chapter we presented two applications driven by the encoder component of a transformer network. First, we used the transformer encoder as an acceptor and implemented a text classification application for English news. 
Second, we used the encoder as a transducer to develop a Spanish part-of-speech tagger. Both tasks were implemented using 13.4 Summary 197 pre-trained transformer models from the Hugging Face library. For both applications, the transformer-based methods outperform considerably all approaches introduced in the previous chapters, highlighting the value of the transformer architecture.
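The notebook reproduced next contains the complete code for the text classification example. The part-of-speech notebook is not included here, but the label-alignment step described in Section 13.3 can be sketched in a few lines. This is a minimal, hypothetical sketch rather than the book's code: tag_to_index, an assumed mapping from POS tags to integer ids, is introduced only for illustration.

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-base')

def tokenize_and_align(words, tags, tag_to_index):
    # the CoNLL-U sentences are already split into words
    output = tokenizer(words, is_split_into_words=True, truncation=True)
    labels = []
    previous_word_id = None
    for word_id in output.word_ids():
        if word_id is None or word_id == previous_word_id:
            # special tokens and word continuations receive the ignore index
            labels.append(-100)
        else:
            # the first sub-word of each word receives that word's POS tag
            labels.append(tag_to_index[tags[word_id]])
        previous_word_id = word_id
    output['labels'] = labels
    return output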
#!/usr/bin/env python
# coding: utf-8

# # Text Classification Using Transformer Networks (BERT)

# Some initialization:

# In[1]:

import random
import torch
import numpy as np
import pandas as pd
from tqdm.notebook import tqdm

# enable tqdm in pandas
tqdm.pandas()

# set to True to use the gpu (if there is one available)
use_gpu = True

# select device
device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu')
print(f'device: {device.type}')

# random seed
seed = 1234

# set random seed
if seed is not None:
    print(f'random seed: {seed}')
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)

# Read the train/dev/test datasets and create a HuggingFace `Dataset` object:

# In[2]:

def read_data(filename):
    # read csv file
    df = pd.read_csv(filename, header=None)
    # add column names
    df.columns = ['label', 'title', 'description']
    # make labels zero-based
    df['label'] -= 1
    # concatenate title and description, and remove backslashes
    df['text'] = df['title'] + " " + df['description']
    df['text'] = df['text'].str.replace('\\', ' ', regex=False)
    return df

# In[3]:

labels = open('data/ag_news_csv/classes.txt').read().splitlines()
train_df = read_data('data/ag_news_csv/train.csv')
test_df = read_data('data/ag_news_csv/test.csv')
train_df

# In[4]:

from sklearn.model_selection import train_test_split

train_df, eval_df = train_test_split(train_df, train_size=0.9)
train_df.reset_index(inplace=True, drop=True)
eval_df.reset_index(inplace=True, drop=True)
print(f'train rows: {len(train_df.index):,}')
print(f'eval rows: {len(eval_df.index):,}')
print(f'test rows: {len(test_df.index):,}')

# In[5]:

from datasets import Dataset, DatasetDict

ds = DatasetDict()
ds['train'] = Dataset.from_pandas(train_df)
ds['validation'] = Dataset.from_pandas(eval_df)
ds['test'] = Dataset.from_pandas(test_df)
ds

# Tokenize the texts:

# In[6]:

from transformers import AutoTokenizer

transformer_name = 'bert-base-cased'
tokenizer = AutoTokenizer.from_pretrained(transformer_name)

# In[7]:

def tokenize(examples):
    return tokenizer(examples['text'], truncation=True)

train_ds = ds['train'].map(
    tokenize,
    batched=True,
    remove_columns=['title', 'description', 'text'],
)
eval_ds = ds['validation'].map(
    tokenize,
    batched=True,
    remove_columns=['title', 'description', 'text'],
)
train_ds.to_pandas()

# Create the transformer model:

# In[8]:

from torch import nn
from transformers.modeling_outputs import SequenceClassifierOutput
from transformers.models.bert.modeling_bert import BertModel, BertPreTrainedModel

# https://github.com/huggingface/transformers/blob/65659a29cf5a079842e61a63d57fa24474288998/src/transformers/models/bert/modeling_bert.py#L1486
class BertForSequenceClassification(BertPreTrainedModel):
    def __init__(self, config):
        super().__init__(config)
        self.num_labels = config.num_labels
        self.bert = BertModel(config)
        self.dropout = nn.Dropout(config.hidden_dropout_prob)
        self.classifier = nn.Linear(config.hidden_size, config.num_labels)
        self.init_weights()

    def forward(self, input_ids=None, attention_mask=None, token_type_ids=None, labels=None, **kwargs):
        outputs = self.bert(
            input_ids,
            attention_mask=attention_mask,
            token_type_ids=token_type_ids,
            **kwargs,
        )
        cls_outputs = outputs.last_hidden_state[:, 0, :]
        cls_outputs = self.dropout(cls_outputs)
        logits = self.classifier(cls_outputs)
        loss = None
        if labels is not None:
            loss_fn = nn.CrossEntropyLoss()
            loss = loss_fn(logits, labels)
        return SequenceClassifierOutput(
            loss=loss,
            logits=logits,
            hidden_states=outputs.hidden_states,
            attentions=outputs.attentions,
        )

# In[9]:

from transformers import AutoConfig

config = AutoConfig.from_pretrained(
    transformer_name,
    num_labels=len(labels),
)
model = (
    BertForSequenceClassification
    .from_pretrained(transformer_name, config=config)
)

# Create the trainer object and train:

# In[10]:

from transformers import TrainingArguments

num_epochs = 2
batch_size = 24
weight_decay = 0.01
model_name = f'{transformer_name}-sequence-classification'

training_args = TrainingArguments(
    output_dir=model_name,
    log_level='error',
    num_train_epochs=num_epochs,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    evaluation_strategy='epoch',
    weight_decay=weight_decay,
)

# In[11]:

from sklearn.metrics import accuracy_score

def compute_metrics(eval_pred):
    y_true = eval_pred.label_ids
    y_pred = np.argmax(eval_pred.predictions, axis=-1)
    return {'accuracy': accuracy_score(y_true, y_pred)}

# In[12]:

from transformers import Trainer

trainer = Trainer(
    model=model,
    args=training_args,
    compute_metrics=compute_metrics,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    tokenizer=tokenizer,
)

# In[13]:

trainer.train()

# Evaluate on the test partition:

# In[14]:

test_ds = ds['test'].map(
    tokenize,
    batched=True,
    remove_columns=['title', 'description', 'text'],
)
test_ds.to_pandas()

# In[15]:

output = trainer.predict(test_ds)
output

# In[16]:

from sklearn.metrics import classification_report

y_true = output.label_ids
y_pred = np.argmax(output.predictions, axis=-1)
target_names = labels
print(classification_report(y_true, y_pred, target_names=target_names))
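The token-level classifier used for part-of-speech tagging is likewise not part of the notebook above. The sketch below illustrates the head and the loss reshaping described in Section 13.3; it is an approximation written against a generic encoder loaded with AutoModel, not the book's XLM-RoBERTa subclass, and the class name is invented for illustration.

from torch import nn
from transformers import AutoModel

class TokenClassificationSketch(nn.Module):
    def __init__(self, model_name, num_labels, dropout=0.1):
        super().__init__()
        self.num_labels = num_labels
        # pre-trained transformer body
        self.encoder = AutoModel.from_pretrained(model_name)
        # task-specific head: dropout followed by a linear classifier
        self.dropout = nn.Dropout(dropout)
        self.classifier = nn.Linear(self.encoder.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask=None, labels=None):
        outputs = self.encoder(input_ids, attention_mask=attention_mask)
        # one contextualized embedding per sub-word token:
        # (batch_size, sequence_size, hidden_size)
        sequence_output = self.dropout(outputs.last_hidden_state)
        # (batch_size, sequence_size, num_labels)
        logits = self.classifier(sequence_output)
        loss = None
        if labels is not None:
            # CrossEntropyLoss expects two-dimensional logits and
            # one-dimensional labels; its ignore_index defaults to -100
            loss_fn = nn.CrossEntropyLoss()
            loss = loss_fn(logits.view(-1, self.num_labels), labels.view(-1))
        return loss, logits

When batching token sequences, a DataCollatorForTokenClassification built from the tokenizer can be passed to the Trainer as its data_collator so that input ids and label sequences are padded dynamically, as described in Section 13.3.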
13 Using Transformers with the Hugging Face Library

One of the key advantages of transformer networks is the ability to take a model that was pre-trained over vast quantities of text and fine-tune it for the task at hand. Intuitively, this strategy allows transformer networks to achieve higher performance on smaller datasets by relying on statistics acquired at scale in an unsupervised way (e.g., through the masked language model training objective). To this end, in this chapter we will use the Hugging Face library,1 which has a rich repository of datasets and pre-trained models, as well as helper methods and classes that make it easy to target downstream tasks. Using pre-trained transformer encoders, we will implement the two tasks that served as use cases in the previous chapters: text classification and part-of-speech tagging.

1 https://huggingface.co/docs/transformers/main/en/index

13.1 Tokenization

As discussed in Section 12.2, transformers rely on sub-word tokens. This strategy provides an elegant way to handle unknown and low-frequency words by splitting them into more frequent sub-word parts. At the same time, these tokenization algorithms maintain frequently-occurring words as standalone tokens, so the signal for these common words is preserved. To make this more concrete, we show below how tokenizers are employed in the Hugging Face library. First, we load the tokenizer that corresponds to the transformer we intend to use. This is important for two reasons: (a) different transformers rely on different tokenization algorithms, and (b) even for the ones that use the same algorithm, their tokenizer vocabularies are likely to be different if they were pre-trained on different corpora. Next, we tokenize some example text and display some of the resulting attributes with pandas:

            0      1    2     3     4      5     6      7    8
tokens      [CLS]  I    am    the   wa     ##l   ##rus  .    [SEP]
word_ids    None   0    1     2     3      3     3      4    None
input_ids   101    146  1821  1103  20049  1233  6208   119  102

As shown above, the tokenizer splits the text into tokens, and adds two special tokens: the [CLS] token at the beginning of the token sequence, and the [SEP] token at the end. Also, note that the ## characters at the beginning of some tokens indicate that they are not standalone words, but rather sub-words that continue a word previously started. For example, the output above shows that the word walrus was split into three sub-words. Note, however, that this is specific to this particular tokenization algorithm, and other tokenizers may indicate word continuation in different ways. A better way to detect word continuations is using the word_ids() method of the tokenizer output, which assigns the same id to all tokens that are part of the same word. For example, all fragments of the word walrus share the word id 3. Lastly, the input_ids attribute provides the token ids used internally by the transformer to map tokens to embeddings. To briefly demonstrate how different tokenizers produce different outputs, here is the same text tokenized with the tokenizer corresponding to xlm-roberta-base:

            0     1    2    3     4      5     6  7
tokens      <s>   ▁I   ▁am  ▁the  ▁wal   rus   .  </s>
word_ids    None  0    1    2     3      3     3  None
input_ids   0     87   444  70    32973  6563  5  2

Note how the [CLS] and [SEP] special tokens have been replaced with <s> and </s>, respectively. Also, spaces have been replaced with the Unicode character ▁ (U+2581, LOWER ONE EIGHTH BLOCK). Tokens that start with that character are considered word beginnings and the rest are word continuations, as can be confirmed by looking at the word ids. This illustrates the importance of using the tokenizer that corresponds to the transformer you intend to use.

13.2 Text Classification

For our text classification example, we will continue using the AG News dataset from previous chapters. We will load, preprocess, and split the dataset into pandas dataframes in the same way as before. Now, however, rather than continuing with pandas, we will create a Hugging Face dataset from the dataframes. Hugging Face datasets are convenient because of their built-in support for batching, efficient data transformations, and caching. In particular, we convert each dataframe into a Hugging Face dataset. The various datasets are managed with a DatasetDict. Note that this is the same data structure seen when downloading a Hugging Face dataset from their hub.2 The keys in this dictionary are usually train, validation, and test:3

2 https://huggingface.co/datasets
3 These correspond to the more common terms train, development, and test we have used throughout the book so far. In this chapter we use the Hugging Face naming conventions for consistency.

Once our dataset is loaded, we load a tokenizer. Different pre-trained models are tokenized differently, and it is important to select the tokenizer that corresponds to the model we will use so that the inputs are consistent with model expectations. In our example, we will use the bert-base-cased pre-trained model and tokenizer:

Datasets have a map() method that transforms the dataset by applying a function to each example. The method returns a new dataset with the transformation applied. We use the map() method to tokenize our dataset. To this end, we define a function that tokenizes an example using the tokenizer we loaded previously. Note that tokenizers support many options that you may need depending on your situation. However, since this is a simple scenario, all we need to do is provide the text to tokenize and specify how to handle texts that exceed the maximum number of tokens permitted by the pre-trained model. Here we have our tokenizer truncate any inputs that are too long by specifying the truncation=True parameter. The output of this function will be added to the new dataset as extra columns. Further, we also want to remove some of the columns that are no longer needed, simplifying subsequent steps. For this, we use the remove_columns argument, listing the columns that we want to discard. Additionally, the dataset’s map() method can batch the dataset; we enable this option with the batched=True argument:
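A sketch of these steps follows. The split dataframes train_df, eval_df, and test_df, and the exact body of the tokenize() helper, are assumptions here; the names transformer_name, tokenizer, tokenize, train_ds, and eval_ds, and the removed columns, match the notebook code that appears earlier in this document.

from datasets import Dataset, DatasetDict
from transformers import AutoTokenizer

# wrap the train/validation/test dataframes in a DatasetDict
ds = DatasetDict()
ds['train'] = Dataset.from_pandas(train_df)
ds['validation'] = Dataset.from_pandas(eval_df)
ds['test'] = Dataset.from_pandas(test_df)

# load the tokenizer that matches the pre-trained model
transformer_name = 'bert-base-cased'
tokenizer = AutoTokenizer.from_pretrained(transformer_name)

# tokenize a batch of examples, truncating texts that are too long
def tokenize(examples):
    return tokenizer(examples['text'], truncation=True)

# tokenize the train and validation folds, dropping the raw text columns
train_ds = ds['train'].map(
    tokenize,
    batched=True,
    remove_columns=['title', 'description', 'text'],
)
eval_ds = ds['validation'].map(
    tokenize,
    batched=True,
    remove_columns=['title', 'description', 'text'],
)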
(Output: a preview of the tokenized training set, 108,000 rows × 4 columns — label, input_ids, token_type_ids, and attention_mask.)

Next, we implement a classifier for our task. Hugging Face provides a
variety of models corresponding to several types of downstream tasks. However, for pedagogical purposes, we implement one from scratch. In particular, our model class inherits from BertPreTrainedModel, which
provides several useful methods, such as init_weights() and from_pretrained(), which we will use later. The model constructor takes a configuration object as its only parameter. Configuration objects contain all the hyper-parameters used by the corresponding pre-trained models. We will show later how the configuration object is retrieved and customized. Models that implement specific downstream tasks are usually composed of a pre-trained model (sometimes referred to as the body), and one or more task-specific layers (usually referred to as the head). Here, we initialize a BertModel using the provided configuration, as well as a dropout layer and a task-specific linear layer used for classifying the Bert output. Each of these layers is initialized by calling the init_weights() method inherited from BertPreTrainedModel. The forward() method, which implements the task-specific forward pass, takes as arguments the outputs of the tokenizer, and, optionally, the gold labels corresponding to the input data points. Our implementation of the forward pass sends the input tokens to the Bert model to produce the contextualized representations for all tokens. This output has several components, including the last_hidden_state, which contains the final hidden-state embedding for each token. For our task, we will represent the whole sequence using the embedding for the [CLS] token that occurs at the start of each example. We retrieve it by selecting the first element of each output sequence in the batch (i.e., last_hidden_state[:, 0, :]). As in the previous chapters, we apply dropout to our sequence representation, and then pass it through our linear classification layer. If gold labels are provided (i.e., we are training), we now compute the loss using the cross-entropy loss. The output of the forward pass is wrapped in a Hugging Face SequenceClassifierOutput object4 and returned:
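A sketch of such a classifier is shown below. It mirrors the description above and the token classification model reproduced later in this document, so treat it as an illustration of the approach rather than the book's exact listing; the class name matches the BertForSequenceClassification model instantiated in the notebook code above.

from torch import nn
from transformers.modeling_outputs import SequenceClassifierOutput
from transformers.models.bert.modeling_bert import BertModel, BertPreTrainedModel

class BertForSequenceClassification(BertPreTrainedModel):
    def __init__(self, config):
        super().__init__(config)
        self.num_labels = config.num_labels
        # body: the pre-trained transformer
        self.bert = BertModel(config)
        # head: dropout followed by a task-specific linear classifier
        self.dropout = nn.Dropout(config.hidden_dropout_prob)
        self.classifier = nn.Linear(config.hidden_size, config.num_labels)
        self.init_weights()

    def forward(self, input_ids=None, attention_mask=None,
                token_type_ids=None, labels=None, **kwargs):
        outputs = self.bert(
            input_ids,
            attention_mask=attention_mask,
            token_type_ids=token_type_ids,
            **kwargs,
        )
        # represent the whole sequence with the [CLS] embedding (first token)
        cls_outputs = outputs.last_hidden_state[:, 0, :]
        cls_outputs = self.dropout(cls_outputs)
        logits = self.classifier(cls_outputs)
        loss = None
        if labels is not None:
            # compute the loss only when gold labels are provided
            loss_fn = nn.CrossEntropyLoss()
            loss = loss_fn(logits, labels)
        return SequenceClassifierOutput(
            loss=loss,
            logits=logits,
            hidden_states=outputs.hidden_states,
            attentions=outputs.attentions,
        )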
Next we load the configuration of the pre-trained model and instantiate our model. The AutoConfig class can load the configuration for any pre-trained model, retrieving it from Hugging Face if needed. Then we use the configuration to instantiate our model using the from_pretrained() method. With this call, the pre-trained model will be loaded, which includes downloading if necessary:

Hugging Face provides a Trainer class that greatly simplifies the training process. This class not only implements the training loop we have been using in the previous chapters, but also handles other useful steps such as saving checkpoints (i.e., intermediate models after a number of mini-batches have been processed during training), and tracking custom measures about model performance. In order to create a Trainer, we first need to specify its configuration in a TrainingArguments object. In ours, we specify certain hyper-parameters such as batch size, weight decay, and number of epochs, as well as where to store model checkpoints:

The TrainingArguments class provides a wide variety of arguments that we have not shown.5 These arguments usually have appropriate default values, so it is often fine to omit them. For example, we did not use the label_names argument, which specifies the key that corresponds to the training labels. When omitted, it defaults to keys such as label, labels and label_ids.6 In this chapter we used label.

4 Hugging Face utilizes a set of output objects to standardize model output for a given task. These objects typically include additional information, e.g., attention weights, which can be used for visualizing or debugging model behavior.
5 https://huggingface.co/docs/transformers/main/en/main_classes/trainer#transformers.TrainingArguments
6 In the case of extractive question answering (see Chapter 16), the start_positions and end_positions store the start/end positions of the correct answers.

Note that we also specify how often we would like to see the performance of the current model (at the end of each epoch) with evaluation_strategy='epoch'. This means that after each epoch we print the current loss on the training
partition and on the evaluation dataset, if one is available. Additionally,
we can report custom metrics at this time. For this purpose, we use the compute_metrics parameter of the Trainer, which expects a function that receives a transformers.EvalPrediction object containing the label ids and the predicted logits. The expected return type is a dictionary whose keys correspond to different metrics, each of which will be displayed as a separate result column.

Using the above TrainingArguments and compute_metrics function, we create our Trainer. Note that when you provide a tokenizer, the trainer will automatically pad the sequences in each batch. Also, the trainer will automatically use any GPU that is available, unless specifically disabled in the TrainingArguments.

Training our model takes a single call to the train() method of the Trainer object. As specified in our instance of TrainingArguments, the training and validation losses, as well as the accuracy, are reported every epoch:

Epoch  Training Loss  Validation Loss  Accuracy
1      0.187800       0.172629         0.941667
2      0.104000       0.183001         0.946250

As in the other chapters, we can write custom code to obtain the model's predictions on the test data. However, the Trainer class provides a predict() method that drastically simplifies this: As shown in the table above, this model achieves an accuracy of 95%, which is the highest performance we have achieved so far on this dataset.

13.3 Part-of-speech Tagging

To showcase part-of-speech tagging using transformers, we continue with the Spanish section of the AnCora corpus introduced in Chapter 11. Recall that the dataset is stored in the CoNLL-U format. We load this format in the same way as before, but then we convert the loaded dataset into a Hugging Face DatasetDict:

Importantly, because the CoNLL-U dataset is already tokenized, we use the is_split_into_words=True tokenizer argument to ensure that the tokenizer respects the existing word boundaries during its sub-word tokenization. Further, while we want to predict one POS tag per word, any given word may be split into smaller pieces by our tokenizer. Thus, we need to align the tokenizer output to the CoNLL-U words. The original BERT paper (Devlin et al., 2018) addresses this by only using the embedding corresponding to the first sub-token for each word. We follow the same approach for consistency. For the sub-words that do not correspond to the beginning of a word, we use a special value that indicates that we are not interested in their predictions. The CrossEntropyLoss has a parameter called ignore_index for this purpose. The default value for this parameter is −100, which we use as the label for the sub-words we wish to ignore during training:
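The alignment function below is taken from the notebook reproduced after the chapter summary; it relies on the tokenizer, the tag_to_index mapping, and the ignore_index constant defined there.

# default value for CrossEntropyLoss ignore_index parameter
ignore_index = -100

def tokenize_and_align_labels(batch):
    labels = []
    # tokenize batch
    tokenized_inputs = tokenizer(
        batch['words'],
        truncation=True,
        is_split_into_words=True,
    )
    # iterate over batch elements
    for i, tags in enumerate(batch['tags']):
        label_ids = []
        previous_word_id = None
        # get word ids for current batch element
        word_ids = tokenized_inputs.word_ids(batch_index=i)
        # iterate over tokens in batch element
        for word_id in word_ids:
            if word_id is None or word_id == previous_word_id:
                # ignore if not a word or word id has already been seen
                label_ids.append(ignore_index)
            else:
                # get tag id for corresponding word
                tag_id = tag_to_index[tags[word_id]]
                label_ids.append(tag_id)
            # remember this word id
            previous_word_id = word_id
        # save label ids for current batch element
        labels.append(label_ids)
    # store labels together with the tokenizer output
    tokenized_inputs['labels'] = labels
    return tokenized_inputs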
Next, we use this function to preprocess the train and validation folds in our DatasetDict:

(Output: a preview of the tokenized training set, 14,305 rows × 5 columns — words, tags, input_ids, attention_mask, and labels.)

Next, we implement our model class that uses a transformer encoder as a transducer. Because our downstream task consists of POS tagging for Spanish, we need a transformer model that was pre-trained on Spanish texts. Here, we chose XLM-RoBERTa (Conneau et al., 2019) as our base model. XLM-RoBERTa is a RoBERTa model (Liu et al., 2019) that has been pre-trained on 100 different languages, including Spanish. Of note, XLM-RoBERTa does not require us to specify what language we are working on. Similar to BERT, it only requires the input_ids.

We discussed in the text classification section that Hugging Face provides implementations for text classification models. This is also true for token classification problems that require transducers. In particular, the XLMRobertaForTokenClassification model provided by Hugging Face does everything needed for this task. However, as before, here we implement it ourselves for pedagogical purposes. The model architecture is similar to our text classification example. It consists of a transformer, a dropout layer, and a linear layer used for classification. The number of labels, which determines the output dimension of the linear layer, is equal to the number of POS tags.

The primary difference between the text classification example and this token classification model is that with the former we produced one label for each text document, while here we produce one label for each token in the input text. Specifically, in our text classification model the output shape was two-dimensional: (batch_size, num_labels). Here, our output is three-dimensional: (batch_size, sequence_size, num_labels). So, while much of the forward method is familiar to us, when we are required to compute the loss, we need to reshape the logits and the labels before passing them to the CrossEntropyLoss, since it expects two-dimensional input and one-dimensional labels. For this purpose, we use the view() method to reshape the tensors. This method is efficient because it does not copy the tensor data. Instead it provides a new view of the same data that behaves like a tensor with a different shape.7 As mentioned before, the number of arguments passed to this method determines the number of dimensions in the output tensor. Here, for our logits, we pass two arguments and so our new view will have two dimensions. The second will be the size of self.num_labels, while the first (because we pass -1) will be inferred based on the original tensor shape. For our labels, on the other hand, we only provide one argument and so the new view will have one dimension, inferred by the original shape:
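The token classification model, as it appears in the notebook reproduced after the chapter summary:

from torch import nn
from transformers.modeling_outputs import TokenClassifierOutput
from transformers.models.roberta.modeling_roberta import RobertaModel, RobertaPreTrainedModel

class XLMRobertaForTokenClassification(RobertaPreTrainedModel):
    def __init__(self, config):
        super().__init__(config)
        self.num_labels = config.num_labels
        self.roberta = RobertaModel(config, add_pooling_layer=False)
        self.dropout = nn.Dropout(config.hidden_dropout_prob)
        self.classifier = nn.Linear(config.hidden_size, config.num_labels)
        self.init_weights()

    def forward(self, input_ids=None, attention_mask=None,
                token_type_ids=None, labels=None, **kwargs):
        outputs = self.roberta(
            input_ids,
            attention_mask=attention_mask,
            token_type_ids=token_type_ids,
            **kwargs,
        )
        sequence_output = self.dropout(outputs[0])
        logits = self.classifier(sequence_output)
        loss = None
        if labels is not None:
            loss_fn = nn.CrossEntropyLoss()
            # flatten logits to (batch * seq_len, num_labels) and labels to (batch * seq_len)
            inputs = logits.view(-1, self.num_labels)
            targets = labels.view(-1)
            loss = loss_fn(inputs, targets)
        return TokenClassifierOutput(
            loss=loss,
            logits=logits,
            hidden_states=outputs.hidden_states,
            attentions=outputs.attentions,
        )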
Next, we instantiate our model using the XLM-RoBERTa configuration:

7 Similar to NumPy, PyTorch tensors are represented internally by a block of memory storing the data and some metadata that describes how the data should be read, e.g., type, shape, and stride. The view() method returns a new tensor with new metadata but pointing to the same memory block.

As before, we create a TrainingArguments object and define a compute_metrics function in order to customize a Trainer. While the TrainingArguments code has no substantial changes, we need to adjust the compute_metrics function to account for the fact that our model uses sub-word tokens rather than complete words. Recall that only the first sub-word token per word was assigned a POS tag. This function discards the labels corresponding to the ignored sub-word tokens and evaluates the rest, returning the accuracy score:
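From the accompanying notebook (reproduced after the chapter summary):

import numpy as np
from sklearn.metrics import accuracy_score

def compute_metrics(eval_pred):
    # gold labels
    label_ids = eval_pred.label_ids
    # predictions
    pred_ids = np.argmax(eval_pred.predictions, axis=-1)
    # collect gold and predicted labels, ignoring the ignore_index label
    y_true, y_pred = [], []
    batch_size, seq_len = pred_ids.shape
    for i in range(batch_size):
        for j in range(seq_len):
            if label_ids[i, j] != ignore_index:
                y_true.append(index_to_tag[label_ids[i][j]])
                y_pred.append(index_to_tag[pred_ids[i][j]])
    # return computed metrics
    return {'accuracy': accuracy_score(y_true, y_pred)}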
The last component required for the Trainer is a collator. Since this time we are batching sequences of tokens, we need a collator that can pad them dynamically when constructing the batches. The transformers library includes a DataCollatorForTokenClassification specifically for this purpose. Once we have our collator and our trainer object, we can train our model:
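Again following the accompanying notebook:

from transformers import Trainer
from transformers import DataCollatorForTokenClassification

# collator that pads token sequences dynamically within each batch
data_collator = DataCollatorForTokenClassification(tokenizer)

trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=data_collator,
    compute_metrics=compute_metrics,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    tokenizer=tokenizer,
)
trainer.train()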
Next, we evaluate our newly trained model on the test dataset. For this purpose, we preprocess the data in the same way we did for the train and validation partitions. Then, for convenience, we use the trainer's predict() method to generate the predicted logits using our model:

As before, we use scikit-learn's classification_report() function to display the results of the evaluation. This function expects two one-dimensional lists of labels, so we need to follow a similar approach to the one we employed for text classification. Note that output.label_ids and output.predictions are NumPy arrays rather than PyTorch tensors. This time we use NumPy's reshape() method to reshape the arrays. This method is similar to PyTorch's view() method that we used previously, except that reshape() may copy the array's data in some situations. We discard the labels corresponding to ignored sub-word tokens, and then we print the classification report:

Our model based on XLM-RoBERTa achieves 99% accuracy. This is considerably better than the LSTM-based model developed in Chapter 11. In order to understand the differences between the two methods, we produce below a confusion matrix for the results of each model. Rows in the confusion matrix represent the true labels and columns represent the predicted labels. In the confusion matrices shown below, each cell x_ij corresponds to the proportion of values with label i that were assigned the label j.8 For a perfect model, all cells in the diagonal would have value 1 and all other cells would have value 0. The code used to generate the confusion matrix is shown below. The confusion matrices for the LSTM and transformer are shown in Figure 13.1 and Figure 13.2, respectively.

8 This is the case because we used the normalize='true' parameter of the confusion_matrix() function.

Figure 13.1 Confusion matrix corresponding to the LSTM-based part-of-speech tagger developed in Chapter 11.
Figure 13.2 Confusion matrix corresponding to the transformer-based part-of-speech tagger.

The two confusion matrices highlight a couple of important observations. First, the transformer model is considerably better at predicting POS tags with infrequent support in the dataset. For example, the accuracy for predicting the SYM POS tag increased from 38% in the LSTM model to 95% in the transformer model! Equally as impressive, the transformer improved the performance of tags that are extremely common and, thus, provide plenty of opportunity for both approaches to learn a good model. For example, the accuracy of tagging NOUN, the second most common POS tag in the dataset, increased from 96% in the LSTM model to 99% in the transformer model.

13.4 Summary

In this chapter we presented two applications driven by the encoder component of a transformer network. First, we used the transformer encoder as an acceptor and implemented a text classification application for English news. Second, we used the encoder as a transducer to develop a Spanish part-of-speech tagger. Both tasks were implemented using pre-trained transformer models from the Hugging Face library. For both applications, the transformer-based methods outperform considerably all approaches introduced in the previous chapters, highlighting the value of the transformer architecture.
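For reference, the complete notebook for the part-of-speech tagger discussed in this chapter follows; the code fragments shown above were taken from it.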
#!/usr/bin/env python
# coding: utf-8

# # Part-of-speech Tagging with Transformer Networks

# Some initialization:

# In[1]:

import random
import torch
import numpy as np
import pandas as pd
from tqdm.notebook import tqdm

# enable tqdm in pandas
tqdm.pandas()

# set to True to use the gpu (if there is one available)
use_gpu = True

# select device
device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu')
print(f'device: {device.type}')

# random seed
seed = 1234

# set random seed
if seed is not None:
    print(f'random seed: {seed}')
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)

# Read the words and POS tags from the Spanish dataset:

# In[2]:

from conllu import parse_incr

def read_tags(filename):
    data = {'words': [], 'tags': []}
    with open(filename) as f:
        for sent in tqdm(parse_incr(f)):
            words = [tok['form'] for tok in sent]
            tags = [tok['upos'] for tok in sent]
            data['words'].append(words)
            data['tags'].append(tags)
    return pd.DataFrame(data)

# In[3]:

train_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-train.conllup')
valid_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-dev.conllup')
test_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-test.conllup')

# In[4]:

tags = train_df['tags'].explode().unique()
index_to_tag = {i: t for i, t in enumerate(tags)}
tag_to_index = {t: i for i, t in enumerate(tags)}

# Create a HuggingFace `DatasetDict` object:

# In[5]:

from datasets import Dataset, DatasetDict

ds = DatasetDict()
ds['train'] = Dataset.from_pandas(train_df)
ds['validation'] = Dataset.from_pandas(valid_df)
ds['test'] = Dataset.from_pandas(test_df)
ds

# In[6]:

ds['train'].to_pandas()

# Now tokenize the texts and assign POS labels to the first token in each word:

# In[7]:

from transformers import AutoTokenizer

transformer_name = 'xlm-roberta-base'
tokenizer = AutoTokenizer.from_pretrained(transformer_name)

# In[8]:

x = ds['train'][0]
tokenized_input = tokenizer(x['words'], is_split_into_words=True)
tokens = tokenizer.convert_ids_to_tokens(tokenized_input['input_ids'])
word_ids = tokenized_input.word_ids()
pd.DataFrame([tokens, word_ids], index=['tokens', 'word ids'])

# In[9]:

# https://arxiv.org/pdf/1810.04805.pdf
# Section 5.3
# We use the representation of the first sub-token as the input to the
# token-level classifier over the NER label set.

# default value for CrossEntropyLoss ignore_index parameter
ignore_index = -100

def tokenize_and_align_labels(batch):
    labels = []
    # tokenize batch
    tokenized_inputs = tokenizer(
        batch['words'],
        truncation=True,
        is_split_into_words=True,
    )
    # iterate over batch elements
    for i, tags in enumerate(batch['tags']):
        label_ids = []
        previous_word_id = None
        # get word ids for current batch element
        word_ids = tokenized_inputs.word_ids(batch_index=i)
        # iterate over tokens in batch element
        for word_id in word_ids:
            if word_id is None or word_id == previous_word_id:
                # ignore if not a word or word id has already been seen
                label_ids.append(ignore_index)
            else:
                # get tag id for corresponding word
                tag_id = tag_to_index[tags[word_id]]
                label_ids.append(tag_id)
            # remember this word id
            previous_word_id = word_id
        # save label ids for current batch element
        labels.append(label_ids)
    # store labels together with the tokenizer output
    tokenized_inputs['labels'] = labels
    return tokenized_inputs

# In[10]:

train_ds = ds['train'].map(tokenize_and_align_labels, batched=True)
eval_ds = ds['validation'].map(tokenize_and_align_labels, batched=True)
train_ds.to_pandas()

# Create our transformer model:

# In[11]:

from torch import nn
from transformers.modeling_outputs import TokenClassifierOutput
from transformers.models.roberta.modeling_roberta import RobertaModel, RobertaPreTrainedModel

# https://github.com/huggingface/transformers/blob/65659a29cf5a079842e61a63d57fa24474288998/src/transformers/models/roberta/modeling_roberta.py#L1346
class XLMRobertaForTokenClassification(RobertaPreTrainedModel):
    def __init__(self, config):
        super().__init__(config)
        self.num_labels = config.num_labels
        self.roberta = RobertaModel(config, add_pooling_layer=False)
        self.dropout = nn.Dropout(config.hidden_dropout_prob)
        self.classifier = nn.Linear(config.hidden_size, config.num_labels)
        self.init_weights()

    def forward(self, input_ids=None, attention_mask=None,
                token_type_ids=None, labels=None, **kwargs):
        outputs = self.roberta(
            input_ids,
            attention_mask=attention_mask,
            token_type_ids=token_type_ids,
            **kwargs,
        )
        sequence_output = self.dropout(outputs[0])
        logits = self.classifier(sequence_output)
        loss = None
        if labels is not None:
            loss_fn = nn.CrossEntropyLoss()
            inputs = logits.view(-1, self.num_labels)
            targets = labels.view(-1)
            loss = loss_fn(inputs, targets)
        return TokenClassifierOutput(
            loss=loss,
            logits=logits,
            hidden_states=outputs.hidden_states,
            attentions=outputs.attentions,
        )

# In[12]:

from transformers import AutoConfig

config = AutoConfig.from_pretrained(
    transformer_name,
    num_labels=len(index_to_tag),
)

model = (
    XLMRobertaForTokenClassification
    .from_pretrained(transformer_name, config=config)
)

# Create the `Trainer` object and train:

# In[13]:

from transformers import TrainingArguments

num_epochs = 2
batch_size = 24
weight_decay = 0.01
model_name = f'{transformer_name}-finetuned-pos-es'

training_args = TrainingArguments(
    output_dir=model_name,
    log_level='error',
    num_train_epochs=num_epochs,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    evaluation_strategy='epoch',
    weight_decay=weight_decay,
)

# In[14]:

from sklearn.metrics import accuracy_score

def compute_metrics(eval_pred):
    # gold labels
    label_ids = eval_pred.label_ids
    # predictions
    pred_ids = np.argmax(eval_pred.predictions, axis=-1)
    # collect gold and predicted labels, ignoring ignore_index label
    y_true, y_pred = [], []
    batch_size, seq_len = pred_ids.shape
    for i in range(batch_size):
        for j in range(seq_len):
            if label_ids[i, j] != ignore_index:
                y_true.append(index_to_tag[label_ids[i][j]])
                y_pred.append(index_to_tag[pred_ids[i][j]])
    # return computed metrics
    return {'accuracy': accuracy_score(y_true, y_pred)}

# In[15]:

from transformers import Trainer
from transformers import DataCollatorForTokenClassification

data_collator = DataCollatorForTokenClassification(tokenizer)

trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=data_collator,
    compute_metrics=compute_metrics,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    tokenizer=tokenizer,
)
trainer.train()

# Evaluate on the test partition:

# In[16]:

test_ds = ds['test'].map(
    tokenize_and_align_labels,
    batched=True,
)
output = trainer.predict(test_ds)

# In[17]:

from sklearn.metrics import classification_report

num_labels = model.num_labels
label_ids = output.label_ids.reshape(-1)
predictions = output.predictions.reshape(-1, num_labels)
predictions = np.argmax(predictions, axis=-1)
mask = label_ids != ignore_index
y_true = label_ids[mask]
y_pred = predictions[mask]
target_names = tags[:-1]
report = classification_report(
    y_true, y_pred, target_names=target_names
)
print(report)

# In[18]:

import matplotlib.pyplot as plt
from sklearn.metrics import ConfusionMatrixDisplay, confusion_matrix

cm = confusion_matrix(y_true, y_pred, normalize='true')
disp = ConfusionMatrixDisplay(
    confusion_matrix=cm,
    display_labels=target_names,
)
fig, ax = plt.subplots(figsize=(10, 10))
disp.plot(
    cmap='Blues',
    values_format='.2f',
    colorbar=False,
    ax=ax,
    xticks_rotation=45,
)
1,570
1,724
21
chap13-22
chap13-22
13 Using Transformers with the Hugging Face Library One of the key advantages of transformer networks is the ability to take a model that was pre-trained over vast quantities of text and fine-tune it for the task at hand. Intuitively, this strategy allows transformer networks to achieve higher performance on smaller datasets by relying on statistics acquired at scale in an unsupervised way (e.g., through the masked language model training objective). To this end, in this chapter we will use the Hugging Face library,1 which has a rich repository of datasets and pre-trained models, as well as helper methods and classes that make it easy to target downstream tasks. Using pre-trained transformer encoders, we will implement the two tasks that served as use cases in the previous chapters: text classification and part-of-speech tagging. 13.1 Tokenization As discussed in Section 12.2, transformers rely on sub-word tokens. This strategy provides an elegant way to handle unknown and low-frequency words by splitting them into more frequent sub-word parts. At the same time, these tokenization algorithms maintain frequently-occurring words as standalone tokens, so the signal for these common words is preserved. To make this more concrete, we show below how tokenizers are employed in the Hugging Face library. First, we load the tokenizer that corresponds to the transformer we intend to use. This is important for two reasons: (a) different transformers rely on different tokenization algorithms, and (b) even for the ones that use the same algorithm, their tokenizer vocabularies are likely to be different if they were pre-trained 1 https://huggingface.co/docs/transformers/main/en/index 186 13.1 Tokenization 187 on different corpora. Next, we tokenize some example text and display some of the resulting attributes with pandas: As shown above, the tokenizer splits the text into tokens, and adds two special tokens: the [CLS] token at the beginning of the token sequence, and the [SEP] token at the end. Also, note that the ## characters at the beginning of some tokens indicate that they are not standalone words, but rather sub-words that continue a word previously started. For example, the output above shows that the word walrus was split into three sub-words. Note, however, that this is specific to this particular tokenization algorithm, and other tokenizers may indicate word continuation in different ways. A better way to detect word continuations is using the word_ids() method of the tokenizer output, which assigns the same id to all tokens part of the same word. For example, all fragments of the word walrus share the word id 3. Lastly, the input_ids attribute provides the token ids used internally by the transformer to map tokens to embeddings. To briefly demonstrate how different tokenizers produce different outputs, here is the same text tokenized with the tokenizer corresponding to xlm-roberta-base: Note how the [CLS] and [SEP] special tokens have been replaced with <s> and </s> respectively. Also, spaces have been replaced with the Unicode character (U+2581, LOWER ONE EIGHTH BLOCK). Tokens that start with that character are considered word beginnings and the rest are word continuations, as can be confirmed by looking at the word ids. This illustrates the importance of using the tokenizer that corresponds to the transformer you intend to use. 012345678 tokens [CLS] I am the wa ##l ##rus . 
[SEP] word_ids None 0 1 2 3 3 3 4 None input_ids 101 146 1821 1103 20049 1233 6208 119 102 01234567 tokens <s> ▁I ▁am ▁the ▁wal rus . </s > None 0 1 2 3 3 3 None 0 87 444 70 32973 6563 5 2 word_ids input_ids 188 Using Transformers with the Hugging Face Library 13.2 Text Classification For our text classification example, we will continue using the AG News dataset from previous chapters. We will load, preprocess, and split the dataset into pandas dataframes in the same way as before. Now however, rather than continuing with pandas, we will create a Hugging Face dataset from the dataframes. Hugging Face datasets are convenient because of their built-in support of batching, efficient data transformations, and caching. In particular, we convert each dataframe into a Hugging Face dataset. The various datasets are managed with a DatasetDict. Note that this is the same data structure seen when downloading a Hugging Face dataset from their hub.2 The keys in this dictionary are usually train, validation, and test:3 Once our dataset is loaded, we load a tokenizer. Different pre-trained models are tokenized differently, and it is important to select the tokenizer that corresponds to the model we will use so that the inputs are consistent with model expectations. In our example, we will use the bert-base-cased pre-trained model and tokenizer: Datasets have a map() method that transforms the dataset by applying a function to each example. The method returns a new dataset with the transformation applied. We use the map() method to tokenize our dataset. To this end, we define a function that tokenizes an example using the tokenizer we loaded previously. Note that tokenizers support many options that you may need depending on your situation. However, since this is a simple scenario, all we need to do is provide the text to tokenize and specify how to handle texts that exceed the maximum number of tokens permitted by the pre-trained model. Here we have our tokenizer truncate any inputs that are too long by specifying the truncation=True parameter. The output of this function will be added to the new dataset as extra columns. Further, we also want to remove some of the columns that are no longer needed, simplifying subsequent steps. For this, we use the remove_columns argument, listing the columns that we want to discard. Additionally, the dataset’s map() method can batch the dataset; we enable this option with the batched=True argument: 2 https://huggingface.co/datasets
3 These correspond to the more common terms train, development, and test we have used throughout the book so far. In this chapter we use the Hugging Face naming conventions for consistency. 13.2 Text Classification 189 label 03 10 20 32 40 ... ... . 107995  0 
 . 107996  0 
 . 107997  0 
 . 107998  0 
 . 107999  3 
 input_ids [101, 3270, 11906, 1522, 1146, 7106, 1111, 251... [101, 158, 119, 156, 119, 12068, 5084, 1116, 9... [101, 7270, 118, 2733, 1383, 1111, 12448, 7430... [101, 6096, 117, 10378, 3969, 5977, 1111, 8988... [101, 19569, 5480, 10582, 2087, 1867, 158, 119... [101, 1130, 139, 24683, 131, 21107, 2050, 1739... token_type_ids attention_mask [101, 22087, 8223, 1611, 1106, 4417, [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5572, 324... 0, 0, 0, ... [0,0,0,0,0,0,0,0, [0,0,0,0,0,0,0,0, [0,0,0,0,0,0,0,0, [0,0,0,0,0,0,0,0, [0,0,0,0,0,0,0,0, [0,0,0,0,0,0,0,0, 0,0,0,0, 0,0,0,... 0,0,0,0, 0,0,0,... 0,0,0,0, 0,0,0,... 0,0,0,0, 0,0,0,... 0,0,0,0, 0,0,0,... 0,0,0,0, 0,0,0,... [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... ... ... [101, 16409, 118, 16587, 159, 4064, [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1106, 1564... 0, 0, 0, ... 108000 rows × 4 columns [101, 4222, 11404, 1174, 117, 1476, [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1130, 2696... 0, 0, 0, ... [101, 11560, 3881, 108, 3614, 132, [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3498, 2944,... 0, 0, 0, ... [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... ... Next, we implement a classifier for our task. Hugging Face provides a
variety of models corresponding to several types of downstream tasks. However, for pedagogical purposes, we implement one from scratch. In particular, our model class inherits from BertPreTrainedModel, which
provides several useful methods such as init_weights() and from_pretrained() methods, which we will use later. The model constructor takes a config- uration object as its only parameter. Configuration objects contain all the hyper-parameters used by the corresponding pre-trained models. We will show later how the configuration model is retrieved and customized. Models that implement specific downstream tasks are usually composed of a pre-trained model (sometimes referred as the body), and one or more task-specific layers (usually referred as the head). Here, we initialize a BertModel using the provided configuration, as well as a dropout layer and a task-specific linear layer used for classifying the Bert output. Each of these layers is initialized by calling the init_weights() method inherited from BertPreTrainedModel. The forward() method, which implements the task-specific forward pass, takes as arguments the outputs of the tokenizer, and, optionally, the gold labels corresponding to the input data points. Our implementation of the forward pass sends the input tokens to the Bert model to produce the contextualized representations for all tokens. This output has several components, including the last_hidden_state which con- 190 Using Transformers with the Hugging Face Library tains the final hidden-state embedding for each token. For our task, we will represent the whole sequence using the embedding for the [CLS] token that occurs at the start of each example. We retrieve it by selecting the first element of each output sequence in the batch (i.e., last_hidden_state[:, 0, :]). As in the previous chapters, we apply dropout to our sequence representation, and then pass it through our linear classification layer. If gold labels are provided (i.e., we are training), we now compute the loss using the cross-entropy loss. The output of the forward pass is wrapped in a Hugging Face SequenceClassifierOutput object4 and returned: Next we load the configuration of the pre-trained model and instantiate our model. The AutoConfig class can load the configuration for any pre-trained model, retrieving it from Hugging Face if needed. Then we use the configuration to instantiate our model using the from_pretrained() method. With this call, the pre-trained model will be loaded, which includes downloading if necessary: Hugging Face provides a Trainer class that greatly simplifies the training process. This class not only implements the training loop we have been using in the previous chapters, but also handles other useful steps such as saving checkpoints (i.e., intermediate models after a number of mini-batches have been processed during training), and tracking custom measures about model performance. In order to create a Trainer, we first need to specify its configuration in a TrainingArguments object. In ours, we specify certain hyper parameters such as batch size, weight decay, and number of epochs, as well as where to store model checkpoints: The TrainingArguments class provides a wide variety of arguments that we have not shown.5 These arguments usually have appropriate default values, so it is often fine to omit them. For example, we did not use the label_names argument, which specifies the key that corresponds to the training labels. When omitted, it defaults to keys such as label, labels and label_ids.6 In this chapter we used label. Note that we also specify how often we would like to see the perfor- . 4  Hugging Face utilizes a set of output objects to standardize model output for a given task. 
These objects typically include additional information, e.g., attention weights, which can be used for visualizing or debugging model behavior. 
 . 5  https://huggingface.co/docs/transformers/main/en/main_classes/trainer# transformers. TrainingArguments 
 6 In the case of extractive question answering (see Chapter 16), the start_positions and end_positions store the start/end positions of the correct answers. 13.3 Part-of-speech Tagging 191 mance of the current model (at the end of each epoch) with evaluation_strategy='epoch'. This means that after each epoch we print the current loss on the training
partition and on the evaluation dataset, if one is available. Additionally,
we can report custom metrics at this time. For this purpose, we use the compute_metrics parameter of the Trainer, which expects a function that receives a transformers. EvalPredictions object containing the label ids and the predicted logits. The expected return type is a dictionary whose keys correspond to different metrics, each of which will be displayed as a separate result column. Using the above TrainingArguments and compute_metrics function, we create our Trainer. Note that when you provide a tokenizer, the trainer will automatically pad the sequences in each batch. Also, the trainer will automatically use any GPU that is available, unless specifically disabled in the TrainingArguments. Training our model takes a single call to the train() method of the Trainer object. As specified in the our instance of TrainingArguments, the training and validation losses, as well as the accuracy, are reported every epoch. As in the other chapters, we can write custom code to obtain the model’s predictions on the test data. However, the Trainer class provides a predict() method that drastically simplifies this: As shown in the table above, this model achieves an accuracy of 95%, which is the highest performance we have achieved so far on this dataset. 13.3 Part-of-speech Tagging To showcase part-of-speech tagging using transformers, we continue with the Spanish section of the AnCora corpus introduced in Chapter 11. Recall that the dataset is stored in the CoNLL-U format. We load this format in the same way as before, but then we convert the loaded dataset into a Hugging Face DictDataset: Importantly, because the CoNLL-U dataset is already tokenized, we use the is_split_into_words=True tokenizer argument to ensure that the tokenizer respects the existing word boundaries during its sub-word tokenization. Further, while we want to predict one POS tag per word, Epoch Training Loss Validation Loss Accuracy 1 0.187800 0.172629 0.941667 2 0.104000 0.183001 0.946250 192 Using Transformers with the Hugging Face Library any given word may be split into smaller pieces by our tokenizer. Thus, we need to align the tokenizer output to the CoNLL-U words. The original BERT paper (Devlin et al., 2018) addresses this by only using the embedding corresponding to the first sub-token for each word. We follow the same approach for consistency. For the sub-words that do not correspond to the beginning of a word, we use a special value that indicates that we are not interested in their predictions. The CrossEntropyLoss has a parameter called ignore_index for this purpose. The default value for this parameter is −100, which we use as the label for the sub-words we wish to ignore during training: Next, we use this function to preprocess the train and validation folds in our DatasetDict: words [El, presidente, de, el, órgano, regulador, de... [Afirmó, que, sigue, el, criterio, europeo, y,... [Durante, la, presentación, de, el, libro, ", ... [Y, todas, las, miradas, convergen, en, la, lu... [Cambiar, las, formas, parece, de, rigor, ,, p... [Él, llega, a, tirar, la, sobre, la, cama, y, ... tags [DET, NOUN, ADP, DET, NOUN, ADJ, ADP, DET, PRO... [VERB, SCONJ, VERB, DET, NOUN, ADJ, CCONJ, SCO... [ADP, DET, NOUN, ADP, DET, NOUN, PUNCT, DET, P... [CCONJ, DET, DET, NOUN, VERB, ADP, DET, NOUN, ... [VERB, DET, NOUN, VERB, ADP, NOUN, PUNCT, CCON... [PRON, VERB, ADP, VERB, PRON, ADP, DET, NOUN, ... input_ids [0, 540, 9692, 8, 88, 103633, 15913, 1846, 8, ... [0, 62, 38949, 849, 41, 58453, 88, 166220, 620... 
[0, 24292, 21, 43945, 8, 88, 7750, 44, 239, 78... [0, 990, 5136, 576, 100688, 7, 158, 814, 1409,... [0, 313, 61055, 42, 576, 26497, 12295, 8, 7599... [0, 124043, 47612, 10, 61846, 21, 1028, 21, 39... attention_mask labels [-100, 0, 1, 2, 0, 1, 3, -100, 2, 0, 4, -100, ... [-100, 6, -100, -100, 7, 6, 0, 1, 3, 10, 7, 6,... [-100, 2, 0, 1, 2, 0, 1, 8, 0, 4, -100, 2, 4, ... [-100, 10, 0, 0, 1, -100, 6, -100, -100, 2, 0,... [-100, 6, -100, -100, 0, 1, 6, 2, 1, 8, -100, ... [-100, 5, 6, 2, 6, 5, 2, 0, 1, 10, 5, 6, 0, 1,... 0 1 2 3 4 ... 14300 14301 14302 14303 [1,1,1,1,1, 1,1,1,1,1, [1,1,1,1,1, 1,1,1,1,1, [1,1,1,1,1, 1,1,1,1,1, [1,1,1,1,1, 1,1,1,1,1, [1,1,1,1,1, 1,1,1,1,1, [1,1,1,1,1, 1,1,1,1,1, 1, 1, 1, 1, 1, ... 1, 1, 1, 1, 1, ... 1, 1, 1, 1, 1, ... 1, 1, 1, 1, 1, ... 1, 1, 1, 1, 1, ... 1, 1, 1, 1, 1, ... [Sobre, la, oferta, de, interconexión, con, Te... [ADP, DET, NOUN, ADP, NOUN, ADP, PROPN, ADP, D... [0, 44125, 21, 19806, 8, 1940, 2271, 3355, 194... [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... [-100, 2, 0, 1, 2, 1, -100, -100, -100, 2, 4, ... [La, inversión, en, investigación, básica, es,... [DET, NOUN, ADP, NOUN, ADJ, AUX, DET, NOUN, AD... [0, 239, 98649, 22, 31674, 124528, 198, 88, 46... [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... [-100, 0, 1, 2, 1, 3, 9, 0, 1, 2, 0, 1, 10, 0,... [Conviene, que, ahora, ,, en, plena, apoteosis... [VERB, SCONJ, ADV, PUNCT, ADP, ADJ, NOUN, ADP,... [0, 1657, 7772, 13, 41, 18451, 6, 4, 22, 31161... [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... [-100, 6, -100, -100, 7, 11, 8, -100, 2, 3, 1,... [Carlos, y, Fayna, se, enzarzan, en, una, bron... [PROPN, CCONJ, PROPN, PRON, VERB, ADP, DET, NO... [0, 24856, 113, 114162, 76, 40, 22, 6383, 5935... [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... [-100, 4, 10, 4, -100, 5, 6, -100, -100, 2, 0,... 14304
14305 rows × 5 columns ... ... ... ... ... Next, we implement our model class that uses a transformer encoder as a transducer. Because our downstream task consists of POS tagging for Spanish, we need a transformer model that was pre-trained on Spanish texts. Here, we chose XLM-RoBERTa (Conneau et al., 2019) as our base model. XLM-Roberta is a RoBERTa model (Liu et al., 2019) that has 13.3 Part-of-speech Tagging 193 been pre-trained on 100 different languages, including Spanish. Of note, XLM-RoBERTa does not require us to specify what language we are working on. Similar to BERT, it only requires the input_ids. We discussed in the text classification section that Hugging Face provides implementations for text classification models. This is also true for token classification problems that require transducers. In particular, the XLMRobertaForTokenClassification model provided by Hugging Face does everything needed for this task. However, as before, here we implement it ourselves for pedagogical purposes. The model architecture is similar to our text classification example. It consists of a transformer, a dropout layer, and a linear layer used for classification. The number of labels which determines the output dimension of the linear layer is equal to the number of POS tags. The primary difference between the text classification example and this token classification model is that with the former we produced one label for each text document, while here we produce one label for each token in the input text. Specifically, in our text classification model the output shape was two-dimensional: (batch_size, num_labels). Here, our output is three-dimensional: (batch_size, sequence_size, num_labels). So, while much of the forward method is familiar to us, when we are required to compute the loss, we need to reshape the logits and the labels before passing them to the CrossEntropyLoss, since it expects two-dimensional input and one-dimensional labels. For this purpose, we use the view() method to reshape the tensors. This method is efficient because it does not copy the tensor data. Instead it provides a new view of the same data that behaves like a tensor with a different shape.7 As mentioned before, the number of arguments passed to this method determines the number of dimensions in the output tensor. Here, for our logits, we pass two arguments and so our new view will have two dimensions. The second will be the size of self.num_labels, while the first (because we pass -1) will be inferred based on the original tensor shape. For our labels, on the other hand, we only provide one argument and so the new view will have one dimension, inferred by the original shape: Next, we instantiate our model using the XLM-RoBERTa configuration: 7 Similar to NumPy, PyTorch tensors are represented internally by a block of memory storing the data and some metadata that describes how the data should be read, e.g., type, shape, and stride. The view() method returns a new tensor with new metadata but pointing to the same memory block. 194 Using Transformers with the Hugging Face Library As before, we create a TrainingArguments object and define a compute_metrics function in order to customize a Trainer: While the TrainingArguments code has no substantial changes, we need to adjust the compute_metrics function to account for the fact that our model uses sub-word tokens rather than complete words. Recall that only the first sub-word token per word was assigned a POS tag. 
This function discards the labels corresponding to the ignored sub-word tokens and evaluates the rest, returning the accuracy score: The last component required for the Trainer is a collator. Since this time we are batching sequences of tokens, we need a collator that can pad them dynamically when constructing the batches. The transformers library includes a DataCollatorForTokenClassification specifically for this purpose. Once we have our collator and our trainer object, we can train our model: Next, we evaluate our newly trained model on the test dataset. For this purpose, we preprocess the data in the same way we did for the train and validation partitions. Then, for convenience, we use the trainer’s predict() method to generate the predicted logits using our model: As before, we use scikit-learn’s classification_report() function to display the results of the evaluation. This function expects two onedimensional lists of labels, so we need to follow a similar approach to the one we employed for text classification. Note that output.label_ids and output.predictions are NumPy arrays rather than PyTorch tensors. This time we use NumPy’s reshape() method to reshape the arrays. This method is similar to PyTorch’s view() method that we used previously, except that view() may copy the array’s data in some situations. We discard the labels corresponding to ignored sub-word tokens, and then we print the classification report: Our model based on XLM-RoBERTa achieves 99% accuracy. This is considerably better than the LSTM-based model developed in Chapter 11. In order to understand the differences between the two methods, we produce below a confusion matrix for the results of each model. Rows in the confusion matrix represent the true labels and columns represent the predicted labels. In the confusion matrices shown below, each cell xij corresponds to the proportion of values with label i that were assigned the label j.8 For a perfect model, all cells in the diagonal would have value 1 and all other cells would have value 0. The code used to generate the confusion matrix is shown below. The confusion matrices 8 This is the case because we used the normalize='true' parameter of the confusion_matrix() function. 13.3 Part-of-speech Tagging 195 Figure 13.1 Confusion matrix corresponding to the LSTM-based part-ofspeech tagger developed in Chapter 11. for the LSTM and transformer are show in Figure 13.1 and Figure 13.2, respectively. The two confusion matrices highlight a couple of important observations. First, the transformer model is considerably better at predicting POS tags with infrequent support in the dataset. For example, the accuracy for predicting the SYM POS tag increased from 38% in the LSTM model to 95% in the transformer model! Equally as impressive, the transformer improved the performance of tags that are extremely common, and, thus, provide plenty of opportunity to both approaches to learn a good model. For example, the accuracy of tagging NOUN, the second 196 Using Transformers with the Hugging Face Library Figure 13.2 Confusion matrix corresponding to the transformer-based part-of-speech tagger. most common POS tag in the dataset, increased from 96% in the LSTM model to 99% in the transformer model. 13.4 Summary In this chapter we presented two applications driven by the encoder component of a transformer network. First, we used the transformer encoder as an acceptor and implemented a text classification application for English news. 
Second, we used the encoder as a transducer to develop a Spanish part-of-speech tagger. Both tasks were implemented using 13.4 Summary 197 pre-trained transformer models from the Hugging Face library. For both applications, the transformer-based methods outperform considerably all approaches introduced in the previous chapters, highlighting the value of the transformer architecture.
17,980
18,264
#!/usr/bin/env python
# coding: utf-8

# # Part-of-speech Tagging with Transformer Networks

# Some initialization:

# In[1]:
import random
import torch
import numpy as np
import pandas as pd
from tqdm.notebook import tqdm

# enable tqdm in pandas
tqdm.pandas()

# set to True to use the gpu (if there is one available)
use_gpu = True

# select device
device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu')
print(f'device: {device.type}')

# random seed
seed = 1234

# set random seed
if seed is not None:
    print(f'random seed: {seed}')
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)

# Read the words and POS tags from the Spanish dataset:

# In[2]:
from conllu import parse_incr

def read_tags(filename):
    data = {'words': [], 'tags': []}
    with open(filename) as f:
        for sent in tqdm(parse_incr(f)):
            words = [tok['form'] for tok in sent]
            tags = [tok['upos'] for tok in sent]
            data['words'].append(words)
            data['tags'].append(tags)
    return pd.DataFrame(data)

# In[3]:
train_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-train.conllup')
valid_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-dev.conllup')
test_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-test.conllup')

# In[4]:
tags = train_df['tags'].explode().unique()
index_to_tag = {i: t for i, t in enumerate(tags)}
tag_to_index = {t: i for i, t in enumerate(tags)}

# Create a HuggingFace `DatasetDict` object:

# In[5]:
from datasets import Dataset, DatasetDict

ds = DatasetDict()
ds['train'] = Dataset.from_pandas(train_df)
ds['validation'] = Dataset.from_pandas(valid_df)
ds['test'] = Dataset.from_pandas(test_df)
ds

# In[6]:
ds['train'].to_pandas()

# Now tokenize the texts and assign POS labels to the first token in each word:

# In[7]:
from transformers import AutoTokenizer

transformer_name = 'xlm-roberta-base'
tokenizer = AutoTokenizer.from_pretrained(transformer_name)

# In[8]:
x = ds['train'][0]
tokenized_input = tokenizer(x['words'], is_split_into_words=True)
tokens = tokenizer.convert_ids_to_tokens(tokenized_input['input_ids'])
word_ids = tokenized_input.word_ids()
pd.DataFrame([tokens, word_ids], index=['tokens', 'word ids'])

# In[9]:
# https://arxiv.org/pdf/1810.04805.pdf
# Section 5.3
# We use the representation of the first sub-token as the input to the
# token-level classifier over the NER label set.

# default value for CrossEntropyLoss ignore_index parameter
ignore_index = -100

def tokenize_and_align_labels(batch):
    labels = []
    # tokenize batch
    tokenized_inputs = tokenizer(
        batch['words'],
        truncation=True,
        is_split_into_words=True,
    )
    # iterate over batch elements
    for i, tags in enumerate(batch['tags']):
        label_ids = []
        previous_word_id = None
        # get word ids for current batch element
        word_ids = tokenized_inputs.word_ids(batch_index=i)
        # iterate over tokens in batch element
        for word_id in word_ids:
            if word_id is None or word_id == previous_word_id:
                # ignore if not a word or word id has already been seen
                label_ids.append(ignore_index)
            else:
                # get tag id for corresponding word
                tag_id = tag_to_index[tags[word_id]]
                label_ids.append(tag_id)
            # remember this word id
            previous_word_id = word_id
        # save label ids for current batch element
        labels.append(label_ids)
    # store labels together with the tokenizer output
    tokenized_inputs['labels'] = labels
    return tokenized_inputs

# In[10]:
train_ds = ds['train'].map(tokenize_and_align_labels, batched=True)
eval_ds = ds['validation'].map(tokenize_and_align_labels, batched=True)
train_ds.to_pandas()

# Create our transformer model:

# In[11]:
from torch import nn
from transformers.modeling_outputs import TokenClassifierOutput
from transformers.models.roberta.modeling_roberta import RobertaModel, RobertaPreTrainedModel

# https://github.com/huggingface/transformers/blob/65659a29cf5a079842e61a63d57fa24474288998/src/transformers/models/roberta/modeling_roberta.py#L1346
class XLMRobertaForTokenClassification(RobertaPreTrainedModel):
    def __init__(self, config):
        super().__init__(config)
        self.num_labels = config.num_labels
        self.roberta = RobertaModel(config, add_pooling_layer=False)
        self.dropout = nn.Dropout(config.hidden_dropout_prob)
        self.classifier = nn.Linear(config.hidden_size, config.num_labels)
        self.init_weights()

    def forward(self, input_ids=None, attention_mask=None,
                token_type_ids=None, labels=None, **kwargs):
        outputs = self.roberta(
            input_ids,
            attention_mask=attention_mask,
            token_type_ids=token_type_ids,
            **kwargs,
        )
        sequence_output = self.dropout(outputs[0])
        logits = self.classifier(sequence_output)
        loss = None
        if labels is not None:
            loss_fn = nn.CrossEntropyLoss()
            inputs = logits.view(-1, self.num_labels)
            targets = labels.view(-1)
            loss = loss_fn(inputs, targets)
        return TokenClassifierOutput(
            loss=loss,
            logits=logits,
            hidden_states=outputs.hidden_states,
            attentions=outputs.attentions,
        )

# In[12]:
from transformers import AutoConfig

config = AutoConfig.from_pretrained(
    transformer_name,
    num_labels=len(index_to_tag),
)
model = (
    XLMRobertaForTokenClassification
    .from_pretrained(transformer_name, config=config)
)

# Create the `Trainer` object and train:

# In[13]:
from transformers import TrainingArguments

num_epochs = 2
batch_size = 24
weight_decay = 0.01
model_name = f'{transformer_name}-finetuned-pos-es'

training_args = TrainingArguments(
    output_dir=model_name,
    log_level='error',
    num_train_epochs=num_epochs,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    evaluation_strategy='epoch',
    weight_decay=weight_decay,
)

# In[14]:
from sklearn.metrics import accuracy_score

def compute_metrics(eval_pred):
    # gold labels
    label_ids = eval_pred.label_ids
    # predictions
    pred_ids = np.argmax(eval_pred.predictions, axis=-1)
    # collect gold and predicted labels, ignoring ignore_index label
    y_true, y_pred = [], []
    batch_size, seq_len = pred_ids.shape
    for i in range(batch_size):
        for j in range(seq_len):
            if label_ids[i, j] != ignore_index:
                y_true.append(index_to_tag[label_ids[i][j]])
                y_pred.append(index_to_tag[pred_ids[i][j]])
    # return computed metrics
    return {'accuracy': accuracy_score(y_true, y_pred)}

# In[15]:
from transformers import Trainer
from transformers import DataCollatorForTokenClassification

data_collator = DataCollatorForTokenClassification(tokenizer)

trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=data_collator,
    compute_metrics=compute_metrics,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    tokenizer=tokenizer,
)

trainer.train()

# Evaluate on the test partition:

# In[16]:
test_ds = ds['test'].map(
    tokenize_and_align_labels,
    batched=True,
)
output = trainer.predict(test_ds)

# In[17]:
from sklearn.metrics import classification_report

num_labels = model.num_labels
label_ids = output.label_ids.reshape(-1)
predictions = output.predictions.reshape(-1, num_labels)
predictions = np.argmax(predictions, axis=-1)
mask = label_ids != ignore_index
y_true = label_ids[mask]
y_pred = predictions[mask]
target_names = tags[:-1]
report = classification_report(
    y_true, y_pred, target_names=target_names
)
print(report)

# In[18]:
import matplotlib.pyplot as plt
from sklearn.metrics import ConfusionMatrixDisplay, confusion_matrix

cm = confusion_matrix(y_true, y_pred, normalize='true')
disp = ConfusionMatrixDisplay(
    confusion_matrix=cm,
    display_labels=target_names,
)
fig, ax = plt.subplots(figsize=(10, 10))
disp.plot(
    cmap='Blues',
    values_format='.2f',
    colorbar=False,
    ax=ax,
    xticks_rotation=45,
)
13 Using Transformers with the Hugging Face Library

One of the key advantages of transformer networks is the ability to take a model that was pre-trained over vast quantities of text and fine-tune it for the task at hand. Intuitively, this strategy allows transformer networks to achieve higher performance on smaller datasets by relying on statistics acquired at scale in an unsupervised way (e.g., through the masked language model training objective). To this end, in this chapter we will use the Hugging Face library,1 which has a rich repository of datasets and pre-trained models, as well as helper methods and classes that make it easy to target downstream tasks. Using pre-trained transformer encoders, we will implement the two tasks that served as use cases in the previous chapters: text classification and part-of-speech tagging.

1 https://huggingface.co/docs/transformers/main/en/index

13.1 Tokenization

As discussed in Section 12.2, transformers rely on sub-word tokens. This strategy provides an elegant way to handle unknown and low-frequency words by splitting them into more frequent sub-word parts. At the same time, these tokenization algorithms maintain frequently-occurring words as standalone tokens, so the signal for these common words is preserved. To make this more concrete, we show below how tokenizers are employed in the Hugging Face library. First, we load the tokenizer that corresponds to the transformer we intend to use. This is important for two reasons: (a) different transformers rely on different tokenization algorithms, and (b) even for the ones that use the same algorithm, their tokenizer vocabularies are likely to be different if they were pre-trained on different corpora. Next, we tokenize some example text and display some of the resulting attributes with pandas:
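The exact notebook cell for this example is not reproduced in this excerpt; the following is a minimal sketch of the step just described, assuming the bert-base-cased tokenizer (the same pattern appears in the tokenization cell of the part-of-speech tagging notebook included earlier in this document):

from transformers import AutoTokenizer
import pandas as pd

# load the tokenizer that matches the transformer we plan to use
tokenizer = AutoTokenizer.from_pretrained('bert-base-cased')

# tokenize an example sentence
output = tokenizer('I am the walrus.')
tokens = tokenizer.convert_ids_to_tokens(output['input_ids'])

# display tokens, word ids, and input ids side by side
pd.DataFrame(
    [tokens, output.word_ids(), output['input_ids']],
    index=['tokens', 'word_ids', 'input_ids'],
)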
            0      1    2     3     4      5     6      7    8
tokens      [CLS]  I    am    the   wa     ##l   ##rus  .    [SEP]
word_ids    None   0    1     2     3      3     3      4    None
input_ids   101    146  1821  1103  20049  1233  6208   119  102

As shown above, the tokenizer splits the text into tokens and adds two special tokens: the [CLS] token at the beginning of the token sequence, and the [SEP] token at the end. Also, note that the ## characters at the beginning of some tokens indicate that they are not standalone words, but rather sub-words that continue a word previously started. For example, the output above shows that the word walrus was split into three sub-words. Note, however, that this is specific to this particular tokenization algorithm, and other tokenizers may indicate word continuation in different ways. A better way to detect word continuations is using the word_ids() method of the tokenizer output, which assigns the same id to all tokens that are part of the same word. For example, all fragments of the word walrus share the word id 3. Lastly, the input_ids attribute provides the token ids used internally by the transformer to map tokens to embeddings.

To briefly demonstrate how different tokenizers produce different outputs, here is the same text tokenized with the tokenizer corresponding to xlm-roberta-base:

            0     1    2    3     4      5     6    7
tokens      <s>   ▁I   ▁am  ▁the  ▁wal   rus   .    </s>
word_ids    None  0    1    2     3      3     3    None
input_ids   0     87   444  70    32973  6563  5    2

Note how the [CLS] and [SEP] special tokens have been replaced with <s> and </s>, respectively. Also, spaces have been replaced with the Unicode character ▁ (U+2581, LOWER ONE EIGHTH BLOCK). Tokens that start with that character are considered word beginnings and the rest are word continuations, as can be confirmed by looking at the word ids. This illustrates the importance of using the tokenizer that corresponds to the transformer you intend to use.

13.2 Text Classification

For our text classification example, we will continue using the AG News dataset from previous chapters. We will load, preprocess, and split the dataset into pandas dataframes in the same way as before. Now, however, rather than continuing with pandas, we will create a Hugging Face dataset from the dataframes. Hugging Face datasets are convenient because of their built-in support for batching, efficient data transformations, and caching. In particular, we convert each dataframe into a Hugging Face dataset. The various datasets are managed with a DatasetDict. Note that this is the same data structure seen when downloading a Hugging Face dataset from their hub.2 The keys in this dictionary are usually train, validation, and test:3

2 https://huggingface.co/datasets
3 These correspond to the more common terms train, development, and test we have used throughout the book so far. In this chapter we use the Hugging Face naming conventions for consistency.
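The corresponding cell, reproduced from the text classification notebook included later in this document (it assumes the train_df, eval_df, and test_df dataframes built earlier in that notebook):

from datasets import Dataset, DatasetDict

# wrap the train/validation/test dataframes into a DatasetDict
ds = DatasetDict()
ds['train'] = Dataset.from_pandas(train_df)
ds['validation'] = Dataset.from_pandas(eval_df)
ds['test'] = Dataset.from_pandas(test_df)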
Once our dataset is loaded, we load a tokenizer. Different pre-trained models are tokenized differently, and it is important to select the tokenizer that corresponds to the model we will use so that the inputs are consistent with model expectations. In our example, we will use the bert-base-cased pre-trained model and tokenizer.

Datasets have a map() method that transforms the dataset by applying a function to each example. The method returns a new dataset with the transformation applied. We use the map() method to tokenize our dataset. To this end, we define a function that tokenizes an example using the tokenizer we loaded previously. Note that tokenizers support many options that you may need depending on your situation. However, since this is a simple scenario, all we need to do is provide the text to tokenize and specify how to handle texts that exceed the maximum number of tokens permitted by the pre-trained model. Here we have our tokenizer truncate any inputs that are too long by specifying the truncation=True parameter. The output of this function will be added to the new dataset as extra columns. Further, we also want to remove some of the columns that are no longer needed, simplifying subsequent steps. For this, we use the remove_columns argument, listing the columns that we want to discard. Additionally, the dataset's map() method can batch the dataset; we enable this option with the batched=True argument:
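Both steps, the tokenizer loading and the tokenization of the train and validation folds, are reproduced here from the text classification notebook included later in this document:

from transformers import AutoTokenizer

transformer_name = 'bert-base-cased'
tokenizer = AutoTokenizer.from_pretrained(transformer_name)

def tokenize(examples):
    # truncate any text longer than the model's maximum input length
    return tokenizer(examples['text'], truncation=True)

train_ds = ds['train'].map(
    tokenize,
    batched=True,
    remove_columns=['title', 'description', 'text'],
)
eval_ds = ds['validation'].map(
    tokenize,
    batched=True,
    remove_columns=['title', 'description', 'text'],
)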
The result is a tokenized dataset with four columns: label, input_ids, token_type_ids, and attention_mask (108,000 rows in the training partition).

Next, we implement a classifier for our task. Hugging Face provides a variety of models corresponding to several types of downstream tasks. However, for pedagogical purposes, we implement one from scratch. In particular, our model class inherits from BertPreTrainedModel, which provides several useful methods, such as init_weights() and from_pretrained(), that we will use later. The model constructor takes a configuration object as its only parameter. Configuration objects contain all the hyper-parameters used by the corresponding pre-trained models. We will show later how the configuration object is retrieved and customized. Models that implement specific downstream tasks are usually composed of a pre-trained model (sometimes referred to as the body), and one or more task-specific layers (usually referred to as the head). Here, we initialize a BertModel using the provided configuration, as well as a dropout layer and a task-specific linear layer used for classifying the BERT output. Each of these layers is initialized by calling the init_weights() method inherited from BertPreTrainedModel.

The forward() method, which implements the task-specific forward pass, takes as arguments the outputs of the tokenizer and, optionally, the gold labels corresponding to the input data points. Our implementation of the forward pass sends the input tokens to the BERT model to produce the contextualized representations for all tokens. This output has several components, including the last_hidden_state, which contains the final hidden-state embedding for each token. For our task, we will represent the whole sequence using the embedding for the [CLS] token that occurs at the start of each example. We retrieve it by selecting the first element of each output sequence in the batch (i.e., last_hidden_state[:, 0, :]). As in the previous chapters, we apply dropout to our sequence representation, and then pass it through our linear classification layer. If gold labels are provided (i.e., we are training), we compute the loss using the cross-entropy loss. The output of the forward pass is wrapped in a Hugging Face SequenceClassifierOutput object4 and returned:
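The classifier, reproduced from the text classification notebook included later in this document:

from torch import nn
from transformers.modeling_outputs import SequenceClassifierOutput
from transformers.models.bert.modeling_bert import BertModel, BertPreTrainedModel

class BertForSequenceClassification(BertPreTrainedModel):
    def __init__(self, config):
        super().__init__(config)
        self.num_labels = config.num_labels
        # pre-trained body plus a dropout layer and a linear classification head
        self.bert = BertModel(config)
        self.dropout = nn.Dropout(config.hidden_dropout_prob)
        self.classifier = nn.Linear(config.hidden_size, config.num_labels)
        self.init_weights()

    def forward(self, input_ids=None, attention_mask=None,
                token_type_ids=None, labels=None, **kwargs):
        outputs = self.bert(
            input_ids,
            attention_mask=attention_mask,
            token_type_ids=token_type_ids,
            **kwargs,
        )
        # represent the whole sequence with the [CLS] embedding
        cls_outputs = outputs.last_hidden_state[:, 0, :]
        cls_outputs = self.dropout(cls_outputs)
        logits = self.classifier(cls_outputs)
        loss = None
        if labels is not None:
            loss_fn = nn.CrossEntropyLoss()
            loss = loss_fn(logits, labels)
        return SequenceClassifierOutput(
            loss=loss,
            logits=logits,
            hidden_states=outputs.hidden_states,
            attentions=outputs.attentions,
        )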
4 Hugging Face utilizes a set of output objects to standardize model output for a given task. These objects typically include additional information, e.g., attention weights, which can be used for visualizing or debugging model behavior.

Next we load the configuration of the pre-trained model and instantiate our model. The AutoConfig class can load the configuration for any pre-trained model, retrieving it from Hugging Face if needed. Then we use the configuration to instantiate our model using the from_pretrained() method. With this call, the pre-trained model will be loaded, which includes downloading it if necessary:
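Reproduced from the same notebook (labels holds the AG News class names loaded earlier in that notebook):

from transformers import AutoConfig

config = AutoConfig.from_pretrained(
    transformer_name,
    num_labels=len(labels),
)
model = (
    BertForSequenceClassification
    .from_pretrained(transformer_name, config=config)
)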
Hugging Face provides a Trainer class that greatly simplifies the training process. This class not only implements the training loop we have been using in the previous chapters, but also handles other useful steps, such as saving checkpoints (i.e., intermediate models saved after a number of mini-batches have been processed during training) and tracking custom measures of model performance. In order to create a Trainer, we first need to specify its configuration in a TrainingArguments object. In ours, we specify certain hyper-parameters such as batch size, weight decay, and number of epochs, as well as where to store model checkpoints:
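The corresponding configuration, reproduced from the same notebook:

from transformers import TrainingArguments

num_epochs = 2
batch_size = 24
weight_decay = 0.01
model_name = f'{transformer_name}-sequence-classification'

training_args = TrainingArguments(
    output_dir=model_name,
    log_level='error',
    num_train_epochs=num_epochs,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    evaluation_strategy='epoch',
    weight_decay=weight_decay,
)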
The TrainingArguments class provides a wide variety of arguments that we have not shown.5 These arguments usually have appropriate default values, so it is often fine to omit them. For example, we did not use the label_names argument, which specifies the key that corresponds to the training labels. When omitted, it defaults to keys such as label, labels, and label_ids.6 In this chapter we used label.

5 https://huggingface.co/docs/transformers/main/en/main_classes/trainer#transformers.TrainingArguments
6 In the case of extractive question answering (see Chapter 16), the start_positions and end_positions keys store the start/end positions of the correct answers.

Note that we also specify how often we would like to see the performance of the current model (at the end of each epoch) with evaluation_strategy='epoch'. This means that after each epoch we print the current loss on the training partition and on the evaluation dataset, if one is available. Additionally, we can report custom metrics at this time. For this purpose, we use the compute_metrics parameter of the Trainer, which expects a function that receives a transformers.EvalPrediction object containing the label ids and the predicted logits. The expected return type is a dictionary whose keys correspond to different metrics, each of which will be displayed as a separate result column.

Using the above TrainingArguments and compute_metrics function, we create our Trainer. Note that when you provide a tokenizer, the trainer will automatically pad the sequences in each batch. Also, the trainer will automatically use any GPU that is available, unless this is specifically disabled in the TrainingArguments.

Training our model takes a single call to the train() method of the Trainer object. As specified in our instance of TrainingArguments, the training and validation losses, as well as the accuracy, are reported every epoch:

Epoch  Training Loss  Validation Loss  Accuracy
1      0.187800       0.172629         0.941667
2      0.104000       0.183001         0.946250

As in the other chapters, we can write custom code to obtain the model's predictions on the test data. However, the Trainer class provides a predict() method that drastically simplifies this. The resulting predictions show that this model achieves an accuracy of 95%, which is the highest performance we have achieved so far on this dataset.

13.3 Part-of-speech Tagging

To showcase part-of-speech tagging using transformers, we continue with the Spanish section of the AnCora corpus introduced in Chapter 11. Recall that the dataset is stored in the CoNLL-U format. We load this format in the same way as before, but then we convert the loaded dataset into a Hugging Face DatasetDict.

Importantly, because the CoNLL-U dataset is already tokenized, we use the is_split_into_words=True tokenizer argument to ensure that the tokenizer respects the existing word boundaries during its sub-word tokenization. Further, while we want to predict one POS tag per word, any given word may be split into smaller pieces by our tokenizer. Thus, we need to align the tokenizer output to the CoNLL-U words. The original BERT paper (Devlin et al., 2018) addresses this by only using the embedding corresponding to the first sub-token for each word. We follow the same approach for consistency. For the sub-words that do not correspond to the beginning of a word, we use a special value that indicates that we are not interested in their predictions. The CrossEntropyLoss has a parameter called ignore_index for this purpose. The default value for this parameter is −100, which we use as the label for the sub-words we wish to ignore during training:
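A lightly condensed version of the alignment function from the part-of-speech tagging notebook included earlier in this document (the notebook contains the full cell):

# label value ignored by CrossEntropyLoss
ignore_index = -100

def tokenize_and_align_labels(batch):
    # tokenize the pre-split words, truncating long sentences
    tokenized_inputs = tokenizer(
        batch['words'], truncation=True, is_split_into_words=True,
    )
    labels = []
    for i, tags in enumerate(batch['tags']):
        label_ids = []
        previous_word_id = None
        for word_id in tokenized_inputs.word_ids(batch_index=i):
            if word_id is None or word_id == previous_word_id:
                # special token or word continuation: ignore during training
                label_ids.append(ignore_index)
            else:
                # first sub-word of a word: use the word's POS tag
                label_ids.append(tag_to_index[tags[word_id]])
            previous_word_id = word_id
        labels.append(label_ids)
    tokenized_inputs['labels'] = labels
    return tokenized_inputs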
Next, we use this function to preprocess the train and validation folds in our DatasetDict:
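The preprocessing itself is a single map() call per fold, as in the notebook included earlier:

train_ds = ds['train'].map(tokenize_and_align_labels, batched=True)
eval_ds = ds['validation'].map(tokenize_and_align_labels, batched=True)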
The resulting training fold has five columns (words, tags, input_ids, attention_mask, and labels) and 14,305 rows; label positions corresponding to ignored sub-words carry the value -100.

Next, we implement our model class that uses a transformer encoder as a transducer. Because our downstream task consists of POS tagging for Spanish, we need a transformer model that was pre-trained on Spanish texts. Here, we chose XLM-RoBERTa (Conneau et al., 2019) as our base model. XLM-RoBERTa is a RoBERTa model (Liu et al., 2019) that has been pre-trained on 100 different languages, including Spanish. Of note, XLM-RoBERTa does not require us to specify which language we are working on. Similar to BERT, it only requires the input_ids.

We discussed in the text classification section that Hugging Face provides implementations for text classification models. This is also true for token classification problems that require transducers. In particular, the XLMRobertaForTokenClassification model provided by Hugging Face does everything needed for this task. However, as before, here we implement it ourselves for pedagogical purposes.

The model architecture is similar to our text classification example. It consists of a transformer, a dropout layer, and a linear layer used for classification. The number of labels, which determines the output dimension of the linear layer, is equal to the number of POS tags. The primary difference between the text classification example and this token classification model is that with the former we produced one label for each text document, while here we produce one label for each token in the input text. Specifically, in our text classification model the output shape was two-dimensional: (batch_size, num_labels). Here, our output is three-dimensional: (batch_size, sequence_size, num_labels). So, while much of the forward method is familiar to us, when we are required to compute the loss, we need to reshape the logits and the labels before passing them to the CrossEntropyLoss, since it expects two-dimensional inputs and one-dimensional labels. For this purpose, we use the view() method to reshape the tensors. This method is efficient because it does not copy the tensor data. Instead, it provides a new view of the same data that behaves like a tensor with a different shape.7 As mentioned before, the number of arguments passed to this method determines the number of dimensions in the output tensor. Here, for our logits, we pass two arguments and so our new view will have two dimensions. The second will be the size of self.num_labels, while the first (because we pass -1) will be inferred based on the original tensor shape. For our labels, on the other hand, we only provide one argument and so the new view will have one dimension, inferred from the original shape:
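A self-contained toy illustration of this reshaping (the tensor sizes below are made up for the example; the real values come from the batch and the tag set, and the full forward() method appears in the notebook included earlier in this document):

import torch
from torch import nn

batch_size, seq_len, num_labels = 2, 5, 17
logits = torch.randn(batch_size, seq_len, num_labels)
labels = torch.randint(num_labels, (batch_size, seq_len))

loss_fn = nn.CrossEntropyLoss()        # targets equal to -100 are ignored by default
inputs = logits.view(-1, num_labels)   # shape: (batch_size * seq_len, num_labels)
targets = labels.view(-1)              # shape: (batch_size * seq_len,)
loss = loss_fn(inputs, targets)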
7 Similar to NumPy, PyTorch tensors are represented internally by a block of memory storing the data and some metadata that describes how the data should be read, e.g., type, shape, and stride. The view() method returns a new tensor with new metadata, but pointing to the same memory block.

Next, we instantiate our model using the XLM-RoBERTa configuration, following the same AutoConfig and from_pretrained() pattern we used for text classification.

As before, we create a TrainingArguments object and define a compute_metrics function in order to customize a Trainer. While the TrainingArguments code has no substantial changes, we need to adjust the compute_metrics function to account for the fact that our model uses sub-word tokens rather than complete words. Recall that only the first sub-word token per word was assigned a POS tag. This function discards the labels corresponding to the ignored sub-word tokens and evaluates the rest, returning the accuracy score.

The last component required for the Trainer is a collator. Since this time we are batching sequences of tokens, we need a collator that can pad them dynamically when constructing the batches. The transformers library includes a DataCollatorForTokenClassification specifically for this purpose. Once we have our collator and our trainer object, we can train our model.

Next, we evaluate our newly trained model on the test dataset. For this purpose, we preprocess the data in the same way we did for the train and validation partitions. Then, for convenience, we use the trainer's predict() method to generate the predicted logits using our model.

As before, we use scikit-learn's classification_report() function to display the results of the evaluation. This function expects two one-dimensional lists of labels, so we need to follow a similar approach to the one we employed for text classification. Note that output.label_ids and output.predictions are NumPy arrays rather than PyTorch tensors. This time we use NumPy's reshape() method to reshape the arrays. This method is similar to PyTorch's view() method that we used previously, except that reshape() may copy the array's data in some situations. We discard the labels corresponding to ignored sub-word tokens, and then we print the classification report.

Our model based on XLM-RoBERTa achieves 99% accuracy. This is considerably better than the LSTM-based model developed in Chapter 11. In order to understand the differences between the two methods, we produce below a confusion matrix for the results of each model. Rows in the confusion matrix represent the true labels and columns represent the predicted labels. In the confusion matrices shown below, each cell xij corresponds to the proportion of values with label i that were assigned the label j.8 For a perfect model, all cells on the diagonal would have value 1 and all other cells would have value 0. The confusion matrices for the LSTM and the transformer are shown in Figure 13.1 and Figure 13.2, respectively.

8 This is the case because we used the normalize='true' parameter of the confusion_matrix() function.

The code used to generate the confusion matrix is shown below:
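Reproduced from the part-of-speech tagging notebook included earlier in this document (y_true, y_pred, and target_names are computed in the evaluation cells of that notebook):

import matplotlib.pyplot as plt
from sklearn.metrics import ConfusionMatrixDisplay, confusion_matrix

# normalize each row so cells show the proportion of gold labels per prediction
cm = confusion_matrix(y_true, y_pred, normalize='true')
disp = ConfusionMatrixDisplay(
    confusion_matrix=cm,
    display_labels=target_names,
)
fig, ax = plt.subplots(figsize=(10, 10))
disp.plot(
    cmap='Blues',
    values_format='.2f',
    colorbar=False,
    ax=ax,
    xticks_rotation=45,
)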
Figure 13.1 Confusion matrix corresponding to the LSTM-based part-of-speech tagger developed in Chapter 11.

Figure 13.2 Confusion matrix corresponding to the transformer-based part-of-speech tagger.

The two confusion matrices highlight a couple of important observations. First, the transformer model is considerably better at predicting POS tags with infrequent support in the dataset. For example, the accuracy for predicting the SYM POS tag increased from 38% in the LSTM model to 95% in the transformer model! Equally as impressive, the transformer also improved the performance on tags that are extremely common and, thus, provide plenty of opportunity for both approaches to learn a good model. For example, the accuracy of tagging NOUN, the second most common POS tag in the dataset, increased from 96% in the LSTM model to 99% in the transformer model.

13.4 Summary

In this chapter we presented two applications driven by the encoder component of a transformer network. First, we used the transformer encoder as an acceptor and implemented a text classification application for English news. Second, we used the encoder as a transducer to develop a Spanish part-of-speech tagger. Both tasks were implemented using pre-trained transformer models from the Hugging Face library. For both applications, the transformer-based methods considerably outperform all approaches introduced in the previous chapters, highlighting the value of the transformer architecture.
#!/usr/bin/env python
# coding: utf-8

# # Text Classification Using Transformer Networks (BERT)

# Some initialization:

# In[1]:
import random
import torch
import numpy as np
import pandas as pd
from tqdm.notebook import tqdm

# enable tqdm in pandas
tqdm.pandas()

# set to True to use the gpu (if there is one available)
use_gpu = True

# select device
device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu')
print(f'device: {device.type}')

# random seed
seed = 1234

# set random seed
if seed is not None:
    print(f'random seed: {seed}')
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)

# Read the train/dev/test datasets and create a HuggingFace `Dataset` object:

# In[2]:
def read_data(filename):
    # read csv file
    df = pd.read_csv(filename, header=None)
    # add column names
    df.columns = ['label', 'title', 'description']
    # make labels zero-based
    df['label'] -= 1
    # concatenate title and description, and remove backslashes
    df['text'] = df['title'] + " " + df['description']
    df['text'] = df['text'].str.replace('\\', ' ', regex=False)
    return df

# In[3]:
labels = open('data/ag_news_csv/classes.txt').read().splitlines()
train_df = read_data('data/ag_news_csv/train.csv')
test_df = read_data('data/ag_news_csv/test.csv')
train_df

# In[4]:
from sklearn.model_selection import train_test_split

train_df, eval_df = train_test_split(train_df, train_size=0.9)
train_df.reset_index(inplace=True, drop=True)
eval_df.reset_index(inplace=True, drop=True)
print(f'train rows: {len(train_df.index):,}')
print(f'eval rows: {len(eval_df.index):,}')
print(f'test rows: {len(test_df.index):,}')

# In[5]:
from datasets import Dataset, DatasetDict

ds = DatasetDict()
ds['train'] = Dataset.from_pandas(train_df)
ds['validation'] = Dataset.from_pandas(eval_df)
ds['test'] = Dataset.from_pandas(test_df)
ds

# Tokenize the texts:

# In[6]:
from transformers import AutoTokenizer

transformer_name = 'bert-base-cased'
tokenizer = AutoTokenizer.from_pretrained(transformer_name)

# In[7]:
def tokenize(examples):
    return tokenizer(examples['text'], truncation=True)

train_ds = ds['train'].map(
    tokenize,
    batched=True,
    remove_columns=['title', 'description', 'text'],
)
eval_ds = ds['validation'].map(
    tokenize,
    batched=True,
    remove_columns=['title', 'description', 'text'],
)
train_ds.to_pandas()

# Create the transformer model:

# In[8]:
from torch import nn
from transformers.modeling_outputs import SequenceClassifierOutput
from transformers.models.bert.modeling_bert import BertModel, BertPreTrainedModel

# https://github.com/huggingface/transformers/blob/65659a29cf5a079842e61a63d57fa24474288998/src/transformers/models/bert/modeling_bert.py#L1486
class BertForSequenceClassification(BertPreTrainedModel):
    def __init__(self, config):
        super().__init__(config)
        self.num_labels = config.num_labels
        self.bert = BertModel(config)
        self.dropout = nn.Dropout(config.hidden_dropout_prob)
        self.classifier = nn.Linear(config.hidden_size, config.num_labels)
        self.init_weights()

    def forward(self, input_ids=None, attention_mask=None,
                token_type_ids=None, labels=None, **kwargs):
        outputs = self.bert(
            input_ids,
            attention_mask=attention_mask,
            token_type_ids=token_type_ids,
            **kwargs,
        )
        cls_outputs = outputs.last_hidden_state[:, 0, :]
        cls_outputs = self.dropout(cls_outputs)
        logits = self.classifier(cls_outputs)
        loss = None
        if labels is not None:
            loss_fn = nn.CrossEntropyLoss()
            loss = loss_fn(logits, labels)
        return SequenceClassifierOutput(
            loss=loss,
            logits=logits,
            hidden_states=outputs.hidden_states,
            attentions=outputs.attentions,
        )

# In[9]:
from transformers import AutoConfig

config = AutoConfig.from_pretrained(
    transformer_name,
    num_labels=len(labels),
)
model = (
    BertForSequenceClassification
    .from_pretrained(transformer_name, config=config)
)

# Create the trainer object and train:

# In[10]:
from transformers import TrainingArguments

num_epochs = 2
batch_size = 24
weight_decay = 0.01
model_name = f'{transformer_name}-sequence-classification'

training_args = TrainingArguments(
    output_dir=model_name,
    log_level='error',
    num_train_epochs=num_epochs,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    evaluation_strategy='epoch',
    weight_decay=weight_decay,
)

# In[11]:
from sklearn.metrics import accuracy_score

def compute_metrics(eval_pred):
    y_true = eval_pred.label_ids
    y_pred = np.argmax(eval_pred.predictions, axis=-1)
    return {'accuracy': accuracy_score(y_true, y_pred)}

# In[12]:
from transformers import Trainer

trainer = Trainer(
    model=model,
    args=training_args,
    compute_metrics=compute_metrics,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    tokenizer=tokenizer,
)

# In[13]:
trainer.train()

# Evaluate on the test partition:

# In[14]:
test_ds = ds['test'].map(
    tokenize,
    batched=True,
    remove_columns=['title', 'description', 'text'],
)
test_ds.to_pandas()

# In[15]:
output = trainer.predict(test_ds)
output

# In[16]:
from sklearn.metrics import classification_report

y_true = output.label_ids
y_pred = np.argmax(output.predictions, axis=-1)
target_names = labels
print(classification_report(y_true, y_pred, target_names=target_names))
1,751
1,904
23
chap13-24
chap13-24
13 Using Transformers with the Hugging Face Library One of the key advantages of transformer networks is the ability to take a model that was pre-trained over vast quantities of text and fine-tune it for the task at hand. Intuitively, this strategy allows transformer networks to achieve higher performance on smaller datasets by relying on statistics acquired at scale in an unsupervised way (e.g., through the masked language model training objective). To this end, in this chapter we will use the Hugging Face library,1 which has a rich repository of datasets and pre-trained models, as well as helper methods and classes that make it easy to target downstream tasks. Using pre-trained transformer encoders, we will implement the two tasks that served as use cases in the previous chapters: text classification and part-of-speech tagging. 13.1 Tokenization As discussed in Section 12.2, transformers rely on sub-word tokens. This strategy provides an elegant way to handle unknown and low-frequency words by splitting them into more frequent sub-word parts. At the same time, these tokenization algorithms maintain frequently-occurring words as standalone tokens, so the signal for these common words is preserved. To make this more concrete, we show below how tokenizers are employed in the Hugging Face library. First, we load the tokenizer that corresponds to the transformer we intend to use. This is important for two reasons: (a) different transformers rely on different tokenization algorithms, and (b) even for the ones that use the same algorithm, their tokenizer vocabularies are likely to be different if they were pre-trained 1 https://huggingface.co/docs/transformers/main/en/index 186 13.1 Tokenization 187 on different corpora. Next, we tokenize some example text and display some of the resulting attributes with pandas: As shown above, the tokenizer splits the text into tokens, and adds two special tokens: the [CLS] token at the beginning of the token sequence, and the [SEP] token at the end. Also, note that the ## characters at the beginning of some tokens indicate that they are not standalone words, but rather sub-words that continue a word previously started. For example, the output above shows that the word walrus was split into three sub-words. Note, however, that this is specific to this particular tokenization algorithm, and other tokenizers may indicate word continuation in different ways. A better way to detect word continuations is using the word_ids() method of the tokenizer output, which assigns the same id to all tokens part of the same word. For example, all fragments of the word walrus share the word id 3. Lastly, the input_ids attribute provides the token ids used internally by the transformer to map tokens to embeddings. To briefly demonstrate how different tokenizers produce different outputs, here is the same text tokenized with the tokenizer corresponding to xlm-roberta-base: Note how the [CLS] and [SEP] special tokens have been replaced with <s> and </s> respectively. Also, spaces have been replaced with the Unicode character (U+2581, LOWER ONE EIGHTH BLOCK). Tokens that start with that character are considered word beginnings and the rest are word continuations, as can be confirmed by looking at the word ids. This illustrates the importance of using the tokenizer that corresponds to the transformer you intend to use. 012345678 tokens [CLS] I am the wa ##l ##rus . 
[SEP] word_ids None 0 1 2 3 3 3 4 None input_ids 101 146 1821 1103 20049 1233 6208 119 102 01234567 tokens <s> ▁I ▁am ▁the ▁wal rus . </s > None 0 1 2 3 3 3 None 0 87 444 70 32973 6563 5 2 word_ids input_ids 188 Using Transformers with the Hugging Face Library 13.2 Text Classification For our text classification example, we will continue using the AG News dataset from previous chapters. We will load, preprocess, and split the dataset into pandas dataframes in the same way as before. Now however, rather than continuing with pandas, we will create a Hugging Face dataset from the dataframes. Hugging Face datasets are convenient because of their built-in support of batching, efficient data transformations, and caching. In particular, we convert each dataframe into a Hugging Face dataset. The various datasets are managed with a DatasetDict. Note that this is the same data structure seen when downloading a Hugging Face dataset from their hub.2 The keys in this dictionary are usually train, validation, and test:3 Once our dataset is loaded, we load a tokenizer. Different pre-trained models are tokenized differently, and it is important to select the tokenizer that corresponds to the model we will use so that the inputs are consistent with model expectations. In our example, we will use the bert-base-cased pre-trained model and tokenizer: Datasets have a map() method that transforms the dataset by applying a function to each example. The method returns a new dataset with the transformation applied. We use the map() method to tokenize our dataset. To this end, we define a function that tokenizes an example using the tokenizer we loaded previously. Note that tokenizers support many options that you may need depending on your situation. However, since this is a simple scenario, all we need to do is provide the text to tokenize and specify how to handle texts that exceed the maximum number of tokens permitted by the pre-trained model. Here we have our tokenizer truncate any inputs that are too long by specifying the truncation=True parameter. The output of this function will be added to the new dataset as extra columns. Further, we also want to remove some of the columns that are no longer needed, simplifying subsequent steps. For this, we use the remove_columns argument, listing the columns that we want to discard. Additionally, the dataset’s map() method can batch the dataset; we enable this option with the batched=True argument: 2 https://huggingface.co/datasets
3 These correspond to the more common terms train, development, and test we have used throughout the book so far. In this chapter we use the Hugging Face naming conventions for consistency. 13.2 Text Classification 189 label 03 10 20 32 40 ... ... . 107995  0 
 . 107996  0 
 . 107997  0 
 . 107998  0 
 . 107999  3 
 input_ids [101, 3270, 11906, 1522, 1146, 7106, 1111, 251... [101, 158, 119, 156, 119, 12068, 5084, 1116, 9... [101, 7270, 118, 2733, 1383, 1111, 12448, 7430... [101, 6096, 117, 10378, 3969, 5977, 1111, 8988... [101, 19569, 5480, 10582, 2087, 1867, 158, 119... [101, 1130, 139, 24683, 131, 21107, 2050, 1739... token_type_ids attention_mask [101, 22087, 8223, 1611, 1106, 4417, [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5572, 324... 0, 0, 0, ... [0,0,0,0,0,0,0,0, [0,0,0,0,0,0,0,0, [0,0,0,0,0,0,0,0, [0,0,0,0,0,0,0,0, [0,0,0,0,0,0,0,0, [0,0,0,0,0,0,0,0, 0,0,0,0, 0,0,0,... 0,0,0,0, 0,0,0,... 0,0,0,0, 0,0,0,... 0,0,0,0, 0,0,0,... 0,0,0,0, 0,0,0,... 0,0,0,0, 0,0,0,... [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... ... ... [101, 16409, 118, 16587, 159, 4064, [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1106, 1564... 0, 0, 0, ... 108000 rows × 4 columns [101, 4222, 11404, 1174, 117, 1476, [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1130, 2696... 0, 0, 0, ... [101, 11560, 3881, 108, 3614, 132, [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3498, 2944,... 0, 0, 0, ... [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... ... Next, we implement a classifier for our task. Hugging Face provides a
variety of models corresponding to several types of downstream tasks. However, for pedagogical purposes, we implement one from scratch. In particular, our model class inherits from BertPreTrainedModel, which
provides several useful methods such as init_weights() and from_pretrained() methods, which we will use later. The model constructor takes a config- uration object as its only parameter. Configuration objects contain all the hyper-parameters used by the corresponding pre-trained models. We will show later how the configuration model is retrieved and customized. Models that implement specific downstream tasks are usually composed of a pre-trained model (sometimes referred as the body), and one or more task-specific layers (usually referred as the head). Here, we initialize a BertModel using the provided configuration, as well as a dropout layer and a task-specific linear layer used for classifying the Bert output. Each of these layers is initialized by calling the init_weights() method inherited from BertPreTrainedModel. The forward() method, which implements the task-specific forward pass, takes as arguments the outputs of the tokenizer, and, optionally, the gold labels corresponding to the input data points. Our implementation of the forward pass sends the input tokens to the Bert model to produce the contextualized representations for all tokens. This output has several components, including the last_hidden_state which con- 190 Using Transformers with the Hugging Face Library tains the final hidden-state embedding for each token. For our task, we will represent the whole sequence using the embedding for the [CLS] token that occurs at the start of each example. We retrieve it by selecting the first element of each output sequence in the batch (i.e., last_hidden_state[:, 0, :]). As in the previous chapters, we apply dropout to our sequence representation, and then pass it through our linear classification layer. If gold labels are provided (i.e., we are training), we now compute the loss using the cross-entropy loss. The output of the forward pass is wrapped in a Hugging Face SequenceClassifierOutput object4 and returned: Next we load the configuration of the pre-trained model and instantiate our model. The AutoConfig class can load the configuration for any pre-trained model, retrieving it from Hugging Face if needed. Then we use the configuration to instantiate our model using the from_pretrained() method. With this call, the pre-trained model will be loaded, which includes downloading if necessary: Hugging Face provides a Trainer class that greatly simplifies the training process. This class not only implements the training loop we have been using in the previous chapters, but also handles other useful steps such as saving checkpoints (i.e., intermediate models after a number of mini-batches have been processed during training), and tracking custom measures about model performance. In order to create a Trainer, we first need to specify its configuration in a TrainingArguments object. In ours, we specify certain hyper parameters such as batch size, weight decay, and number of epochs, as well as where to store model checkpoints: The TrainingArguments class provides a wide variety of arguments that we have not shown.5 These arguments usually have appropriate default values, so it is often fine to omit them. For example, we did not use the label_names argument, which specifies the key that corresponds to the training labels. When omitted, it defaults to keys such as label, labels and label_ids.6 In this chapter we used label. Note that we also specify how often we would like to see the perfor- . 4  Hugging Face utilizes a set of output objects to standardize model output for a given task. 
These objects typically include additional information, e.g., attention weights, which can be used for visualizing or debugging model behavior. 
 . 5  https://huggingface.co/docs/transformers/main/en/main_classes/trainer# transformers. TrainingArguments 
 6 In the case of extractive question answering (see Chapter 16), the start_positions and end_positions store the start/end positions of the correct answers. 13.3 Part-of-speech Tagging 191 mance of the current model (at the end of each epoch) with evaluation_strategy='epoch'. This means that after each epoch we print the current loss on the training
partition and on the evaluation dataset, if one is available. Additionally,
we can report custom metrics at this time. For this purpose, we use the compute_metrics parameter of the Trainer, which expects a function that receives a transformers. EvalPredictions object containing the label ids and the predicted logits. The expected return type is a dictionary whose keys correspond to different metrics, each of which will be displayed as a separate result column. Using the above TrainingArguments and compute_metrics function, we create our Trainer. Note that when you provide a tokenizer, the trainer will automatically pad the sequences in each batch. Also, the trainer will automatically use any GPU that is available, unless specifically disabled in the TrainingArguments. Training our model takes a single call to the train() method of the Trainer object. As specified in the our instance of TrainingArguments, the training and validation losses, as well as the accuracy, are reported every epoch. As in the other chapters, we can write custom code to obtain the model’s predictions on the test data. However, the Trainer class provides a predict() method that drastically simplifies this: As shown in the table above, this model achieves an accuracy of 95%, which is the highest performance we have achieved so far on this dataset. 13.3 Part-of-speech Tagging To showcase part-of-speech tagging using transformers, we continue with the Spanish section of the AnCora corpus introduced in Chapter 11. Recall that the dataset is stored in the CoNLL-U format. We load this format in the same way as before, but then we convert the loaded dataset into a Hugging Face DictDataset: Importantly, because the CoNLL-U dataset is already tokenized, we use the is_split_into_words=True tokenizer argument to ensure that the tokenizer respects the existing word boundaries during its sub-word tokenization. Further, while we want to predict one POS tag per word, Epoch Training Loss Validation Loss Accuracy 1 0.187800 0.172629 0.941667 2 0.104000 0.183001 0.946250 192 Using Transformers with the Hugging Face Library any given word may be split into smaller pieces by our tokenizer. Thus, we need to align the tokenizer output to the CoNLL-U words. The original BERT paper (Devlin et al., 2018) addresses this by only using the embedding corresponding to the first sub-token for each word. We follow the same approach for consistency. For the sub-words that do not correspond to the beginning of a word, we use a special value that indicates that we are not interested in their predictions. The CrossEntropyLoss has a parameter called ignore_index for this purpose. The default value for this parameter is −100, which we use as the label for the sub-words we wish to ignore during training: Next, we use this function to preprocess the train and validation folds in our DatasetDict: words [El, presidente, de, el, órgano, regulador, de... [Afirmó, que, sigue, el, criterio, europeo, y,... [Durante, la, presentación, de, el, libro, ", ... [Y, todas, las, miradas, convergen, en, la, lu... [Cambiar, las, formas, parece, de, rigor, ,, p... [Él, llega, a, tirar, la, sobre, la, cama, y, ... tags [DET, NOUN, ADP, DET, NOUN, ADJ, ADP, DET, PRO... [VERB, SCONJ, VERB, DET, NOUN, ADJ, CCONJ, SCO... [ADP, DET, NOUN, ADP, DET, NOUN, PUNCT, DET, P... [CCONJ, DET, DET, NOUN, VERB, ADP, DET, NOUN, ... [VERB, DET, NOUN, VERB, ADP, NOUN, PUNCT, CCON... [PRON, VERB, ADP, VERB, PRON, ADP, DET, NOUN, ... input_ids [0, 540, 9692, 8, 88, 103633, 15913, 1846, 8, ... [0, 62, 38949, 849, 41, 58453, 88, 166220, 620... 
[0, 24292, 21, 43945, 8, 88, 7750, 44, 239, 78... [0, 990, 5136, 576, 100688, 7, 158, 814, 1409,... [0, 313, 61055, 42, 576, 26497, 12295, 8, 7599... [0, 124043, 47612, 10, 61846, 21, 1028, 21, 39... attention_mask labels [-100, 0, 1, 2, 0, 1, 3, -100, 2, 0, 4, -100, ... [-100, 6, -100, -100, 7, 6, 0, 1, 3, 10, 7, 6,... [-100, 2, 0, 1, 2, 0, 1, 8, 0, 4, -100, 2, 4, ... [-100, 10, 0, 0, 1, -100, 6, -100, -100, 2, 0,... [-100, 6, -100, -100, 0, 1, 6, 2, 1, 8, -100, ... [-100, 5, 6, 2, 6, 5, 2, 0, 1, 10, 5, 6, 0, 1,... 0 1 2 3 4 ... 14300 14301 14302 14303 [1,1,1,1,1, 1,1,1,1,1, [1,1,1,1,1, 1,1,1,1,1, [1,1,1,1,1, 1,1,1,1,1, [1,1,1,1,1, 1,1,1,1,1, [1,1,1,1,1, 1,1,1,1,1, [1,1,1,1,1, 1,1,1,1,1, 1, 1, 1, 1, 1, ... 1, 1, 1, 1, 1, ... 1, 1, 1, 1, 1, ... 1, 1, 1, 1, 1, ... 1, 1, 1, 1, 1, ... 1, 1, 1, 1, 1, ... [Sobre, la, oferta, de, interconexión, con, Te... [ADP, DET, NOUN, ADP, NOUN, ADP, PROPN, ADP, D... [0, 44125, 21, 19806, 8, 1940, 2271, 3355, 194... [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... [-100, 2, 0, 1, 2, 1, -100, -100, -100, 2, 4, ... [La, inversión, en, investigación, básica, es,... [DET, NOUN, ADP, NOUN, ADJ, AUX, DET, NOUN, AD... [0, 239, 98649, 22, 31674, 124528, 198, 88, 46... [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... [-100, 0, 1, 2, 1, 3, 9, 0, 1, 2, 0, 1, 10, 0,... [Conviene, que, ahora, ,, en, plena, apoteosis... [VERB, SCONJ, ADV, PUNCT, ADP, ADJ, NOUN, ADP,... [0, 1657, 7772, 13, 41, 18451, 6, 4, 22, 31161... [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... [-100, 6, -100, -100, 7, 11, 8, -100, 2, 3, 1,... [Carlos, y, Fayna, se, enzarzan, en, una, bron... [PROPN, CCONJ, PROPN, PRON, VERB, ADP, DET, NO... [0, 24856, 113, 114162, 76, 40, 22, 6383, 5935... [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... [-100, 4, 10, 4, -100, 5, 6, -100, -100, 2, 0,... 14304
14305 rows × 5 columns ... ... ... ... ... Next, we implement our model class that uses a transformer encoder as a transducer. Because our downstream task consists of POS tagging for Spanish, we need a transformer model that was pre-trained on Spanish texts. Here, we chose XLM-RoBERTa (Conneau et al., 2019) as our base model. XLM-Roberta is a RoBERTa model (Liu et al., 2019) that has 13.3 Part-of-speech Tagging 193 been pre-trained on 100 different languages, including Spanish. Of note, XLM-RoBERTa does not require us to specify what language we are working on. Similar to BERT, it only requires the input_ids. We discussed in the text classification section that Hugging Face provides implementations for text classification models. This is also true for token classification problems that require transducers. In particular, the XLMRobertaForTokenClassification model provided by Hugging Face does everything needed for this task. However, as before, here we implement it ourselves for pedagogical purposes. The model architecture is similar to our text classification example. It consists of a transformer, a dropout layer, and a linear layer used for classification. The number of labels which determines the output dimension of the linear layer is equal to the number of POS tags. The primary difference between the text classification example and this token classification model is that with the former we produced one label for each text document, while here we produce one label for each token in the input text. Specifically, in our text classification model the output shape was two-dimensional: (batch_size, num_labels). Here, our output is three-dimensional: (batch_size, sequence_size, num_labels). So, while much of the forward method is familiar to us, when we are required to compute the loss, we need to reshape the logits and the labels before passing them to the CrossEntropyLoss, since it expects two-dimensional input and one-dimensional labels. For this purpose, we use the view() method to reshape the tensors. This method is efficient because it does not copy the tensor data. Instead it provides a new view of the same data that behaves like a tensor with a different shape.7 As mentioned before, the number of arguments passed to this method determines the number of dimensions in the output tensor. Here, for our logits, we pass two arguments and so our new view will have two dimensions. The second will be the size of self.num_labels, while the first (because we pass -1) will be inferred based on the original tensor shape. For our labels, on the other hand, we only provide one argument and so the new view will have one dimension, inferred by the original shape: Next, we instantiate our model using the XLM-RoBERTa configuration: 7 Similar to NumPy, PyTorch tensors are represented internally by a block of memory storing the data and some metadata that describes how the data should be read, e.g., type, shape, and stride. The view() method returns a new tensor with new metadata but pointing to the same memory block. 194 Using Transformers with the Hugging Face Library As before, we create a TrainingArguments object and define a compute_metrics function in order to customize a Trainer: While the TrainingArguments code has no substantial changes, we need to adjust the compute_metrics function to account for the fact that our model uses sub-word tokens rather than complete words. Recall that only the first sub-word token per word was assigned a POS tag. 
This function discards the labels corresponding to the ignored sub-word tokens and evaluates the rest, returning the accuracy score: The last component required for the Trainer is a collator. Since this time we are batching sequences of tokens, we need a collator that can pad them dynamically when constructing the batches. The transformers library includes a DataCollatorForTokenClassification specifically for this purpose. Once we have our collator and our trainer object, we can train our model: Next, we evaluate our newly trained model on the test dataset. For this purpose, we preprocess the data in the same way we did for the train and validation partitions. Then, for convenience, we use the trainer’s predict() method to generate the predicted logits using our model: As before, we use scikit-learn’s classification_report() function to display the results of the evaluation. This function expects two onedimensional lists of labels, so we need to follow a similar approach to the one we employed for text classification. Note that output.label_ids and output.predictions are NumPy arrays rather than PyTorch tensors. This time we use NumPy’s reshape() method to reshape the arrays. This method is similar to PyTorch’s view() method that we used previously, except that view() may copy the array’s data in some situations. We discard the labels corresponding to ignored sub-word tokens, and then we print the classification report: Our model based on XLM-RoBERTa achieves 99% accuracy. This is considerably better than the LSTM-based model developed in Chapter 11. In order to understand the differences between the two methods, we produce below a confusion matrix for the results of each model. Rows in the confusion matrix represent the true labels and columns represent the predicted labels. In the confusion matrices shown below, each cell xij corresponds to the proportion of values with label i that were assigned the label j.8 For a perfect model, all cells in the diagonal would have value 1 and all other cells would have value 0. The code used to generate the confusion matrix is shown below. The confusion matrices 8 This is the case because we used the normalize='true' parameter of the confusion_matrix() function. 13.3 Part-of-speech Tagging 195 Figure 13.1 Confusion matrix corresponding to the LSTM-based part-ofspeech tagger developed in Chapter 11. for the LSTM and transformer are show in Figure 13.1 and Figure 13.2, respectively. The two confusion matrices highlight a couple of important observations. First, the transformer model is considerably better at predicting POS tags with infrequent support in the dataset. For example, the accuracy for predicting the SYM POS tag increased from 38% in the LSTM model to 95% in the transformer model! Equally as impressive, the transformer improved the performance of tags that are extremely common, and, thus, provide plenty of opportunity to both approaches to learn a good model. For example, the accuracy of tagging NOUN, the second 196 Using Transformers with the Hugging Face Library Figure 13.2 Confusion matrix corresponding to the transformer-based part-of-speech tagger. most common POS tag in the dataset, increased from 96% in the LSTM model to 99% in the transformer model. 13.4 Summary In this chapter we presented two applications driven by the encoder component of a transformer network. First, we used the transformer encoder as an acceptor and implemented a text classification application for English news. 
Second, we used the encoder as a transducer to develop a Spanish part-of-speech tagger. Both tasks were implemented using 13.4 Summary 197 pre-trained transformer models from the Hugging Face library. For both applications, the transformer-based methods outperform considerably all approaches introduced in the previous chapters, highlighting the value of the transformer architecture.
8,972
9,164
#!/usr/bin/env python # coding: utf-8 # # Text Classification Using Transformer Networks (BERT) # Some initialization: # In[1]: import random import torch import numpy as np import pandas as pd from tqdm.notebook import tqdm # enable tqdm in pandas tqdm.pandas() # set to True to use the gpu (if there is one available) use_gpu = True # select device device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu') print(f'device: {device.type}') # random seed seed = 1234 # set random seed if seed is not None: print(f'random seed: {seed}') random.seed(seed) np.random.seed(seed) torch.manual_seed(seed) # Read the train/dev/test datasets and create a HuggingFace `Dataset` object: # In[2]: def read_data(filename): # read csv file df = pd.read_csv(filename, header=None) # add column names df.columns = ['label', 'title', 'description'] # make labels zero-based df['label'] -= 1 # concatenate title and description, and remove backslashes df['text'] = df['title'] + " " + df['description'] df['text'] = df['text'].str.replace('\\', ' ', regex=False) return df # In[3]: labels = open('data/ag_news_csv/classes.txt').read().splitlines() train_df = read_data('data/ag_news_csv/train.csv') test_df = read_data('data/ag_news_csv/test.csv') train_df # In[4]: from sklearn.model_selection import train_test_split train_df, eval_df = train_test_split(train_df, train_size=0.9) train_df.reset_index(inplace=True, drop=True) eval_df.reset_index(inplace=True, drop=True) print(f'train rows: {len(train_df.index):,}') print(f'eval rows: {len(eval_df.index):,}') print(f'test rows: {len(test_df.index):,}') # In[5]: from datasets import Dataset, DatasetDict ds = DatasetDict() ds['train'] = Dataset.from_pandas(train_df) ds['validation'] = Dataset.from_pandas(eval_df) ds['test'] = Dataset.from_pandas(test_df) ds # Tokenize the texts: # In[6]: from transformers import AutoTokenizer transformer_name = 'bert-base-cased' tokenizer = AutoTokenizer.from_pretrained(transformer_name) # In[7]: def tokenize(examples): return tokenizer(examples['text'], truncation=True) train_ds = ds['train'].map( tokenize, batched=True, remove_columns=['title', 'description', 'text'], ) eval_ds = ds['validation'].map( tokenize, batched=True, remove_columns=['title', 'description', 'text'], ) train_ds.to_pandas() # Create the transformer model: # In[8]: from torch import nn from transformers.modeling_outputs import SequenceClassifierOutput from transformers.models.bert.modeling_bert import BertModel, BertPreTrainedModel # https://github.com/huggingface/transformers/blob/65659a29cf5a079842e61a63d57fa24474288998/src/transformers/models/bert/modeling_bert.py#L1486 class BertForSequenceClassification(BertPreTrainedModel): def __init__(self, config): super().__init__(config) self.num_labels = config.num_labels self.bert = BertModel(config) self.dropout = nn.Dropout(config.hidden_dropout_prob) self.classifier = nn.Linear(config.hidden_size, config.num_labels) self.init_weights() def forward(self, input_ids=None, attention_mask=None, token_type_ids=None, labels=None, **kwargs): outputs = self.bert( input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, **kwargs, ) cls_outputs = outputs.last_hidden_state[:, 0, :] cls_outputs = self.dropout(cls_outputs) logits = self.classifier(cls_outputs) loss = None if labels is not None: loss_fn = nn.CrossEntropyLoss() loss = loss_fn(logits, labels) return SequenceClassifierOutput( loss=loss, logits=logits, hidden_states=outputs.hidden_states, attentions=outputs.attentions, ) # In[9]: from 
transformers import AutoConfig config = AutoConfig.from_pretrained( transformer_name, num_labels=len(labels), ) model = ( BertForSequenceClassification .from_pretrained(transformer_name, config=config) ) # Create the trainer object and train: # In[10]: from transformers import TrainingArguments num_epochs = 2 batch_size = 24 weight_decay = 0.01 model_name = f'{transformer_name}-sequence-classification' training_args = TrainingArguments( output_dir=model_name, log_level='error', num_train_epochs=num_epochs, per_device_train_batch_size=batch_size, per_device_eval_batch_size=batch_size, evaluation_strategy='epoch', weight_decay=weight_decay, ) # In[11]: from sklearn.metrics import accuracy_score def compute_metrics(eval_pred): y_true = eval_pred.label_ids y_pred = np.argmax(eval_pred.predictions, axis=-1) return {'accuracy': accuracy_score(y_true, y_pred)} # In[12]: from transformers import Trainer trainer = Trainer( model=model, args=training_args, compute_metrics=compute_metrics, train_dataset=train_ds, eval_dataset=eval_ds, tokenizer=tokenizer, ) # In[13]: trainer.train() # Evaluate on the test partition: # In[14]: test_ds = ds['test'].map( tokenize, batched=True, remove_columns=['title', 'description', 'text'], ) test_ds.to_pandas() # In[15]: output = trainer.predict(test_ds) output # In[16]: from sklearn.metrics import classification_report y_true = output.label_ids y_pred = np.argmax(output.predictions, axis=-1) target_names = labels print(classification_report(y_true, y_pred, target_names=target_names))
3,166
3,270
24
chap13-25
chap13-25
13 Using Transformers with the Hugging Face Library

One of the key advantages of transformer networks is the ability to take a model that was pre-trained over vast quantities of text and fine-tune it for the task at hand. Intuitively, this strategy allows transformer networks to achieve higher performance on smaller datasets by relying on statistics acquired at scale in an unsupervised way (e.g., through the masked language model training objective). To this end, in this chapter we will use the Hugging Face library,1 which has a rich repository of datasets and pre-trained models, as well as helper methods and classes that make it easy to target downstream tasks. Using pre-trained transformer encoders, we will implement the two tasks that served as use cases in the previous chapters: text classification and part-of-speech tagging.

1 https://huggingface.co/docs/transformers/main/en/index

13.1 Tokenization

As discussed in Section 12.2, transformers rely on sub-word tokens. This strategy provides an elegant way to handle unknown and low-frequency words by splitting them into more frequent sub-word parts. At the same time, these tokenization algorithms maintain frequently-occurring words as standalone tokens, so the signal for these common words is preserved. To make this more concrete, we show below how tokenizers are employed in the Hugging Face library. First, we load the tokenizer that corresponds to the transformer we intend to use. This is important for two reasons: (a) different transformers rely on different tokenization algorithms, and (b) even for the ones that use the same algorithm, their tokenizer vocabularies are likely to be different if they were pre-trained on different corpora. Next, we tokenize some example text and display some of the resulting attributes with pandas:

              0      1    2     3     4      5     6      7    8
    tokens    [CLS]  I    am    the   wa     ##l   ##rus  .    [SEP]
    word_ids  None   0    1     2     3      3     3      4    None
    input_ids 101    146  1821  1103  20049  1233  6208   119  102

As shown above, the tokenizer splits the text into tokens, and adds two special tokens: the [CLS] token at the beginning of the token sequence, and the [SEP] token at the end. Also, note that the ## characters at the beginning of some tokens indicate that they are not standalone words, but rather sub-words that continue a word previously started. For example, the output above shows that the word walrus was split into three sub-words. Note, however, that this is specific to this particular tokenization algorithm, and other tokenizers may indicate word continuation in different ways. A better way to detect word continuations is using the word_ids() method of the tokenizer output, which assigns the same id to all tokens that are part of the same word. For example, all fragments of the word walrus share the word id 3. Lastly, the input_ids attribute provides the token ids used internally by the transformer to map tokens to embeddings.

To briefly demonstrate how different tokenizers produce different outputs, here is the same text tokenized with the tokenizer corresponding to xlm-roberta-base:

              0     1    2    3     4      5     6  7
    tokens    <s>   ▁I   ▁am  ▁the  ▁wal   rus   .  </s>
    word_ids  None  0    1    2     3      3     3  None
    input_ids 0     87   444  70    32973  6563  5  2

Note how the [CLS] and [SEP] special tokens have been replaced with <s> and </s>, respectively. Also, spaces have been replaced with the Unicode character ▁ (U+2581, LOWER ONE EIGHTH BLOCK). Tokens that start with that character are considered word beginnings and the rest are word continuations, as can be confirmed by looking at the word ids. This illustrates the importance of using the tokenizer that corresponds to the transformer you intend to use.
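As a concrete recap of the steps described in this section, here is a minimal sketch of how both tokenizers can be loaded and applied; the walrus sentence is the running example above, and the pandas display is just one convenient way to inspect the output (this assumes the fast tokenizers, which are the default for these checkpoints):

from transformers import AutoTokenizer
import pandas as pd

text = 'I am the walrus.'

# BERT tokenizer: ## marks word continuations
bert_tokenizer = AutoTokenizer.from_pretrained('bert-base-cased')
bert_output = bert_tokenizer(text)
print(pd.DataFrame({
    'tokens': bert_output.tokens(),
    'word_ids': bert_output.word_ids(),
    'input_ids': bert_output.input_ids,
}))

# XLM-RoBERTa tokenizer: the ▁ character marks word beginnings
xlmr_tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-base')
xlmr_output = xlmr_tokenizer(text)
print(pd.DataFrame({
    'tokens': xlmr_output.tokens(),
    'word_ids': xlmr_output.word_ids(),
    'input_ids': xlmr_output.input_ids,
}))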
13.2 Text Classification

For our text classification example, we will continue using the AG News dataset from previous chapters. We will load, preprocess, and split the dataset into pandas dataframes in the same way as before. Now, however, rather than continuing with pandas, we will create a Hugging Face dataset from the dataframes. Hugging Face datasets are convenient because of their built-in support for batching, efficient data transformations, and caching. In particular, we convert each dataframe into a Hugging Face dataset. The various datasets are managed with a DatasetDict. Note that this is the same data structure seen when downloading a Hugging Face dataset from their hub.2 The keys in this dictionary are usually train, validation, and test:3

Once our dataset is loaded, we load a tokenizer. Different pre-trained models are tokenized differently, and it is important to select the tokenizer that corresponds to the model we will use so that the inputs are consistent with model expectations. In our example, we will use the bert-base-cased pre-trained model and tokenizer:

Datasets have a map() method that transforms the dataset by applying a function to each example. The method returns a new dataset with the transformation applied. We use the map() method to tokenize our dataset. To this end, we define a function that tokenizes an example using the tokenizer we loaded previously. Note that tokenizers support many options that you may need depending on your situation. However, since this is a simple scenario, all we need to do is provide the text to tokenize and specify how to handle texts that exceed the maximum number of tokens permitted by the pre-trained model. Here we have our tokenizer truncate any inputs that are too long by specifying the truncation=True parameter. The output of this function will be added to the new dataset as extra columns. Further, we also want to remove some of the columns that are no longer needed, simplifying subsequent steps. For this, we use the remove_columns argument, listing the columns that we want to discard. Additionally, the dataset's map() method can batch the dataset; we enable this option with the batched=True argument:
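A minimal sketch of these dataset and tokenization steps might look as follows; train_df, eval_df, and test_df are assumed to be the pandas dataframes prepared earlier, and the same steps appear in the accompanying notebook:

from datasets import Dataset, DatasetDict
from transformers import AutoTokenizer

# wrap the existing dataframes in a Hugging Face DatasetDict
ds = DatasetDict()
ds['train'] = Dataset.from_pandas(train_df)
ds['validation'] = Dataset.from_pandas(eval_df)
ds['test'] = Dataset.from_pandas(test_df)

# load the tokenizer that matches the pre-trained model
transformer_name = 'bert-base-cased'
tokenizer = AutoTokenizer.from_pretrained(transformer_name)

def tokenize(examples):
    # truncate inputs that exceed the model's maximum length
    return tokenizer(examples['text'], truncation=True)

# tokenize in batches and drop the columns that are no longer needed
train_ds = ds['train'].map(
    tokenize,
    batched=True,
    remove_columns=['title', 'description', 'text'],
)
eval_ds = ds['validation'].map(
    tokenize,
    batched=True,
    remove_columns=['title', 'description', 'text'],
)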
2 https://huggingface.co/datasets
3 These correspond to the more common terms train, development, and test we have used throughout the book so far. In this chapter we use the Hugging Face naming conventions for consistency.

[Preview of the tokenized training set omitted: a dataframe with 108,000 rows and the columns label, input_ids, token_type_ids, and attention_mask.]

Next, we implement a classifier for our task. Hugging Face provides a
variety of models corresponding to several types of downstream tasks. However, for pedagogical purposes, we implement one from scratch. In particular, our model class inherits from BertPreTrainedModel, which
provides several useful methods, such as init_weights() and from_pretrained(), which we will use later. The model constructor takes a configuration object as its only parameter. Configuration objects contain all the hyper-parameters used by the corresponding pre-trained models. We will show later how the configuration object is retrieved and customized.

Models that implement specific downstream tasks are usually composed of a pre-trained model (sometimes referred to as the body), and one or more task-specific layers (usually referred to as the head). Here, we initialize a BertModel using the provided configuration, as well as a dropout layer and a task-specific linear layer used for classifying the BERT output. Each of these layers is initialized by calling the init_weights() method inherited from BertPreTrainedModel. The forward() method, which implements the task-specific forward pass, takes as arguments the outputs of the tokenizer and, optionally, the gold labels corresponding to the input data points. Our implementation of the forward pass sends the input tokens to the BERT model to produce the contextualized representations for all tokens. This output has several components, including last_hidden_state, which contains the final hidden-state embedding for each token. For our task, we will represent the whole sequence using the embedding for the [CLS] token that occurs at the start of each example. We retrieve it by selecting the first element of each output sequence in the batch (i.e., last_hidden_state[:, 0, :]). As in the previous chapters, we apply dropout to our sequence representation, and then pass it through our linear classification layer. If gold labels are provided (i.e., we are training), we now compute the loss using the cross-entropy loss. The output of the forward pass is wrapped in a Hugging Face SequenceClassifierOutput object4 and returned:

Next we load the configuration of the pre-trained model and instantiate our model. The AutoConfig class can load the configuration for any pre-trained model, retrieving it from Hugging Face if needed. Then we use the configuration to instantiate our model using the from_pretrained() method. With this call, the pre-trained model will be loaded, which includes downloading if necessary:

Hugging Face provides a Trainer class that greatly simplifies the training process. This class not only implements the training loop we have been using in the previous chapters, but also handles other useful steps such as saving checkpoints (i.e., intermediate models after a number of mini-batches have been processed during training), and tracking custom measures about model performance. In order to create a Trainer, we first need to specify its configuration in a TrainingArguments object. In ours, we specify certain hyper-parameters such as batch size, weight decay, and number of epochs, as well as where to store model checkpoints:

The TrainingArguments class provides a wide variety of arguments that we have not shown.5 These arguments usually have appropriate default values, so it is often fine to omit them. For example, we did not use the label_names argument, which specifies the key that corresponds to the training labels. When omitted, it defaults to keys such as label, labels, and label_ids.6 In this chapter we used label. Note that we also specify how often we would like to see the performance of the current model (at the end of each epoch) with evaluation_strategy='epoch'. This means that after each epoch we print the current loss on the training partition and on the evaluation dataset, if one is available.

4 Hugging Face utilizes a set of output objects to standardize model output for a given task. These objects typically include additional information, e.g., attention weights, which can be used for visualizing or debugging model behavior.
5 https://huggingface.co/docs/transformers/main/en/main_classes/trainer#transformers.TrainingArguments
6 In the case of extractive question answering (see Chapter 16), the start_positions and end_positions keys store the start/end positions of the correct answers.

Additionally,
we can report custom metrics at this time. For this purpose, we use the compute_metrics parameter of the Trainer, which expects a function that receives a transformers. EvalPredictions object containing the label ids and the predicted logits. The expected return type is a dictionary whose keys correspond to different metrics, each of which will be displayed as a separate result column. Using the above TrainingArguments and compute_metrics function, we create our Trainer. Note that when you provide a tokenizer, the trainer will automatically pad the sequences in each batch. Also, the trainer will automatically use any GPU that is available, unless specifically disabled in the TrainingArguments. Training our model takes a single call to the train() method of the Trainer object. As specified in the our instance of TrainingArguments, the training and validation losses, as well as the accuracy, are reported every epoch. As in the other chapters, we can write custom code to obtain the model’s predictions on the test data. However, the Trainer class provides a predict() method that drastically simplifies this: As shown in the table above, this model achieves an accuracy of 95%, which is the highest performance we have achieved so far on this dataset. 13.3 Part-of-speech Tagging To showcase part-of-speech tagging using transformers, we continue with the Spanish section of the AnCora corpus introduced in Chapter 11. Recall that the dataset is stored in the CoNLL-U format. We load this format in the same way as before, but then we convert the loaded dataset into a Hugging Face DictDataset: Importantly, because the CoNLL-U dataset is already tokenized, we use the is_split_into_words=True tokenizer argument to ensure that the tokenizer respects the existing word boundaries during its sub-word tokenization. Further, while we want to predict one POS tag per word, Epoch Training Loss Validation Loss Accuracy 1 0.187800 0.172629 0.941667 2 0.104000 0.183001 0.946250 192 Using Transformers with the Hugging Face Library any given word may be split into smaller pieces by our tokenizer. Thus, we need to align the tokenizer output to the CoNLL-U words. The original BERT paper (Devlin et al., 2018) addresses this by only using the embedding corresponding to the first sub-token for each word. We follow the same approach for consistency. For the sub-words that do not correspond to the beginning of a word, we use a special value that indicates that we are not interested in their predictions. The CrossEntropyLoss has a parameter called ignore_index for this purpose. The default value for this parameter is −100, which we use as the label for the sub-words we wish to ignore during training: Next, we use this function to preprocess the train and validation folds in our DatasetDict: words [El, presidente, de, el, órgano, regulador, de... [Afirmó, que, sigue, el, criterio, europeo, y,... [Durante, la, presentación, de, el, libro, ", ... [Y, todas, las, miradas, convergen, en, la, lu... [Cambiar, las, formas, parece, de, rigor, ,, p... [Él, llega, a, tirar, la, sobre, la, cama, y, ... tags [DET, NOUN, ADP, DET, NOUN, ADJ, ADP, DET, PRO... [VERB, SCONJ, VERB, DET, NOUN, ADJ, CCONJ, SCO... [ADP, DET, NOUN, ADP, DET, NOUN, PUNCT, DET, P... [CCONJ, DET, DET, NOUN, VERB, ADP, DET, NOUN, ... [VERB, DET, NOUN, VERB, ADP, NOUN, PUNCT, CCON... [PRON, VERB, ADP, VERB, PRON, ADP, DET, NOUN, ... input_ids [0, 540, 9692, 8, 88, 103633, 15913, 1846, 8, ... [0, 62, 38949, 849, 41, 58453, 88, 166220, 620... 
[Preview of the preprocessed AnCora examples omitted: a dataframe with 14,305 rows and the columns words, tags, input_ids, attention_mask, and labels, where positions labeled -100 are ignored during training.]
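The preprocessing function itself is not reproduced in this text, so here is a minimal sketch of how such an alignment function might look; tokenizer is the tokenizer loaded for this task, and the column names words and tags, as well as the tag_to_index mapping from POS tags to integer ids, are our own naming assumptions:

def tokenize_and_align_labels(examples):
    # the sentences are already split into words, so tell the tokenizer not to re-split them
    output = tokenizer(examples['words'], is_split_into_words=True, truncation=True)
    all_labels = []
    for i, tags in enumerate(examples['tags']):
        word_ids = output.word_ids(batch_index=i)
        previous_word_id = None
        labels = []
        for word_id in word_ids:
            if word_id is None or word_id == previous_word_id:
                # special tokens and word continuations are ignored by the loss (-100)
                labels.append(-100)
            else:
                # the first sub-word of each word receives the word's POS tag id
                labels.append(tag_to_index[tags[word_id]])
            previous_word_id = word_id
        all_labels.append(labels)
    output['labels'] = all_labels
    return output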
Next, we implement our model class that uses a transformer encoder as a transducer. Because our downstream task consists of POS tagging for Spanish, we need a transformer model that was pre-trained on Spanish texts. Here, we chose XLM-RoBERTa (Conneau et al., 2019) as our base model. XLM-RoBERTa is a RoBERTa model (Liu et al., 2019) that has been pre-trained on 100 different languages, including Spanish. Of note, XLM-RoBERTa does not require us to specify what language we are working on. Similar to BERT, it only requires the input_ids.

We discussed in the text classification section that Hugging Face provides implementations for text classification models. This is also true for token classification problems that require transducers. In particular, the XLMRobertaForTokenClassification model provided by Hugging Face does everything needed for this task. However, as before, here we implement it ourselves for pedagogical purposes.

The model architecture is similar to our text classification example. It consists of a transformer, a dropout layer, and a linear layer used for classification. The number of labels, which determines the output dimension of the linear layer, is equal to the number of POS tags. The primary difference between the text classification example and this token classification model is that with the former we produced one label for each text document, while here we produce one label for each token in the input text. Specifically, in our text classification model the output shape was two-dimensional: (batch_size, num_labels). Here, our output is three-dimensional: (batch_size, sequence_size, num_labels). So, while much of the forward method is familiar to us, when we are required to compute the loss, we need to reshape the logits and the labels before passing them to the CrossEntropyLoss, since it expects two-dimensional input and one-dimensional labels. For this purpose, we use the view() method to reshape the tensors. This method is efficient because it does not copy the tensor data. Instead it provides a new view of the same data that behaves like a tensor with a different shape.7 As mentioned before, the number of arguments passed to this method determines the number of dimensions in the output tensor. Here, for our logits, we pass two arguments and so our new view will have two dimensions. The second will be the size of self.num_labels, while the first (because we pass -1) will be inferred based on the original tensor shape. For our labels, on the other hand, we only provide one argument and so the new view will have one dimension, inferred from the original shape:

7 Similar to NumPy, PyTorch tensors are represented internally by a block of memory storing the data and some metadata that describes how the data should be read, e.g., type, shape, and stride. The view() method returns a new tensor with new metadata but pointing to the same memory block.

Next, we instantiate our model using the XLM-RoBERTa configuration:

As before, we create a TrainingArguments object and define a compute_metrics function in order to customize a Trainer. While the TrainingArguments code has no substantial changes, we need to adjust the compute_metrics function to account for the fact that our model uses sub-word tokens rather than complete words. Recall that only the first sub-word token per word was assigned a POS tag.
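Neither the model's loss computation nor the adjusted metric function is reproduced in this text, so two brief sketches follow; the names mirror the description above and are assumptions rather than the exact code. Schematically, inside the token classification model's forward() method, the reshaping looks like this:

loss = None
if labels is not None:
    loss_fn = nn.CrossEntropyLoss()  # ignore_index defaults to -100
    # logits: (batch_size * sequence_size, num_labels); labels: (batch_size * sequence_size)
    loss = loss_fn(logits.view(-1, self.num_labels), labels.view(-1))

And a metric function that scores only the first sub-word of each word might be:

import numpy as np
from sklearn.metrics import accuracy_score

def compute_metrics(eval_pred):
    # gold label ids and predicted logits provided by the Trainer
    y_true = eval_pred.label_ids
    y_pred = np.argmax(eval_pred.predictions, axis=-1)
    # keep only the positions that carry a real POS tag
    mask = y_true != -100
    return {'accuracy': accuracy_score(y_true[mask], y_pred[mask])}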
This function discards the labels corresponding to the ignored sub-word tokens and evaluates the rest, returning the accuracy score.

The last component required for the Trainer is a collator. Since this time we are batching sequences of tokens, we need a collator that can pad them dynamically when constructing the batches. The transformers library includes a DataCollatorForTokenClassification specifically for this purpose. Once we have our collator and our trainer object, we can train our model:

Next, we evaluate our newly trained model on the test dataset. For this purpose, we preprocess the data in the same way we did for the train and validation partitions. Then, for convenience, we use the trainer's predict() method to generate the predicted logits using our model:

As before, we use scikit-learn's classification_report() function to display the results of the evaluation. This function expects two one-dimensional lists of labels, so we need to follow a similar approach to the one we employed for text classification. Note that output.label_ids and output.predictions are NumPy arrays rather than PyTorch tensors. This time we use NumPy's reshape() method to reshape the arrays. This method is similar to PyTorch's view() method that we used previously, except that reshape() may copy the array's data in some situations. We discard the labels corresponding to ignored sub-word tokens, and then we print the classification report:

Our model based on XLM-RoBERTa achieves 99% accuracy. This is considerably better than the LSTM-based model developed in Chapter 11. In order to understand the differences between the two methods, we produce below a confusion matrix for the results of each model. Rows in the confusion matrix represent the true labels and columns represent the predicted labels. In the confusion matrices shown below, each cell x_ij corresponds to the proportion of values with label i that were assigned the label j.8 For a perfect model, all cells in the diagonal would have value 1 and all other cells would have value 0. The code used to generate the confusion matrix is shown below. The confusion matrices for the LSTM and transformer are shown in Figure 13.1 and Figure 13.2, respectively.

Figure 13.1 Confusion matrix corresponding to the LSTM-based part-of-speech tagger developed in Chapter 11.
Figure 13.2 Confusion matrix corresponding to the transformer-based part-of-speech tagger.

8 This is the case because we used the normalize='true' parameter of the confusion_matrix() function.

The two confusion matrices highlight a couple of important observations. First, the transformer model is considerably better at predicting POS tags with infrequent support in the dataset. For example, the accuracy for predicting the SYM POS tag increased from 38% in the LSTM model to 95% in the transformer model! Equally as impressive, the transformer improved the performance of tags that are extremely common and, thus, provide plenty of opportunity for both approaches to learn a good model. For example, the accuracy of tagging NOUN, the second most common POS tag in the dataset, increased from 96% in the LSTM model to 99% in the transformer model.

13.4 Summary

In this chapter we presented two applications driven by the encoder component of a transformer network. First, we used the transformer encoder as an acceptor and implemented a text classification application for English news.
Second, we used the encoder as a transducer to develop a Spanish part-of-speech tagger. Both tasks were implemented using pre-trained transformer models from the Hugging Face library. For both applications, the transformer-based methods considerably outperform all approaches introduced in the previous chapters, highlighting the value of the transformer architecture.
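The confusion-matrix code referenced in Section 13.3 is not included in this text; a sketch along those lines is given here, assuming y_true and y_pred are one-dimensional arrays of gold and predicted POS tags (with ignored sub-word positions already removed) and target_names is the list of tag names:

import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay

# normalize='true' makes each row (true label) sum to one
cm = confusion_matrix(y_true, y_pred, labels=target_names, normalize='true')
disp = ConfusionMatrixDisplay(confusion_matrix=cm, display_labels=target_names)
disp.plot(cmap='Blues', xticks_rotation='vertical', values_format='.2f')
plt.show()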
12,799
12,885
#!/usr/bin/env python # coding: utf-8 # # Text Classification Using Transformer Networks (DistilBERT) # Some initialization: # In[1]: import random import torch import numpy as np import pandas as pd from tqdm.notebook import tqdm # enable tqdm in pandas tqdm.pandas() # set to True to use the gpu (if there is one available) use_gpu = True # select device device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu') print(f'device: {device.type}') # random seed seed = 1234 # set random seed if seed is not None: print(f'random seed: {seed}') random.seed(seed) np.random.seed(seed) torch.manual_seed(seed) # Read the train/dev/test datasets and create a HuggingFace `Dataset` object: # In[2]: def read_data(filename): # read csv file df = pd.read_csv(filename, header=None) # add column names df.columns = ['label', 'title', 'description'] # make labels zero-based df['label'] -= 1 # concatenate title and description, and remove backslashes df['text'] = df['title'] + " " + df['description'] df['text'] = df['text'].str.replace('\\', ' ', regex=False) return df # In[3]: labels = open('data/ag_news_csv/classes.txt').read().splitlines() train_df = read_data('data/ag_news_csv/train.csv') test_df = read_data('data/ag_news_csv/test.csv') train_df # In[4]: from sklearn.model_selection import train_test_split train_df, eval_df = train_test_split(train_df, train_size=0.9) train_df.reset_index(inplace=True, drop=True) eval_df.reset_index(inplace=True, drop=True) print(f'train rows: {len(train_df.index):,}') print(f'eval rows: {len(eval_df.index):,}') print(f'test rows: {len(test_df.index):,}') # In[5]: from datasets import Dataset, DatasetDict ds = DatasetDict() ds['train'] = Dataset.from_pandas(train_df) ds['validation'] = Dataset.from_pandas(eval_df) ds['test'] = Dataset.from_pandas(test_df) ds # Tokenize the texts: # In[6]: from transformers import AutoTokenizer transformer_name = 'distilbert-base-cased' tokenizer = AutoTokenizer.from_pretrained(transformer_name) # In[7]: def tokenize(examples): return tokenizer(examples['text'], truncation=True) train_ds = ds['train'].map(tokenize, batched=True, remove_columns=['title', 'description', 'text']) eval_ds = ds['validation'].map(tokenize, batched=True, remove_columns=['title', 'description', 'text']) train_ds.to_pandas() # Create the transformer model: # In[8]: from transformers import AutoConfig config = AutoConfig.from_pretrained(transformer_name, num_labels=len(labels)) # In[9]: from transformers.models.distilbert.modeling_distilbert import DistilBertForSequenceClassification model = ( DistilBertForSequenceClassification .from_pretrained(transformer_name, config=config) ) # Create the trainer object and train: # In[10]: from transformers import TrainingArguments num_epochs = 2 batch_size = 24 logging_steps = len(ds['train']) // batch_size model_name = f'{transformer_name}-sequence-classification' training_args = TrainingArguments( output_dir=model_name, log_level='error', num_train_epochs=num_epochs, per_device_train_batch_size=batch_size, per_device_eval_batch_size=batch_size, evaluation_strategy='epoch', weight_decay=0.01, disable_tqdm=False, logging_steps=logging_steps, ) # In[11]: from sklearn.metrics import accuracy_score def compute_metrics(eval_pred): y_true = eval_pred.label_ids y_pred = np.argmax(eval_pred.predictions, axis=-1) return {'accuracy': accuracy_score(y_true, y_pred)} # In[12]: from transformers import Trainer trainer = Trainer( model=model, args=training_args, compute_metrics=compute_metrics, train_dataset=train_ds, 
eval_dataset=eval_ds, tokenizer=tokenizer, ) # In[13]: trainer.train() # Evaluate on the test partition: # In[14]: test_ds = ds['test'].map(tokenize, batched=True, remove_columns=['title', 'description', 'text']) test_ds.to_pandas() # In[15]: output = trainer.predict(test_ds) output # In[16]: from sklearn.metrics import classification_report y_true = output.label_ids y_pred = np.argmax(output.predictions, axis=-1) target_names = labels print(classification_report(y_true, y_pred, target_names=target_names)) # In[ ]:
3,628
3,647
25
chap09-0
chap09-0
9 Implementing Text Classification Using Word Embeddings In the previous chapter we introduced word embeddings, which are realvalued vectors that encode semantic representation of words. We discussed how to learn them, and how they capture semantic information that makes them useful for downstream tasks. In this chapter we show how to use word embeddings that have been pretrained using a variant of the algorithm discussed in the previous chapter. We show how to load them, explore some of their characteristics, and show their application for a text classification task. As usual, the code for this chapter is available in our repository. It is organized into two notebooks: one corresponding to the explorations shown in the first half of this chapter (chap9_embeddings), and a second one in which we modify our previous classifier to use word embeddings (chap9_classification). 9.1 Pre-trained Word Embeddings There are several algorithms for training word embeddings, including the original word2vec algorithm (Mikolov et al., 2013a) (which we discussed in the previous chapter), GloVe (Pennington et al., 2014), and fastText (Bojanowski et al., 2017). They all provide the software for training the embeddings as well as pretrained word embeddings on their respective websites. In general, most open-domain word embeddings are trained on large corpora that cover a variety of topics such as Wikipedia1 and Gigaword.2 Commonly, these embeddings are freely distributed so 1 https://en.wikipedia.org/wiki/Wikipedia:Database_download 2 https://catalog.ldc.upenn.edu/LDC2011T07 133 134 Implementing Text Classification Using Word Embeddings house 0.60137 0.28521 -0.032038 -0.43026 0.74806 0.26223 -0.97361 0.078581 -0.57588 -1.188 -1.8507 -0.24887 0.055549 0.0086155 0.067951 0.40554 -0.073998 -0.21318 0.37167 -0.71791 1.2234 0.35546 -0.41537 -0.21931 -0.39661 -1.7831 -0.41507 0.29533 -0.41254 0.020096 2.7425 -0.9926 -0.71033 -0.46813 0.28265 -0.077639 0.3041 -0.06644 0.3951 -0.70747 -0.38894 0.23158 -0.49508 0.14612 -0.02314 0.56389 -0.86188 -1.0278 0.039922 0.20018 Figure 9.1 GloVe embedding corresponding to the word house, found in the GloVe file glove.6B.50d.txt. We have broken the vector in several lines for display purposes, but this is a single line in the text file. that practitioners can use them in downstream tasks. We will use one such set of vectors in this chapter. Pretrained embeddings are usually distributed as a text file in which each line represents a word vector. The first element in the line is the word itself, and the rest of the elements are the vector components. This is usually referred to as the word2vec format. For example, Figure 9.1 shows the line in the glove.6B.50d.txt file (from the GloVe website) corresponding to the word house. This vector is represented by the word itself, followed by 50 floating-point numbers corresponding to the 50dimensional vector. Note that some embeddings files have a header line composed of two numbers: the number of vectors (i.e., the number of lines in the file), and the vector dimensionality. However, this is not always the case. For example, the original word2vec implementation includes this header line, but the more recent GloVe does not (probably because this information can be inferred from the content of the file). For the examples in the rest of the chapter, we will use the glove.6B.300d.txt embeddings that can be downloaded from the GloVe website.3 This file provides 400,000 word embeddings of 300-dimensions trained on texts
from Wikipedia 2014 and Gigaword 5.

We will begin our exploration of word embeddings using Gensim,4 a Python library that provides excellent support for loading and using word embeddings, among other more advanced features. As we can see, the embeddings have been loaded and assigned to the glove variable. Note that we had to specify that this file doesn't contain the header that is usually present in the word2vec format. The glove.vectors attribute contains a 2-dimensional NumPy array with 400,000 rows and 300 columns, each row corresponding to a word embedding.

3 https://nlp.stanford.edu/projects/glove/
4 https://radimrehurek.com/gensim/

9.1.1 Word Similarity

Gensim's KeyedVectors class provides a method called most_similar that receives a word, computes its cosine similarity to all other embeddings, and returns the topn most-similar words. By default, topn is set to 10. The example above shows the top 10 most-similar words to the word cactus, when using the 300-dimension GloVe embeddings trained on Wikipedia and Gigaword. All ten most-similar words are related to cactus in different ways: cacti and cactuses are its plural forms; saguaro, peyote, opuntia, and prickly pear are types of cacti; and mesquite, shrubs, and succulents are other plants from arid climates. You can find more examples of word similarity queries in the Jupyter notebook that accompanies this chapter. Also, as an exercise, try loading a different set of embeddings trained with a different corpus (e.g., Twitter) to see if you obtain different results!

9.1.2 Word Analogies

As we discussed in the previous chapter, the semantic information encoded by word embeddings captures much more than word similarity. To surface this additional information, we will use word analogies represented using additional vector operations. For example, a well-known analogy that highlights gender information is: $\vec{king} - \vec{man} \approx \vec{queen} - \vec{woman}$,5 or, in plain language: "man is to king what woman is to queen." From this, it immediately follows that one can subtract the meaning of man and add the meaning of woman to obtain the definition of female royalty: $\vec{king} - \vec{man} + \vec{woman} \approx \vec{queen}$.
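Before moving on, here is a minimal sketch of the loading and similarity steps described at the beginning of this section; the file name assumes the 300-dimensional GloVe embeddings mentioned earlier, and the same steps appear in the accompanying notebook:

from gensim.models import KeyedVectors

# GloVe files do not include the header line used by the word2vec format
glove = KeyedVectors.load_word2vec_format('glove.6B.300d.txt', no_header=True)

# the ten most similar words to "cactus" by cosine similarity
print(glove.most_similar('cactus'))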
 The same most_similar method we’ve been using can be repurposed to find word analogies such as the one mentioned above. To this end, two sets of words have to be provided to the most_similar method: a list of positive words that should be added, and a list of negative words 5 A word with an arrow on top refers to the embedding vector corresponding to that word. Please see Section 1.4 for a summary of the notations used in this book. 136 Implementing Text Classification Using Word Embeddings that should be subtracted. For example, the code below implements the left-hand side of the previous analogy: Another interesting analogy relation that shows how the embeddings have captured information about currencies is shown below. More examples are discussed in the Jupyter notebook. 9.1.3 Looking Under the Hood Let us understand now how these queries are actually implemented. First, we need to know what components we need. Clearly, we need the embedding vectors themselves. They are stored in the vectors attribute of the KeyedVectors object. As we mentioned previously, this is a 2-dimensional NumPy array, each row corresponding to a word in the vocabulary. These embeddings are not normalized, but normalized embeddings can be obtained using the get_normed_vectors method. We also need to know the mapping between words and the matrix rows. The KeyedVectors object stores this mapping in a list of terms called index_to_key, and a term-to-index dictionary called key_to_index. Below we show only the first 5 terms to save space, but you can inspect the whole vocabulary in the Jupyter notebook. 9.1.4 Word Similarity from Scratch Implementing the word similarity function ourselves is a good exercise to ensure that we understand how cosine similarity works, and to practice our NumPy skills. We will write a function called most_similar_words that will take a word, the embeddings matrix, the vocabulary in the form of the index_to_key list and key_to_index dictionary, and the number of similar words to return (defaults to 10). The implementation of most_similar_words is straightforward. First, we find the word id for the given word, using the key_to_index dictionary. Then we retrieve the row from the vectors matrix that corresponds to that word. The next step is computing the cosine similarity between the word of interest and the rest of the vocabulary. Recall that the cosine similarity is equivalent to a dot product if the vectors are normalized. We use this equivalence by performing a matrix-vector multiplication between the word embedding and the embedding matrix using Python’s at operator (denoted as @ in code). This means that we must pass the 9.2 Text Classification with Pretrained Word Embeddings 137 normalized embeddings as an argument to this function. Next, we need to sort the similarities preserving the mapping to the words in the vocabulary. We achieve this using the argsort NumPy method, which returns the indices in sorted (ascending) order. Since we need them in descending order, the next step is to reverse this list of indices. Obviously, the most similar word to whichever word we’re querying is the word itself, but that is not an interesting result, so we will remove it from the results. We do this by using NumPy’s ability to index arrays using booleans. We first create a new array in which the position corresponding to the query word is set to False and every other element is set to True, and we use this boolean array to index the list of ids. 
Lastly, we create a list of tuples of the form (word, similarity) for the topn words, and return the results. Now we will test our implementation of word similarity using the word cactus. You can compare the results to the ones obtained by KeyedVectors's most_similar method.

9.1.5 Word Analogies from Scratch

The implementation of the word analogy function is not that much different from our most_similar_words function above. The main difference between this function and most_similar_words is that now we have two lists of words that we need to combine into a single embedding. We first add the positive words into a single vector, and we do the same for the negative words. Then we subtract the negative vector from the positive one, and normalize the result. The similarity scores are computed the same way as before, but now we need to remove several words from the results, so this time we use NumPy's isin function, which checks for any of the words in given_word_ids. We then package the results the same way we did before, and return them. Now let's try our implementation with the same $\vec{king} - \vec{man} + \vec{woman}$ query we discussed previously. Please compare the results to the ones obtained by Gensim.

9.2 Text Classification with Pretrained Word Embeddings

In this section we will continue using the AG News classification dataset introduced in previous chapters. Most of the data preparation is the same, up to tokenization. However, we need to remember that the embeddings were trained on a different corpus, so it would be a good idea to estimate how well they cover the words in the AG News dataset. To achieve this, we load the embeddings just like we did previously. Then we count the tokens in our corpus that do not appear in the embeddings vocabulary, as well as the total number of tokens. We use these numbers to print some informative statistics such as the proportion of unknown tokens in the corpus. We also print the top ten unknown tokens. You can use the Jupyter notebook to explore this task further. Our analysis indicates that only 1.25% of the tokens are not accounted for in the embeddings vocabulary. Further, the most common unknown words seem to be URL fragments. This is encouraging.

However, for more robustness, we will introduce a couple of special embeddings that are often needed when dealing with word embeddings. The first one is an embedding used to represent unknown words. A common strategy is to use the average of all the embeddings in the vocabulary for this purpose. The second embedding we will add will be used for padding. Padding is required when we want to train with (mini-)batches because the lengths of all the examples in a given batch have to match in order for the batch to be efficiently processed in parallel. The padding embedding consists only of zeros, which essentially excludes these virtual tokens from the forward/backward passes. None of these embeddings are included in the pretrained GloVe embeddings, but other pretrained embeddings may already include them, so it is a good idea to check if they are included with the embeddings we are using before adding them. The new embeddings were added at the end of the embedding collection, so their ids are 400,000 and 400,001. Now we need to generate a list of token ids for each training example. Recall that we decided to ignore tokens that appear less than 10 times, so we need to replace those with [UNK] too, even if they appear in the embedding vocabulary.
Next, we create a Dataset object from the padded lists of token ids. This one is even easier since the lists of token ids are ready. So all that is required is turning them into tensors. Lastly, we need to modify the model class to indicate that we now use embedding vectors. To this end, we will add an nn. Embedding layer that stores the embedding vectors for all words in the vocabulary. We will use this object to look up embeddings by their token ids. This layer will be initialized from a tensor containing the pretrained embeddings for the entire vocabulary. Also, the pad_id is specified when creating the 9.2 Text Classification with Pretrained Word Embeddings 139 embedding layer. When a nn. Embedding layer gets initialized using the from_pretrained method with other arguments set to default values, the embeddings are not updated during training. We will keep it that way for this example, but that could be changed by setting the freeze parameter to False. The rest of the layers are the same as in our previous example from Chapter 7, i.e., one intermediate layer and one output layer, with a nonlinearity (ReLU) between them. The only major difference is that now the input size of the intermediate layer is the size of one embedding (e.g., 300) instead of the size of the vocabulary like last time. This is because, as we explain below, the intermediate layer receives an average of the numerical representations of the words in the current text. The forward function of the Model class changes significantly. This time we are encoding the text as an average of the embeddings of all the words it contains. To compute the denominator of this average, we obtain the length of each text by counting how many of its words are not the virtual padding token. Then we sum all the embeddings and divide by the number of non-padding tokens. Adding all embeddings is safe, because padding embeddings are comprised of zeros. This process leaves us with a single embedding for the whole text, which is then passed to the rest of the layers. The training and evaluation steps are the same before. The results of this model on the AG News test partition are displayed below: Comparing these results with the ones obtained by the multilayer perceptron with explicit features in Chapter 7, we observe that on this particular task utilizing embeddings as features does not yield a performance improvement. Notably, this is a small dataset and a rather simplistic task where the presence of certain words is sufficient to distinguish the category of an article (e.g., the word basketball is highly indicative of the label Sports). Nevertheless, in other tasks where distinctions are more nuanced, or in which there is less likely to be word overlap between texts of interest, word embeddings do provide necessary signal. Additionally, when there are class imbalances, word embeddings can supplement underrepresented classes by bringing the external knowledge gained during their pretraining. 140 Implementing Text Classification Using Word Embeddings 9.3 Summary In this chapter we showed how to explore the semantic space encoded by word embeddings through word similarity and analogies, as well as one way to use them for text classification. At this point we have not taken into consideration the order in which the words appear, i.e., we averaged the embeddings for all the words in the text using a bag-ofwords representation of text. In subsequent chapters we will explore how to incorporate word order into the learned representations of text.
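As a recap of the classifier described in this section, here is a condensed sketch (not the exact code from the chap9_classification notebook) of an embedding-averaging model; pretrained_vectors is assumed to be a float tensor with one row per vocabulary entry (including the unknown and padding rows), and pad_id is the id of the padding token:

import torch
from torch import nn

class TextClassifier(nn.Module):
    def __init__(self, pretrained_vectors, pad_id, hidden_dim, num_classes):
        super().__init__()
        # frozen embedding layer initialized from the pretrained vectors
        self.embedding = nn.Embedding.from_pretrained(pretrained_vectors, padding_idx=pad_id)
        self.pad_id = pad_id
        emb_dim = pretrained_vectors.shape[1]
        self.layers = nn.Sequential(
            nn.Linear(emb_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, x):
        # x: (batch_size, seq_len) token ids, padded with pad_id
        emb = self.embedding(x)                                # (batch, seq, emb_dim)
        lengths = (x != self.pad_id).sum(dim=1, keepdim=True)  # number of non-padding tokens
        avg = emb.sum(dim=1) / lengths.clamp(min=1)            # average, ignoring padding
        return self.layers(avg)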
8,905
9,068
#!/usr/bin/env python # coding: utf-8 # # Using Pre-trained Word Embeddings # # In this notebook we will show some operations on pre-trained word embeddings to gain an intuition about them. # # We will be using the pre-trained GloVe embeddings that can be found in the [official website](https://nlp.stanford.edu/projects/glove/). In particular, we will use the file `glove.6B.300d.txt` contained in this [zip file](https://nlp.stanford.edu/data/glove.6B.zip). # # We will first load the GloVe embeddings using [Gensim](https://radimrehurek.com/gensim/). Specifically, we will use [`KeyedVectors`](https://radimrehurek.com/gensim/models/keyedvectors.html)'s [`load_word2vec_format()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.load_word2vec_format) classmethod, which supports the original word2vec file format. # However, there is a difference in the file formats used by GloVe and word2vec, which is a header used by word2vec to indicate the number of embeddings and dimensions stored in the file. The file that stores the GloVe embeddings doesn't have this header, so we will have to address that when loading the embeddings. # # Loading the embeddings may take a little bit, so hang in there! # In[2]: from gensim.models import KeyedVectors fname = "glove.6B.300d.txt" glove = KeyedVectors.load_word2vec_format(fname, no_header=True) glove.vectors.shape # ## Word similarity # # One attribute of word embeddings that makes them useful is the ability to compare them using cosine similarity to find how similar they are. [`KeyedVectors`](https://radimrehurek.com/gensim/models/keyedvectors.html) objects provide a method called [`most_similar()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.most_similar) that we can use to find the closest words to a particular word of interest. By default, [`most_similar()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.most_similar) returns the 10 most similar words, but this can be changed using the `topn` parameter. # # Below we test this function using a few different words. # In[3]: # common noun glove.most_similar("cactus") # In[4]: # common noun glove.most_similar("cake") # In[5]: # adjective glove.most_similar("angry") # In[6]: # adverb glove.most_similar("quickly") # In[7]: # preposition glove.most_similar("between") # In[8]: # determiner glove.most_similar("the") # ## Word analogies # # Another characteristic of word embeddings is their ability to solve analogy problems. # The same [`most_similar()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.most_similar) method can be used for this task, by passing two lists of words: # a `positive` list with the words that should be added and a `negative` list with the words that should be subtracted. 
Using these arguments, the famous example $\vec{king} - \vec{man} + \vec{woman} \approx \vec{queen}$ can be executed as follows: # In[9]: # king - man + woman glove.most_similar(positive=["king", "woman"], negative=["man"]) # Here are a few other interesting analogies: # In[10]: # car - drive + fly glove.most_similar(positive=["car", "fly"], negative=["drive"]) # In[11]: # berlin - germany + australia glove.most_similar(positive=["berlin", "australia"], negative=["germany"]) # In[12]: # england - london + baghdad glove.most_similar(positive=["england", "baghdad"], negative=["london"]) # In[13]: # japan - yen + peso glove.most_similar(positive=["japan", "peso"], negative=["yen"]) # In[14]: # best - good + tall glove.most_similar(positive=["best", "tall"], negative=["good"]) # ## Looking under the hood # # Now that we are more familiar with the [`most_similar()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.most_similar) method, it is time to implement its functionality ourselves. # But first, we need to take a look at the different parts of the [`KeyedVectors`](https://radimrehurek.com/gensim/models/keyedvectors.html) object that we will need. # Obviously, we will need the vectors themselves. They are stored in the `vectors` attribute. # In[15]: glove.vectors.shape # As we can see above, `vectors` is a 2-dimensional matrix with 400,000 rows and 300 columns. # Each row corresponds to a 300-dimensional word embedding. These embeddings are not normalized, but normalized embeddings can be obtained using the [`get_normed_vectors()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.get_normed_vectors) method. # In[16]: normed_vectors = glove.get_normed_vectors() normed_vectors.shape # Now we need to map the words in the vocabulary to rows in the `vectors` matrix, and vice versa. # The [`KeyedVectors`](https://radimrehurek.com/gensim/models/keyedvectors.html) object has the attributes `index_to_key` and `key_to_index` which are a list of words and a dictionary of words to indices, respectively. # In[17]: #glove.index_to_key # In[18]: #glove.key_to_index # ## Word similarity from scratch # # Now we have everything we need to implement a `most_similar_words()` function that takes a word, the vector matrix, the `index_to_key` list, and the `key_to_index` dictionary. This function will return the 10 most similar words to the provided word, along with their similarity scores. 
# In[19]: import numpy as np def most_similar_words(word, vectors, index_to_key, key_to_index, topn=10): # retrieve word_id corresponding to given word word_id = key_to_index[word] # retrieve embedding for given word emb = vectors[word_id] # calculate similarities to all words in out vocabulary similarities = vectors @ emb # get word_ids in ascending order with respect to similarity score ids_ascending = similarities.argsort() # reverse word_ids ids_descending = ids_ascending[::-1] # get boolean array with element corresponding to word_id set to false mask = ids_descending != word_id # obtain new array of indices that doesn't contain word_id # (otherwise the most similar word to the argument would be the argument itself) ids_descending = ids_descending[mask] # get topn word_ids top_ids = ids_descending[:topn] # retrieve topn words with their corresponding similarity score top_words = [(index_to_key[i], similarities[i]) for i in top_ids] # return results return top_words # Now let's try the same example that we used above: the most similar words to "cactus". # In[20]: vectors = glove.get_normed_vectors() index_to_key = glove.index_to_key key_to_index = glove.key_to_index most_similar_words("cactus", vectors, index_to_key, key_to_index) # ## Analogies from scratch # # The `most_similar_words()` function behaves as expected. Now let's implement a function to perform the analogy task. We will give it the very creative name `analogy`. This function will get two lists of words (one for positive words and one for negative words), just like the [`most_similar()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.most_similar) method we discussed above. # In[21]: from numpy.linalg import norm def analogy(positive, negative, vectors, index_to_key, key_to_index, topn=10): # find ids for positive and negative words pos_ids = [key_to_index[w] for w in positive] neg_ids = [key_to_index[w] for w in negative] given_word_ids = pos_ids + neg_ids # get embeddings for positive and negative words pos_emb = vectors[pos_ids].sum(axis=0) neg_emb = vectors[neg_ids].sum(axis=0) # get embedding for analogy emb = pos_emb - neg_emb # normalize embedding emb = emb / norm(emb) # calculate similarities to all words in out vocabulary similarities = vectors @ emb # get word_ids in ascending order with respect to similarity score ids_ascending = similarities.argsort() # reverse word_ids ids_descending = ids_ascending[::-1] # get boolean array with element corresponding to any of given_word_ids set to false given_words_mask = np.isin(ids_descending, given_word_ids, invert=True) # obtain new array of indices that doesn't contain any of the given_word_ids ids_descending = ids_descending[given_words_mask] # get topn word_ids top_ids = ids_descending[:topn] # retrieve topn words with their corresponding similarity score top_words = [(index_to_key[i], similarities[i]) for i in top_ids] # return results return top_words # Let's try this function with the $\vec{king} - \vec{man} + \vec{woman} \approx \vec{queen}$ example we discussed above. # In[22]: positive = ["king", "woman"] negative = ["man"] vectors = glove.get_normed_vectors() index_to_key = glove.index_to_key key_to_index = glove.key_to_index analogy(positive, negative, vectors, index_to_key, key_to_index) # In[ ]:
6,116
6,403
0
chap09-1
chap09-1
9 Implementing Text Classification Using Word Embeddings In the previous chapter we introduced word embeddings, which are realvalued vectors that encode semantic representation of words. We discussed how to learn them, and how they capture semantic information that makes them useful for downstream tasks. In this chapter we show how to use word embeddings that have been pretrained using a variant of the algorithm discussed in the previous chapter. We show how to load them, explore some of their characteristics, and show their application for a text classification task. As usual, the code for this chapter is available in our repository. It is organized into two notebooks: one corresponding to the explorations shown in the first half of this chapter (chap9_embeddings), and a second one in which we modify our previous classifier to use word embeddings (chap9_classification). 9.1 Pre-trained Word Embeddings There are several algorithms for training word embeddings, including the original word2vec algorithm (Mikolov et al., 2013a) (which we discussed in the previous chapter), GloVe (Pennington et al., 2014), and fastText (Bojanowski et al., 2017). They all provide the software for training the embeddings as well as pretrained word embeddings on their respective websites. In general, most open-domain word embeddings are trained on large corpora that cover a variety of topics such as Wikipedia1 and Gigaword.2 Commonly, these embeddings are freely distributed so 1 https://en.wikipedia.org/wiki/Wikipedia:Database_download 2 https://catalog.ldc.upenn.edu/LDC2011T07 133 134 Implementing Text Classification Using Word Embeddings house 0.60137 0.28521 -0.032038 -0.43026 0.74806 0.26223 -0.97361 0.078581 -0.57588 -1.188 -1.8507 -0.24887 0.055549 0.0086155 0.067951 0.40554 -0.073998 -0.21318 0.37167 -0.71791 1.2234 0.35546 -0.41537 -0.21931 -0.39661 -1.7831 -0.41507 0.29533 -0.41254 0.020096 2.7425 -0.9926 -0.71033 -0.46813 0.28265 -0.077639 0.3041 -0.06644 0.3951 -0.70747 -0.38894 0.23158 -0.49508 0.14612 -0.02314 0.56389 -0.86188 -1.0278 0.039922 0.20018 Figure 9.1 GloVe embedding corresponding to the word house, found in the GloVe file glove.6B.50d.txt. We have broken the vector in several lines for display purposes, but this is a single line in the text file. that practitioners can use them in downstream tasks. We will use one such set of vectors in this chapter. Pretrained embeddings are usually distributed as a text file in which each line represents a word vector. The first element in the line is the word itself, and the rest of the elements are the vector components. This is usually referred to as the word2vec format. For example, Figure 9.1 shows the line in the glove.6B.50d.txt file (from the GloVe website) corresponding to the word house. This vector is represented by the word itself, followed by 50 floating-point numbers corresponding to the 50dimensional vector. Note that some embeddings files have a header line composed of two numbers: the number of vectors (i.e., the number of lines in the file), and the vector dimensionality. However, this is not always the case. For example, the original word2vec implementation includes this header line, but the more recent GloVe does not (probably because this information can be inferred from the content of the file). For the examples in the rest of the chapter, we will use the glove.6B.300d.txt embeddings that can be downloaded from the GloVe website.3 This file provides 400,000 word embeddings of 300-dimensions trained on texts
As we can see, the embeddings have been loaded and assigned to the glove variable. Note that we had to specify that this file doesn't contain the header that is usually present in the word2vec format. The glove.vectors attribute contains a 2-dimensional NumPy array with 400,000 rows and 300 columns, each row corresponding to a word embedding.

9.1.1 Word Similarity

Gensim's KeyedVectors class provides a method called most_similar that receives a word, computes its cosine similarity to all other embeddings, and returns the topn most-similar words. By default, topn is set to 10.
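As a quick sketch (reusing the glove object loaded above), querying the most similar words to cactus is a single method call:

# top 10 most-similar words to "cactus", ranked by cosine similarity
glove.most_similar("cactus")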
The example above shows the top 10 most-similar words to the word cactus, when using the 300-dimensional GloVe embeddings trained on Wikipedia and Gigaword. All ten most-similar words are related to cactus in different ways: cacti and cactuses are its plural forms; saguaro, peyote, opuntia, and prickly pear are types of cacti; and mesquite, shrubs, and succulents are other plants from arid climates. You can find more examples of word similarity queries in the Jupyter notebook that accompanies this chapter. Also, as an exercise, try loading a different set of embeddings trained with a different corpus (e.g., Twitter) to see if you obtain different results!

9.1.2 Word Analogies

As we discussed in the previous chapter, the semantic information encoded by word embeddings captures much more than word similarity. To surface this additional information, we will use word analogies represented using additional vector operations. For example, a well-known analogy that highlights gender information is: $\vec{king} - \vec{man} \approx \vec{queen} - \vec{woman}$,5 or, in plain language: "man is to king what woman is to queen." From this, it immediately follows that one can subtract the meaning of man and add the meaning of woman to obtain the definition of female royalty: $\vec{king} - \vec{man} + \vec{woman} \approx \vec{queen}$.

5 A word with an arrow on top refers to the embedding vector corresponding to that word. Please see Section 1.4 for a summary of the notations used in this book.

The same most_similar method we've been using can be repurposed to find word analogies such as the one mentioned above. To this end, two sets of words have to be provided to the most_similar method: a list of positive words that should be added, and a list of negative words that should be subtracted. For example, the code below implements the left-hand side of the previous analogy:
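The following is a sketch of that query (assuming the glove object from above); the positive words are added and the negative word is subtracted, which corresponds to king - man + woman:

# king - man + woman
glove.most_similar(positive=["king", "woman"], negative=["man"])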
Another interesting analogy relation, which shows how the embeddings have captured information about currencies, is included in the Jupyter notebook, where more examples are also discussed.

9.1.3 Looking Under the Hood

Let us understand now how these queries are actually implemented. First, we need to know what components we need. Clearly, we need the embedding vectors themselves. They are stored in the vectors attribute of the KeyedVectors object. As we mentioned previously, this is a 2-dimensional NumPy array, each row corresponding to a word in the vocabulary. These embeddings are not normalized, but normalized embeddings can be obtained using the get_normed_vectors method. We also need to know the mapping between words and the matrix rows. The KeyedVectors object stores this mapping in a list of terms called index_to_key, and a term-to-index dictionary called key_to_index. The accompanying notebook prints only the first 5 terms to save space, but you can inspect the whole vocabulary there.

9.1.4 Word Similarity from Scratch

Implementing the word similarity function ourselves is a good exercise to ensure that we understand how cosine similarity works, and to practice our NumPy skills. We will write a function called most_similar_words that takes a word, the embeddings matrix, the vocabulary in the form of the index_to_key list and key_to_index dictionary, and the number of similar words to return (which defaults to 10).

The implementation of most_similar_words is straightforward. First, we find the word id for the given word, using the key_to_index dictionary. Then we retrieve the row from the vectors matrix that corresponds to that word. The next step is computing the cosine similarity between the word of interest and the rest of the vocabulary. Recall that the cosine similarity is equivalent to a dot product if the vectors are normalized. We use this equivalence by performing a matrix-vector multiplication between the word embedding and the embedding matrix using Python's at operator (denoted as @ in code). This means that we must pass the normalized embeddings as an argument to this function. Next, we need to sort the similarities while preserving the mapping to the words in the vocabulary. We achieve this using the argsort NumPy method, which returns the indices in sorted (ascending) order. Since we need them in descending order, the next step is to reverse this list of indices. Obviously, the most similar word to whichever word we're querying is the word itself, but that is not an interesting result, so we will remove it from the results. We do this by using NumPy's ability to index arrays using booleans: we first create a new array in which the position corresponding to the query word is set to False and every other element is set to True, and we use this boolean array to index the list of ids. Lastly, we create a list of tuples of the form (word, similarity) for the topn words, and return the results.
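Below is a sketch of most_similar_words that follows the description above; the exact code in the chapter notebook may differ slightly, and the variable name normed in the usage example is a placeholder:

import numpy as np

def most_similar_words(word, vectors, index_to_key, key_to_index, topn=10):
    # find the id of the query word and retrieve its embedding
    word_id = key_to_index[word]
    emb = vectors[word_id]
    # cosine similarities as a matrix-vector product;
    # `vectors` must contain normalized embeddings
    similarities = vectors @ emb
    # indices sorted by similarity, in descending order
    ids_descending = similarities.argsort()[::-1]
    # drop the query word itself with a boolean mask
    mask = ids_descending != word_id
    ids_descending = ids_descending[mask]
    # package the topn results as (word, similarity) tuples
    return [(index_to_key[i], similarities[i]) for i in ids_descending[:topn]]

# example usage, with the normalized embeddings discussed above:
# normed = glove.get_normed_vectors()
# most_similar_words('cactus', normed, glove.index_to_key, glove.key_to_index)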
Now we will test our implementation of word similarity using the word cactus. You can compare the results to the ones obtained by KeyedVectors's most_similar method.

9.1.5 Word Analogies from Scratch

The implementation of the word analogy function is not that much different from our most_similar_words function above. The main difference is that now we have two lists of words that we need to combine into a single embedding. We first add the positive words into a single vector, and we do the same for the negative words. Then we subtract the negative vector from the positive one, and normalize the result. The similarity scores are computed the same way as before, but now we need to remove several words from the results (all of the query words), so this time we use NumPy's isin function, which checks for membership in given_word_ids, the array of ids corresponding to the query words. We then package the results the same way we did before, and return them.
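A sketch of this function, consistent with the description above (again, the notebook version may differ in details), is:

import numpy as np

def analogy(positive, negative, vectors, index_to_key, key_to_index, topn=10):
    # ids of all query words, so they can be excluded from the results
    given_word_ids = np.array([key_to_index[w] for w in positive + negative])
    # add the positive embeddings and subtract the negative ones
    emb = sum(vectors[key_to_index[w]] for w in positive)
    emb = emb - sum(vectors[key_to_index[w]] for w in negative)
    # normalize the combined vector
    emb = emb / np.linalg.norm(emb)
    # cosine similarities (vectors are assumed to be normalized)
    similarities = vectors @ emb
    ids_descending = similarities.argsort()[::-1]
    # remove all query words from the results
    mask = np.isin(ids_descending, given_word_ids, invert=True)
    ids_descending = ids_descending[mask]
    return [(index_to_key[i], similarities[i]) for i in ids_descending[:topn]]

# example usage:
# analogy(['king', 'woman'], ['man'], normed, glove.index_to_key, glove.key_to_index)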
Now let's try our implementation with the same $\vec{king} - \vec{man} + \vec{woman}$ query we discussed previously. Please compare the results to the ones obtained by Gensim.

9.2 Text Classification with Pretrained Word Embeddings

In this section we will continue using the AG News classification dataset introduced in previous chapters. Most of the data preparation is the same, up to tokenization. However, we need to remember that the embeddings were trained on a different corpus, so it would be a good idea to estimate how well they cover the words in the AG News dataset. To achieve this, we load the embeddings just like we did previously. Then we count the tokens in our corpus that do not appear in the embeddings vocabulary, as well as the total number of tokens. We use these numbers to print some informative statistics such as the proportion of unknown tokens in the corpus. We also print the top ten unknown tokens. You can use the Jupyter notebook to explore this task further.

Our analysis indicates that only 1.25% of the tokens are not accounted for in the embeddings vocabulary. Further, the most common unknown words seem to be URL fragments. This is encouraging. However, for more robustness, we will introduce a couple of special embeddings that are often needed when dealing with word embeddings. The first one is an embedding used to represent unknown words. A common strategy is to use the average of all the embeddings in the vocabulary for this purpose. The second embedding we will add will be used for padding. Padding is required when we want to train with (mini-)batches, because the lengths of all the examples in a given batch have to match in order for the batch to be efficiently processed in parallel. The padding embedding consists only of zeros, which essentially excludes these virtual tokens from the forward/backward passes. Neither of these embeddings is included in the pretrained GloVe embeddings, but other pretrained embeddings may already include them, so it is a good idea to check whether they are present before adding them. The new embeddings were added at the end of the embedding collection, so their ids are 400,000 and 400,001.

Now we need to generate a list of token ids for each training example. Recall that we decided to ignore tokens that appear less than 10 times, so we need to replace those with [UNK] too, even if they appear in the embedding vocabulary. Next, we create a Dataset object from the padded lists of token ids. This one is even easier than before, since the lists of token ids are ready; all that is required is turning them into tensors.

Lastly, we need to modify the model class to indicate that we now use embedding vectors. To this end, we will add an nn.Embedding layer that stores the embedding vectors for all words in the vocabulary. We will use this object to look up embeddings by their token ids. This layer will be initialized from a tensor containing the pretrained embeddings for the entire vocabulary. Also, the pad_id is specified when creating the embedding layer. When an nn.Embedding layer gets initialized using the from_pretrained method with the other arguments set to their default values, the embeddings are not updated during training. We will keep it that way for this example, but that could be changed by setting the freeze parameter to False. The rest of the layers are the same as in our previous example from Chapter 7, i.e., one intermediate layer and one output layer, with a nonlinearity (ReLU) between them. The only major difference is that now the input size of the intermediate layer is the size of one embedding (e.g., 300) instead of the size of the vocabulary like last time. This is because, as we explain below, the intermediate layer receives an average of the numerical representations of the words in the current text.

The forward function of the Model class changes significantly. This time we are encoding the text as an average of the embeddings of all the words it contains. To compute the denominator of this average, we obtain the length of each text by counting how many of its words are not the virtual padding token. Then we sum all the embeddings and divide by the number of non-padding tokens. Adding all embeddings is safe, because padding embeddings are comprised of zeros. This process leaves us with a single embedding for the whole text, which is then passed to the rest of the layers.

The training and evaluation steps are the same as before. The results of this model on the AG News test partition are produced by the classification notebook (chap9_classification). Comparing these results with the ones obtained by the multilayer perceptron with explicit features in Chapter 7, we observe that on this particular task utilizing embeddings as features does not yield a performance improvement. Notably, this is a small dataset and a rather simplistic task, where the presence of certain words is sufficient to distinguish the category of an article (e.g., the word basketball is highly indicative of the label Sports). Nevertheless, in other tasks where distinctions are more nuanced, or in which there is less likely to be word overlap between texts of interest, word embeddings do provide necessary signal. Additionally, when there are class imbalances, word embeddings can supplement underrepresented classes by bringing in the external knowledge gained during their pretraining.

9.3 Summary

In this chapter we showed how to explore the semantic space encoded by word embeddings through word similarity and analogies, as well as one way to use them for text classification. At this point we have not taken into consideration the order in which the words appear, i.e., we averaged the embeddings for all the words in the text using a bag-of-words representation of text. In subsequent chapters we will explore how to incorporate word order into the learned representations of text.
12,872
12,940
#!/usr/bin/env python
# coding: utf-8

# # Multiclass Text Classification with
# # Feed-forward Neural Networks and Word Embeddings

# First, we will do some initialization.

# In[1]:

import random
import torch
import numpy as np
import pandas as pd
from tqdm.notebook import tqdm

# enable tqdm in pandas
tqdm.pandas()

# set to True to use the gpu (if there is one available)
use_gpu = True

# select device
device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu')
print(f'device: {device.type}')

# random seed
seed = 1234

# set random seed
if seed is not None:
    print(f'random seed: {seed}')
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)

# We will be using the AG's News Topic Classification Dataset.
# It is stored in two CSV files: `train.csv` and `test.csv`, as well as a `classes.txt` that stores the labels of the classes to predict.
#
# First, we will load the training dataset using [pandas](https://pandas.pydata.org/) and take a quick look at the data.

# In[2]:

train_df = pd.read_csv('data/ag_news_csv/train.csv', header=None)
train_df.columns = ['class index', 'title', 'description']
train_df

# The dataset consists of 120,000 examples, each consisting of a class index, a title, and a description.
# The class labels are distributed in a separate file. We will add the labels to the dataset so that we can interpret the data more easily. Note that the label indexes are one-based, so we need to subtract one to retrieve them from the list.

# In[3]:

labels = open('data/ag_news_csv/classes.txt').read().splitlines()
classes = train_df['class index'].map(lambda i: labels[i-1])
train_df.insert(1, 'class', classes)
train_df

# Let's inspect how balanced our examples are by using a bar plot.

# In[4]:

pd.value_counts(train_df['class']).plot.bar()

# The classes are evenly distributed. That's great!
#
# However, the text contains some spurious backslashes in some parts of the text.
# They are meant to represent newlines in the original text.
# An example can be seen below, between the words "dwindling" and "band".

# In[5]:

print(train_df.loc[0, 'description'])

# We will replace the backslashes with spaces on the whole column using the pandas replace() method.

# In[6]:

train_df['text'] = train_df['title'].str.lower() + " " + train_df['description'].str.lower()
train_df['text'] = train_df['text'].str.replace('\\', ' ', regex=False)
train_df

# Now we will tokenize the text column (the lowercased title and description) using NLTK's word_tokenize().
# We will add a new column to our dataframe with the list of tokens.

# In[7]:

from nltk.tokenize import word_tokenize
train_df['tokens'] = train_df['text'].progress_map(word_tokenize)
train_df

# Now we will load the GloVe word embeddings.

# In[8]:

from gensim.models import KeyedVectors
glove = KeyedVectors.load_word2vec_format("glove.6B.300d.txt", no_header=True)
glove.vectors.shape

# The word embeddings have been pretrained on a different corpus, so it would be a good idea to estimate how well our tokenization matches the GloVe vocabulary.
# In[9]:

from collections import Counter

def count_unknown_words(data, vocabulary):
    counter = Counter()
    for row in tqdm(data):
        counter.update(tok for tok in row if tok not in vocabulary)
    return counter

# find out how many times each unknown token occurs in the corpus
c = count_unknown_words(train_df['tokens'], glove.key_to_index)

# find the total number of tokens in the corpus
total_tokens = train_df['tokens'].map(len).sum()

# find some statistics about occurrences of unknown tokens
unk_tokens = sum(c.values())
percent_unk = unk_tokens / total_tokens
distinct_tokens = len(list(c))

print(f'total number of tokens: {total_tokens:,}')
print(f'number of unknown tokens: {unk_tokens:,}')
print(f'number of distinct unknown tokens: {distinct_tokens:,}')
print(f'percentage of unknown tokens: {percent_unk:.2%}')
print('top 10 unknown words:')
for token, n in c.most_common(10):
    print(f'\t{n}\t{token}')

# GloVe embeddings seem to have good coverage on this dataset -- only 1.25% of the tokens in the dataset are unknown, i.e., don't appear in the GloVe vocabulary.
#
# Still, we will need a way to handle these unknown tokens.
# Our approach will be to add a new embedding to GloVe that will be used to represent them.
# This new embedding will be initialized as the average of all the GloVe embeddings.
#
# We will also add another embedding, this one initialized to zeros, that will be used to pad the sequences of tokens so that they all have the same length. This will be useful when we train with mini-batches.

# In[10]:

# string values corresponding to the new embeddings
unk_tok = '[UNK]'
pad_tok = '[PAD]'

# initialize the new embedding values
unk_emb = glove.vectors.mean(axis=0)
pad_emb = np.zeros(300)

# add new embeddings to glove
glove.add_vectors([unk_tok, pad_tok], [unk_emb, pad_emb])

# get token ids corresponding to the new embeddings
unk_id = glove.key_to_index[unk_tok]
pad_id = glove.key_to_index[pad_tok]

unk_id, pad_id

# In[11]:

from sklearn.model_selection import train_test_split

train_df, dev_df = train_test_split(train_df, train_size=0.8)
train_df.reset_index(inplace=True)
dev_df.reset_index(inplace=True)

# We will now build a vocabulary of frequent tokens, and then add a new column to our dataframe that will contain the padded sequences of token ids.

# In[12]:

threshold = 10
tokens = train_df['tokens'].explode().value_counts()
vocabulary = set(tokens[tokens > threshold].index.tolist())
print(f'vocabulary size: {len(vocabulary):,}')

# In[13]:

# find the length of the longest list of tokens
max_tokens = train_df['tokens'].map(len).max()

# return unk_id for infrequent tokens too
def get_id(tok):
    if tok in vocabulary:
        return glove.key_to_index.get(tok, unk_id)
    else:
        return unk_id

# function that gets a list of tokens and returns a list of token ids,
# with padding added accordingly
def token_ids(tokens):
    tok_ids = [get_id(tok) for tok in tokens]
    pad_len = max_tokens - len(tok_ids)
    return tok_ids + [pad_id] * pad_len

# add new column to the dataframe
train_df['token ids'] = train_df['tokens'].progress_map(token_ids)
train_df

# In[14]:

max_tokens = dev_df['tokens'].map(len).max()
dev_df['token ids'] = dev_df['tokens'].progress_map(token_ids)
dev_df

# Now we will get a numpy 2-dimensional array corresponding to the token ids,
# and a 1-dimensional array with the gold classes. Note that the classes are one-based (i.e., they start at one),
# but we need them to be zero-based, so we need to subtract one from this array.
# In[15]:

from torch.utils.data import Dataset

class MyDataset(Dataset):
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __len__(self):
        return len(self.y)

    def __getitem__(self, index):
        x = torch.tensor(self.x[index])
        y = torch.tensor(self.y[index])
        return x, y

# Next, we construct our PyTorch model, which is a feed-forward neural network with two layers:

# In[16]:

from torch import nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self, vectors, pad_id, hidden_dim, output_dim, dropout):
        super().__init__()
        # embeddings must be a tensor
        if not torch.is_tensor(vectors):
            vectors = torch.tensor(vectors)
        # keep padding id
        self.padding_idx = pad_id
        # embedding layer
        self.embs = nn.Embedding.from_pretrained(vectors, padding_idx=pad_id)
        # feedforward layers
        self.layers = nn.Sequential(
            nn.Dropout(dropout),
            nn.Linear(vectors.shape[1], hidden_dim),
            nn.ReLU(),
            nn.Dropout(dropout),
            nn.Linear(hidden_dim, output_dim),
        )

    def forward(self, x):
        # get boolean array with padding elements set to false
        not_padding = torch.isin(x, self.padding_idx, invert=True)
        # get lengths of examples (excluding padding)
        lengths = torch.count_nonzero(not_padding, axis=1)
        # get embeddings
        x = self.embs(x)
        # calculate means
        x = x.sum(dim=1) / lengths.unsqueeze(dim=1)
        # pass to rest of the model
        output = self.layers(x)
        # calculate softmax if we're not in training mode
        #if not self.training:
        #    output = F.softmax(output, dim=1)
        return output

# Next, we implement the training procedure. We compute the loss and accuracy on the development partition after each epoch.

# In[17]:

from torch import optim
from torch.utils.data import DataLoader
from sklearn.metrics import accuracy_score

# hyperparameters
lr = 1e-3
weight_decay = 0
batch_size = 500
shuffle = True
n_epochs = 5
hidden_dim = 50
output_dim = len(labels)
dropout = 0.1
vectors = glove.vectors

# initialize the model, loss function, optimizer, and data-loader
model = Model(vectors, pad_id, hidden_dim, output_dim, dropout).to(device)
loss_func = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=lr, weight_decay=weight_decay)

train_ds = MyDataset(train_df['token ids'], train_df['class index'] - 1)
train_dl = DataLoader(train_ds, batch_size=batch_size, shuffle=shuffle)

dev_ds = MyDataset(dev_df['token ids'], dev_df['class index'] - 1)
dev_dl = DataLoader(dev_ds, batch_size=batch_size, shuffle=shuffle)

train_loss = []
train_acc = []
dev_loss = []
dev_acc = []

# train the model
for epoch in range(n_epochs):
    losses = []
    gold = []
    pred = []
    model.train()
    for X, y_true in tqdm(train_dl, desc=f'epoch {epoch+1} (train)'):
        # clear gradients
        model.zero_grad()
        # send batch to right device
        X = X.to(device)
        y_true = y_true.to(device)
        # predict label scores
        y_pred = model(X)
        # compute loss
        loss = loss_func(y_pred, y_true)
        # accumulate for plotting
        losses.append(loss.detach().cpu().item())
        gold.append(y_true.detach().cpu().numpy())
        pred.append(np.argmax(y_pred.detach().cpu().numpy(), axis=1))
        # backpropagate
        loss.backward()
        # optimize model parameters
        optimizer.step()
    train_loss.append(np.mean(losses))
    train_acc.append(accuracy_score(np.concatenate(gold), np.concatenate(pred)))

    model.eval()
    with torch.no_grad():
        losses = []
        gold = []
        pred = []
        for X, y_true in tqdm(dev_dl, desc=f'epoch {epoch+1} (dev)'):
            X = X.to(device)
            y_true = y_true.to(device)
            y_pred = model(X)
            loss = loss_func(y_pred, y_true)
            losses.append(loss.cpu().item())
            gold.append(y_true.cpu().numpy())
            pred.append(np.argmax(y_pred.cpu().numpy(), axis=1))
        dev_loss.append(np.mean(losses))
        dev_acc.append(accuracy_score(np.concatenate(gold), np.concatenate(pred)))

# Let's plot the loss and accuracy on dev:

# In[18]:

import matplotlib.pyplot as plt
get_ipython().run_line_magic('matplotlib', 'inline')

x = np.arange(n_epochs) + 1

plt.plot(x, train_loss)
plt.plot(x, dev_loss)
plt.legend(['train', 'dev'])
plt.xlabel('epoch')
plt.ylabel('loss')
plt.grid(True)

# In[19]:

plt.plot(x, train_acc)
plt.plot(x, dev_acc)
plt.legend(['train', 'dev'])
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.grid(True)

# Next, we evaluate on the testing partition:

# In[20]:

# repeat all preprocessing done above, this time on the test set
test_df = pd.read_csv('data/ag_news_csv/test.csv', header=None)
test_df.columns = ['class index', 'title', 'description']
test_df['text'] = test_df['title'].str.lower() + " " + test_df['description'].str.lower()
test_df['text'] = test_df['text'].str.replace('\\', ' ', regex=False)
test_df['tokens'] = test_df['text'].progress_map(word_tokenize)
# pad the test examples to the length of the longest test sequence
max_tokens = test_df['tokens'].map(len).max()
test_df['token ids'] = test_df['tokens'].progress_map(token_ids)

# In[21]:

from sklearn.metrics import classification_report

# set model to evaluation mode
model.eval()

dataset = MyDataset(test_df['token ids'], test_df['class index'] - 1)
data_loader = DataLoader(dataset, batch_size=batch_size)
y_pred = []

# don't store gradients
with torch.no_grad():
    for X, _ in tqdm(data_loader):
        X = X.to(device)
        # predict one class per example
        y = torch.argmax(model(X), dim=1)
        # convert tensor to numpy array (sending it back to the cpu if needed)
        y_pred.append(y.cpu().numpy())

# print results
print(classification_report(dataset.y, np.concatenate(y_pred), target_names=labels))
6,683
6,709
1
chap09-2
chap09-2
12,705
12,869
5,410
5,538
2
chap09-3
chap09-3
9 Implementing Text Classification Using Word Embeddings In the previous chapter we introduced word embeddings, which are realvalued vectors that encode semantic representation of words. We discussed how to learn them, and how they capture semantic information that makes them useful for downstream tasks. In this chapter we show how to use word embeddings that have been pretrained using a variant of the algorithm discussed in the previous chapter. We show how to load them, explore some of their characteristics, and show their application for a text classification task. As usual, the code for this chapter is available in our repository. It is organized into two notebooks: one corresponding to the explorations shown in the first half of this chapter (chap9_embeddings), and a second one in which we modify our previous classifier to use word embeddings (chap9_classification). 9.1 Pre-trained Word Embeddings There are several algorithms for training word embeddings, including the original word2vec algorithm (Mikolov et al., 2013a) (which we discussed in the previous chapter), GloVe (Pennington et al., 2014), and fastText (Bojanowski et al., 2017). They all provide the software for training the embeddings as well as pretrained word embeddings on their respective websites. In general, most open-domain word embeddings are trained on large corpora that cover a variety of topics such as Wikipedia1 and Gigaword.2 Commonly, these embeddings are freely distributed so 1 https://en.wikipedia.org/wiki/Wikipedia:Database_download 2 https://catalog.ldc.upenn.edu/LDC2011T07 133 134 Implementing Text Classification Using Word Embeddings house 0.60137 0.28521 -0.032038 -0.43026 0.74806 0.26223 -0.97361 0.078581 -0.57588 -1.188 -1.8507 -0.24887 0.055549 0.0086155 0.067951 0.40554 -0.073998 -0.21318 0.37167 -0.71791 1.2234 0.35546 -0.41537 -0.21931 -0.39661 -1.7831 -0.41507 0.29533 -0.41254 0.020096 2.7425 -0.9926 -0.71033 -0.46813 0.28265 -0.077639 0.3041 -0.06644 0.3951 -0.70747 -0.38894 0.23158 -0.49508 0.14612 -0.02314 0.56389 -0.86188 -1.0278 0.039922 0.20018 Figure 9.1 GloVe embedding corresponding to the word house, found in the GloVe file glove.6B.50d.txt. We have broken the vector in several lines for display purposes, but this is a single line in the text file. that practitioners can use them in downstream tasks. We will use one such set of vectors in this chapter. Pretrained embeddings are usually distributed as a text file in which each line represents a word vector. The first element in the line is the word itself, and the rest of the elements are the vector components. This is usually referred to as the word2vec format. For example, Figure 9.1 shows the line in the glove.6B.50d.txt file (from the GloVe website) corresponding to the word house. This vector is represented by the word itself, followed by 50 floating-point numbers corresponding to the 50dimensional vector. Note that some embeddings files have a header line composed of two numbers: the number of vectors (i.e., the number of lines in the file), and the vector dimensionality. However, this is not always the case. For example, the original word2vec implementation includes this header line, but the more recent GloVe does not (probably because this information can be inferred from the content of the file). For the examples in the rest of the chapter, we will use the glove.6B.300d.txt embeddings that can be downloaded from the GloVe website.3 This file provides 400,000 word embeddings of 300-dimensions trained on texts
from Wikipedia 2014 and Gigaword 5. We will begin our exploration of word embeddings using Gensim,4 a Python library that provides excellent support for loading and using word embeddings, among other more advanced features. As we can see, the embeddings have been loaded and assigned to the glove variable. Note that we had to specify that this file doesn’t contain the header that is usually present in the word2vec format. The 3 https://nlp.stanford.edu/projects/glove/ 4 https://radimrehurek.com/gensim/ 9.1 Pre-trained Word Embeddings 135 glove.vectors attribute contains a 2-dimensional NumPy array with 400,000 rows and 300 columns, each row corresponding to a word embedding. 9.1.1 Word Similarity Gensim’s KeyedVectors class provides a method called most_similar that receives a word and computes its cosine similarity to all other embeddings, and returns the topn most-similar words. By default, topn is set to 10. The example above shows the top 10 most-similar words to the word cactus, when using the 300-dimension GloVe embeddings trained on Wikipedia and Gigaword. All ten most-similar words are related to cactus in different ways: cacti and cactuses are its plural forms; saguaro, peyote, opuntia, and prickly pear are types of cacti; and mesquite, shrubs, and succulents are other plants from arid climates. You can find more examples of word similarity queries in the Jupyter notebook that accompanies this chapter. Also, as an exercise, try loading a different set of embeddings trained with a different corpus (e.g., Twitter) to see if you obtain different results! 9.1.2 Word Analogies As we discussed in the previous chapter, the semantic information en- coded by word embeddings captures much more than word similar- ity. To surface this additional information, we will use word analogies represented using additional vector operations. For example, a well- ⃗
The example above shows the top 10 most-similar words to the word cactus, when using the 300-dimensional GloVe embeddings trained on Wikipedia and Gigaword. All ten most-similar words are related to cactus in different ways: cacti and cactuses are its plural forms; saguaro, peyote, opuntia, and prickly pear are types of cacti; and mesquite, shrubs, and succulents are other plants from arid climates. You can find more examples of word similarity queries in the Jupyter notebook that accompanies this chapter. Also, as an exercise, try loading a different set of embeddings trained with a different corpus (e.g., Twitter) to see if you obtain different results!

9.1.2 Word Analogies

As we discussed in the previous chapter, the semantic information encoded by word embeddings captures much more than word similarity. To surface this additional information, we will use word analogies represented using additional vector operations. For example, a well-known analogy that highlights gender information is: $\vec{king} - \vec{man} \approx \vec{queen} - \vec{woman}$, or, in plain language: "man is to king what woman is to queen." (A word with an arrow on top refers to the embedding vector corresponding to that word; please see Section 1.4 for a summary of the notations used in this book.) From this, it immediately follows that one can subtract the meaning of man and add the meaning of woman to obtain the definition of female royalty: $\vec{king} - \vec{man} + \vec{woman} \approx \vec{queen}$.

The same most_similar method we have been using can be repurposed to find word analogies such as the one mentioned above. To this end, two sets of words have to be provided to the most_similar method: a list of positive words that should be added, and a list of negative words that should be subtracted. For example, the first query below implements the left-hand side of the previous analogy; the second shows another interesting analogy relation, which indicates that the embeddings have captured information about currencies. More examples are discussed in the Jupyter notebook.
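Both queries are taken from the accompanying notebook:

# king - man + woman
glove.most_similar(positive=["king", "woman"], negative=["man"])

# japan - yen + peso
glove.most_similar(positive=["japan", "peso"], negative=["yen"])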
9.1.3 Looking Under the Hood

Let us now understand how these queries are actually implemented. First, we need to know which components we need. Clearly, we need the embedding vectors themselves. They are stored in the vectors attribute of the KeyedVectors object. As we mentioned previously, this is a two-dimensional NumPy array, each row corresponding to a word in the vocabulary. These embeddings are not normalized, but normalized embeddings can be obtained using the get_normed_vectors method. We also need to know the mapping between words and the matrix rows. The KeyedVectors object stores this mapping in a list of terms called index_to_key, and a term-to-index dictionary called key_to_index. In the Jupyter notebook we show only the first five terms to save space, but you can inspect the whole vocabulary there.

9.1.4 Word Similarity from Scratch

Implementing the word similarity function ourselves is a good exercise to ensure that we understand how cosine similarity works, and to practice our NumPy skills. We will write a function called most_similar_words that takes a word, the embeddings matrix, the vocabulary in the form of the index_to_key list and key_to_index dictionary, and the number of similar words to return (which defaults to 10).

The implementation of most_similar_words is straightforward. First, we find the word id for the given word, using the key_to_index dictionary. Then we retrieve the row of the vectors matrix that corresponds to that word. The next step is computing the cosine similarity between the word of interest and the rest of the vocabulary. Recall that the cosine similarity is equivalent to a dot product when the vectors are normalized. We use this equivalence by performing a matrix-vector multiplication between the embedding matrix and the word embedding using Python's matrix multiplication operator (written as @ in code). This means that we must pass the normalized embeddings as an argument to this function. Next, we need to sort the similarities while preserving the mapping to the words in the vocabulary. We achieve this using the argsort NumPy method, which returns the indices in sorted (ascending) order. Since we need them in descending order, the next step is to reverse this list of indices. Obviously, the most similar word to whichever word we are querying is the word itself, but that is not an interesting result, so we remove it from the results. We do this by using NumPy's ability to index arrays with booleans: we first create a new array in which the position corresponding to the query word is set to False and every other element is set to True, and we use this boolean array to index the list of ids. Lastly, we create a list of tuples of the form (word, similarity) for the topn words, and return the results.
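A condensed version of this function, following the accompanying notebook, is shown below; it expects the normalized vectors returned by get_normed_vectors:

import numpy as np

def most_similar_words(word, vectors, index_to_key, key_to_index, topn=10):
    # retrieve the id and the (normalized) embedding of the query word
    word_id = key_to_index[word]
    emb = vectors[word_id]
    # cosine similarities to all words (the vectors must be normalized)
    similarities = vectors @ emb
    # sort word ids by similarity, in descending order
    ids_descending = similarities.argsort()[::-1]
    # drop the query word itself from the results
    ids_descending = ids_descending[ids_descending != word_id]
    # keep the topn ids and pair each word with its similarity score
    top_ids = ids_descending[:topn]
    return [(index_to_key[i], similarities[i]) for i in top_ids]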
Now we can test our implementation of word similarity using the word cactus. You can compare the results to the ones obtained by KeyedVectors's most_similar method.

9.1.5 Word Analogies from Scratch

The implementation of the word analogy function is not that much different from our most_similar_words function above. The main difference is that we now have two lists of words that we need to combine into a single embedding. We first add the positive words into a single vector, and we do the same for the negative words. Then we subtract the negative vector from the positive one, and normalize the result. The similarity scores are computed the same way as before, but now we need to remove several words from the results, so this time we use NumPy's isin function, which checks, for each candidate id, whether it is one of the given_word_ids. We then package the results the same way we did before, and return them. Now let's try our implementation with the same $\vec{king} - \vec{man} + \vec{woman}$ query we discussed previously. Please compare the results to the ones obtained by Gensim.

9.2 Text Classification with Pretrained Word Embeddings

In this section we continue using the AG News classification dataset introduced in previous chapters. Most of the data preparation is the same, up to tokenization. However, we need to remember that the embeddings were trained on a different corpus, so it is a good idea to estimate how well they cover the words in the AG News dataset. To achieve this, we load the embeddings just as we did previously. Then we count the tokens in our corpus that do not appear in the embeddings vocabulary, as well as the total number of tokens. We use these numbers to print some informative statistics, such as the proportion of unknown tokens in the corpus. We also print the top ten unknown tokens. You can use the Jupyter notebook to explore this task further. Our analysis indicates that only 1.25% of the tokens are not accounted for in the embeddings vocabulary. Further, the most common unknown words seem to be URL fragments. This is encouraging.

However, for more robustness, we will introduce a couple of special embeddings that are often needed when dealing with word embeddings. The first is an embedding used to represent unknown words. A common strategy is to use the average of all the embeddings in the vocabulary for this purpose. The second embedding we add is used for padding. Padding is required when we want to train with (mini-)batches, because the lengths of all the examples in a given batch have to match in order for the batch to be processed efficiently in parallel. The padding embedding consists only of zeros, which essentially excludes these virtual tokens from the forward/backward passes. Neither of these embeddings is included in the pretrained GloVe embeddings, but other pretrained embeddings may already include them, so it is a good idea to check whether they are present before adding them. Since the new embeddings are appended at the end of the embedding collection, their ids are 400,000 and 400,001.

Now we need to generate a list of token ids for each training example. Recall that we decided to ignore tokens that appear fewer than 10 times, so we need to replace those with [UNK] too, even if they appear in the embedding vocabulary.
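The chap9_classification notebook is not reproduced here, so the following is only a minimal sketch of how the two special embeddings and the token-to-id conversion could be implemented. The names vocabulary (the set of tokens appearing at least 10 times), tokens_to_ids, and pad_ids are illustrative and not taken from the book's code.

import numpy as np

# Extend the pretrained matrix with the two special embeddings
# (assuming they are not already part of the GloVe vocabulary).
vectors = glove.vectors
unk_vector = vectors.mean(axis=0, keepdims=True)   # [UNK]: average of all embeddings
pad_vector = np.zeros((1, vectors.shape[1]))       # [PAD]: all zeros, contributes nothing
vectors = np.vstack([vectors, unk_vector, pad_vector])
unk_id = vectors.shape[0] - 2                      # 400,000
pad_id = vectors.shape[0] - 1                      # 400,001

def tokens_to_ids(tokens, key_to_index, vocabulary):
    # map tokens to embedding ids; tokens outside the embedding vocabulary
    # or below the frequency threshold are mapped to [UNK]
    return [key_to_index[t] if t in key_to_index and t in vocabulary else unk_id
            for t in tokens]

def pad_ids(ids, max_len):
    # pad a list of token ids to max_len with the [PAD] id
    return ids + [pad_id] * (max_len - len(ids))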
Next, we create a Dataset object from the padded lists of token ids. This one is even easier than before, since the lists of token ids are ready: all that is required is turning them into tensors.

Lastly, we need to modify the model class to indicate that we now use embedding vectors. To this end, we add an nn.Embedding layer that stores the embedding vectors for all words in the vocabulary. We will use this object to look up embeddings by their token ids. This layer is initialized from a tensor containing the pretrained embeddings for the entire vocabulary. Also, the pad_id is specified when creating the embedding layer. When an nn.Embedding layer is initialized using the from_pretrained method with the other arguments left at their default values, the embeddings are not updated during training. We will keep it that way for this example, but that could be changed by setting the freeze parameter to False. The rest of the layers are the same as in our previous example from Chapter 7, i.e., one intermediate layer and one output layer, with a nonlinearity (ReLU) between them. The only major difference is that the input size of the intermediate layer is now the size of one embedding (e.g., 300) instead of the size of the vocabulary. This is because, as we explain below, the intermediate layer receives an average of the numerical representations of the words in the current text.

The forward function of the Model class changes significantly. This time we encode the text as the average of the embeddings of all the words it contains. To compute the denominator of this average, we obtain the length of each text by counting how many of its words are not the virtual padding token. Then we sum all the embeddings and divide by the number of non-padding tokens. Adding all embeddings is safe, because the padding embeddings are composed of zeros. This process leaves us with a single embedding for the whole text, which is then passed to the rest of the layers.
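Again as a sketch rather than the book's exact implementation: the model below assumes that embeddings is a FloatTensor holding the extended embedding matrix, and the names hidden_dim and num_classes are illustrative.

import torch
from torch import nn

class Model(nn.Module):
    def __init__(self, embeddings, pad_id, hidden_dim, num_classes):
        super().__init__()
        # frozen pretrained embeddings; rows are looked up by token id
        self.embedding = nn.Embedding.from_pretrained(embeddings, padding_idx=pad_id)
        self.pad_id = pad_id
        emb_dim = embeddings.shape[1]
        self.layers = nn.Sequential(
            nn.Linear(emb_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, x):
        # x: (batch_size, max_len) tensor of token ids, padded with pad_id
        embs = self.embedding(x)                               # (batch, max_len, emb_dim)
        lengths = (x != self.pad_id).sum(dim=1, keepdim=True)  # non-padding tokens per text
        avg = embs.sum(dim=1) / lengths                        # padding rows are all zeros
        return self.layers(avg)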
The training and evaluation steps are the same as before. The results of this model on the AG News test partition can be inspected by running the chap9_classification notebook. Comparing these results with the ones obtained by the multilayer perceptron with explicit features in Chapter 7, we observe that on this particular task utilizing embeddings as features does not yield a performance improvement. Notably, this is a small dataset and a rather simplistic task, where the presence of certain words is sufficient to distinguish the category of an article (e.g., the word basketball is highly indicative of the label Sports). Nevertheless, in other tasks where distinctions are more nuanced, or in which there is less likely to be word overlap between texts of interest, word embeddings do provide necessary signal. Additionally, when there are class imbalances, word embeddings can supplement underrepresented classes by bringing in the external knowledge gained during their pretraining.

9.3 Summary

In this chapter we showed how to explore the semantic space encoded by word embeddings through word similarity and analogy queries, as well as one way to use them for text classification. At this point we have not taken into consideration the order in which words appear, i.e., we averaged the embeddings of all the words in a text using a bag-of-words representation. In subsequent chapters we will explore how to incorporate word order into the learned representations of text.
#!/usr/bin/env python
# coding: utf-8

# # Using Pre-trained Word Embeddings
#
# In this notebook we will show some operations on pre-trained word embeddings to gain an intuition about them.
#
# We will be using the pre-trained GloVe embeddings that can be found in the [official website](https://nlp.stanford.edu/projects/glove/). In particular, we will use the file `glove.6B.300d.txt` contained in this [zip file](https://nlp.stanford.edu/data/glove.6B.zip).
#
# We will first load the GloVe embeddings using [Gensim](https://radimrehurek.com/gensim/). Specifically, we will use [`KeyedVectors`](https://radimrehurek.com/gensim/models/keyedvectors.html)'s [`load_word2vec_format()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.load_word2vec_format) classmethod, which supports the original word2vec file format.
# However, there is a difference in the file formats used by GloVe and word2vec, which is a header used by word2vec to indicate the number of embeddings and dimensions stored in the file. The file that stores the GloVe embeddings doesn't have this header, so we will have to address that when loading the embeddings.
#
# Loading the embeddings may take a little bit, so hang in there!

# In[2]:

from gensim.models import KeyedVectors

fname = "glove.6B.300d.txt"
glove = KeyedVectors.load_word2vec_format(fname, no_header=True)
glove.vectors.shape

# ## Word similarity
#
# One attribute of word embeddings that makes them useful is the ability to compare them using cosine similarity to find how similar they are. [`KeyedVectors`](https://radimrehurek.com/gensim/models/keyedvectors.html) objects provide a method called [`most_similar()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.most_similar) that we can use to find the closest words to a particular word of interest. By default, [`most_similar()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.most_similar) returns the 10 most similar words, but this can be changed using the `topn` parameter.
#
# Below we test this function using a few different words.

# In[3]:

# common noun
glove.most_similar("cactus")

# In[4]:

# common noun
glove.most_similar("cake")

# In[5]:

# adjective
glove.most_similar("angry")

# In[6]:

# adverb
glove.most_similar("quickly")

# In[7]:

# preposition
glove.most_similar("between")

# In[8]:

# determiner
glove.most_similar("the")

# ## Word analogies
#
# Another characteristic of word embeddings is their ability to solve analogy problems.
# The same [`most_similar()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.most_similar) method can be used for this task, by passing two lists of words:
# a `positive` list with the words that should be added and a `negative` list with the words that should be subtracted.
# Using these arguments, the famous example $\vec{king} - \vec{man} + \vec{woman} \approx \vec{queen}$ can be executed as follows:

# In[9]:

# king - man + woman
glove.most_similar(positive=["king", "woman"], negative=["man"])

# Here are a few other interesting analogies:

# In[10]:

# car - drive + fly
glove.most_similar(positive=["car", "fly"], negative=["drive"])

# In[11]:

# berlin - germany + australia
glove.most_similar(positive=["berlin", "australia"], negative=["germany"])

# In[12]:

# england - london + baghdad
glove.most_similar(positive=["england", "baghdad"], negative=["london"])

# In[13]:

# japan - yen + peso
glove.most_similar(positive=["japan", "peso"], negative=["yen"])

# In[14]:

# best - good + tall
glove.most_similar(positive=["best", "tall"], negative=["good"])

# ## Looking under the hood
#
# Now that we are more familiar with the [`most_similar()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.most_similar) method, it is time to implement its functionality ourselves.
# But first, we need to take a look at the different parts of the [`KeyedVectors`](https://radimrehurek.com/gensim/models/keyedvectors.html) object that we will need.
# Obviously, we will need the vectors themselves. They are stored in the `vectors` attribute.

# In[15]:

glove.vectors.shape

# As we can see above, `vectors` is a 2-dimensional matrix with 400,000 rows and 300 columns.
# Each row corresponds to a 300-dimensional word embedding. These embeddings are not normalized, but normalized embeddings can be obtained using the [`get_normed_vectors()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.get_normed_vectors) method.

# In[16]:

normed_vectors = glove.get_normed_vectors()
normed_vectors.shape

# Now we need to map the words in the vocabulary to rows in the `vectors` matrix, and vice versa.
# The [`KeyedVectors`](https://radimrehurek.com/gensim/models/keyedvectors.html) object has the attributes `index_to_key` and `key_to_index` which are a list of words and a dictionary of words to indices, respectively.

# In[17]:

#glove.index_to_key

# In[18]:

#glove.key_to_index

# ## Word similarity from scratch
#
# Now we have everything we need to implement a `most_similar_words()` function that takes a word, the vector matrix, the `index_to_key` list, and the `key_to_index` dictionary. This function will return the 10 most similar words to the provided word, along with their similarity scores.

# In[19]:

import numpy as np

def most_similar_words(word, vectors, index_to_key, key_to_index, topn=10):
    # retrieve word_id corresponding to given word
    word_id = key_to_index[word]
    # retrieve embedding for given word
    emb = vectors[word_id]
    # calculate similarities to all words in our vocabulary
    similarities = vectors @ emb
    # get word_ids in ascending order with respect to similarity score
    ids_ascending = similarities.argsort()
    # reverse word_ids
    ids_descending = ids_ascending[::-1]
    # get boolean array with element corresponding to word_id set to false
    mask = ids_descending != word_id
    # obtain new array of indices that doesn't contain word_id
    # (otherwise the most similar word to the argument would be the argument itself)
    ids_descending = ids_descending[mask]
    # get topn word_ids
    top_ids = ids_descending[:topn]
    # retrieve topn words with their corresponding similarity score
    top_words = [(index_to_key[i], similarities[i]) for i in top_ids]
    # return results
    return top_words

# Now let's try the same example that we used above: the most similar words to "cactus".

# In[20]:

vectors = glove.get_normed_vectors()
index_to_key = glove.index_to_key
key_to_index = glove.key_to_index
most_similar_words("cactus", vectors, index_to_key, key_to_index)

# ## Analogies from scratch
#
# The `most_similar_words()` function behaves as expected. Now let's implement a function to perform the analogy task. We will give it the very creative name `analogy`. This function will get two lists of words (one for positive words and one for negative words), just like the [`most_similar()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.most_similar) method we discussed above.

# In[21]:

from numpy.linalg import norm

def analogy(positive, negative, vectors, index_to_key, key_to_index, topn=10):
    # find ids for positive and negative words
    pos_ids = [key_to_index[w] for w in positive]
    neg_ids = [key_to_index[w] for w in negative]
    given_word_ids = pos_ids + neg_ids
    # get embeddings for positive and negative words
    pos_emb = vectors[pos_ids].sum(axis=0)
    neg_emb = vectors[neg_ids].sum(axis=0)
    # get embedding for analogy
    emb = pos_emb - neg_emb
    # normalize embedding
    emb = emb / norm(emb)
    # calculate similarities to all words in our vocabulary
    similarities = vectors @ emb
    # get word_ids in ascending order with respect to similarity score
    ids_ascending = similarities.argsort()
    # reverse word_ids
    ids_descending = ids_ascending[::-1]
    # get boolean array with element corresponding to any of given_word_ids set to false
    given_words_mask = np.isin(ids_descending, given_word_ids, invert=True)
    # obtain new array of indices that doesn't contain any of the given_word_ids
    ids_descending = ids_descending[given_words_mask]
    # get topn word_ids
    top_ids = ids_descending[:topn]
    # retrieve topn words with their corresponding similarity score
    top_words = [(index_to_key[i], similarities[i]) for i in top_ids]
    # return results
    return top_words

# Let's try this function with the $\vec{king} - \vec{man} + \vec{woman} \approx \vec{queen}$ example we discussed above.

# In[22]:

positive = ["king", "woman"]
negative = ["man"]
vectors = glove.get_normed_vectors()
index_to_key = glove.index_to_key
key_to_index = glove.key_to_index
analogy(positive, negative, vectors, index_to_key, key_to_index)

# In[ ]:
#!/usr/bin/env python # coding: utf-8 # # Using Pre-trained Word Embeddings # # In this notebook we will show some operations on pre-trained word embeddings to gain an intuition about them. # # We will be using the pre-trained GloVe embeddings that can be found in the [official website](https://nlp.stanford.edu/projects/glove/). In particular, we will use the file `glove.6B.300d.txt` contained in this [zip file](https://nlp.stanford.edu/data/glove.6B.zip). # # We will first load the GloVe embeddings using [Gensim](https://radimrehurek.com/gensim/). Specifically, we will use [`KeyedVectors`](https://radimrehurek.com/gensim/models/keyedvectors.html)'s [`load_word2vec_format()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.load_word2vec_format) classmethod, which supports the original word2vec file format. # However, there is a difference in the file formats used by GloVe and word2vec, which is a header used by word2vec to indicate the number of embeddings and dimensions stored in the file. The file that stores the GloVe embeddings doesn't have this header, so we will have to address that when loading the embeddings. # # Loading the embeddings may take a little bit, so hang in there! # In[2]: from gensim.models import KeyedVectors fname = "glove.6B.300d.txt" glove = KeyedVectors.load_word2vec_format(fname, no_header=True) glove.vectors.shape # ## Word similarity # # One attribute of word embeddings that makes them useful is the ability to compare them using cosine similarity to find how similar they are. [`KeyedVectors`](https://radimrehurek.com/gensim/models/keyedvectors.html) objects provide a method called [`most_similar()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.most_similar) that we can use to find the closest words to a particular word of interest. By default, [`most_similar()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.most_similar) returns the 10 most similar words, but this can be changed using the `topn` parameter. # # Below we test this function using a few different words. # In[3]: # common noun glove.most_similar("cactus") # In[4]: # common noun glove.most_similar("cake") # In[5]: # adjective glove.most_similar("angry") # In[6]: # adverb glove.most_similar("quickly") # In[7]: # preposition glove.most_similar("between") # In[8]: # determiner glove.most_similar("the") # ## Word analogies # # Another characteristic of word embeddings is their ability to solve analogy problems. # The same [`most_similar()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.most_similar) method can be used for this task, by passing two lists of words: # a `positive` list with the words that should be added and a `negative` list with the words that should be subtracted. 
Using these arguments, the famous example $\vec{king} - \vec{man} + \vec{woman} \approx \vec{queen}$ can be executed as follows: # In[9]: # king - man + woman glove.most_similar(positive=["king", "woman"], negative=["man"]) # Here are a few other interesting analogies: # In[10]: # car - drive + fly glove.most_similar(positive=["car", "fly"], negative=["drive"]) # In[11]: # berlin - germany + australia glove.most_similar(positive=["berlin", "australia"], negative=["germany"]) # In[12]: # england - london + baghdad glove.most_similar(positive=["england", "baghdad"], negative=["london"]) # In[13]: # japan - yen + peso glove.most_similar(positive=["japan", "peso"], negative=["yen"]) # In[14]: # best - good + tall glove.most_similar(positive=["best", "tall"], negative=["good"]) # ## Looking under the hood # # Now that we are more familiar with the [`most_similar()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.most_similar) method, it is time to implement its functionality ourselves. # But first, we need to take a look at the different parts of the [`KeyedVectors`](https://radimrehurek.com/gensim/models/keyedvectors.html) object that we will need. # Obviously, we will need the vectors themselves. They are stored in the `vectors` attribute. # In[15]: glove.vectors.shape # As we can see above, `vectors` is a 2-dimensional matrix with 400,000 rows and 300 columns. # Each row corresponds to a 300-dimensional word embedding. These embeddings are not normalized, but normalized embeddings can be obtained using the [`get_normed_vectors()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.get_normed_vectors) method. # In[16]: normed_vectors = glove.get_normed_vectors() normed_vectors.shape # Now we need to map the words in the vocabulary to rows in the `vectors` matrix, and vice versa. # The [`KeyedVectors`](https://radimrehurek.com/gensim/models/keyedvectors.html) object has the attributes `index_to_key` and `key_to_index` which are a list of words and a dictionary of words to indices, respectively. # In[17]: #glove.index_to_key # In[18]: #glove.key_to_index # ## Word similarity from scratch # # Now we have everything we need to implement a `most_similar_words()` function that takes a word, the vector matrix, the `index_to_key` list, and the `key_to_index` dictionary. This function will return the 10 most similar words to the provided word, along with their similarity scores. 
# In[19]: import numpy as np def most_similar_words(word, vectors, index_to_key, key_to_index, topn=10): # retrieve word_id corresponding to given word word_id = key_to_index[word] # retrieve embedding for given word emb = vectors[word_id] # calculate similarities to all words in out vocabulary similarities = vectors @ emb # get word_ids in ascending order with respect to similarity score ids_ascending = similarities.argsort() # reverse word_ids ids_descending = ids_ascending[::-1] # get boolean array with element corresponding to word_id set to false mask = ids_descending != word_id # obtain new array of indices that doesn't contain word_id # (otherwise the most similar word to the argument would be the argument itself) ids_descending = ids_descending[mask] # get topn word_ids top_ids = ids_descending[:topn] # retrieve topn words with their corresponding similarity score top_words = [(index_to_key[i], similarities[i]) for i in top_ids] # return results return top_words # Now let's try the same example that we used above: the most similar words to "cactus". # In[20]: vectors = glove.get_normed_vectors() index_to_key = glove.index_to_key key_to_index = glove.key_to_index most_similar_words("cactus", vectors, index_to_key, key_to_index) # ## Analogies from scratch # # The `most_similar_words()` function behaves as expected. Now let's implement a function to perform the analogy task. We will give it the very creative name `analogy`. This function will get two lists of words (one for positive words and one for negative words), just like the [`most_similar()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.most_similar) method we discussed above. # In[21]: from numpy.linalg import norm def analogy(positive, negative, vectors, index_to_key, key_to_index, topn=10): # find ids for positive and negative words pos_ids = [key_to_index[w] for w in positive] neg_ids = [key_to_index[w] for w in negative] given_word_ids = pos_ids + neg_ids # get embeddings for positive and negative words pos_emb = vectors[pos_ids].sum(axis=0) neg_emb = vectors[neg_ids].sum(axis=0) # get embedding for analogy emb = pos_emb - neg_emb # normalize embedding emb = emb / norm(emb) # calculate similarities to all words in out vocabulary similarities = vectors @ emb # get word_ids in ascending order with respect to similarity score ids_ascending = similarities.argsort() # reverse word_ids ids_descending = ids_ascending[::-1] # get boolean array with element corresponding to any of given_word_ids set to false given_words_mask = np.isin(ids_descending, given_word_ids, invert=True) # obtain new array of indices that doesn't contain any of the given_word_ids ids_descending = ids_descending[given_words_mask] # get topn word_ids top_ids = ids_descending[:topn] # retrieve topn words with their corresponding similarity score top_words = [(index_to_key[i], similarities[i]) for i in top_ids] # return results return top_words # Let's try this function with the $\vec{king} - \vec{man} + \vec{woman} \approx \vec{queen}$ example we discussed above. # In[22]: positive = ["king", "woman"] negative = ["man"] vectors = glove.get_normed_vectors() index_to_key = glove.index_to_key key_to_index = glove.key_to_index analogy(positive, negative, vectors, index_to_key, key_to_index) # In[ ]:
5,934
5,977
4
chap09-5
chap09-5
9 Implementing Text Classification Using Word Embeddings In the previous chapter we introduced word embeddings, which are realvalued vectors that encode semantic representation of words. We discussed how to learn them, and how they capture semantic information that makes them useful for downstream tasks. In this chapter we show how to use word embeddings that have been pretrained using a variant of the algorithm discussed in the previous chapter. We show how to load them, explore some of their characteristics, and show their application for a text classification task. As usual, the code for this chapter is available in our repository. It is organized into two notebooks: one corresponding to the explorations shown in the first half of this chapter (chap9_embeddings), and a second one in which we modify our previous classifier to use word embeddings (chap9_classification). 9.1 Pre-trained Word Embeddings There are several algorithms for training word embeddings, including the original word2vec algorithm (Mikolov et al., 2013a) (which we discussed in the previous chapter), GloVe (Pennington et al., 2014), and fastText (Bojanowski et al., 2017). They all provide the software for training the embeddings as well as pretrained word embeddings on their respective websites. In general, most open-domain word embeddings are trained on large corpora that cover a variety of topics such as Wikipedia1 and Gigaword.2 Commonly, these embeddings are freely distributed so 1 https://en.wikipedia.org/wiki/Wikipedia:Database_download 2 https://catalog.ldc.upenn.edu/LDC2011T07 133 134 Implementing Text Classification Using Word Embeddings house 0.60137 0.28521 -0.032038 -0.43026 0.74806 0.26223 -0.97361 0.078581 -0.57588 -1.188 -1.8507 -0.24887 0.055549 0.0086155 0.067951 0.40554 -0.073998 -0.21318 0.37167 -0.71791 1.2234 0.35546 -0.41537 -0.21931 -0.39661 -1.7831 -0.41507 0.29533 -0.41254 0.020096 2.7425 -0.9926 -0.71033 -0.46813 0.28265 -0.077639 0.3041 -0.06644 0.3951 -0.70747 -0.38894 0.23158 -0.49508 0.14612 -0.02314 0.56389 -0.86188 -1.0278 0.039922 0.20018 Figure 9.1 GloVe embedding corresponding to the word house, found in the GloVe file glove.6B.50d.txt. We have broken the vector in several lines for display purposes, but this is a single line in the text file. that practitioners can use them in downstream tasks. We will use one such set of vectors in this chapter. Pretrained embeddings are usually distributed as a text file in which each line represents a word vector. The first element in the line is the word itself, and the rest of the elements are the vector components. This is usually referred to as the word2vec format. For example, Figure 9.1 shows the line in the glove.6B.50d.txt file (from the GloVe website) corresponding to the word house. This vector is represented by the word itself, followed by 50 floating-point numbers corresponding to the 50dimensional vector. Note that some embeddings files have a header line composed of two numbers: the number of vectors (i.e., the number of lines in the file), and the vector dimensionality. However, this is not always the case. For example, the original word2vec implementation includes this header line, but the more recent GloVe does not (probably because this information can be inferred from the content of the file). For the examples in the rest of the chapter, we will use the glove.6B.300d.txt embeddings that can be downloaded from the GloVe website.3 This file provides 400,000 word embeddings of 300-dimensions trained on texts
from Wikipedia 2014 and Gigaword 5. We will begin our exploration of word embeddings using Gensim,4 a Python library that provides excellent support for loading and using word embeddings, among other more advanced features. As we can see, the embeddings have been loaded and assigned to the glove variable. Note that we had to specify that this file doesn’t contain the header that is usually present in the word2vec format. The 3 https://nlp.stanford.edu/projects/glove/ 4 https://radimrehurek.com/gensim/ 9.1 Pre-trained Word Embeddings 135 glove.vectors attribute contains a 2-dimensional NumPy array with 400,000 rows and 300 columns, each row corresponding to a word embedding. 9.1.1 Word Similarity Gensim’s KeyedVectors class provides a method called most_similar that receives a word and computes its cosine similarity to all other embeddings, and returns the topn most-similar words. By default, topn is set to 10. The example above shows the top 10 most-similar words to the word cactus, when using the 300-dimension GloVe embeddings trained on Wikipedia and Gigaword. All ten most-similar words are related to cactus in different ways: cacti and cactuses are its plural forms; saguaro, peyote, opuntia, and prickly pear are types of cacti; and mesquite, shrubs, and succulents are other plants from arid climates. You can find more examples of word similarity queries in the Jupyter notebook that accompanies this chapter. Also, as an exercise, try loading a different set of embeddings trained with a different corpus (e.g., Twitter) to see if you obtain different results! 9.1.2 Word Analogies As we discussed in the previous chapter, the semantic information en- coded by word embeddings captures much more than word similar- ity. To surface this additional information, we will use word analogies represented using additional vector operations. For example, a well- ⃗
known analogy that highlights gender information is: king − m⃗an ≈ qu⃗een−wom⃗an,5or,inplainlanguage:“manistokingwhatwomanis to queen.” From this, it immediately follows that one can subtract the meaning of man and add the meaning of woman to obtain the definition ⃗ offemaleroyalty:king−m⃗an+wom⃗an≈qu⃗een.
 The same most_similar method we’ve been using can be repurposed to find word analogies such as the one mentioned above. To this end, two sets of words have to be provided to the most_similar method: a list of positive words that should be added, and a list of negative words 5 A word with an arrow on top refers to the embedding vector corresponding to that word. Please see Section 1.4 for a summary of the notations used in this book. 136 Implementing Text Classification Using Word Embeddings that should be subtracted. For example, the code below implements the left-hand side of the previous analogy: Another interesting analogy relation that shows how the embeddings have captured information about currencies is shown below. More examples are discussed in the Jupyter notebook. 9.1.3 Looking Under the Hood Let us understand now how these queries are actually implemented. First, we need to know what components we need. Clearly, we need the embedding vectors themselves. They are stored in the vectors attribute of the KeyedVectors object. As we mentioned previously, this is a 2-dimensional NumPy array, each row corresponding to a word in the vocabulary. These embeddings are not normalized, but normalized embeddings can be obtained using the get_normed_vectors method. We also need to know the mapping between words and the matrix rows. The KeyedVectors object stores this mapping in a list of terms called index_to_key, and a term-to-index dictionary called key_to_index. Below we show only the first 5 terms to save space, but you can inspect the whole vocabulary in the Jupyter notebook. 9.1.4 Word Similarity from Scratch Implementing the word similarity function ourselves is a good exercise to ensure that we understand how cosine similarity works, and to practice our NumPy skills. We will write a function called most_similar_words that will take a word, the embeddings matrix, the vocabulary in the form of the index_to_key list and key_to_index dictionary, and the number of similar words to return (defaults to 10). The implementation of most_similar_words is straightforward. First, we find the word id for the given word, using the key_to_index dictionary. Then we retrieve the row from the vectors matrix that corresponds to that word. The next step is computing the cosine similarity between the word of interest and the rest of the vocabulary. Recall that the cosine similarity is equivalent to a dot product if the vectors are normalized. We use this equivalence by performing a matrix-vector multiplication between the word embedding and the embedding matrix using Python’s at operator (denoted as @ in code). This means that we must pass the 9.2 Text Classification with Pretrained Word Embeddings 137 normalized embeddings as an argument to this function. Next, we need to sort the similarities preserving the mapping to the words in the vocabulary. We achieve this using the argsort NumPy method, which returns the indices in sorted (ascending) order. Since we need them in descending order, the next step is to reverse this list of indices. Obviously, the most similar word to whichever word we’re querying is the word itself, but that is not an interesting result, so we will remove it from the results. We do this by using NumPy’s ability to index arrays using booleans. We first create a new array in which the position corresponding to the query word is set to False and every other element is set to True, and we use this boolean array to index the list of ids. 
Lastly, we create a list of tuples of the form (word, similarity) for the topn words, and return the results. Now we will test our implementation of word similarity using the word cactus. You can compare the results to the ones obtained by KeyedVectors’s most_similar method. 9.1.5 Word Analogies from Scratch The implementation of the word analogy function is not that much different from our most_similar_word function above. The main difference between this function and most_similar_words is that now we have two lists of words that we need to combine into a single embedding. We first add the positive words into a single vector, and we do the same for the negative words. Then we subtract the negative vector from the positive one, and normalize the result. The similarity scores are computed the same way as before, but now we need to remove several words from the results, so this time we use NumPy’s isin function, which checks for any of the words in given_word_ids. We then package the results the same way we did before, and return them. ⃗ Nowlet’stryourimplementationwiththesameking−m⃗an+wom⃗an query we discussed previously. Please compare the results to the ones obtained by Gensim. 9.2 Text Classification with Pretrained Word Embeddings In this section we will continue using the AG News classification dataset introduced in previous chapters. Most of the data preparation is the 138 Implementing Text Classification Using Word Embeddings same, up to tokenization. However, we need to remember that the embeddings were trained on a different corpus, so it would be a good idea to estimate how well they cover the words AG News dataset. To achieve this, we load the embeddings just like we did previously. Then we count the tokens in our corpus that do not appear in the embeddings vocabulary, as well as the total number to tokens. We use these numbers to print some informative statistics such as the proportion of unknown tokens in the corpus. We also print the top ten unknown tokens. You can use the Jupyter notebook to explore this task further. Our analysis indicates that only 1.25% of the tokens are not accounted for in the embeddings vocabulary. Further, the most common unknown words seem to be URL fragments. This is encouraging. However, for more robustness, we will introduce a couple of special embeddings that are often needed when dealing with word embeddings. The first one is an embedding used to represent unknown words. A common strategy is to use the average of all the embeddings in the vocabulary for this purpose. The second embedding we will add will be used for padding. Padding is required when we want to train with (mini-)batches because the lengths of all the examples in a given batch have to match in order for the batch to be efficiently processed in parallel. The padding embedding consists only of zeros, which essentially excludes these virtual tokens from the forward/backward passes. None of these embeddings are included in the pretrained GloVe embeddings, but other pretrained embeddings may already include them, so it is a good idea to check if they are included with the embeddings we are using before adding them. The new embeddings were added at the end of embedding collection, so their ids are 400,000 and 400,001. Now we need to generate a list of token ids for each training example. Recall that we decided to ignore tokens that appear less than 10 times, so we need to replace those with [UNK] too, even if they appear in the embedding vocabulary. 
Next, we create a Dataset object from the padded lists of token ids. This is even easier than before: since the lists of token ids are ready, all that is required is turning them into tensors. Lastly, we need to modify the model class to indicate that we now use embedding vectors. To this end, we will add an nn.Embedding layer that stores the embedding vectors for all words in the vocabulary. We will use this object to look up embeddings by their token ids. This layer will be initialized from a tensor containing the pretrained embeddings for the entire vocabulary. Also, the pad_id is specified when creating the embedding layer. When an nn.Embedding layer is initialized using the from_pretrained method with its other arguments left at their default values, the embeddings are not updated during training. We will keep it that way for this example, but that could be changed by setting the freeze parameter to False. The rest of the layers are the same as in our previous example from Chapter 7, i.e., one intermediate layer and one output layer, with a nonlinearity (ReLU) between them. The only major difference is that now the input size of the intermediate layer is the size of one embedding (e.g., 300) instead of the size of the vocabulary, as it was last time. This is because, as we explain below, the intermediate layer receives an average of the numerical representations of the words in the current text. The forward function of the Model class changes significantly. This time we are encoding the text as an average of the embeddings of all the words it contains. To compute the denominator of this average, we obtain the length of each text by counting how many of its words are not the virtual padding token. Then we sum all the embeddings and divide by the number of non-padding tokens. Adding all the embeddings is safe, because the padding embeddings consist of zeros. This process leaves us with a single embedding for the whole text, which is then passed to the rest of the layers (a minimal sketch of this averaging step is shown after the summary below). The training and evaluation steps are the same as before. Comparing the results of this model on the AG News test partition (produced by the classification report at the end of the chap9_classification notebook) with the ones obtained by the multilayer perceptron with explicit features in Chapter 7, we observe that on this particular task utilizing embeddings as features does not yield a performance improvement. Notably, this is a small dataset and a rather simple task, where the presence of certain words is sufficient to distinguish the category of an article (e.g., the word basketball is highly indicative of the label Sports). Nevertheless, in other tasks where distinctions are more nuanced, or in which there is less likely to be word overlap between texts of interest, word embeddings do provide a necessary signal. Additionally, when there are class imbalances, word embeddings can help underrepresented classes by bringing in the external knowledge gained during their pretraining. 9.3 Summary In this chapter we showed how to explore the semantic space encoded by word embeddings through word similarity and analogies, as well as one way to use them for text classification. At this point we have not taken into consideration the order in which the words appear, i.e., we averaged the embeddings for all the words in the text using a bag-of-words representation of text. In subsequent chapters we will explore how to incorporate word order into the learned representations of text.
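To make the averaging step described in Section 9.2 concrete, here is a minimal sketch of the computation, assuming embs is the nn.Embedding layer initialized from the pretrained vectors and pad_id is the id of the padding token; the full Model class, including the feed-forward layers, appears in the chap9_classification notebook later in this document.

import torch

def average_embeddings(x, embs, pad_id):
    # x is a LongTensor of token ids with shape (batch_size, max_tokens)
    not_padding = x != pad_id                       # boolean mask of real (non-padding) tokens
    lengths = not_padding.sum(dim=1, keepdim=True)  # number of non-padding tokens per example
    emb = embs(x)                                   # shape (batch_size, max_tokens, emb_dim)
    # padding embeddings are all zeros, so summing over the token dimension is safe
    return emb.sum(dim=1) / lengths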
5,800
6,076
#!/usr/bin/env python # coding: utf-8 # # Using Pre-trained Word Embeddings # # In this notebook we will show some operations on pre-trained word embeddings to gain an intuition about them. # # We will be using the pre-trained GloVe embeddings that can be found in the [official website](https://nlp.stanford.edu/projects/glove/). In particular, we will use the file `glove.6B.300d.txt` contained in this [zip file](https://nlp.stanford.edu/data/glove.6B.zip). # # We will first load the GloVe embeddings using [Gensim](https://radimrehurek.com/gensim/). Specifically, we will use [`KeyedVectors`](https://radimrehurek.com/gensim/models/keyedvectors.html)'s [`load_word2vec_format()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.load_word2vec_format) classmethod, which supports the original word2vec file format. # However, there is a difference in the file formats used by GloVe and word2vec, which is a header used by word2vec to indicate the number of embeddings and dimensions stored in the file. The file that stores the GloVe embeddings doesn't have this header, so we will have to address that when loading the embeddings. # # Loading the embeddings may take a little bit, so hang in there! # In[2]: from gensim.models import KeyedVectors fname = "glove.6B.300d.txt" glove = KeyedVectors.load_word2vec_format(fname, no_header=True) glove.vectors.shape # ## Word similarity # # One attribute of word embeddings that makes them useful is the ability to compare them using cosine similarity to find how similar they are. [`KeyedVectors`](https://radimrehurek.com/gensim/models/keyedvectors.html) objects provide a method called [`most_similar()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.most_similar) that we can use to find the closest words to a particular word of interest. By default, [`most_similar()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.most_similar) returns the 10 most similar words, but this can be changed using the `topn` parameter. # # Below we test this function using a few different words. # In[3]: # common noun glove.most_similar("cactus") # In[4]: # common noun glove.most_similar("cake") # In[5]: # adjective glove.most_similar("angry") # In[6]: # adverb glove.most_similar("quickly") # In[7]: # preposition glove.most_similar("between") # In[8]: # determiner glove.most_similar("the") # ## Word analogies # # Another characteristic of word embeddings is their ability to solve analogy problems. # The same [`most_similar()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.most_similar) method can be used for this task, by passing two lists of words: # a `positive` list with the words that should be added and a `negative` list with the words that should be subtracted. 
Using these arguments, the famous example $\vec{king} - \vec{man} + \vec{woman} \approx \vec{queen}$ can be executed as follows: # In[9]: # king - man + woman glove.most_similar(positive=["king", "woman"], negative=["man"]) # Here are a few other interesting analogies: # In[10]: # car - drive + fly glove.most_similar(positive=["car", "fly"], negative=["drive"]) # In[11]: # berlin - germany + australia glove.most_similar(positive=["berlin", "australia"], negative=["germany"]) # In[12]: # england - london + baghdad glove.most_similar(positive=["england", "baghdad"], negative=["london"]) # In[13]: # japan - yen + peso glove.most_similar(positive=["japan", "peso"], negative=["yen"]) # In[14]: # best - good + tall glove.most_similar(positive=["best", "tall"], negative=["good"]) # ## Looking under the hood # # Now that we are more familiar with the [`most_similar()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.most_similar) method, it is time to implement its functionality ourselves. # But first, we need to take a look at the different parts of the [`KeyedVectors`](https://radimrehurek.com/gensim/models/keyedvectors.html) object that we will need. # Obviously, we will need the vectors themselves. They are stored in the `vectors` attribute. # In[15]: glove.vectors.shape # As we can see above, `vectors` is a 2-dimensional matrix with 400,000 rows and 300 columns. # Each row corresponds to a 300-dimensional word embedding. These embeddings are not normalized, but normalized embeddings can be obtained using the [`get_normed_vectors()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.get_normed_vectors) method. # In[16]: normed_vectors = glove.get_normed_vectors() normed_vectors.shape # Now we need to map the words in the vocabulary to rows in the `vectors` matrix, and vice versa. # The [`KeyedVectors`](https://radimrehurek.com/gensim/models/keyedvectors.html) object has the attributes `index_to_key` and `key_to_index` which are a list of words and a dictionary of words to indices, respectively. # In[17]: #glove.index_to_key # In[18]: #glove.key_to_index # ## Word similarity from scratch # # Now we have everything we need to implement a `most_similar_words()` function that takes a word, the vector matrix, the `index_to_key` list, and the `key_to_index` dictionary. This function will return the 10 most similar words to the provided word, along with their similarity scores. 
# In[19]: import numpy as np def most_similar_words(word, vectors, index_to_key, key_to_index, topn=10): # retrieve word_id corresponding to given word word_id = key_to_index[word] # retrieve embedding for given word emb = vectors[word_id] # calculate similarities to all words in our vocabulary similarities = vectors @ emb # get word_ids in ascending order with respect to similarity score ids_ascending = similarities.argsort() # reverse word_ids ids_descending = ids_ascending[::-1] # get boolean array with element corresponding to word_id set to false mask = ids_descending != word_id # obtain new array of indices that doesn't contain word_id # (otherwise the most similar word to the argument would be the argument itself) ids_descending = ids_descending[mask] # get topn word_ids top_ids = ids_descending[:topn] # retrieve topn words with their corresponding similarity score top_words = [(index_to_key[i], similarities[i]) for i in top_ids] # return results return top_words # Now let's try the same example that we used above: the most similar words to "cactus". # In[20]: vectors = glove.get_normed_vectors() index_to_key = glove.index_to_key key_to_index = glove.key_to_index most_similar_words("cactus", vectors, index_to_key, key_to_index) # ## Analogies from scratch # # The `most_similar_words()` function behaves as expected. Now let's implement a function to perform the analogy task. We will give it the very creative name `analogy`. This function will get two lists of words (one for positive words and one for negative words), just like the [`most_similar()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.most_similar) method we discussed above. # In[21]: from numpy.linalg import norm def analogy(positive, negative, vectors, index_to_key, key_to_index, topn=10): # find ids for positive and negative words pos_ids = [key_to_index[w] for w in positive] neg_ids = [key_to_index[w] for w in negative] given_word_ids = pos_ids + neg_ids # get embeddings for positive and negative words pos_emb = vectors[pos_ids].sum(axis=0) neg_emb = vectors[neg_ids].sum(axis=0) # get embedding for analogy emb = pos_emb - neg_emb # normalize embedding emb = emb / norm(emb) # calculate similarities to all words in our vocabulary similarities = vectors @ emb # get word_ids in ascending order with respect to similarity score ids_ascending = similarities.argsort() # reverse word_ids ids_descending = ids_ascending[::-1] # get boolean array with element corresponding to any of given_word_ids set to false given_words_mask = np.isin(ids_descending, given_word_ids, invert=True) # obtain new array of indices that doesn't contain any of the given_word_ids ids_descending = ids_descending[given_words_mask] # get topn word_ids top_ids = ids_descending[:topn] # retrieve topn words with their corresponding similarity score top_words = [(index_to_key[i], similarities[i]) for i in top_ids] # return results return top_words # Let's try this function with the $\vec{king} - \vec{man} + \vec{woman} \approx \vec{queen}$ example we discussed above. # In[22]: positive = ["king", "woman"] negative = ["man"] vectors = glove.get_normed_vectors() index_to_key = glove.index_to_key key_to_index = glove.key_to_index analogy(positive, negative, vectors, index_to_key, key_to_index) # In[ ]:
3,119
3,184
5
chap09-6
chap09-6
9 Implementing Text Classification Using Word Embeddings In the previous chapter we introduced word embeddings, which are real-valued vectors that encode a semantic representation of words. We discussed how to learn them, and how they capture semantic information that makes them useful for downstream tasks. In this chapter we show how to use word embeddings that have been pretrained using a variant of the algorithm discussed in the previous chapter. We show how to load them, explore some of their characteristics, and demonstrate their application to a text classification task. As usual, the code for this chapter is available in our repository. It is organized into two notebooks: one corresponding to the explorations shown in the first half of this chapter (chap9_embeddings), and a second one in which we modify our previous classifier to use word embeddings (chap9_classification). 9.1 Pre-trained Word Embeddings There are several algorithms for training word embeddings, including the original word2vec algorithm (Mikolov et al., 2013a) (which we discussed in the previous chapter), GloVe (Pennington et al., 2014), and fastText (Bojanowski et al., 2017). They all provide the software for training the embeddings as well as pretrained word embeddings on their respective websites. In general, most open-domain word embeddings are trained on large corpora that cover a variety of topics, such as Wikipedia (https://en.wikipedia.org/wiki/Wikipedia:Database_download) and Gigaword (https://catalog.ldc.upenn.edu/LDC2011T07). Commonly, these embeddings are freely distributed so that practitioners can use them in downstream tasks. We will use one such set of vectors in this chapter. Pretrained embeddings are usually distributed as a text file in which each line represents a word vector. The first element in the line is the word itself, and the rest of the elements are the vector components. This is usually referred to as the word2vec format. For example, Figure 9.1 shows the line in the glove.6B.50d.txt file (from the GloVe website) corresponding to the word house. This vector is represented by the word itself, followed by 50 floating-point numbers corresponding to the 50-dimensional vector. Figure 9.1 GloVe embedding corresponding to the word house, found in the GloVe file glove.6B.50d.txt (the vector is broken into several lines for display purposes, but it is a single line in the text file): house 0.60137 0.28521 -0.032038 -0.43026 0.74806 0.26223 -0.97361 0.078581 -0.57588 -1.188 -1.8507 -0.24887 0.055549 0.0086155 0.067951 0.40554 -0.073998 -0.21318 0.37167 -0.71791 1.2234 0.35546 -0.41537 -0.21931 -0.39661 -1.7831 -0.41507 0.29533 -0.41254 0.020096 2.7425 -0.9926 -0.71033 -0.46813 0.28265 -0.077639 0.3041 -0.06644 0.3951 -0.70747 -0.38894 0.23158 -0.49508 0.14612 -0.02314 0.56389 -0.86188 -1.0278 0.039922 0.20018 Note that some embeddings files have a header line composed of two numbers: the number of vectors (i.e., the number of lines in the file) and the vector dimensionality. However, this is not always the case. For example, the original word2vec implementation includes this header line, but the more recent GloVe does not (probably because this information can be inferred from the content of the file). For the examples in the rest of the chapter, we will use the glove.6B.300d.txt embeddings that can be downloaded from the GloVe website (https://nlp.stanford.edu/projects/glove/). This file provides 400,000 word embeddings of 300 dimensions trained on texts from Wikipedia 2014 and Gigaword 5. We will begin our exploration of word embeddings using Gensim (https://radimrehurek.com/gensim/), a Python library that provides excellent support for loading and using word embeddings, among other more advanced features. As we can see, the embeddings have been loaded and assigned to the glove variable. Note that we had to specify that this file doesn't contain the header that is usually present in the word2vec format. The glove.vectors attribute contains a 2-dimensional NumPy array with 400,000 rows and 300 columns, each row corresponding to a word embedding. 9.1.1 Word Similarity Gensim's KeyedVectors class provides a method called most_similar that receives a word, computes its cosine similarity to all other embeddings, and returns the topn most-similar words. By default, topn is set to 10. For example, when using the 300-dimensional GloVe embeddings trained on Wikipedia and Gigaword, the ten most-similar words to the word cactus are all related to it in different ways: cacti and cactuses are its plural forms; saguaro, peyote, opuntia, and prickly pear are types of cacti; and mesquite, shrubs, and succulents are other plants from arid climates. You can find more examples of word similarity queries in the Jupyter notebook that accompanies this chapter. Also, as an exercise, try loading a different set of embeddings trained on a different corpus (e.g., Twitter) to see if you obtain different results! 9.1.2 Word Analogies As we discussed in the previous chapter, the semantic information encoded by word embeddings captures much more than word similarity. To surface this additional information, we will use word analogies represented using additional vector operations. For example, a well-known analogy that highlights gender information is: $\vec{king} - \vec{man} \approx \vec{queen} - \vec{woman}$, or, in plain language: "man is to king what woman is to queen." (A word with an arrow on top denotes the embedding vector corresponding to that word.) From this, it immediately follows that one can subtract the meaning of man and add the meaning of woman to obtain the definition of female royalty: $\vec{king} - \vec{man} + \vec{woman} \approx \vec{queen}$, as the short check below illustrates.
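The following short check assumes the glove KeyedVectors object loaded earlier in this chapter. It computes the left-hand side of the analogy directly and compares it to the embedding of queen using cosine similarity; the resulting score should be noticeably higher than for unrelated words, though not exactly 1, since the analogy is only approximate.

import numpy as np
from numpy.linalg import norm

# raw (unnormalized) GloVe vectors retrieved from the KeyedVectors object
king, man, woman, queen = (glove[w] for w in ("king", "man", "woman", "queen"))
target = king - man + woman
cosine = np.dot(target, queen) / (norm(target) * norm(queen))
print(f"cosine(king - man + woman, queen) = {cosine:.3f}")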
14,409
14,731
#!/usr/bin/env python # coding: utf-8 # # Multiclass Text Classification with # # Feed-forward Neural Networks and Word Embeddings # First, we will do some initialization. # In[1]: import random import torch import numpy as np import pandas as pd from tqdm.notebook import tqdm # enable tqdm in pandas tqdm.pandas() # set to True to use the gpu (if there is one available) use_gpu = True # select device device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu') print(f'device: {device.type}') # random seed seed = 1234 # set random seed if seed is not None: print(f'random seed: {seed}') random.seed(seed) np.random.seed(seed) torch.manual_seed(seed) # We will be using the AG's News Topic Classification Dataset. # It is stored in two CSV files: `train.csv` and `test.csv`, as well as a `classes.txt` that stores the labels of the classes to predict. # # First, we will load the training dataset using [pandas](https://pandas.pydata.org/) and take a quick look at the data. # In[2]: train_df = pd.read_csv('data/ag_news_csv/train.csv', header=None) train_df.columns = ['class index', 'title', 'description'] train_df # The dataset consists of 120,000 examples, each composed of a class index, a title, and a description. # The class labels are distributed in a separate file. We will add the labels to the dataset so that we can interpret the data more easily. Note that the label indexes are one-based, so we need to subtract one to retrieve them from the list. # In[3]: labels = open('data/ag_news_csv/classes.txt').read().splitlines() classes = train_df['class index'].map(lambda i: labels[i-1]) train_df.insert(1, 'class', classes) train_df # Let's inspect how balanced our examples are by using a bar plot. # In[4]: pd.value_counts(train_df['class']).plot.bar() # The classes are evenly distributed. That's great! # # However, the text contains some spurious backslashes in some parts of the text. # They are meant to represent newlines in the original text. # An example can be seen below, between the words "dwindling" and "band". # In[5]: print(train_df.loc[0, 'description']) # We will replace the backslashes with spaces on the whole column using pandas replace method. # In[6]: train_df['text'] = train_df['title'].str.lower() + " " + train_df['description'].str.lower() train_df['text'] = train_df['text'].str.replace('\\', ' ', regex=False) train_df # Now we will proceed to tokenize the title and description columns using NLTK's word_tokenize(). # We will add a new column to our dataframe with the list of tokens. # In[7]: from nltk.tokenize import word_tokenize train_df['tokens'] = train_df['text'].progress_map(word_tokenize) train_df # Now we will load the GloVe word embeddings. # In[8]: from gensim.models import KeyedVectors glove = KeyedVectors.load_word2vec_format("glove.6B.300d.txt", no_header=True) glove.vectors.shape # The word embeddings have been pretrained on a different corpus, so it would be a good idea to estimate how well our tokenization matches the GloVe vocabulary.
# In[9]: from collections import Counter def count_unknown_words(data, vocabulary): counter = Counter() for row in tqdm(data): counter.update(tok for tok in row if tok not in vocabulary) return counter # find out how many times each unknown token occurs in the corpus c = count_unknown_words(train_df['tokens'], glove.key_to_index) # find the total number of tokens in the corpus total_tokens = train_df['tokens'].map(len).sum() # find some statistics about occurrences of unknown tokens unk_tokens = sum(c.values()) percent_unk = unk_tokens / total_tokens distinct_tokens = len(list(c)) print(f'total number of tokens: {total_tokens:,}') print(f'number of unknown tokens: {unk_tokens:,}') print(f'number of distinct unknown tokens: {distinct_tokens:,}') print(f'percentage of unknown tokens: {percent_unk:.2%}') print('top 10 unknown words:') for token, n in c.most_common(10): print(f'\t{n}\t{token}') # GloVe embeddings seem to have good coverage of this dataset -- only 1.25% of the tokens in the dataset are unknown, i.e., don't appear in the GloVe vocabulary. # # Still, we will need a way to handle these unknown tokens. # Our approach will be to add a new embedding to GloVe that will be used to represent them. # This new embedding will be initialized as the average of all the GloVe embeddings. # # We will also add another embedding, this one initialized to zeros, that will be used to pad the sequences of tokens so that they all have the same length. This will be useful when we train with mini-batches. # In[10]: # string values corresponding to the new embeddings unk_tok = '[UNK]' pad_tok = '[PAD]' # initialize the new embedding values unk_emb = glove.vectors.mean(axis=0) pad_emb = np.zeros(300) # add new embeddings to glove glove.add_vectors([unk_tok, pad_tok], [unk_emb, pad_emb]) # get token ids corresponding to the new embeddings unk_id = glove.key_to_index[unk_tok] pad_id = glove.key_to_index[pad_tok] unk_id, pad_id # In[11]: from sklearn.model_selection import train_test_split train_df, dev_df = train_test_split(train_df, train_size=0.8) train_df.reset_index(inplace=True) dev_df.reset_index(inplace=True) # We will now add a new column to our dataframe that will contain the padded sequences of token ids. # In[12]: threshold = 10 tokens = train_df['tokens'].explode().value_counts() vocabulary = set(tokens[tokens > threshold].index.tolist()) print(f'vocabulary size: {len(vocabulary):,}') # In[13]: # find the length of the longest list of tokens max_tokens = train_df['tokens'].map(len).max() # return unk_id for infrequent tokens too def get_id(tok): if tok in vocabulary: return glove.key_to_index.get(tok, unk_id) else: return unk_id # function that gets a list of tokens and returns a list of token ids, # with padding added accordingly def token_ids(tokens): tok_ids = [get_id(tok) for tok in tokens] pad_len = max_tokens - len(tok_ids) return tok_ids + [pad_id] * pad_len # add new column to the dataframe train_df['token ids'] = train_df['tokens'].progress_map(token_ids) train_df # In[14]: max_tokens = dev_df['tokens'].map(len).max() dev_df['token ids'] = dev_df['tokens'].progress_map(token_ids) dev_df # Now we will get a numpy 2-dimensional array corresponding to the token ids, # and a 1-dimensional array with the gold classes. Note that the classes are one-based (i.e., they start at one), # but we need them to be zero-based, so we need to subtract one from this array.
# In[15]: from torch.utils.data import Dataset class MyDataset(Dataset): def __init__(self, x, y): self.x = x self.y = y def __len__(self): return len(self.y) def __getitem__(self, index): x = torch.tensor(self.x[index]) y = torch.tensor(self.y[index]) return x, y # Next, we construct our PyTorch model, which is a feed-forward neural network with two layers: # In[16]: from torch import nn import torch.nn.functional as F class Model(nn.Module): def __init__(self, vectors, pad_id, hidden_dim, output_dim, dropout): super().__init__() # embeddings must be a tensor if not torch.is_tensor(vectors): vectors = torch.tensor(vectors) # keep padding id self.padding_idx = pad_id # embedding layer self.embs = nn.Embedding.from_pretrained(vectors, padding_idx=pad_id) # feedforward layers self.layers = nn.Sequential( nn.Dropout(dropout), nn.Linear(vectors.shape[1], hidden_dim), nn.ReLU(), nn.Dropout(dropout), nn.Linear(hidden_dim, output_dim), ) def forward(self, x): # get boolean array with padding elements set to false not_padding = torch.isin(x, self.padding_idx, invert=True) # get lengths of examples (excluding padding) lengths = torch.count_nonzero(not_padding, axis=1) # get embeddings x = self.embs(x) # calculate means x = x.sum(dim=1) / lengths.unsqueeze(dim=1) # pass to rest of the model output = self.layers(x) # calculate softmax if we're not in training mode #if not self.training: # output = F.softmax(output, dim=1) return output # Next, we implement the training procedure. We compute the loss and accuracy on the development partition after each epoch. # In[17]: from torch import optim from torch.utils.data import DataLoader from sklearn.metrics import accuracy_score # hyperparameters lr = 1e-3 weight_decay = 0 batch_size = 500 shuffle = True n_epochs = 5 hidden_dim = 50 output_dim = len(labels) dropout = 0.1 vectors = glove.vectors # initialize the model, loss function, optimizer, and data-loader model = Model(vectors, pad_id, hidden_dim, output_dim, dropout).to(device) loss_func = nn.CrossEntropyLoss() optimizer = optim.Adam(model.parameters(), lr=lr, weight_decay=weight_decay) train_ds = MyDataset(train_df['token ids'], train_df['class index'] - 1) train_dl = DataLoader(train_ds, batch_size=batch_size, shuffle=shuffle) dev_ds = MyDataset(dev_df['token ids'], dev_df['class index'] - 1) dev_dl = DataLoader(dev_ds, batch_size=batch_size, shuffle=shuffle) train_loss = [] train_acc = [] dev_loss = [] dev_acc = [] # train the model for epoch in range(n_epochs): losses = [] gold = [] pred = [] model.train() for X, y_true in tqdm(train_dl, desc=f'epoch {epoch+1} (train)'): # clear gradients model.zero_grad() # send batch to right device X = X.to(device) y_true = y_true.to(device) # predict label scores y_pred = model(X) # compute loss loss = loss_func(y_pred, y_true) # accumulate for plotting losses.append(loss.detach().cpu().item()) gold.append(y_true.detach().cpu().numpy()) pred.append(np.argmax(y_pred.detach().cpu().numpy(), axis=1)) # backpropagate loss.backward() # optimize model parameters optimizer.step() train_loss.append(np.mean(losses)) train_acc.append(accuracy_score(np.concatenate(gold), np.concatenate(pred))) model.eval() with torch.no_grad(): losses = [] gold = [] pred = [] for X, y_true in tqdm(dev_dl, desc=f'epoch {epoch+1} (dev)'): X = X.to(device) y_true = y_true.to(device) y_pred = model(X) loss = loss_func(y_pred, y_true) losses.append(loss.cpu().item()) gold.append(y_true.cpu().numpy()) pred.append(np.argmax(y_pred.cpu().numpy(), axis=1)) dev_loss.append(np.mean(losses)) 
dev_acc.append(accuracy_score(np.concatenate(gold), np.concatenate(pred))) # Let's plot the loss and accuracy on dev: # In[18]: import matplotlib.pyplot as plt get_ipython().run_line_magic('matplotlib', 'inline') x = np.arange(n_epochs) + 1 plt.plot(x, train_loss) plt.plot(x, dev_loss) plt.legend(['train', 'dev']) plt.xlabel('epoch') plt.ylabel('loss') plt.grid(True) # In[19]: plt.plot(x, train_acc) plt.plot(x, dev_acc) plt.legend(['train', 'dev']) plt.xlabel('epoch') plt.ylabel('accuracy') plt.grid(True) # Next, we evaluate on the testing partition: # In[20]: # repeat all preprocessing done above, this time on the test set test_df = pd.read_csv('data/ag_news_csv/test.csv', header=None) test_df.columns = ['class index', 'title', 'description'] test_df['text'] = test_df['title'].str.lower() + " " + test_df['description'].str.lower() test_df['text'] = test_df['text'].str.replace('\\', ' ', regex=False) test_df['tokens'] = test_df['text'].progress_map(word_tokenize) # recompute the padding length from the test set itself max_tokens = test_df['tokens'].map(len).max() test_df['token ids'] = test_df['tokens'].progress_map(token_ids) # In[21]: from sklearn.metrics import classification_report # set model to evaluation mode model.eval() dataset = MyDataset(test_df['token ids'], test_df['class index'] - 1) data_loader = DataLoader(dataset, batch_size=batch_size) y_pred = [] # don't store gradients with torch.no_grad(): for X, _ in tqdm(data_loader): X = X.to(device) # predict one class per example y = torch.argmax(model(X), dim=1) # convert tensor to numpy array (sending it back to the cpu if needed) y_pred.append(y.cpu().numpy()) # print results print(classification_report(dataset.y, np.concatenate(y_pred), target_names=labels))
8,036
8,223
6
chap09-7
chap09-7
11,746
11,808
#!/usr/bin/env python # coding: utf-8 # # Multiclass Text Classification with # # Feed-forward Neural Networks and Word Embeddings # First, we will do some initialization. # In[1]: import random import torch import numpy as np import pandas as pd from tqdm.notebook import tqdm # enable tqdm in pandas tqdm.pandas() # set to True to use the gpu (if there is one available) use_gpu = True # select device device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu') print(f'device: {device.type}') # random seed seed = 1234 # set random seed if seed is not None: print(f'random seed: {seed}') random.seed(seed) np.random.seed(seed) torch.manual_seed(seed) # We will be using the AG's News Topic Classification Dataset. # It is stored in two CSV files: `train.csv` and `test.csv`, as well as a `classes.txt` that stores the labels of the classes to predict. # # First, we will load the training dataset using [pandas](https://pandas.pydata.org/) and take a quick look at how the data. # In[2]: train_df = pd.read_csv('data/ag_news_csv/train.csv', header=None) train_df.columns = ['class index', 'title', 'description'] train_df # The dataset consists of 120,000 examples, each consisting of a class index, a title, and a description. # The class labels are distributed in a separated file. We will add the labels to the dataset so that we can interpret the data more easily. Note that the label indexes are one-based, so we need to subtract one to retrieve them from the list. # In[3]: labels = open('data/ag_news_csv/classes.txt').read().splitlines() classes = train_df['class index'].map(lambda i: labels[i-1]) train_df.insert(1, 'class', classes) train_df # Let's inspect how balanced our examples are by using a bar plot. # In[4]: pd.value_counts(train_df['class']).plot.bar() # The classes are evenly distributed. That's great! # # However, the text contains some spurious backslashes in some parts of the text. # They are meant to represent newlines in the original text. # An example can be seen below, between the words "dwindling" and "band". # In[5]: print(train_df.loc[0, 'description']) # We will replace the backslashes with spaces on the whole column using pandas replace method. # In[6]: train_df['text'] = train_df['title'].str.lower() + " " + train_df['description'].str.lower() train_df['text'] = train_df['text'].str.replace('\\', ' ', regex=False) train_df # Now we will proceed to tokenize the title and description columns using NLTK's word_tokenize(). # We will add a new column to our dataframe with the list of tokens. # In[7]: from nltk.tokenize import word_tokenize train_df['tokens'] = train_df['text'].progress_map(word_tokenize) train_df # Now we will load the GloVe word embeddings. # In[8]: from gensim.models import KeyedVectors glove = KeyedVectors.load_word2vec_format("glove.6B.300d.txt", no_header=True) glove.vectors.shape # The word embeddings have been pretrained in a different corpus, so it would be a good idea to estimate how good our tokenization matches the GloVe vocabulary. 
# In[9]: from collections import Counter def count_unknown_words(data, vocabulary): counter = Counter() for row in tqdm(data): counter.update(tok for tok in row if tok not in vocabulary) return counter # find out how many times each unknown token occurs in the corpus c = count_unknown_words(train_df['tokens'], glove.key_to_index) # find the total number of tokens in the corpus total_tokens = train_df['tokens'].map(len).sum() # find some statistics about occurrences of unknown tokens unk_tokens = sum(c.values()) percent_unk = unk_tokens / total_tokens distinct_tokens = len(list(c)) print(f'total number of tokens: {total_tokens:,}') print(f'number of unknown tokens: {unk_tokens:,}') print(f'number of distinct unknown tokens: {distinct_tokens:,}') print(f'percentage of unknown tokens: {percent_unk:.2%}') print('top 10 unknown words:') for token, n in c.most_common(10): print(f'\t{n}\t{token}') # GloVe embeddings seem to have good coverage of this dataset -- only 1.25% of the tokens in the dataset are unknown, i.e., don't appear in the GloVe vocabulary. # # Still, we will need a way to handle these unknown tokens. # Our approach will be to add a new embedding to GloVe that will be used to represent them. # This new embedding will be initialized as the average of all the GloVe embeddings. # # We will also add another embedding, this one initialized to zeros, that will be used to pad the sequences of tokens so that they all have the same length. This will be useful when we train with mini-batches. # In[10]: # string values corresponding to the new embeddings unk_tok = '[UNK]' pad_tok = '[PAD]' # initialize the new embedding values unk_emb = glove.vectors.mean(axis=0) pad_emb = np.zeros(300) # add new embeddings to glove glove.add_vectors([unk_tok, pad_tok], [unk_emb, pad_emb]) # get token ids corresponding to the new embeddings unk_id = glove.key_to_index[unk_tok] pad_id = glove.key_to_index[pad_tok] unk_id, pad_id # In[11]: from sklearn.model_selection import train_test_split train_df, dev_df = train_test_split(train_df, train_size=0.8) train_df.reset_index(inplace=True) dev_df.reset_index(inplace=True) # We will now add a new column to our dataframe that will contain the padded sequences of token ids. # In[12]: threshold = 10 tokens = train_df['tokens'].explode().value_counts() vocabulary = set(tokens[tokens > threshold].index.tolist()) print(f'vocabulary size: {len(vocabulary):,}') # In[13]: # find the length of the longest list of tokens max_tokens = train_df['tokens'].map(len).max() # return unk_id for infrequent tokens too def get_id(tok): if tok in vocabulary: return glove.key_to_index.get(tok, unk_id) else: return unk_id # function that gets a list of tokens and returns a list of token ids, # with padding added accordingly def token_ids(tokens): tok_ids = [get_id(tok) for tok in tokens] pad_len = max_tokens - len(tok_ids) return tok_ids + [pad_id] * pad_len # add new column to the dataframe train_df['token ids'] = train_df['tokens'].progress_map(token_ids) train_df # In[14]: max_tokens = dev_df['tokens'].map(len).max() dev_df['token ids'] = dev_df['tokens'].progress_map(token_ids) dev_df # Now we will get a numpy 2-dimensional array corresponding to the token ids, # and a 1-dimensional array with the gold classes. Note that the classes are one-based (i.e., they start at one), # but we need them to be zero-based, so we need to subtract one from this array. 
# In[15]: from torch.utils.data import Dataset class MyDataset(Dataset): def __init__(self, x, y): self.x = x self.y = y def __len__(self): return len(self.y) def __getitem__(self, index): x = torch.tensor(self.x[index]) y = torch.tensor(self.y[index]) return x, y # Next, we construct our PyTorch model, which is a feed-forward neural network with two layers: # In[16]: from torch import nn import torch.nn.functional as F class Model(nn.Module): def __init__(self, vectors, pad_id, hidden_dim, output_dim, dropout): super().__init__() # embeddings must be a tensor if not torch.is_tensor(vectors): vectors = torch.tensor(vectors) # keep padding id self.padding_idx = pad_id # embedding layer self.embs = nn.Embedding.from_pretrained(vectors, padding_idx=pad_id) # feedforward layers self.layers = nn.Sequential( nn.Dropout(dropout), nn.Linear(vectors.shape[1], hidden_dim), nn.ReLU(), nn.Dropout(dropout), nn.Linear(hidden_dim, output_dim), ) def forward(self, x): # get boolean array with padding elements set to false not_padding = torch.isin(x, self.padding_idx, invert=True) # get lengths of examples (excluding padding) lengths = torch.count_nonzero(not_padding, axis=1) # get embeddings x = self.embs(x) # calculate means x = x.sum(dim=1) / lengths.unsqueeze(dim=1) # pass to rest of the model output = self.layers(x) # calculate softmax if we're not in training mode #if not self.training: # output = F.softmax(output, dim=1) return output # Next, we implement the training procedure. We compute the loss and accuracy on the development partition after each epoch. # In[17]: from torch import optim from torch.utils.data import DataLoader from sklearn.metrics import accuracy_score # hyperparameters lr = 1e-3 weight_decay = 0 batch_size = 500 shuffle = True n_epochs = 5 hidden_dim = 50 output_dim = len(labels) dropout = 0.1 vectors = glove.vectors # initialize the model, loss function, optimizer, and data-loader model = Model(vectors, pad_id, hidden_dim, output_dim, dropout).to(device) loss_func = nn.CrossEntropyLoss() optimizer = optim.Adam(model.parameters(), lr=lr, weight_decay=weight_decay) train_ds = MyDataset(train_df['token ids'], train_df['class index'] - 1) train_dl = DataLoader(train_ds, batch_size=batch_size, shuffle=shuffle) dev_ds = MyDataset(dev_df['token ids'], dev_df['class index'] - 1) dev_dl = DataLoader(dev_ds, batch_size=batch_size, shuffle=shuffle) train_loss = [] train_acc = [] dev_loss = [] dev_acc = [] # train the model for epoch in range(n_epochs): losses = [] gold = [] pred = [] model.train() for X, y_true in tqdm(train_dl, desc=f'epoch {epoch+1} (train)'): # clear gradients model.zero_grad() # send batch to right device X = X.to(device) y_true = y_true.to(device) # predict label scores y_pred = model(X) # compute loss loss = loss_func(y_pred, y_true) # accumulate for plotting losses.append(loss.detach().cpu().item()) gold.append(y_true.detach().cpu().numpy()) pred.append(np.argmax(y_pred.detach().cpu().numpy(), axis=1)) # backpropagate loss.backward() # optimize model parameters optimizer.step() train_loss.append(np.mean(losses)) train_acc.append(accuracy_score(np.concatenate(gold), np.concatenate(pred))) model.eval() with torch.no_grad(): losses = [] gold = [] pred = [] for X, y_true in tqdm(dev_dl, desc=f'epoch {epoch+1} (dev)'): X = X.to(device) y_true = y_true.to(device) y_pred = model(X) loss = loss_func(y_pred, y_true) losses.append(loss.cpu().item()) gold.append(y_true.cpu().numpy()) pred.append(np.argmax(y_pred.cpu().numpy(), axis=1)) dev_loss.append(np.mean(losses)) 
dev_acc.append(accuracy_score(np.concatenate(gold), np.concatenate(pred))) # Let's plot the loss and accuracy on dev: # In[18]: import matplotlib.pyplot as plt get_ipython().run_line_magic('matplotlib', 'inline') x = np.arange(n_epochs) + 1 plt.plot(x, train_loss) plt.plot(x, dev_loss) plt.legend(['train', 'dev']) plt.xlabel('epoch') plt.ylabel('loss') plt.grid(True) # In[19]: plt.plot(x, train_acc) plt.plot(x, dev_acc) plt.legend(['train', 'dev']) plt.xlabel('epoch') plt.ylabel('accuracy') plt.grid(True) # Next, we evaluate on the testing partition: # In[20]: # repeat all preprocessing done above, this time on the test set test_df = pd.read_csv('data/ag_news_csv/test.csv', header=None) test_df.columns = ['class index', 'title', 'description'] test_df['text'] = test_df['title'].str.lower() + " " + test_df['description'].str.lower() test_df['text'] = test_df['text'].str.replace('\\', ' ', regex=False) test_df['tokens'] = test_df['text'].progress_map(word_tokenize) max_tokens = dev_df['tokens'].map(len).max() test_df['token ids'] = test_df['tokens'].progress_map(token_ids) # In[21]: from sklearn.metrics import classification_report # set model to evaluation mode model.eval() dataset = MyDataset(test_df['token ids'], test_df['class index'] - 1) data_loader = DataLoader(dataset, batch_size=batch_size) y_pred = [] # don't store gradients with torch.no_grad(): for X, _ in tqdm(data_loader): X = X.to(device) # predict one class per example y = torch.argmax(model(X), dim=1) # convert tensor to numpy array (sending it back to the cpu if needed) y_pred.append(y.cpu().numpy()) # print results print(classification_report(dataset.y, np.concatenate(y_pred), target_names=labels))
4,803
4,840
7
chap09-8
chap09-8
9 Implementing Text Classification Using Word Embeddings In the previous chapter we introduced word embeddings, which are real-valued vectors that encode semantic representations of words. We discussed how to learn them, and how they capture semantic information that makes them useful for downstream tasks. In this chapter we show how to use word embeddings that have been pretrained using a variant of the algorithm discussed in the previous chapter. We show how to load them, explore some of their characteristics, and show their application for a text classification task. As usual, the code for this chapter is available in our repository. It is organized into two notebooks: one corresponding to the explorations shown in the first half of this chapter (chap9_embeddings), and a second one in which we modify our previous classifier to use word embeddings (chap9_classification). 9.1 Pre-trained Word Embeddings There are several algorithms for training word embeddings, including the original word2vec algorithm (Mikolov et al., 2013a) (which we discussed in the previous chapter), GloVe (Pennington et al., 2014), and fastText (Bojanowski et al., 2017). They all provide the software for training the embeddings as well as pretrained word embeddings on their respective websites. In general, most open-domain word embeddings are trained on large corpora that cover a variety of topics such as Wikipedia1 and Gigaword.2 (1 https://en.wikipedia.org/wiki/Wikipedia:Database_download 2 https://catalog.ldc.upenn.edu/LDC2011T07) Commonly, these embeddings are freely distributed so that practitioners can use them in downstream tasks. We will use one such set of vectors in this chapter. Pretrained embeddings are usually distributed as a text file in which each line represents a word vector. The first element in the line is the word itself, and the rest of the elements are the vector components. This is usually referred to as the word2vec format. For example, Figure 9.1 shows the line in the glove.6B.50d.txt file (from the GloVe website) corresponding to the word house. (Figure 9.1: GloVe embedding corresponding to the word house, found in the GloVe file glove.6B.50d.txt. The vector is broken across several lines for display purposes, but it is a single line in the text file.) This vector is represented by the word itself, followed by 50 floating-point numbers corresponding to the 50-dimensional vector. Note that some embeddings files have a header line composed of two numbers: the number of vectors (i.e., the number of lines in the file) and the vector dimensionality. However, this is not always the case. For example, the original word2vec implementation includes this header line, but the more recent GloVe does not (probably because this information can be inferred from the content of the file).
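To make the format concrete, the following is a minimal sketch of how such a file could be read by hand. The helper name load_word2vec_text is ours, introduced only for illustration and not part of the book's code; in practice Gensim takes care of this parsing, as described next.

import numpy as np

def load_word2vec_text(path, expect_header=False):
    # each line: a word followed by its vector components, separated by spaces
    vectors = {}
    with open(path, encoding='utf-8') as f:
        if expect_header:
            f.readline()  # skip the "<number of vectors> <dimensionality>" header line
        for line in f:
            parts = line.rstrip().split(' ')
            vectors[parts[0]] = np.array(parts[1:], dtype=np.float32)
    return vectors

# e.g., load_word2vec_text('glove.6B.50d.txt')['house'] would be a 50-dimensional vector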
For the examples in the rest of the chapter, we will use the glove.6B.300d.txt embeddings that can be downloaded from the GloVe website.3 (3 https://nlp.stanford.edu/projects/glove/) This file provides 400,000 word embeddings of 300 dimensions, trained on texts from Wikipedia 2014 and Gigaword 5. We will begin our exploration of word embeddings using Gensim,4 a Python library that provides excellent support for loading and using word embeddings, among other more advanced features. (4 https://radimrehurek.com/gensim/) As shown in the sketch below, the embeddings are loaded and assigned to the glove variable. Note that we had to specify that this file doesn't contain the header that is usually present in the word2vec format. The glove.vectors attribute contains a 2-dimensional NumPy array with 400,000 rows and 300 columns, each row corresponding to a word embedding. 9.1.1 Word Similarity Gensim's KeyedVectors class provides a method called most_similar that receives a word, computes its cosine similarity to all other embeddings, and returns the topn most-similar words. By default, topn is set to 10. The example below shows the top 10 most-similar words to the word cactus, when using the 300-dimensional GloVe embeddings trained on Wikipedia and Gigaword. All ten most-similar words are related to cactus in different ways: cacti and cactuses are its plural forms; saguaro, peyote, opuntia, and prickly pear are types of cacti; and mesquite, shrubs, and succulents are other plants from arid climates. You can find more examples of word similarity queries in the Jupyter notebook that accompanies this chapter. Also, as an exercise, try loading a different set of embeddings trained with a different corpus (e.g., Twitter) to see if you obtain different results!
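The loading step and the similarity query just described look roughly as follows; this is a condensed sketch of the cells that appear in full in the chap9_embeddings notebook.

from gensim.models import KeyedVectors

# the GloVe file has no word2vec header, hence no_header=True
glove = KeyedVectors.load_word2vec_format('glove.6B.300d.txt', no_header=True)
print(glove.vectors.shape)  # (400000, 300)

# the ten most similar words to "cactus", ranked by cosine similarity
glove.most_similar('cactus')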
9.1.2 Word Analogies As we discussed in the previous chapter, the semantic information encoded by word embeddings captures much more than word similarity. To surface this additional information, we will use word analogies represented using additional vector operations. For example, a well-known analogy that highlights gender information is: $\vec{king} - \vec{man} \approx \vec{queen} - \vec{woman}$,5 or, in plain language: "man is to king what woman is to queen." From this, it immediately follows that one can subtract the meaning of man and add the meaning of woman to obtain the definition of female royalty: $\vec{king} - \vec{man} + \vec{woman} \approx \vec{queen}$. (5 A word with an arrow on top refers to the embedding vector corresponding to that word. Please see Section 1.4 for a summary of the notations used in this book.) The same most_similar method we've been using can be repurposed to find word analogies such as the one mentioned above. To this end, two sets of words have to be provided to the most_similar method: a list of positive words that should be added, and a list of negative words that should be subtracted. For example, the code below implements the left-hand side of the previous analogy:
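A sketch of that query, reusing the glove object loaded earlier (the same cell appears in the chap9_embeddings notebook):

# king - man + woman: positive words are added, negative words are subtracted
glove.most_similar(positive=['king', 'woman'], negative=['man'])
# the top-ranked result should be "queen"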
Another interesting analogy relation that shows how the embeddings have captured information about currencies is shown below. More examples are discussed in the Jupyter notebook. 9.1.3 Looking Under the Hood Let us understand now how these queries are actually implemented. First, we need to know what components we need. Clearly, we need the embedding vectors themselves. They are stored in the vectors attribute of the KeyedVectors object. As we mentioned previously, this is a 2-dimensional NumPy array, each row corresponding to a word in the vocabulary. These embeddings are not normalized, but normalized embeddings can be obtained using the get_normed_vectors method. We also need to know the mapping between words and the matrix rows. The KeyedVectors object stores this mapping in a list of terms called index_to_key, and a term-to-index dictionary called key_to_index. Below we show only the first 5 terms to save space, but you can inspect the whole vocabulary in the Jupyter notebook. 9.1.4 Word Similarity from Scratch Implementing the word similarity function ourselves is a good exercise to ensure that we understand how cosine similarity works, and to practice our NumPy skills. We will write a function called most_similar_words that will take a word, the embeddings matrix, the vocabulary in the form of the index_to_key list and key_to_index dictionary, and the number of similar words to return (which defaults to 10). The implementation of most_similar_words is straightforward. First, we find the word id for the given word, using the key_to_index dictionary. Then we retrieve the row from the vectors matrix that corresponds to that word. The next step is computing the cosine similarity between the word of interest and the rest of the vocabulary. Recall that the cosine similarity is equivalent to a dot product if the vectors are normalized. We use this equivalence by performing a matrix-vector multiplication between the word embedding and the embedding matrix using Python's at operator (denoted as @ in code). This means that we must pass the normalized embeddings as an argument to this function. Next, we need to sort the similarities while preserving the mapping to the words in the vocabulary. We achieve this using the argsort NumPy method, which returns the indices in sorted (ascending) order. Since we need them in descending order, the next step is to reverse this list of indices. Obviously, the most similar word to whichever word we're querying is the word itself, but that is not an interesting result, so we will remove it from the results. We do this by using NumPy's ability to index arrays using booleans. We first create a new array in which the position corresponding to the query word is set to False and every other element is set to True, and we use this boolean array to index the list of ids. Lastly, we create a list of tuples of the form (word, similarity) for the topn words, and return the results.
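Putting those steps together, a condensed sketch of most_similar_words might look like the following; the fully commented version appears in the chap9_embeddings notebook.

import numpy as np

def most_similar_words(word, vectors, index_to_key, key_to_index, topn=10):
    # vectors must be the normalized embeddings (e.g., glove.get_normed_vectors())
    word_id = key_to_index[word]
    emb = vectors[word_id]
    # cosine similarity reduces to a dot product for normalized vectors
    similarities = vectors @ emb
    # indices sorted by descending similarity
    ids_descending = similarities.argsort()[::-1]
    # drop the query word itself from the results
    ids_descending = ids_descending[ids_descending != word_id]
    return [(index_to_key[i], similarities[i]) for i in ids_descending[:topn]]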
Now we will test our implementation of word similarity using the word cactus. You can compare the results to the ones obtained by KeyedVectors's most_similar method. 9.1.5 Word Analogies from Scratch The implementation of the word analogy function is not that much different from our most_similar_words function above. The main difference between this function and most_similar_words is that now we have two lists of words that we need to combine into a single embedding. We first add the positive words into a single vector, and we do the same for the negative words. Then we subtract the negative vector from the positive one, and normalize the result. The similarity scores are computed the same way as before, but now we need to remove several words from the results, so this time we use NumPy's isin function, which checks for any of the words in given_word_ids. We then package the results the same way we did before, and return them. Now let's try our implementation with the same $\vec{king} - \vec{man} + \vec{woman}$ query we discussed previously. Please compare the results to the ones obtained by Gensim. 9.2 Text Classification with Pretrained Word Embeddings In this section we will continue using the AG News classification dataset introduced in previous chapters. Most of the data preparation is the same, up to tokenization. However, we need to remember that the embeddings were trained on a different corpus, so it would be a good idea to estimate how well they cover the words in the AG News dataset. To achieve this, we load the embeddings just like we did previously. Then we count the tokens in our corpus that do not appear in the embeddings vocabulary, as well as the total number of tokens. We use these numbers to print some informative statistics such as the proportion of unknown tokens in the corpus. We also print the top ten unknown tokens. You can use the Jupyter notebook to explore this task further. Our analysis indicates that only 1.25% of the tokens are not accounted for in the embeddings vocabulary. Further, the most common unknown words seem to be URL fragments. This is encouraging. However, for more robustness, we will introduce a couple of special embeddings that are often needed when dealing with word embeddings. The first one is an embedding used to represent unknown words. A common strategy is to use the average of all the embeddings in the vocabulary for this purpose. The second embedding we will add will be used for padding. Padding is required when we want to train with (mini-)batches because the lengths of all the examples in a given batch have to match in order for the batch to be efficiently processed in parallel. The padding embedding consists only of zeros, which essentially excludes these virtual tokens from the forward/backward passes. Neither of these embeddings is included in the pretrained GloVe embeddings, but other pretrained embeddings may already include them, so it is a good idea to check whether they are included with the embeddings we are using before adding them. The new embeddings were added at the end of the embedding collection, so their ids are 400,000 and 400,001. Now we need to generate a list of token ids for each training example. Recall that we decided to ignore tokens that appear fewer than 10 times, so we need to replace those with [UNK] too, even if they appear in the embedding vocabulary.
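The two special embeddings and the id conversion described above can be sketched as follows. This is a condensed version of the corresponding cells in the chap9_classification notebook; unlike the notebook, which uses a separate get_id helper and module-level variables, this sketch passes max_tokens and vocabulary in as parameters.

import numpy as np

# add [UNK] (average of all vectors) and [PAD] (all zeros) to the GloVe vocabulary
unk_tok, pad_tok = '[UNK]', '[PAD]'
unk_emb = glove.vectors.mean(axis=0)
pad_emb = np.zeros(glove.vectors.shape[1])
glove.add_vectors([unk_tok, pad_tok], [unk_emb, pad_emb])
unk_id = glove.key_to_index[unk_tok]
pad_id = glove.key_to_index[pad_tok]

# map tokens to ids, falling back to [UNK] for rare or out-of-vocabulary tokens,
# and pad every example to the same length
def token_ids(tokens, max_tokens, vocabulary):
    ids = [glove.key_to_index.get(t, unk_id) if t in vocabulary else unk_id for t in tokens]
    return ids + [pad_id] * (max_tokens - len(ids))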
9,339
9,440
#!/usr/bin/env python # coding: utf-8 # # Using Pre-trained Word Embeddings # # In this notebook we will show some operations on pre-trained word embeddings to gain an intuition about them. # # We will be using the pre-trained GloVe embeddings that can be found in the [official website](https://nlp.stanford.edu/projects/glove/). In particular, we will use the file `glove.6B.300d.txt` contained in this [zip file](https://nlp.stanford.edu/data/glove.6B.zip). # # We will first load the GloVe embeddings using [Gensim](https://radimrehurek.com/gensim/). Specifically, we will use [`KeyedVectors`](https://radimrehurek.com/gensim/models/keyedvectors.html)'s [`load_word2vec_format()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.load_word2vec_format) classmethod, which supports the original word2vec file format. # However, there is a difference in the file formats used by GloVe and word2vec, which is a header used by word2vec to indicate the number of embeddings and dimensions stored in the file. The file that stores the GloVe embeddings doesn't have this header, so we will have to address that when loading the embeddings. # # Loading the embeddings may take a little bit, so hang in there! # In[2]: from gensim.models import KeyedVectors fname = "glove.6B.300d.txt" glove = KeyedVectors.load_word2vec_format(fname, no_header=True) glove.vectors.shape # ## Word similarity # # One attribute of word embeddings that makes them useful is the ability to compare them using cosine similarity to find how similar they are. [`KeyedVectors`](https://radimrehurek.com/gensim/models/keyedvectors.html) objects provide a method called [`most_similar()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.most_similar) that we can use to find the closest words to a particular word of interest. By default, [`most_similar()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.most_similar) returns the 10 most similar words, but this can be changed using the `topn` parameter. # # Below we test this function using a few different words. # In[3]: # common noun glove.most_similar("cactus") # In[4]: # common noun glove.most_similar("cake") # In[5]: # adjective glove.most_similar("angry") # In[6]: # adverb glove.most_similar("quickly") # In[7]: # preposition glove.most_similar("between") # In[8]: # determiner glove.most_similar("the") # ## Word analogies # # Another characteristic of word embeddings is their ability to solve analogy problems. # The same [`most_similar()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.most_similar) method can be used for this task, by passing two lists of words: # a `positive` list with the words that should be added and a `negative` list with the words that should be subtracted. 
Using these arguments, the famous example $\vec{king} - \vec{man} + \vec{woman} \approx \vec{queen}$ can be executed as follows: # In[9]: # king - man + woman glove.most_similar(positive=["king", "woman"], negative=["man"]) # Here are a few other interesting analogies: # In[10]: # car - drive + fly glove.most_similar(positive=["car", "fly"], negative=["drive"]) # In[11]: # berlin - germany + australia glove.most_similar(positive=["berlin", "australia"], negative=["germany"]) # In[12]: # england - london + baghdad glove.most_similar(positive=["england", "baghdad"], negative=["london"]) # In[13]: # japan - yen + peso glove.most_similar(positive=["japan", "peso"], negative=["yen"]) # In[14]: # best - good + tall glove.most_similar(positive=["best", "tall"], negative=["good"]) # ## Looking under the hood # # Now that we are more familiar with the [`most_similar()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.most_similar) method, it is time to implement its functionality ourselves. # But first, we need to take a look at the different parts of the [`KeyedVectors`](https://radimrehurek.com/gensim/models/keyedvectors.html) object that we will need. # Obviously, we will need the vectors themselves. They are stored in the `vectors` attribute. # In[15]: glove.vectors.shape # As we can see above, `vectors` is a 2-dimensional matrix with 400,000 rows and 300 columns. # Each row corresponds to a 300-dimensional word embedding. These embeddings are not normalized, but normalized embeddings can be obtained using the [`get_normed_vectors()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.get_normed_vectors) method. # In[16]: normed_vectors = glove.get_normed_vectors() normed_vectors.shape # Now we need to map the words in the vocabulary to rows in the `vectors` matrix, and vice versa. # The [`KeyedVectors`](https://radimrehurek.com/gensim/models/keyedvectors.html) object has the attributes `index_to_key` and `key_to_index` which are a list of words and a dictionary of words to indices, respectively. # In[17]: #glove.index_to_key # In[18]: #glove.key_to_index # ## Word similarity from scratch # # Now we have everything we need to implement a `most_similar_words()` function that takes a word, the vector matrix, the `index_to_key` list, and the `key_to_index` dictionary. This function will return the 10 most similar words to the provided word, along with their similarity scores. 
# In[19]: import numpy as np def most_similar_words(word, vectors, index_to_key, key_to_index, topn=10): # retrieve word_id corresponding to given word word_id = key_to_index[word] # retrieve embedding for given word emb = vectors[word_id] # calculate similarities to all words in out vocabulary similarities = vectors @ emb # get word_ids in ascending order with respect to similarity score ids_ascending = similarities.argsort() # reverse word_ids ids_descending = ids_ascending[::-1] # get boolean array with element corresponding to word_id set to false mask = ids_descending != word_id # obtain new array of indices that doesn't contain word_id # (otherwise the most similar word to the argument would be the argument itself) ids_descending = ids_descending[mask] # get topn word_ids top_ids = ids_descending[:topn] # retrieve topn words with their corresponding similarity score top_words = [(index_to_key[i], similarities[i]) for i in top_ids] # return results return top_words # Now let's try the same example that we used above: the most similar words to "cactus". # In[20]: vectors = glove.get_normed_vectors() index_to_key = glove.index_to_key key_to_index = glove.key_to_index most_similar_words("cactus", vectors, index_to_key, key_to_index) # ## Analogies from scratch # # The `most_similar_words()` function behaves as expected. Now let's implement a function to perform the analogy task. We will give it the very creative name `analogy`. This function will get two lists of words (one for positive words and one for negative words), just like the [`most_similar()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.most_similar) method we discussed above. # In[21]: from numpy.linalg import norm def analogy(positive, negative, vectors, index_to_key, key_to_index, topn=10): # find ids for positive and negative words pos_ids = [key_to_index[w] for w in positive] neg_ids = [key_to_index[w] for w in negative] given_word_ids = pos_ids + neg_ids # get embeddings for positive and negative words pos_emb = vectors[pos_ids].sum(axis=0) neg_emb = vectors[neg_ids].sum(axis=0) # get embedding for analogy emb = pos_emb - neg_emb # normalize embedding emb = emb / norm(emb) # calculate similarities to all words in out vocabulary similarities = vectors @ emb # get word_ids in ascending order with respect to similarity score ids_ascending = similarities.argsort() # reverse word_ids ids_descending = ids_ascending[::-1] # get boolean array with element corresponding to any of given_word_ids set to false given_words_mask = np.isin(ids_descending, given_word_ids, invert=True) # obtain new array of indices that doesn't contain any of the given_word_ids ids_descending = ids_descending[given_words_mask] # get topn word_ids top_ids = ids_descending[:topn] # retrieve topn words with their corresponding similarity score top_words = [(index_to_key[i], similarities[i]) for i in top_ids] # return results return top_words # Let's try this function with the $\vec{king} - \vec{man} + \vec{woman} \approx \vec{queen}$ example we discussed above. # In[22]: positive = ["king", "woman"] negative = ["man"] vectors = glove.get_normed_vectors() index_to_key = glove.index_to_key key_to_index = glove.key_to_index analogy(positive, negative, vectors, index_to_key, key_to_index) # In[ ]:
6,471
6,583
8
chap09-9
chap09-9
13,150
13,264
#!/usr/bin/env python # coding: utf-8 # # Multiclass Text Classification with # # Feed-forward Neural Networks and Word Embeddings # First, we will do some initialization. # In[1]: import random import torch import numpy as np import pandas as pd from tqdm.notebook import tqdm # enable tqdm in pandas tqdm.pandas() # set to True to use the gpu (if there is one available) use_gpu = True # select device device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu') print(f'device: {device.type}') # random seed seed = 1234 # set random seed if seed is not None: print(f'random seed: {seed}') random.seed(seed) np.random.seed(seed) torch.manual_seed(seed) # We will be using the AG's News Topic Classification Dataset. # It is stored in two CSV files: `train.csv` and `test.csv`, as well as a `classes.txt` that stores the labels of the classes to predict. # # First, we will load the training dataset using [pandas](https://pandas.pydata.org/) and take a quick look at how the data. # In[2]: train_df = pd.read_csv('data/ag_news_csv/train.csv', header=None) train_df.columns = ['class index', 'title', 'description'] train_df # The dataset consists of 120,000 examples, each consisting of a class index, a title, and a description. # The class labels are distributed in a separated file. We will add the labels to the dataset so that we can interpret the data more easily. Note that the label indexes are one-based, so we need to subtract one to retrieve them from the list. # In[3]: labels = open('data/ag_news_csv/classes.txt').read().splitlines() classes = train_df['class index'].map(lambda i: labels[i-1]) train_df.insert(1, 'class', classes) train_df # Let's inspect how balanced our examples are by using a bar plot. # In[4]: pd.value_counts(train_df['class']).plot.bar() # The classes are evenly distributed. That's great! # # However, the text contains some spurious backslashes in some parts of the text. # They are meant to represent newlines in the original text. # An example can be seen below, between the words "dwindling" and "band". # In[5]: print(train_df.loc[0, 'description']) # We will replace the backslashes with spaces on the whole column using pandas replace method. # In[6]: train_df['text'] = train_df['title'].str.lower() + " " + train_df['description'].str.lower() train_df['text'] = train_df['text'].str.replace('\\', ' ', regex=False) train_df # Now we will proceed to tokenize the title and description columns using NLTK's word_tokenize(). # We will add a new column to our dataframe with the list of tokens. # In[7]: from nltk.tokenize import word_tokenize train_df['tokens'] = train_df['text'].progress_map(word_tokenize) train_df # Now we will load the GloVe word embeddings. # In[8]: from gensim.models import KeyedVectors glove = KeyedVectors.load_word2vec_format("glove.6B.300d.txt", no_header=True) glove.vectors.shape # The word embeddings have been pretrained in a different corpus, so it would be a good idea to estimate how good our tokenization matches the GloVe vocabulary. 
# In[9]: from collections import Counter def count_unknown_words(data, vocabulary): counter = Counter() for row in tqdm(data): counter.update(tok for tok in row if tok not in vocabulary) return counter # find out how many times each unknown token occurrs in the corpus c = count_unknown_words(train_df['tokens'], glove.key_to_index) # find the total number of tokens in the corpus total_tokens = train_df['tokens'].map(len).sum() # find some statistics about occurrences of unknown tokens unk_tokens = sum(c.values()) percent_unk = unk_tokens / total_tokens distinct_tokens = len(list(c)) print(f'total number of tokens: {total_tokens:,}') print(f'number of unknown tokens: {unk_tokens:,}') print(f'number of distinct unknown tokens: {distinct_tokens:,}') print(f'percentage of unkown tokens: {percent_unk:.2%}') print('top 50 unknown words:') for token, n in c.most_common(10): print(f'\t{n}\t{token}') # Glove embeddings seem to have a good coverage on this dataset -- only 1.25% of the tokens in the dataset are unknown, i.e., don't appear in the GloVe vocabulary. # # Still, we will need a way to handle these unknown tokens. # Our approach will be to add a new embedding to GloVe that will be used to represent them. # This new embedding will be initialized as the average of all the GloVe embeddings. # # We will also add another embedding, this one initialized to zeros, that will be used to pad the sequences of tokens so that they all have the same length. This will be useful when we train with mini-batches. # In[10]: # string values corresponding to the new embeddings unk_tok = '[UNK]' pad_tok = '[PAD]' # initialize the new embedding values unk_emb = glove.vectors.mean(axis=0) pad_emb = np.zeros(300) # add new embeddings to glove glove.add_vectors([unk_tok, pad_tok], [unk_emb, pad_emb]) # get token ids corresponding to the new embeddings unk_id = glove.key_to_index[unk_tok] pad_id = glove.key_to_index[pad_tok] unk_id, pad_id # In[11]: from sklearn.model_selection import train_test_split train_df, dev_df = train_test_split(train_df, train_size=0.8) train_df.reset_index(inplace=True) dev_df.reset_index(inplace=True) # We will now add a new column to our dataframe that will contain the padded sequences of token ids. # In[12]: threshold = 10 tokens = train_df['tokens'].explode().value_counts() vocabulary = set(tokens[tokens > threshold].index.tolist()) print(f'vocabulary size: {len(vocabulary):,}') # In[13]: # find the length of the longest list of tokens max_tokens = train_df['tokens'].map(len).max() # return unk_id for infrequent tokens too def get_id(tok): if tok in vocabulary: return glove.key_to_index.get(tok, unk_id) else: return unk_id # function that gets a list of tokens and returns a list of token ids, # with padding added accordingly def token_ids(tokens): tok_ids = [get_id(tok) for tok in tokens] pad_len = max_tokens - len(tok_ids) return tok_ids + [pad_id] * pad_len # add new column to the dataframe train_df['token ids'] = train_df['tokens'].progress_map(token_ids) train_df # In[14]: max_tokens = dev_df['tokens'].map(len).max() dev_df['token ids'] = dev_df['tokens'].progress_map(token_ids) dev_df # Now we will get a numpy 2-dimensional array corresponding to the token ids, # and a 1-dimensional array with the gold classes. Note that the classes are one-based (i.e., they start at one), # but we need them to be zero-based, so we need to subtract one from this array. 
# In[15]: from torch.utils.data import Dataset class MyDataset(Dataset): def __init__(self, x, y): self.x = x self.y = y def __len__(self): return len(self.y) def __getitem__(self, index): x = torch.tensor(self.x[index]) y = torch.tensor(self.y[index]) return x, y # Next, we construct our PyTorch model, which is a feed-forward neural network with two layers: # In[16]: from torch import nn import torch.nn.functional as F class Model(nn.Module): def __init__(self, vectors, pad_id, hidden_dim, output_dim, dropout): super().__init__() # embeddings must be a tensor if not torch.is_tensor(vectors): vectors = torch.tensor(vectors) # keep padding id self.padding_idx = pad_id # embedding layer self.embs = nn.Embedding.from_pretrained(vectors, padding_idx=pad_id) # feedforward layers self.layers = nn.Sequential( nn.Dropout(dropout), nn.Linear(vectors.shape[1], hidden_dim), nn.ReLU(), nn.Dropout(dropout), nn.Linear(hidden_dim, output_dim), ) def forward(self, x): # get boolean array with padding elements set to false not_padding = torch.isin(x, self.padding_idx, invert=True) # get lengths of examples (excluding padding) lengths = torch.count_nonzero(not_padding, axis=1) # get embeddings x = self.embs(x) # calculate means x = x.sum(dim=1) / lengths.unsqueeze(dim=1) # pass to rest of the model output = self.layers(x) # calculate softmax if we're not in training mode #if not self.training: # output = F.softmax(output, dim=1) return output # Next, we implement the training procedure. We compute the loss and accuracy on the development partition after each epoch. # In[17]: from torch import optim from torch.utils.data import DataLoader from sklearn.metrics import accuracy_score # hyperparameters lr = 1e-3 weight_decay = 0 batch_size = 500 shuffle = True n_epochs = 5 hidden_dim = 50 output_dim = len(labels) dropout = 0.1 vectors = glove.vectors # initialize the model, loss function, optimizer, and data-loader model = Model(vectors, pad_id, hidden_dim, output_dim, dropout).to(device) loss_func = nn.CrossEntropyLoss() optimizer = optim.Adam(model.parameters(), lr=lr, weight_decay=weight_decay) train_ds = MyDataset(train_df['token ids'], train_df['class index'] - 1) train_dl = DataLoader(train_ds, batch_size=batch_size, shuffle=shuffle) dev_ds = MyDataset(dev_df['token ids'], dev_df['class index'] - 1) dev_dl = DataLoader(dev_ds, batch_size=batch_size, shuffle=shuffle) train_loss = [] train_acc = [] dev_loss = [] dev_acc = [] # train the model for epoch in range(n_epochs): losses = [] gold = [] pred = [] model.train() for X, y_true in tqdm(train_dl, desc=f'epoch {epoch+1} (train)'): # clear gradients model.zero_grad() # send batch to right device X = X.to(device) y_true = y_true.to(device) # predict label scores y_pred = model(X) # compute loss loss = loss_func(y_pred, y_true) # accumulate for plotting losses.append(loss.detach().cpu().item()) gold.append(y_true.detach().cpu().numpy()) pred.append(np.argmax(y_pred.detach().cpu().numpy(), axis=1)) # backpropagate loss.backward() # optimize model parameters optimizer.step() train_loss.append(np.mean(losses)) train_acc.append(accuracy_score(np.concatenate(gold), np.concatenate(pred))) model.eval() with torch.no_grad(): losses = [] gold = [] pred = [] for X, y_true in tqdm(dev_dl, desc=f'epoch {epoch+1} (dev)'): X = X.to(device) y_true = y_true.to(device) y_pred = model(X) loss = loss_func(y_pred, y_true) losses.append(loss.cpu().item()) gold.append(y_true.cpu().numpy()) pred.append(np.argmax(y_pred.cpu().numpy(), axis=1)) dev_loss.append(np.mean(losses)) 
    dev_acc.append(accuracy_score(np.concatenate(gold), np.concatenate(pred)))

# Let's plot the loss and accuracy on dev:

# In[18]:

import matplotlib.pyplot as plt
get_ipython().run_line_magic('matplotlib', 'inline')

x = np.arange(n_epochs) + 1

plt.plot(x, train_loss)
plt.plot(x, dev_loss)
plt.legend(['train', 'dev'])
plt.xlabel('epoch')
plt.ylabel('loss')
plt.grid(True)

# In[19]:

plt.plot(x, train_acc)
plt.plot(x, dev_acc)
plt.legend(['train', 'dev'])
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.grid(True)

# Next, we evaluate on the testing partition:

# In[20]:

# repeat all preprocessing done above, this time on the test set
test_df = pd.read_csv('data/ag_news_csv/test.csv', header=None)
test_df.columns = ['class index', 'title', 'description']
test_df['text'] = test_df['title'].str.lower() + " " + test_df['description'].str.lower()
test_df['text'] = test_df['text'].str.replace('\\', ' ', regex=False)
test_df['tokens'] = test_df['text'].progress_map(word_tokenize)
# pad to the longest example in the test partition
max_tokens = test_df['tokens'].map(len).max()
test_df['token ids'] = test_df['tokens'].progress_map(token_ids)

# In[21]:

from sklearn.metrics import classification_report

# set model to evaluation mode
model.eval()

dataset = MyDataset(test_df['token ids'], test_df['class index'] - 1)
data_loader = DataLoader(dataset, batch_size=batch_size)

y_pred = []
# don't store gradients
with torch.no_grad():
    for X, _ in tqdm(data_loader):
        X = X.to(device)
        # predict one class per example
        y = torch.argmax(model(X), dim=1)
        # convert tensor to numpy array (sending it back to the cpu if needed)
        y_pred.append(y.cpu().numpy())

# print results
print(classification_report(dataset.y, np.concatenate(y_pred), target_names=labels))
9 Implementing Text Classification Using Word Embeddings

In the previous chapter we introduced word embeddings, which are real-valued vectors that encode semantic representations of words. We discussed how to learn them, and how they capture semantic information that makes them useful for downstream tasks. In this chapter we show how to use word embeddings that have been pretrained using a variant of the algorithm discussed in the previous chapter. We show how to load them, explore some of their characteristics, and show their application for a text classification task. As usual, the code for this chapter is available in our repository. It is organized into two notebooks: one corresponding to the explorations shown in the first half of this chapter (chap9_embeddings), and a second one in which we modify our previous classifier to use word embeddings (chap9_classification).

9.1 Pre-trained Word Embeddings

There are several algorithms for training word embeddings, including the original word2vec algorithm (Mikolov et al., 2013a) (which we discussed in the previous chapter), GloVe (Pennington et al., 2014), and fastText (Bojanowski et al., 2017). They all provide the software for training the embeddings as well as pretrained word embeddings on their respective websites. In general, most open-domain word embeddings are trained on large corpora that cover a variety of topics such as Wikipedia1 and Gigaword.2 Commonly, these embeddings are freely distributed so that practitioners can use them in downstream tasks. We will use one such set of vectors in this chapter.

1 https://en.wikipedia.org/wiki/Wikipedia:Database_download
2 https://catalog.ldc.upenn.edu/LDC2011T07

house 0.60137 0.28521 -0.032038 -0.43026 0.74806 0.26223 -0.97361 0.078581 -0.57588 -1.188 -1.8507 -0.24887 0.055549 0.0086155 0.067951 0.40554 -0.073998 -0.21318 0.37167 -0.71791 1.2234 0.35546 -0.41537 -0.21931 -0.39661 -1.7831 -0.41507 0.29533 -0.41254 0.020096 2.7425 -0.9926 -0.71033 -0.46813 0.28265 -0.077639 0.3041 -0.06644 0.3951 -0.70747 -0.38894 0.23158 -0.49508 0.14612 -0.02314 0.56389 -0.86188 -1.0278 0.039922 0.20018

Figure 9.1 GloVe embedding corresponding to the word house, found in the GloVe file glove.6B.50d.txt. We have broken the vector into several lines for display purposes, but this is a single line in the text file.

Pretrained embeddings are usually distributed as a text file in which each line represents a word vector. The first element in the line is the word itself, and the rest of the elements are the vector components. This is usually referred to as the word2vec format. For example, Figure 9.1 shows the line in the glove.6B.50d.txt file (from the GloVe website) corresponding to the word house. This vector is represented by the word itself, followed by 50 floating-point numbers corresponding to the 50-dimensional vector. Note that some embeddings files have a header line composed of two numbers: the number of vectors (i.e., the number of lines in the file) and the vector dimensionality. However, this is not always the case. For example, the original word2vec implementation includes this header line, but the more recent GloVe does not (probably because this information can be inferred from the content of the file).

For the examples in the rest of the chapter, we will use the glove.6B.300d.txt embeddings that can be downloaded from the GloVe website.3 This file provides 400,000 word embeddings of 300 dimensions trained on texts from Wikipedia 2014 and Gigaword 5. We will begin our exploration of word embeddings using Gensim,4 a Python library that provides excellent support for loading and using word embeddings, among other more advanced features.

3 https://nlp.stanford.edu/projects/glove/
4 https://radimrehurek.com/gensim/
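The loading step itself is a single call to Gensim's KeyedVectors; the sketch below assumes that glove.6B.300d.txt has been downloaded into the working directory, mirroring the call used in the chapter notebook:

from gensim.models import KeyedVectors

# load the 300-dimensional GloVe vectors; no_header=True because GloVe files
# do not include the word2vec-style header line
glove = KeyedVectors.load_word2vec_format('glove.6B.300d.txt', no_header=True)

# a 2-dimensional NumPy array with 400,000 rows and 300 columns
glove.vectors.shape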
As we can see, the embeddings have been loaded and assigned to the glove variable. Note that we had to specify that this file doesn't contain the header that is usually present in the word2vec format. The glove.vectors attribute contains a 2-dimensional NumPy array with 400,000 rows and 300 columns, each row corresponding to a word embedding.

9.1.1 Word Similarity

Gensim's KeyedVectors class provides a method called most_similar that receives a word, computes its cosine similarity to all other embeddings, and returns the topn most-similar words. By default, topn is set to 10.
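As a minimal example, a similarity query for the word cactus can be issued as shown below; the exact neighbors and scores depend on the embeddings used, so the output is not reproduced here:

# top 10 words most similar to 'cactus', ranked by cosine similarity
glove.most_similar('cactus')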
The example above shows the top 10 most-similar words to the word cactus, when using the 300-dimensional GloVe embeddings trained on Wikipedia and Gigaword. All ten most-similar words are related to cactus in different ways: cacti and cactuses are its plural forms; saguaro, peyote, opuntia, and prickly pear are types of cacti; and mesquite, shrubs, and succulents are other plants from arid climates. You can find more examples of word similarity queries in the Jupyter notebook that accompanies this chapter. Also, as an exercise, try loading a different set of embeddings trained on a different corpus (e.g., Twitter) to see if you obtain different results!

9.1.2 Word Analogies

As we discussed in the previous chapter, the semantic information encoded by word embeddings captures much more than word similarity. To surface this additional information, we will use word analogies represented using additional vector operations. For example, a well-known analogy that highlights gender information is: $\vec{king} - \vec{man} \approx \vec{queen} - \vec{woman}$,5 or, in plain language: "man is to king what woman is to queen." From this, it immediately follows that one can subtract the meaning of man and add the meaning of woman to obtain the definition of female royalty: $\vec{king} - \vec{man} + \vec{woman} \approx \vec{queen}$.

5 A word with an arrow on top refers to the embedding vector corresponding to that word. Please see Section 1.4 for a summary of the notations used in this book.

The same most_similar method we've been using can be repurposed to find word analogies such as the one mentioned above. To this end, two sets of words have to be provided to the most_similar method: a list of positive words that should be added, and a list of negative words that should be subtracted. For example, the code below implements the left-hand side of the previous analogy:
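A sketch of this query with Gensim follows; the positive words are added and the negative words subtracted before candidates are ranked by cosine similarity (queen should appear at the top of the results):

# king - man + woman
glove.most_similar(positive=['king', 'woman'], negative=['man'])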
Another interesting analogy relation, which shows how the embeddings have captured information about currencies, is included in the Jupyter notebook, where more examples are also discussed.

9.1.3 Looking Under the Hood

Let us understand now how these queries are actually implemented. First, we need to know what components we need. Clearly, we need the embedding vectors themselves. They are stored in the vectors attribute of the KeyedVectors object. As we mentioned previously, this is a 2-dimensional NumPy array, each row corresponding to a word in the vocabulary. These embeddings are not normalized, but normalized embeddings can be obtained using the get_normed_vectors method. We also need to know the mapping between words and the matrix rows. The KeyedVectors object stores this mapping in a list of terms called index_to_key, and a term-to-index dictionary called key_to_index. The notebook shows only the first 5 terms to save space, but you can inspect the whole vocabulary there.

9.1.4 Word Similarity from Scratch

Implementing the word similarity function ourselves is a good exercise to ensure that we understand how cosine similarity works, and to practice our NumPy skills. We will write a function called most_similar_words that will take a word, the embeddings matrix, the vocabulary in the form of the index_to_key list and key_to_index dictionary, and the number of similar words to return (defaults to 10).

The implementation of most_similar_words is straightforward. First, we find the word id for the given word, using the key_to_index dictionary. Then we retrieve the row from the vectors matrix that corresponds to that word. The next step is computing the cosine similarity between the word of interest and the rest of the vocabulary. Recall that the cosine similarity is equivalent to a dot product if the vectors are normalized. We use this equivalence by performing a matrix-vector multiplication between the word embedding and the embedding matrix using Python's at operator (denoted as @ in code). This means that we must pass the normalized embeddings as an argument to this function. Next, we need to sort the similarities, preserving the mapping to the words in the vocabulary. We achieve this using the argsort NumPy method, which returns the indices in sorted (ascending) order. Since we need them in descending order, the next step is to reverse this list of indices. Obviously, the most similar word to whichever word we're querying is the word itself, but that is not an interesting result, so we will remove it from the results. We do this by using NumPy's ability to index arrays using booleans. We first create a new array in which the position corresponding to the query word is set to False and every other element is set to True, and we use this boolean array to index the list of ids. Lastly, we create a list of tuples of the form (word, similarity) for the topn words, and return the results.

Now we will test our implementation of word similarity using the word cactus. You can compare the results to the ones obtained by KeyedVectors's most_similar method.
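The sketch below follows the steps just described; the function name and its parameters come from the text, but the exact code in the chap9_embeddings notebook may differ in small details:

def most_similar_words(word, vectors, index_to_key, key_to_index, topn=10):
    # vectors is assumed to contain *normalized* embeddings,
    # e.g., the output of glove.get_normed_vectors()
    word_id = key_to_index[word]
    emb = vectors[word_id]
    # cosine similarity reduces to a dot product for normalized vectors
    similarities = vectors @ emb
    # indices sorted by similarity, in descending order
    ids_descending = similarities.argsort()[::-1]
    # drop the query word itself from the results
    mask = ids_descending != word_id
    ids_descending = ids_descending[mask]
    top_ids = ids_descending[:topn]
    return [(index_to_key[i], similarities[i]) for i in top_ids]

# test on 'cactus' and compare to glove.most_similar('cactus')
normed_vectors = glove.get_normed_vectors()
most_similar_words('cactus', normed_vectors, glove.index_to_key, glove.key_to_index)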
9.1.5 Word Analogies from Scratch

The implementation of the word analogy function is not that much different from our most_similar_words function above. The main difference is that now we have two lists of words that we need to combine into a single embedding. We first add the positive words into a single vector, and we do the same for the negative words. Then we subtract the negative vector from the positive one, and normalize the result. The similarity scores are computed the same way as before, but now we need to remove several words from the results, so this time we use NumPy's isin function, which checks for any of the words in given_word_ids. We then package the results the same way we did before, and return them.

Now let's try our implementation with the same $\vec{king} - \vec{man} + \vec{woman}$ query we discussed previously. Please compare the results to the ones obtained by Gensim.
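A sketch consistent with this description is shown below. The variable given_word_ids follows the text, but the helper name analogy is an assumption (the text does not give the function's name), and the notebook's exact implementation may differ slightly:

import numpy as np

def analogy(positive, negative, vectors, index_to_key, key_to_index, topn=10):
    # vectors is assumed to contain normalized embeddings
    pos_ids = [key_to_index[w] for w in positive]
    neg_ids = [key_to_index[w] for w in negative]
    given_word_ids = pos_ids + neg_ids
    # combine the two word lists into a single query vector and normalize it
    emb = vectors[pos_ids].sum(axis=0) - vectors[neg_ids].sum(axis=0)
    emb = emb / np.linalg.norm(emb)
    # similarity scores, computed the same way as before
    similarities = vectors @ emb
    ids_descending = similarities.argsort()[::-1]
    # remove all the query words from the results
    keep = np.isin(ids_descending, given_word_ids, invert=True)
    ids_descending = ids_descending[keep]
    top_ids = ids_descending[:topn]
    return [(index_to_key[i], similarities[i]) for i in top_ids]

# king - man + woman; 'queen' should appear near the top
normed_vectors = glove.get_normed_vectors()
analogy(['king', 'woman'], ['man'], normed_vectors, glove.index_to_key, glove.key_to_index)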
9.2 Text Classification with Pretrained Word Embeddings

In this section we will continue using the AG News classification dataset introduced in previous chapters. Most of the data preparation is the same, up to tokenization. However, we need to remember that the embeddings were trained on a different corpus, so it would be a good idea to estimate how well they cover the words in the AG News dataset. To achieve this, we load the embeddings just like we did previously. Then we count the tokens in our corpus that do not appear in the embeddings vocabulary, as well as the total number of tokens. We use these numbers to print some informative statistics such as the proportion of unknown tokens in the corpus. We also print the top ten unknown tokens. You can use the Jupyter notebook to explore this task further.

Our analysis indicates that only 1.25% of the tokens are not accounted for in the embeddings vocabulary. Further, the most common unknown words seem to be URL fragments. This is encouraging. However, for more robustness, we will introduce a couple of special embeddings that are often needed when dealing with word embeddings. The first one is an embedding used to represent unknown words. A common strategy is to use the average of all the embeddings in the vocabulary for this purpose. The second embedding we will add will be used for padding. Padding is required when we want to train with (mini-)batches because the lengths of all the examples in a given batch have to match in order for the batch to be efficiently processed in parallel. The padding embedding consists only of zeros, which essentially excludes these virtual tokens from the forward/backward passes. None of these embeddings are included in the pretrained GloVe embeddings, but other pretrained embeddings may already include them, so it is a good idea to check whether they are included with the embeddings we are using before adding them. The new embeddings were added at the end of the embedding collection, so their ids are 400,000 and 400,001.

Now we need to generate a list of token ids for each training example. Recall that we decided to ignore tokens that appear fewer than 10 times, so we need to replace those with [UNK] too, even if they appear in the embedding vocabulary.

Next, we create a Dataset object from the padded lists of token ids. This one is even easier, since the lists of token ids are ready; all that is required is turning them into tensors.

Lastly, we need to modify the model class to indicate that we now use embedding vectors. To this end, we will add an nn.Embedding layer that stores the embedding vectors for all words in the vocabulary. We will use this object to look up embeddings by their token ids. This layer will be initialized from a tensor containing the pretrained embeddings for the entire vocabulary. Also, the pad_id is specified when creating the embedding layer. When an nn.Embedding layer gets initialized using the from_pretrained method with other arguments set to default values, the embeddings are not updated during training. We will keep it that way for this example, but that could be changed by setting the freeze parameter to False. The rest of the layers are the same as in our previous example from Chapter 7, i.e., one intermediate layer and one output layer, with a nonlinearity (ReLU) between them. The only major difference is that now the input size of the intermediate layer is the size of one embedding (e.g., 300) instead of the size of the vocabulary like last time. This is because, as we explain below, the intermediate layer receives an average of the numerical representations of the words in the current text.

The forward function of the Model class changes significantly. This time we are encoding the text as an average of the embeddings of all the words it contains. To compute the denominator of this average, we obtain the length of each text by counting how many of its words are not the virtual padding token. Then we sum all the embeddings and divide by the number of non-padding tokens. Adding all embeddings is safe, because the padding embeddings consist of zeros. This process leaves us with a single embedding for the whole text, which is then passed to the rest of the layers.

The training and evaluation steps are the same as before. The results of this model on the AG News test partition can be inspected in the notebook output. Comparing these results with the ones obtained by the multilayer perceptron with explicit features in Chapter 7, we observe that on this particular task utilizing embeddings as features does not yield a performance improvement. Notably, this is a small dataset and a rather simplistic task, where the presence of certain words is sufficient to distinguish the category of an article (e.g., the word basketball is highly indicative of the label Sports). Nevertheless, in other tasks where distinctions are more nuanced, or in which there is less likely to be word overlap between texts of interest, word embeddings do provide a necessary signal. Additionally, when there are class imbalances, word embeddings can supplement underrepresented classes by bringing in the external knowledge gained during their pretraining.

9.3 Summary

In this chapter we showed how to explore the semantic space encoded by word embeddings through word similarity and analogies, as well as one way to use them for text classification. At this point we have not taken into consideration the order in which the words appear, i.e., we averaged the embeddings for all the words in the text using a bag-of-words representation of text. In subsequent chapters we will explore how to incorporate word order into the learned representations of text.
7,581
7,817
10
chap09-11
chap09-11
9 Implementing Text Classification Using Word Embeddings In the previous chapter we introduced word embeddings, which are realvalued vectors that encode semantic representation of words. We discussed how to learn them, and how they capture semantic information that makes them useful for downstream tasks. In this chapter we show how to use word embeddings that have been pretrained using a variant of the algorithm discussed in the previous chapter. We show how to load them, explore some of their characteristics, and show their application for a text classification task. As usual, the code for this chapter is available in our repository. It is organized into two notebooks: one corresponding to the explorations shown in the first half of this chapter (chap9_embeddings), and a second one in which we modify our previous classifier to use word embeddings (chap9_classification). 9.1 Pre-trained Word Embeddings There are several algorithms for training word embeddings, including the original word2vec algorithm (Mikolov et al., 2013a) (which we discussed in the previous chapter), GloVe (Pennington et al., 2014), and fastText (Bojanowski et al., 2017). They all provide the software for training the embeddings as well as pretrained word embeddings on their respective websites. In general, most open-domain word embeddings are trained on large corpora that cover a variety of topics such as Wikipedia1 and Gigaword.2 Commonly, these embeddings are freely distributed so 1 https://en.wikipedia.org/wiki/Wikipedia:Database_download 2 https://catalog.ldc.upenn.edu/LDC2011T07 133 134 Implementing Text Classification Using Word Embeddings house 0.60137 0.28521 -0.032038 -0.43026 0.74806 0.26223 -0.97361 0.078581 -0.57588 -1.188 -1.8507 -0.24887 0.055549 0.0086155 0.067951 0.40554 -0.073998 -0.21318 0.37167 -0.71791 1.2234 0.35546 -0.41537 -0.21931 -0.39661 -1.7831 -0.41507 0.29533 -0.41254 0.020096 2.7425 -0.9926 -0.71033 -0.46813 0.28265 -0.077639 0.3041 -0.06644 0.3951 -0.70747 -0.38894 0.23158 -0.49508 0.14612 -0.02314 0.56389 -0.86188 -1.0278 0.039922 0.20018 Figure 9.1 GloVe embedding corresponding to the word house, found in the GloVe file glove.6B.50d.txt. We have broken the vector in several lines for display purposes, but this is a single line in the text file. that practitioners can use them in downstream tasks. We will use one such set of vectors in this chapter. Pretrained embeddings are usually distributed as a text file in which each line represents a word vector. The first element in the line is the word itself, and the rest of the elements are the vector components. This is usually referred to as the word2vec format. For example, Figure 9.1 shows the line in the glove.6B.50d.txt file (from the GloVe website) corresponding to the word house. This vector is represented by the word itself, followed by 50 floating-point numbers corresponding to the 50dimensional vector. Note that some embeddings files have a header line composed of two numbers: the number of vectors (i.e., the number of lines in the file), and the vector dimensionality. However, this is not always the case. For example, the original word2vec implementation includes this header line, but the more recent GloVe does not (probably because this information can be inferred from the content of the file). For the examples in the rest of the chapter, we will use the glove.6B.300d.txt embeddings that can be downloaded from the GloVe website.3 This file provides 400,000 word embeddings of 300-dimensions trained on texts
from Wikipedia 2014 and Gigaword 5. We will begin our exploration of word embeddings using Gensim,4 a Python library that provides excellent support for loading and using word embeddings, among other more advanced features. As we can see, the embeddings have been loaded and assigned to the glove variable. Note that we had to specify that this file doesn’t contain the header that is usually present in the word2vec format. The 3 https://nlp.stanford.edu/projects/glove/ 4 https://radimrehurek.com/gensim/ 9.1 Pre-trained Word Embeddings 135 glove.vectors attribute contains a 2-dimensional NumPy array with 400,000 rows and 300 columns, each row corresponding to a word embedding. 9.1.1 Word Similarity Gensim’s KeyedVectors class provides a method called most_similar that receives a word and computes its cosine similarity to all other embeddings, and returns the topn most-similar words. By default, topn is set to 10. The example above shows the top 10 most-similar words to the word cactus, when using the 300-dimension GloVe embeddings trained on Wikipedia and Gigaword. All ten most-similar words are related to cactus in different ways: cacti and cactuses are its plural forms; saguaro, peyote, opuntia, and prickly pear are types of cacti; and mesquite, shrubs, and succulents are other plants from arid climates. You can find more examples of word similarity queries in the Jupyter notebook that accompanies this chapter. Also, as an exercise, try loading a different set of embeddings trained with a different corpus (e.g., Twitter) to see if you obtain different results! 9.1.2 Word Analogies As we discussed in the previous chapter, the semantic information en- coded by word embeddings captures much more than word similar- ity. To surface this additional information, we will use word analogies represented using additional vector operations. For example, a well- ⃗
known analogy that highlights gender information is: king − m⃗an ≈ qu⃗een−wom⃗an,5or,inplainlanguage:“manistokingwhatwomanis to queen.” From this, it immediately follows that one can subtract the meaning of man and add the meaning of woman to obtain the definition ⃗ offemaleroyalty:king−m⃗an+wom⃗an≈qu⃗een.
 The same most_similar method we’ve been using can be repurposed to find word analogies such as the one mentioned above. To this end, two sets of words have to be provided to the most_similar method: a list of positive words that should be added, and a list of negative words 5 A word with an arrow on top refers to the embedding vector corresponding to that word. Please see Section 1.4 for a summary of the notations used in this book. 136 Implementing Text Classification Using Word Embeddings that should be subtracted. For example, the code below implements the left-hand side of the previous analogy: Another interesting analogy relation that shows how the embeddings have captured information about currencies is shown below. More examples are discussed in the Jupyter notebook. 9.1.3 Looking Under the Hood Let us understand now how these queries are actually implemented. First, we need to know what components we need. Clearly, we need the embedding vectors themselves. They are stored in the vectors attribute of the KeyedVectors object. As we mentioned previously, this is a 2-dimensional NumPy array, each row corresponding to a word in the vocabulary. These embeddings are not normalized, but normalized embeddings can be obtained using the get_normed_vectors method. We also need to know the mapping between words and the matrix rows. The KeyedVectors object stores this mapping in a list of terms called index_to_key, and a term-to-index dictionary called key_to_index. Below we show only the first 5 terms to save space, but you can inspect the whole vocabulary in the Jupyter notebook. 9.1.4 Word Similarity from Scratch Implementing the word similarity function ourselves is a good exercise to ensure that we understand how cosine similarity works, and to practice our NumPy skills. We will write a function called most_similar_words that will take a word, the embeddings matrix, the vocabulary in the form of the index_to_key list and key_to_index dictionary, and the number of similar words to return (defaults to 10). The implementation of most_similar_words is straightforward. First, we find the word id for the given word, using the key_to_index dictionary. Then we retrieve the row from the vectors matrix that corresponds to that word. The next step is computing the cosine similarity between the word of interest and the rest of the vocabulary. Recall that the cosine similarity is equivalent to a dot product if the vectors are normalized. We use this equivalence by performing a matrix-vector multiplication between the word embedding and the embedding matrix using Python’s at operator (denoted as @ in code). This means that we must pass the 9.2 Text Classification with Pretrained Word Embeddings 137 normalized embeddings as an argument to this function. Next, we need to sort the similarities preserving the mapping to the words in the vocabulary. We achieve this using the argsort NumPy method, which returns the indices in sorted (ascending) order. Since we need them in descending order, the next step is to reverse this list of indices. Obviously, the most similar word to whichever word we’re querying is the word itself, but that is not an interesting result, so we will remove it from the results. We do this by using NumPy’s ability to index arrays using booleans. We first create a new array in which the position corresponding to the query word is set to False and every other element is set to True, and we use this boolean array to index the list of ids. 
Lastly, we create a list of tuples of the form (word, similarity) for the topn words, and return the results. Now we will test our implementation of word similarity using the word cactus. You can compare the results to the ones obtained by KeyedVectors’s most_similar method. 9.1.5 Word Analogies from Scratch The implementation of the word analogy function is not that much different from our most_similar_word function above. The main difference between this function and most_similar_words is that now we have two lists of words that we need to combine into a single embedding. We first add the positive words into a single vector, and we do the same for the negative words. Then we subtract the negative vector from the positive one, and normalize the result. The similarity scores are computed the same way as before, but now we need to remove several words from the results, so this time we use NumPy’s isin function, which checks for any of the words in given_word_ids. We then package the results the same way we did before, and return them. ⃗ Nowlet’stryourimplementationwiththesameking−m⃗an+wom⃗an query we discussed previously. Please compare the results to the ones obtained by Gensim. 9.2 Text Classification with Pretrained Word Embeddings In this section we will continue using the AG News classification dataset introduced in previous chapters. Most of the data preparation is the 138 Implementing Text Classification Using Word Embeddings same, up to tokenization. However, we need to remember that the embeddings were trained on a different corpus, so it would be a good idea to estimate how well they cover the words AG News dataset. To achieve this, we load the embeddings just like we did previously. Then we count the tokens in our corpus that do not appear in the embeddings vocabulary, as well as the total number to tokens. We use these numbers to print some informative statistics such as the proportion of unknown tokens in the corpus. We also print the top ten unknown tokens. You can use the Jupyter notebook to explore this task further. Our analysis indicates that only 1.25% of the tokens are not accounted for in the embeddings vocabulary. Further, the most common unknown words seem to be URL fragments. This is encouraging. However, for more robustness, we will introduce a couple of special embeddings that are often needed when dealing with word embeddings. The first one is an embedding used to represent unknown words. A common strategy is to use the average of all the embeddings in the vocabulary for this purpose. The second embedding we will add will be used for padding. Padding is required when we want to train with (mini-)batches because the lengths of all the examples in a given batch have to match in order for the batch to be efficiently processed in parallel. The padding embedding consists only of zeros, which essentially excludes these virtual tokens from the forward/backward passes. None of these embeddings are included in the pretrained GloVe embeddings, but other pretrained embeddings may already include them, so it is a good idea to check if they are included with the embeddings we are using before adding them. The new embeddings were added at the end of embedding collection, so their ids are 400,000 and 400,001. Now we need to generate a list of token ids for each training example. Recall that we decided to ignore tokens that appear less than 10 times, so we need to replace those with [UNK] too, even if they appear in the embedding vocabulary. 
Next, we create a Dataset object from the padded lists of token ids. This one is even easier since the lists of token ids are ready. So all that is required is turning them into tensors. Lastly, we need to modify the model class to indicate that we now use embedding vectors. To this end, we will add an nn. Embedding layer that stores the embedding vectors for all words in the vocabulary. We will use this object to look up embeddings by their token ids. This layer will be initialized from a tensor containing the pretrained embeddings for the entire vocabulary. Also, the pad_id is specified when creating the 9.2 Text Classification with Pretrained Word Embeddings 139 embedding layer. When a nn. Embedding layer gets initialized using the from_pretrained method with other arguments set to default values, the embeddings are not updated during training. We will keep it that way for this example, but that could be changed by setting the freeze parameter to False. The rest of the layers are the same as in our previous example from Chapter 7, i.e., one intermediate layer and one output layer, with a nonlinearity (ReLU) between them. The only major difference is that now the input size of the intermediate layer is the size of one embedding (e.g., 300) instead of the size of the vocabulary like last time. This is because, as we explain below, the intermediate layer receives an average of the numerical representations of the words in the current text. The forward function of the Model class changes significantly. This time we are encoding the text as an average of the embeddings of all the words it contains. To compute the denominator of this average, we obtain the length of each text by counting how many of its words are not the virtual padding token. Then we sum all the embeddings and divide by the number of non-padding tokens. Adding all embeddings is safe, because padding embeddings are comprised of zeros. This process leaves us with a single embedding for the whole text, which is then passed to the rest of the layers. The training and evaluation steps are the same before. The results of this model on the AG News test partition are displayed below: Comparing these results with the ones obtained by the multilayer perceptron with explicit features in Chapter 7, we observe that on this particular task utilizing embeddings as features does not yield a performance improvement. Notably, this is a small dataset and a rather simplistic task where the presence of certain words is sufficient to distinguish the category of an article (e.g., the word basketball is highly indicative of the label Sports). Nevertheless, in other tasks where distinctions are more nuanced, or in which there is less likely to be word overlap between texts of interest, word embeddings do provide necessary signal. Additionally, when there are class imbalances, word embeddings can supplement underrepresented classes by bringing the external knowledge gained during their pretraining. 140 Implementing Text Classification Using Word Embeddings 9.3 Summary In this chapter we showed how to explore the semantic space encoded by word embeddings through word similarity and analogies, as well as one way to use them for text classification. At this point we have not taken into consideration the order in which the words appear, i.e., we averaged the embeddings for all the words in the text using a bag-ofwords representation of text. In subsequent chapters we will explore how to incorporate word order into the learned representations of text.
14,346
14,408
#!/usr/bin/env python # coding: utf-8 # # Multiclass Text Classification with # # Feed-forward Neural Networks and Word Embeddings # First, we will do some initialization. # In[1]: import random import torch import numpy as np import pandas as pd from tqdm.notebook import tqdm # enable tqdm in pandas tqdm.pandas() # set to True to use the gpu (if there is one available) use_gpu = True # select device device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu') print(f'device: {device.type}') # random seed seed = 1234 # set random seed if seed is not None: print(f'random seed: {seed}') random.seed(seed) np.random.seed(seed) torch.manual_seed(seed) # We will be using the AG's News Topic Classification Dataset. # It is stored in two CSV files: `train.csv` and `test.csv`, as well as a `classes.txt` that stores the labels of the classes to predict. # # First, we will load the training dataset using [pandas](https://pandas.pydata.org/) and take a quick look at how the data. # In[2]: train_df = pd.read_csv('data/ag_news_csv/train.csv', header=None) train_df.columns = ['class index', 'title', 'description'] train_df # The dataset consists of 120,000 examples, each consisting of a class index, a title, and a description. # The class labels are distributed in a separated file. We will add the labels to the dataset so that we can interpret the data more easily. Note that the label indexes are one-based, so we need to subtract one to retrieve them from the list. # In[3]: labels = open('data/ag_news_csv/classes.txt').read().splitlines() classes = train_df['class index'].map(lambda i: labels[i-1]) train_df.insert(1, 'class', classes) train_df # Let's inspect how balanced our examples are by using a bar plot. # In[4]: pd.value_counts(train_df['class']).plot.bar() # The classes are evenly distributed. That's great! # # However, the text contains some spurious backslashes in some parts of the text. # They are meant to represent newlines in the original text. # An example can be seen below, between the words "dwindling" and "band". # In[5]: print(train_df.loc[0, 'description']) # We will replace the backslashes with spaces on the whole column using pandas replace method. # In[6]: train_df['text'] = train_df['title'].str.lower() + " " + train_df['description'].str.lower() train_df['text'] = train_df['text'].str.replace('\\', ' ', regex=False) train_df # Now we will proceed to tokenize the title and description columns using NLTK's word_tokenize(). # We will add a new column to our dataframe with the list of tokens. # In[7]: from nltk.tokenize import word_tokenize train_df['tokens'] = train_df['text'].progress_map(word_tokenize) train_df # Now we will load the GloVe word embeddings. # In[8]: from gensim.models import KeyedVectors glove = KeyedVectors.load_word2vec_format("glove.6B.300d.txt", no_header=True) glove.vectors.shape # The word embeddings have been pretrained in a different corpus, so it would be a good idea to estimate how good our tokenization matches the GloVe vocabulary. 
# In[9]: from collections import Counter def count_unknown_words(data, vocabulary): counter = Counter() for row in tqdm(data): counter.update(tok for tok in row if tok not in vocabulary) return counter # find out how many times each unknown token occurrs in the corpus c = count_unknown_words(train_df['tokens'], glove.key_to_index) # find the total number of tokens in the corpus total_tokens = train_df['tokens'].map(len).sum() # find some statistics about occurrences of unknown tokens unk_tokens = sum(c.values()) percent_unk = unk_tokens / total_tokens distinct_tokens = len(list(c)) print(f'total number of tokens: {total_tokens:,}') print(f'number of unknown tokens: {unk_tokens:,}') print(f'number of distinct unknown tokens: {distinct_tokens:,}') print(f'percentage of unkown tokens: {percent_unk:.2%}') print('top 50 unknown words:') for token, n in c.most_common(10): print(f'\t{n}\t{token}') # Glove embeddings seem to have a good coverage on this dataset -- only 1.25% of the tokens in the dataset are unknown, i.e., don't appear in the GloVe vocabulary. # # Still, we will need a way to handle these unknown tokens. # Our approach will be to add a new embedding to GloVe that will be used to represent them. # This new embedding will be initialized as the average of all the GloVe embeddings. # # We will also add another embedding, this one initialized to zeros, that will be used to pad the sequences of tokens so that they all have the same length. This will be useful when we train with mini-batches. # In[10]: # string values corresponding to the new embeddings unk_tok = '[UNK]' pad_tok = '[PAD]' # initialize the new embedding values unk_emb = glove.vectors.mean(axis=0) pad_emb = np.zeros(300) # add new embeddings to glove glove.add_vectors([unk_tok, pad_tok], [unk_emb, pad_emb]) # get token ids corresponding to the new embeddings unk_id = glove.key_to_index[unk_tok] pad_id = glove.key_to_index[pad_tok] unk_id, pad_id # In[11]: from sklearn.model_selection import train_test_split train_df, dev_df = train_test_split(train_df, train_size=0.8) train_df.reset_index(inplace=True) dev_df.reset_index(inplace=True) # We will now add a new column to our dataframe that will contain the padded sequences of token ids. # In[12]: threshold = 10 tokens = train_df['tokens'].explode().value_counts() vocabulary = set(tokens[tokens > threshold].index.tolist()) print(f'vocabulary size: {len(vocabulary):,}') # In[13]: # find the length of the longest list of tokens max_tokens = train_df['tokens'].map(len).max() # return unk_id for infrequent tokens too def get_id(tok): if tok in vocabulary: return glove.key_to_index.get(tok, unk_id) else: return unk_id # function that gets a list of tokens and returns a list of token ids, # with padding added accordingly def token_ids(tokens): tok_ids = [get_id(tok) for tok in tokens] pad_len = max_tokens - len(tok_ids) return tok_ids + [pad_id] * pad_len # add new column to the dataframe train_df['token ids'] = train_df['tokens'].progress_map(token_ids) train_df # In[14]: max_tokens = dev_df['tokens'].map(len).max() dev_df['token ids'] = dev_df['tokens'].progress_map(token_ids) dev_df # Now we will get a numpy 2-dimensional array corresponding to the token ids, # and a 1-dimensional array with the gold classes. Note that the classes are one-based (i.e., they start at one), # but we need them to be zero-based, so we need to subtract one from this array. 
# In[15]: from torch.utils.data import Dataset class MyDataset(Dataset): def __init__(self, x, y): self.x = x self.y = y def __len__(self): return len(self.y) def __getitem__(self, index): x = torch.tensor(self.x[index]) y = torch.tensor(self.y[index]) return x, y # Next, we construct our PyTorch model, which is a feed-forward neural network with two layers: # In[16]: from torch import nn import torch.nn.functional as F class Model(nn.Module): def __init__(self, vectors, pad_id, hidden_dim, output_dim, dropout): super().__init__() # embeddings must be a tensor if not torch.is_tensor(vectors): vectors = torch.tensor(vectors) # keep padding id self.padding_idx = pad_id # embedding layer self.embs = nn.Embedding.from_pretrained(vectors, padding_idx=pad_id) # feedforward layers self.layers = nn.Sequential( nn.Dropout(dropout), nn.Linear(vectors.shape[1], hidden_dim), nn.ReLU(), nn.Dropout(dropout), nn.Linear(hidden_dim, output_dim), ) def forward(self, x): # get boolean array with padding elements set to false not_padding = torch.isin(x, self.padding_idx, invert=True) # get lengths of examples (excluding padding) lengths = torch.count_nonzero(not_padding, axis=1) # get embeddings x = self.embs(x) # calculate means x = x.sum(dim=1) / lengths.unsqueeze(dim=1) # pass to rest of the model output = self.layers(x) # calculate softmax if we're not in training mode #if not self.training: # output = F.softmax(output, dim=1) return output # Next, we implement the training procedure. We compute the loss and accuracy on the development partition after each epoch. # In[17]: from torch import optim from torch.utils.data import DataLoader from sklearn.metrics import accuracy_score # hyperparameters lr = 1e-3 weight_decay = 0 batch_size = 500 shuffle = True n_epochs = 5 hidden_dim = 50 output_dim = len(labels) dropout = 0.1 vectors = glove.vectors # initialize the model, loss function, optimizer, and data-loader model = Model(vectors, pad_id, hidden_dim, output_dim, dropout).to(device) loss_func = nn.CrossEntropyLoss() optimizer = optim.Adam(model.parameters(), lr=lr, weight_decay=weight_decay) train_ds = MyDataset(train_df['token ids'], train_df['class index'] - 1) train_dl = DataLoader(train_ds, batch_size=batch_size, shuffle=shuffle) dev_ds = MyDataset(dev_df['token ids'], dev_df['class index'] - 1) dev_dl = DataLoader(dev_ds, batch_size=batch_size, shuffle=shuffle) train_loss = [] train_acc = [] dev_loss = [] dev_acc = [] # train the model for epoch in range(n_epochs): losses = [] gold = [] pred = [] model.train() for X, y_true in tqdm(train_dl, desc=f'epoch {epoch+1} (train)'): # clear gradients model.zero_grad() # send batch to right device X = X.to(device) y_true = y_true.to(device) # predict label scores y_pred = model(X) # compute loss loss = loss_func(y_pred, y_true) # accumulate for plotting losses.append(loss.detach().cpu().item()) gold.append(y_true.detach().cpu().numpy()) pred.append(np.argmax(y_pred.detach().cpu().numpy(), axis=1)) # backpropagate loss.backward() # optimize model parameters optimizer.step() train_loss.append(np.mean(losses)) train_acc.append(accuracy_score(np.concatenate(gold), np.concatenate(pred))) model.eval() with torch.no_grad(): losses = [] gold = [] pred = [] for X, y_true in tqdm(dev_dl, desc=f'epoch {epoch+1} (dev)'): X = X.to(device) y_true = y_true.to(device) y_pred = model(X) loss = loss_func(y_pred, y_true) losses.append(loss.cpu().item()) gold.append(y_true.cpu().numpy()) pred.append(np.argmax(y_pred.cpu().numpy(), axis=1)) dev_loss.append(np.mean(losses)) 
dev_acc.append(accuracy_score(np.concatenate(gold), np.concatenate(pred))) # Let's plot the loss and accuracy on dev: # In[18]: import matplotlib.pyplot as plt get_ipython().run_line_magic('matplotlib', 'inline') x = np.arange(n_epochs) + 1 plt.plot(x, train_loss) plt.plot(x, dev_loss) plt.legend(['train', 'dev']) plt.xlabel('epoch') plt.ylabel('loss') plt.grid(True) # In[19]: plt.plot(x, train_acc) plt.plot(x, dev_acc) plt.legend(['train', 'dev']) plt.xlabel('epoch') plt.ylabel('accuracy') plt.grid(True) # Next, we evaluate on the testing partition: # In[20]: # repeat all preprocessing done above, this time on the test set test_df = pd.read_csv('data/ag_news_csv/test.csv', header=None) test_df.columns = ['class index', 'title', 'description'] test_df['text'] = test_df['title'].str.lower() + " " + test_df['description'].str.lower() test_df['text'] = test_df['text'].str.replace('\\', ' ', regex=False) test_df['tokens'] = test_df['text'].progress_map(word_tokenize) max_tokens = dev_df['tokens'].map(len).max() test_df['token ids'] = test_df['tokens'].progress_map(token_ids) # In[21]: from sklearn.metrics import classification_report # set model to evaluation mode model.eval() dataset = MyDataset(test_df['token ids'], test_df['class index'] - 1) data_loader = DataLoader(dataset, batch_size=batch_size) y_pred = [] # don't store gradients with torch.no_grad(): for X, _ in tqdm(data_loader): X = X.to(device) # predict one class per example y = torch.argmax(model(X), dim=1) # convert tensor to numpy array (sending it back to the cpu if needed) y_pred.append(y.cpu().numpy()) # print results print(classification_report(dataset.y, np.concatenate(y_pred), target_names=labels))
7,826
7,852
11
chap09-12
chap09-12
9 Implementing Text Classification Using Word Embeddings

In the previous chapter we introduced word embeddings, which are real-valued vectors that encode semantic representations of words. We discussed how to learn them, and how they capture semantic information that makes them useful for downstream tasks. In this chapter we show how to use word embeddings that have been pretrained using a variant of the algorithm discussed in the previous chapter. We show how to load them, explore some of their characteristics, and show their application for a text classification task. As usual, the code for this chapter is available in our repository. It is organized into two notebooks: one corresponding to the explorations shown in the first half of this chapter (chap9_embeddings), and a second one in which we modify our previous classifier to use word embeddings (chap9_classification).

9.1 Pre-trained Word Embeddings

There are several algorithms for training word embeddings, including the original word2vec algorithm (Mikolov et al., 2013a) (which we discussed in the previous chapter), GloVe (Pennington et al., 2014), and fastText (Bojanowski et al., 2017). They all provide the software for training the embeddings as well as pretrained word embeddings on their respective websites. In general, most open-domain word embeddings are trained on large corpora that cover a variety of topics such as Wikipedia1 and Gigaword.2 Commonly, these embeddings are freely distributed so that practitioners can use them in downstream tasks. We will use one such set of vectors in this chapter.
1 https://en.wikipedia.org/wiki/Wikipedia:Database_download 2 https://catalog.ldc.upenn.edu/LDC2011T07

house 0.60137 0.28521 -0.032038 -0.43026 0.74806 0.26223 -0.97361 0.078581 -0.57588 -1.188 -1.8507 -0.24887 0.055549 0.0086155 0.067951 0.40554 -0.073998 -0.21318 0.37167 -0.71791 1.2234 0.35546 -0.41537 -0.21931 -0.39661 -1.7831 -0.41507 0.29533 -0.41254 0.020096 2.7425 -0.9926 -0.71033 -0.46813 0.28265 -0.077639 0.3041 -0.06644 0.3951 -0.70747 -0.38894 0.23158 -0.49508 0.14612 -0.02314 0.56389 -0.86188 -1.0278 0.039922 0.20018
Figure 9.1 GloVe embedding corresponding to the word house, found in the GloVe file glove.6B.50d.txt. We have broken the vector across several lines for display purposes, but this is a single line in the text file.

Pretrained embeddings are usually distributed as a text file in which each line represents a word vector. The first element in the line is the word itself, and the rest of the elements are the vector components. This is usually referred to as the word2vec format. For example, Figure 9.1 shows the line in the glove.6B.50d.txt file (from the GloVe website) corresponding to the word house. This vector is represented by the word itself, followed by 50 floating-point numbers corresponding to the 50-dimensional vector. Note that some embeddings files have a header line composed of two numbers: the number of vectors (i.e., the number of lines in the file), and the vector dimensionality. However, this is not always the case. For example, the original word2vec implementation includes this header line, but the more recent GloVe does not (probably because this information can be inferred from the content of the file). For the examples in the rest of the chapter, we will use the glove.6B.300d.txt embeddings that can be downloaded from the GloVe website.3 This file provides 400,000 word embeddings of 300 dimensions trained on texts
from Wikipedia 2014 and Gigaword 5. We will begin our exploration of word embeddings using Gensim,4 a Python library that provides excellent support for loading and using word embeddings, among other more advanced features. As we can see, the embeddings have been loaded and assigned to the glove variable. Note that we had to specify that this file doesn't contain the header that is usually present in the word2vec format. The glove.vectors attribute contains a 2-dimensional NumPy array with 400,000 rows and 300 columns, each row corresponding to a word embedding.
3 https://nlp.stanford.edu/projects/glove/ 4 https://radimrehurek.com/gensim/

9.1.1 Word Similarity

Gensim's KeyedVectors class provides a method called most_similar that receives a word and computes its cosine similarity to all other embeddings, and returns the topn most-similar words. By default, topn is set to 10. The example below shows the top 10 most-similar words to the word cactus, when using the 300-dimension GloVe embeddings trained on Wikipedia and Gigaword. All ten most-similar words are related to cactus in different ways: cacti and cactuses are its plural forms; saguaro, peyote, opuntia, and prickly pear are types of cacti; and mesquite, shrubs, and succulents are other plants from arid climates. You can find more examples of word similarity queries in the Jupyter notebook that accompanies this chapter. Also, as an exercise, try loading a different set of embeddings trained with a different corpus (e.g., Twitter) to see if you obtain different results!
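For reference, the loading step and the cactus query described above look as follows in the chapter notebook (chap9_embeddings). The file name assumes the 300-dimensional GloVe embeddings have been downloaded locally; this is a minimal sketch, not the complete notebook:

from gensim.models import KeyedVectors

# path to the GloVe file downloaded from the GloVe website (assumed location)
fname = "glove.6B.300d.txt"

# the GloVe file has no word2vec-style header, hence no_header=True
glove = KeyedVectors.load_word2vec_format(fname, no_header=True)

glove.vectors.shape           # expected: (400000, 300)
glove.most_similar("cactus")  # the 10 most similar words with their cosine scores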
9.1.2 Word Analogies

As we discussed in the previous chapter, the semantic information encoded by word embeddings captures much more than word similarity. To surface this additional information, we will use word analogies represented using additional vector operations. For example, a well-known analogy that highlights gender information is: $\vec{king} - \vec{man} \approx \vec{queen} - \vec{woman}$,5 or, in plain language: "man is to king what woman is to queen." From this, it immediately follows that one can subtract the meaning of man and add the meaning of woman to obtain the definition of female royalty: $\vec{king} - \vec{man} + \vec{woman} \approx \vec{queen}$.
5 A word with an arrow on top refers to the embedding vector corresponding to that word. Please see Section 1.4 for a summary of the notations used in this book.
The same most_similar method we've been using can be repurposed to find word analogies such as the one mentioned above. To this end, two sets of words have to be provided to the most_similar method: a list of positive words that should be added, and a list of negative words that should be subtracted. For example, the code below implements the left-hand side of the previous analogy:
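A minimal sketch of that call, reusing the glove KeyedVectors object loaded earlier; as in the chapter notebook, the top-ranked result should be queen:

# king - man + woman
glove.most_similar(positive=["king", "woman"], negative=["man"])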
Another interesting analogy relation that shows how the embeddings have captured information about currencies is shown below. More examples are discussed in the Jupyter notebook.

9.1.3 Looking Under the Hood

Let us understand now how these queries are actually implemented. First, we need to know what components we need. Clearly, we need the embedding vectors themselves. They are stored in the vectors attribute of the KeyedVectors object. As we mentioned previously, this is a 2-dimensional NumPy array, each row corresponding to a word in the vocabulary. These embeddings are not normalized, but normalized embeddings can be obtained using the get_normed_vectors method. We also need to know the mapping between words and the matrix rows. The KeyedVectors object stores this mapping in a list of terms called index_to_key, and a term-to-index dictionary called key_to_index. Below we show only the first 5 terms to save space, but you can inspect the whole vocabulary in the Jupyter notebook.

9.1.4 Word Similarity from Scratch

Implementing the word similarity function ourselves is a good exercise to ensure that we understand how cosine similarity works, and to practice our NumPy skills. We will write a function called most_similar_words that will take a word, the embeddings matrix, the vocabulary in the form of the index_to_key list and key_to_index dictionary, and the number of similar words to return (defaults to 10). The implementation of most_similar_words is straightforward. First, we find the word id for the given word, using the key_to_index dictionary. Then we retrieve the row from the vectors matrix that corresponds to that word. The next step is computing the cosine similarity between the word of interest and the rest of the vocabulary. Recall that the cosine similarity is equivalent to a dot product if the vectors are normalized. We use this equivalence by performing a matrix-vector multiplication between the word embedding and the embedding matrix using Python's at operator (denoted as @ in code). This means that we must pass the normalized embeddings as an argument to this function. Next, we need to sort the similarities preserving the mapping to the words in the vocabulary. We achieve this using the argsort NumPy method, which returns the indices in sorted (ascending) order. Since we need them in descending order, the next step is to reverse this list of indices. Obviously, the most similar word to whichever word we're querying is the word itself, but that is not an interesting result, so we will remove it from the results. We do this by using NumPy's ability to index arrays using booleans. We first create a new array in which the position corresponding to the query word is set to False and every other element is set to True, and we use this boolean array to index the list of ids.
Lastly, we create a list of tuples of the form (word, similarity) for the topn words, and return the results. Now we will test our implementation of word similarity using the word cactus. You can compare the results to the ones obtained by KeyedVectors's most_similar method.

9.1.5 Word Analogies from Scratch

The implementation of the word analogy function is not that much different from our most_similar_words function above. The main difference between this function and most_similar_words is that now we have two lists of words that we need to combine into a single embedding. We first add the positive words into a single vector, and we do the same for the negative words. Then we subtract the negative vector from the positive one, and normalize the result. The similarity scores are computed the same way as before, but now we need to remove several words from the results, so this time we use NumPy's isin function, which checks for any of the words in given_word_ids. We then package the results the same way we did before, and return them. Now let's try our implementation with the same $\vec{king} - \vec{man} + \vec{woman}$ query we discussed previously. Please compare the results to the ones obtained by Gensim.

9.2 Text Classification with Pretrained Word Embeddings

In this section we will continue using the AG News classification dataset introduced in previous chapters. Most of the data preparation is the same, up to tokenization. However, we need to remember that the embeddings were trained on a different corpus, so it would be a good idea to estimate how well they cover the words in the AG News dataset. To achieve this, we load the embeddings just like we did previously. Then we count the tokens in our corpus that do not appear in the embeddings vocabulary, as well as the total number of tokens. We use these numbers to print some informative statistics such as the proportion of unknown tokens in the corpus. We also print the top ten unknown tokens. You can use the Jupyter notebook to explore this task further. Our analysis indicates that only 1.25% of the tokens are not accounted for in the embeddings vocabulary. Further, the most common unknown words seem to be URL fragments. This is encouraging. However, for more robustness, we will introduce a couple of special embeddings that are often needed when dealing with word embeddings. The first one is an embedding used to represent unknown words. A common strategy is to use the average of all the embeddings in the vocabulary for this purpose. The second embedding we will add will be used for padding. Padding is required when we want to train with (mini-)batches because the lengths of all the examples in a given batch have to match in order for the batch to be efficiently processed in parallel. The padding embedding consists only of zeros, which essentially excludes these virtual tokens from the forward/backward passes. None of these embeddings are included in the pretrained GloVe embeddings, but other pretrained embeddings may already include them, so it is a good idea to check if they are included with the embeddings we are using before adding them. The new embeddings were added at the end of the embedding collection, so their ids are 400,000 and 400,001. Now we need to generate a list of token ids for each training example. Recall that we decided to ignore tokens that appear fewer than 10 times, so we need to replace those with [UNK] too, even if they appear in the embedding vocabulary.
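The special embeddings and the token-id conversion described above can be sketched as follows. This is an illustration under the assumptions stated in the comments, not the book's exact preprocessing code (for instance, the frequency threshold is omitted, and the names unk_id, pad_id, and max_len are chosen here for clarity):

import numpy as np

# [UNK]: average of all pretrained vectors; [PAD]: all zeros
unk_vec = glove.vectors.mean(axis=0)
pad_vec = np.zeros(glove.vectors.shape[1], dtype=glove.vectors.dtype)
vectors = np.vstack([glove.vectors, unk_vec, pad_vec])
unk_id = len(glove.vectors)      # 400,000
pad_id = len(glove.vectors) + 1  # 400,001

key_to_index = glove.key_to_index

def token_ids(tokens, max_len):
    # map tokens to ids, falling back to [UNK] for out-of-vocabulary words,
    # then truncate or pad with [PAD] so every example has length max_len
    ids = [key_to_index.get(t, unk_id) for t in tokens]
    ids = ids[:max_len]
    return ids + [pad_id] * (max_len - len(ids))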
Next, we create a Dataset object from the padded lists of token ids. This one is even easier since the lists of token ids are ready, so all that is required is turning them into tensors. Lastly, we need to modify the model class to indicate that we now use embedding vectors. To this end, we will add an nn.Embedding layer that stores the embedding vectors for all words in the vocabulary. We will use this object to look up embeddings by their token ids. This layer will be initialized from a tensor containing the pretrained embeddings for the entire vocabulary. Also, the pad_id is specified when creating the embedding layer. When an nn.Embedding layer is initialized using the from_pretrained method with other arguments set to default values, the embeddings are not updated during training. We will keep it that way for this example, but that could be changed by setting the freeze parameter to False. The rest of the layers are the same as in our previous example from Chapter 7, i.e., one intermediate layer and one output layer, with a nonlinearity (ReLU) between them. The only major difference is that now the input size of the intermediate layer is the size of one embedding (e.g., 300) instead of the size of the vocabulary like last time. This is because, as we explain below, the intermediate layer receives an average of the numerical representations of the words in the current text.

The forward function of the Model class changes significantly. This time we are encoding the text as an average of the embeddings of all the words it contains. To compute the denominator of this average, we obtain the length of each text by counting how many of its words are not the virtual padding token. Then we sum all the embeddings and divide by the number of non-padding tokens. Adding all embeddings is safe, because padding embeddings consist of zeros. This process leaves us with a single embedding for the whole text, which is then passed to the rest of the layers. The training and evaluation steps are the same as before. The results of this model on the AG News test partition are displayed below: Comparing these results with the ones obtained by the multilayer perceptron with explicit features in Chapter 7, we observe that on this particular task utilizing embeddings as features does not yield a performance improvement. Notably, this is a small dataset and a rather simplistic task where the presence of certain words is sufficient to distinguish the category of an article (e.g., the word basketball is highly indicative of the label Sports). Nevertheless, in other tasks where distinctions are more nuanced, or in which there is less likely to be word overlap between texts of interest, word embeddings do provide necessary signal. Additionally, when there are class imbalances, word embeddings can supplement underrepresented classes by bringing the external knowledge gained during their pretraining.

9.3 Summary

In this chapter we showed how to explore the semantic space encoded by word embeddings through word similarity and analogies, as well as one way to use them for text classification. At this point we have not taken into consideration the order in which the words appear, i.e., we averaged the embeddings for all the words in the text using a bag-of-words representation of text. In subsequent chapters we will explore how to incorporate word order into the learned representations of text.
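To make the embedding layer and the padding-aware averaging described in Section 9.2 concrete, here is a small self-contained sketch with hypothetical toy dimensions; it illustrates the idea rather than reproducing the chapter's Model class:

import torch
from torch import nn

# toy stand-in for the (400,002 x 300) embedding matrix; values are made up
pad_id = 5
pretrained = torch.randn(6, 4)
pretrained[pad_id] = 0.0  # the padding embedding is all zeros, as in the chapter

# frozen by default; pass freeze=False to from_pretrained to fine-tune the embeddings
emb = nn.Embedding.from_pretrained(pretrained, padding_idx=pad_id)

# a batch of two "texts", padded with pad_id
x = torch.tensor([[1, 2, 5, 5],
                  [3, 4, 1, 2]])
lengths = (x != pad_id).sum(dim=1, keepdim=True)  # non-padding tokens per example
mean_emb = emb(x).sum(dim=1) / lengths            # safe: padding rows contribute zeros
print(mean_emb.shape)                             # torch.Size([2, 4])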
8,105
8,197
#!/usr/bin/env python # coding: utf-8 # # Using Pre-trained Word Embeddings # # In this notebook we will show some operations on pre-trained word embeddings to gain an intuition about them. # # We will be using the pre-trained GloVe embeddings that can be found in the [official website](https://nlp.stanford.edu/projects/glove/). In particular, we will use the file `glove.6B.300d.txt` contained in this [zip file](https://nlp.stanford.edu/data/glove.6B.zip). # # We will first load the GloVe embeddings using [Gensim](https://radimrehurek.com/gensim/). Specifically, we will use [`KeyedVectors`](https://radimrehurek.com/gensim/models/keyedvectors.html)'s [`load_word2vec_format()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.load_word2vec_format) classmethod, which supports the original word2vec file format. # However, there is a difference in the file formats used by GloVe and word2vec, which is a header used by word2vec to indicate the number of embeddings and dimensions stored in the file. The file that stores the GloVe embeddings doesn't have this header, so we will have to address that when loading the embeddings. # # Loading the embeddings may take a little bit, so hang in there! # In[2]: from gensim.models import KeyedVectors fname = "glove.6B.300d.txt" glove = KeyedVectors.load_word2vec_format(fname, no_header=True) glove.vectors.shape # ## Word similarity # # One attribute of word embeddings that makes them useful is the ability to compare them using cosine similarity to find how similar they are. [`KeyedVectors`](https://radimrehurek.com/gensim/models/keyedvectors.html) objects provide a method called [`most_similar()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.most_similar) that we can use to find the closest words to a particular word of interest. By default, [`most_similar()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.most_similar) returns the 10 most similar words, but this can be changed using the `topn` parameter. # # Below we test this function using a few different words. # In[3]: # common noun glove.most_similar("cactus") # In[4]: # common noun glove.most_similar("cake") # In[5]: # adjective glove.most_similar("angry") # In[6]: # adverb glove.most_similar("quickly") # In[7]: # preposition glove.most_similar("between") # In[8]: # determiner glove.most_similar("the") # ## Word analogies # # Another characteristic of word embeddings is their ability to solve analogy problems. # The same [`most_similar()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.most_similar) method can be used for this task, by passing two lists of words: # a `positive` list with the words that should be added and a `negative` list with the words that should be subtracted. 
Using these arguments, the famous example $\vec{king} - \vec{man} + \vec{woman} \approx \vec{queen}$ can be executed as follows: # In[9]: # king - man + woman glove.most_similar(positive=["king", "woman"], negative=["man"]) # Here are a few other interesting analogies: # In[10]: # car - drive + fly glove.most_similar(positive=["car", "fly"], negative=["drive"]) # In[11]: # berlin - germany + australia glove.most_similar(positive=["berlin", "australia"], negative=["germany"]) # In[12]: # england - london + baghdad glove.most_similar(positive=["england", "baghdad"], negative=["london"]) # In[13]: # japan - yen + peso glove.most_similar(positive=["japan", "peso"], negative=["yen"]) # In[14]: # best - good + tall glove.most_similar(positive=["best", "tall"], negative=["good"]) # ## Looking under the hood # # Now that we are more familiar with the [`most_similar()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.most_similar) method, it is time to implement its functionality ourselves. # But first, we need to take a look at the different parts of the [`KeyedVectors`](https://radimrehurek.com/gensim/models/keyedvectors.html) object that we will need. # Obviously, we will need the vectors themselves. They are stored in the `vectors` attribute. # In[15]: glove.vectors.shape # As we can see above, `vectors` is a 2-dimensional matrix with 400,000 rows and 300 columns. # Each row corresponds to a 300-dimensional word embedding. These embeddings are not normalized, but normalized embeddings can be obtained using the [`get_normed_vectors()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.get_normed_vectors) method. # In[16]: normed_vectors = glove.get_normed_vectors() normed_vectors.shape # Now we need to map the words in the vocabulary to rows in the `vectors` matrix, and vice versa. # The [`KeyedVectors`](https://radimrehurek.com/gensim/models/keyedvectors.html) object has the attributes `index_to_key` and `key_to_index` which are a list of words and a dictionary of words to indices, respectively. # In[17]: #glove.index_to_key # In[18]: #glove.key_to_index # ## Word similarity from scratch # # Now we have everything we need to implement a `most_similar_words()` function that takes a word, the vector matrix, the `index_to_key` list, and the `key_to_index` dictionary. This function will return the 10 most similar words to the provided word, along with their similarity scores. 
# In[19]: import numpy as np def most_similar_words(word, vectors, index_to_key, key_to_index, topn=10): # retrieve word_id corresponding to given word word_id = key_to_index[word] # retrieve embedding for given word emb = vectors[word_id] # calculate similarities to all words in out vocabulary similarities = vectors @ emb # get word_ids in ascending order with respect to similarity score ids_ascending = similarities.argsort() # reverse word_ids ids_descending = ids_ascending[::-1] # get boolean array with element corresponding to word_id set to false mask = ids_descending != word_id # obtain new array of indices that doesn't contain word_id # (otherwise the most similar word to the argument would be the argument itself) ids_descending = ids_descending[mask] # get topn word_ids top_ids = ids_descending[:topn] # retrieve topn words with their corresponding similarity score top_words = [(index_to_key[i], similarities[i]) for i in top_ids] # return results return top_words # Now let's try the same example that we used above: the most similar words to "cactus". # In[20]: vectors = glove.get_normed_vectors() index_to_key = glove.index_to_key key_to_index = glove.key_to_index most_similar_words("cactus", vectors, index_to_key, key_to_index) # ## Analogies from scratch # # The `most_similar_words()` function behaves as expected. Now let's implement a function to perform the analogy task. We will give it the very creative name `analogy`. This function will get two lists of words (one for positive words and one for negative words), just like the [`most_similar()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.most_similar) method we discussed above. # In[21]: from numpy.linalg import norm def analogy(positive, negative, vectors, index_to_key, key_to_index, topn=10): # find ids for positive and negative words pos_ids = [key_to_index[w] for w in positive] neg_ids = [key_to_index[w] for w in negative] given_word_ids = pos_ids + neg_ids # get embeddings for positive and negative words pos_emb = vectors[pos_ids].sum(axis=0) neg_emb = vectors[neg_ids].sum(axis=0) # get embedding for analogy emb = pos_emb - neg_emb # normalize embedding emb = emb / norm(emb) # calculate similarities to all words in out vocabulary similarities = vectors @ emb # get word_ids in ascending order with respect to similarity score ids_ascending = similarities.argsort() # reverse word_ids ids_descending = ids_ascending[::-1] # get boolean array with element corresponding to any of given_word_ids set to false given_words_mask = np.isin(ids_descending, given_word_ids, invert=True) # obtain new array of indices that doesn't contain any of the given_word_ids ids_descending = ids_descending[given_words_mask] # get topn word_ids top_ids = ids_descending[:topn] # retrieve topn words with their corresponding similarity score top_words = [(index_to_key[i], similarities[i]) for i in top_ids] # return results return top_words # Let's try this function with the $\vec{king} - \vec{man} + \vec{woman} \approx \vec{queen}$ example we discussed above. # In[22]: positive = ["king", "woman"] negative = ["man"] vectors = glove.get_normed_vectors() index_to_key = glove.index_to_key key_to_index = glove.key_to_index analogy(positive, negative, vectors, index_to_key, key_to_index) # In[ ]:
5,830
5,863
12
chap09-13
chap09-13
4,512
4,666
2,221
2,250
13
chap09-14
chap09-14
9 Implementing Text Classification Using Word Embeddings In the previous chapter we introduced word embeddings, which are realvalued vectors that encode semantic representation of words. We discussed how to learn them, and how they capture semantic information that makes them useful for downstream tasks. In this chapter we show how to use word embeddings that have been pretrained using a variant of the algorithm discussed in the previous chapter. We show how to load them, explore some of their characteristics, and show their application for a text classification task. As usual, the code for this chapter is available in our repository. It is organized into two notebooks: one corresponding to the explorations shown in the first half of this chapter (chap9_embeddings), and a second one in which we modify our previous classifier to use word embeddings (chap9_classification). 9.1 Pre-trained Word Embeddings There are several algorithms for training word embeddings, including the original word2vec algorithm (Mikolov et al., 2013a) (which we discussed in the previous chapter), GloVe (Pennington et al., 2014), and fastText (Bojanowski et al., 2017). They all provide the software for training the embeddings as well as pretrained word embeddings on their respective websites. In general, most open-domain word embeddings are trained on large corpora that cover a variety of topics such as Wikipedia1 and Gigaword.2 Commonly, these embeddings are freely distributed so 1 https://en.wikipedia.org/wiki/Wikipedia:Database_download 2 https://catalog.ldc.upenn.edu/LDC2011T07 133 134 Implementing Text Classification Using Word Embeddings house 0.60137 0.28521 -0.032038 -0.43026 0.74806 0.26223 -0.97361 0.078581 -0.57588 -1.188 -1.8507 -0.24887 0.055549 0.0086155 0.067951 0.40554 -0.073998 -0.21318 0.37167 -0.71791 1.2234 0.35546 -0.41537 -0.21931 -0.39661 -1.7831 -0.41507 0.29533 -0.41254 0.020096 2.7425 -0.9926 -0.71033 -0.46813 0.28265 -0.077639 0.3041 -0.06644 0.3951 -0.70747 -0.38894 0.23158 -0.49508 0.14612 -0.02314 0.56389 -0.86188 -1.0278 0.039922 0.20018 Figure 9.1 GloVe embedding corresponding to the word house, found in the GloVe file glove.6B.50d.txt. We have broken the vector in several lines for display purposes, but this is a single line in the text file. that practitioners can use them in downstream tasks. We will use one such set of vectors in this chapter. Pretrained embeddings are usually distributed as a text file in which each line represents a word vector. The first element in the line is the word itself, and the rest of the elements are the vector components. This is usually referred to as the word2vec format. For example, Figure 9.1 shows the line in the glove.6B.50d.txt file (from the GloVe website) corresponding to the word house. This vector is represented by the word itself, followed by 50 floating-point numbers corresponding to the 50dimensional vector. Note that some embeddings files have a header line composed of two numbers: the number of vectors (i.e., the number of lines in the file), and the vector dimensionality. However, this is not always the case. For example, the original word2vec implementation includes this header line, but the more recent GloVe does not (probably because this information can be inferred from the content of the file). For the examples in the rest of the chapter, we will use the glove.6B.300d.txt embeddings that can be downloaded from the GloVe website.3 This file provides 400,000 word embeddings of 300-dimensions trained on texts
from Wikipedia 2014 and Gigaword 5. We will begin our exploration of word embeddings using Gensim,4 a Python library that provides excellent support for loading and using word embeddings, among other more advanced features. As we can see, the embeddings have been loaded and assigned to the glove variable. Note that we had to specify that this file doesn’t contain the header that is usually present in the word2vec format. The 3 https://nlp.stanford.edu/projects/glove/ 4 https://radimrehurek.com/gensim/ 9.1 Pre-trained Word Embeddings 135 glove.vectors attribute contains a 2-dimensional NumPy array with 400,000 rows and 300 columns, each row corresponding to a word embedding. 9.1.1 Word Similarity Gensim’s KeyedVectors class provides a method called most_similar that receives a word and computes its cosine similarity to all other embeddings, and returns the topn most-similar words. By default, topn is set to 10. The example above shows the top 10 most-similar words to the word cactus, when using the 300-dimension GloVe embeddings trained on Wikipedia and Gigaword. All ten most-similar words are related to cactus in different ways: cacti and cactuses are its plural forms; saguaro, peyote, opuntia, and prickly pear are types of cacti; and mesquite, shrubs, and succulents are other plants from arid climates. You can find more examples of word similarity queries in the Jupyter notebook that accompanies this chapter. Also, as an exercise, try loading a different set of embeddings trained with a different corpus (e.g., Twitter) to see if you obtain different results! 9.1.2 Word Analogies As we discussed in the previous chapter, the semantic information en- coded by word embeddings captures much more than word similar- ity. To surface this additional information, we will use word analogies represented using additional vector operations. For example, a well- ⃗
known analogy that highlights gender information is: king − m⃗an ≈ qu⃗een−wom⃗an,5or,inplainlanguage:“manistokingwhatwomanis to queen.” From this, it immediately follows that one can subtract the meaning of man and add the meaning of woman to obtain the definition ⃗ offemaleroyalty:king−m⃗an+wom⃗an≈qu⃗een.
 The same most_similar method we’ve been using can be repurposed to find word analogies such as the one mentioned above. To this end, two sets of words have to be provided to the most_similar method: a list of positive words that should be added, and a list of negative words 5 A word with an arrow on top refers to the embedding vector corresponding to that word. Please see Section 1.4 for a summary of the notations used in this book. 136 Implementing Text Classification Using Word Embeddings that should be subtracted. For example, the code below implements the left-hand side of the previous analogy: Another interesting analogy relation that shows how the embeddings have captured information about currencies is shown below. More examples are discussed in the Jupyter notebook. 9.1.3 Looking Under the Hood Let us understand now how these queries are actually implemented. First, we need to know what components we need. Clearly, we need the embedding vectors themselves. They are stored in the vectors attribute of the KeyedVectors object. As we mentioned previously, this is a 2-dimensional NumPy array, each row corresponding to a word in the vocabulary. These embeddings are not normalized, but normalized embeddings can be obtained using the get_normed_vectors method. We also need to know the mapping between words and the matrix rows. The KeyedVectors object stores this mapping in a list of terms called index_to_key, and a term-to-index dictionary called key_to_index. Below we show only the first 5 terms to save space, but you can inspect the whole vocabulary in the Jupyter notebook. 9.1.4 Word Similarity from Scratch Implementing the word similarity function ourselves is a good exercise to ensure that we understand how cosine similarity works, and to practice our NumPy skills. We will write a function called most_similar_words that will take a word, the embeddings matrix, the vocabulary in the form of the index_to_key list and key_to_index dictionary, and the number of similar words to return (defaults to 10). The implementation of most_similar_words is straightforward. First, we find the word id for the given word, using the key_to_index dictionary. Then we retrieve the row from the vectors matrix that corresponds to that word. The next step is computing the cosine similarity between the word of interest and the rest of the vocabulary. Recall that the cosine similarity is equivalent to a dot product if the vectors are normalized. We use this equivalence by performing a matrix-vector multiplication between the word embedding and the embedding matrix using Python’s at operator (denoted as @ in code). This means that we must pass the 9.2 Text Classification with Pretrained Word Embeddings 137 normalized embeddings as an argument to this function. Next, we need to sort the similarities preserving the mapping to the words in the vocabulary. We achieve this using the argsort NumPy method, which returns the indices in sorted (ascending) order. Since we need them in descending order, the next step is to reverse this list of indices. Obviously, the most similar word to whichever word we’re querying is the word itself, but that is not an interesting result, so we will remove it from the results. We do this by using NumPy’s ability to index arrays using booleans. We first create a new array in which the position corresponding to the query word is set to False and every other element is set to True, and we use this boolean array to index the list of ids. 
Lastly, we create a list of tuples of the form (word, similarity) for the topn words, and return the results. Now we will test our implementation of word similarity using the word cactus. You can compare the results to the ones obtained by KeyedVectors's most_similar method.

9.1.5 Word Analogies from Scratch

The implementation of the word analogy function is not that much different from our most_similar_words function above. The main difference is that now we have two lists of words that we need to combine into a single embedding. We first add the positive words into a single vector, and we do the same for the negative words. Then we subtract the negative vector from the positive one, and normalize the result. The similarity scores are computed the same way as before, but now we need to remove several words from the results, so this time we use NumPy's isin function, which checks whether each candidate id appears in given_word_ids. We then package the results the same way we did before, and return them. Now let's try our implementation with the same $\vec{king} - \vec{man} + \vec{woman}$ query we discussed previously. Please compare the results to the ones obtained by Gensim.

9.2 Text Classification with Pretrained Word Embeddings

In this section we will continue using the AG News classification dataset introduced in previous chapters. Most of the data preparation is the same, up to tokenization. However, we need to remember that the embeddings were trained on a different corpus, so it would be a good idea to estimate how well they cover the words in the AG News dataset. To achieve this, we load the embeddings just like we did previously. Then we count the tokens in our corpus that do not appear in the embeddings vocabulary, as well as the total number of tokens. We use these numbers to print some informative statistics such as the proportion of unknown tokens in the corpus. We also print the top ten unknown tokens. You can use the Jupyter notebook to explore this task further. Our analysis indicates that only 1.25% of the tokens are not accounted for in the embeddings vocabulary. Further, the most common unknown words seem to be URL fragments. This is encouraging.

However, for more robustness, we will introduce a couple of special embeddings that are often needed when dealing with word embeddings. The first one is an embedding used to represent unknown words. A common strategy is to use the average of all the embeddings in the vocabulary for this purpose. The second embedding we will add will be used for padding. Padding is required when we want to train with (mini-)batches because the lengths of all the examples in a given batch have to match in order for the batch to be processed efficiently in parallel. The padding embedding consists only of zeros, which essentially excludes these virtual tokens from the forward/backward passes. Neither of these embeddings is included in the pretrained GloVe embeddings, but other pretrained embeddings may already include them, so it is a good idea to check whether they are present before adding them. The new embeddings were added at the end of the embedding collection, so their ids are 400,000 and 400,001. Now we need to generate a list of token ids for each training example. Recall that we decided to ignore tokens that appear fewer than 10 times, so we need to replace those with [UNK] too, even if they appear in the embedding vocabulary.
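The corresponding cells from the chapter's classification notebook look roughly as follows; here vocabulary is the set of tokens that pass the frequency threshold and max_tokens is the length of the longest training example, both computed earlier in that notebook:

# add the two special embeddings: [UNK] as the average of all vectors, [PAD] as zeros
unk_tok, pad_tok = '[UNK]', '[PAD]'
unk_emb = glove.vectors.mean(axis=0)
pad_emb = np.zeros(300)
glove.add_vectors([unk_tok, pad_tok], [unk_emb, pad_emb])
unk_id = glove.key_to_index[unk_tok]
pad_id = glove.key_to_index[pad_tok]

# map a token to its embedding id, sending rare or unknown tokens to unk_id
def get_id(tok):
    if tok in vocabulary:
        return glove.key_to_index.get(tok, unk_id)
    return unk_id

# convert a list of tokens to a padded list of token ids
def token_ids(tokens):
    tok_ids = [get_id(tok) for tok in tokens]
    return tok_ids + [pad_id] * (max_tokens - len(tok_ids))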
Next, we create a Dataset object from the padded lists of token ids. This one is even easier than before, since the lists of token ids are ready; all that is required is turning them into tensors.

Lastly, we need to modify the model class to indicate that we now use embedding vectors. To this end, we will add an nn.Embedding layer that stores the embedding vectors for all words in the vocabulary. We will use this object to look up embeddings by their token ids. This layer will be initialized from a tensor containing the pretrained embeddings for the entire vocabulary. Also, the pad_id is specified when creating the embedding layer. When an nn.Embedding layer is initialized using the from_pretrained method with the other arguments set to their default values, the embeddings are not updated during training. We will keep it that way for this example, but that could be changed by setting the freeze parameter to False. The rest of the layers are the same as in our previous example from Chapter 7, i.e., one intermediate layer and one output layer, with a nonlinearity (ReLU) between them. The only major difference is that now the input size of the intermediate layer is the size of one embedding (e.g., 300) instead of the size of the vocabulary, as it was last time. This is because, as we explain below, the intermediate layer receives an average of the numerical representations of the words in the current text.

The forward function of the Model class changes significantly. This time we are encoding the text as an average of the embeddings of all the words it contains. To compute the denominator of this average, we obtain the length of each text by counting how many of its words are not the virtual padding token. Then we sum all the embeddings and divide by the number of non-padding tokens. Adding all embeddings is safe, because the padding embeddings consist of zeros. This process leaves us with a single embedding for the whole text, which is then passed to the rest of the layers.

The training and evaluation steps are the same as before. The results of this model on the AG News test partition can be found in the accompanying classification notebook. Comparing these results with the ones obtained by the multilayer perceptron with explicit features in Chapter 7, we observe that on this particular task utilizing embeddings as features does not yield a performance improvement. Notably, this is a small dataset and a rather simplistic task, where the presence of certain words is sufficient to distinguish the category of an article (e.g., the word basketball is highly indicative of the label Sports). Nevertheless, in other tasks where distinctions are more nuanced, or in which there is less likely to be word overlap between texts of interest, word embeddings do provide necessary signal. Additionally, when there are class imbalances, word embeddings can supplement underrepresented classes by bringing in the external knowledge gained during their pretraining.

9.3 Summary

In this chapter we showed how to explore the semantic space encoded by word embeddings through word similarity and analogies, as well as one way to use them for text classification. At this point we have not taken into consideration the order in which the words appear, i.e., we averaged the embeddings for all the words in the text using a bag-of-words representation of the text. In subsequent chapters we will explore how to incorporate word order into the learned representations of text.
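As a reference for the model described in Section 9.2, the sketch below is one way to write it in PyTorch. It is a minimal reconstruction based on the description above, not the notebook's exact code: the class and parameter names (e.g., hidden_dim, num_classes) are assumptions, and the full definition can be found in the chap9_classification notebook.

import torch
from torch import nn

class Model(nn.Module):
    def __init__(self, pretrained_embeddings, pad_id, hidden_dim, num_classes):
        super().__init__()
        # embeddings are frozen by default when using from_pretrained
        self.embs = nn.Embedding.from_pretrained(pretrained_embeddings, padding_idx=pad_id)
        emb_dim = pretrained_embeddings.shape[1]  # e.g., 300
        # one intermediate layer and one output layer, with a ReLU in between
        self.layers = nn.Sequential(
            nn.Linear(emb_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
        )
        self.pad_id = pad_id

    def forward(self, x):
        # x: (batch size, max tokens) tensor of token ids
        embs = self.embs(x)                                    # (batch, max tokens, emb dim)
        lengths = (x != self.pad_id).sum(dim=1, keepdim=True)  # number of non-padding tokens per text
        avg = embs.sum(dim=1) / lengths                        # padding embeddings are zeros, so the sum is safe
        return self.layers(avg)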
#!/usr/bin/env python # coding: utf-8 # # Using Pre-trained Word Embeddings # # In this notebook we will show some operations on pre-trained word embeddings to gain an intuition about them. # # We will be using the pre-trained GloVe embeddings that can be found in the [official website](https://nlp.stanford.edu/projects/glove/). In particular, we will use the file `glove.6B.300d.txt` contained in this [zip file](https://nlp.stanford.edu/data/glove.6B.zip). # # We will first load the GloVe embeddings using [Gensim](https://radimrehurek.com/gensim/). Specifically, we will use [`KeyedVectors`](https://radimrehurek.com/gensim/models/keyedvectors.html)'s [`load_word2vec_format()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.load_word2vec_format) classmethod, which supports the original word2vec file format. # However, there is a difference in the file formats used by GloVe and word2vec, which is a header used by word2vec to indicate the number of embeddings and dimensions stored in the file. The file that stores the GloVe embeddings doesn't have this header, so we will have to address that when loading the embeddings. # # Loading the embeddings may take a little bit, so hang in there! # In[2]: from gensim.models import KeyedVectors fname = "glove.6B.300d.txt" glove = KeyedVectors.load_word2vec_format(fname, no_header=True) glove.vectors.shape # ## Word similarity # # One attribute of word embeddings that makes them useful is the ability to compare them using cosine similarity to find how similar they are. [`KeyedVectors`](https://radimrehurek.com/gensim/models/keyedvectors.html) objects provide a method called [`most_similar()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.most_similar) that we can use to find the closest words to a particular word of interest. By default, [`most_similar()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.most_similar) returns the 10 most similar words, but this can be changed using the `topn` parameter. # # Below we test this function using a few different words. # In[3]: # common noun glove.most_similar("cactus") # In[4]: # common noun glove.most_similar("cake") # In[5]: # adjective glove.most_similar("angry") # In[6]: # adverb glove.most_similar("quickly") # In[7]: # preposition glove.most_similar("between") # In[8]: # determiner glove.most_similar("the") # ## Word analogies # # Another characteristic of word embeddings is their ability to solve analogy problems. # The same [`most_similar()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.most_similar) method can be used for this task, by passing two lists of words: # a `positive` list with the words that should be added and a `negative` list with the words that should be subtracted. 
Using these arguments, the famous example $\vec{king} - \vec{man} + \vec{woman} \approx \vec{queen}$ can be executed as follows: # In[9]: # king - man + woman glove.most_similar(positive=["king", "woman"], negative=["man"]) # Here are a few other interesting analogies: # In[10]: # car - drive + fly glove.most_similar(positive=["car", "fly"], negative=["drive"]) # In[11]: # berlin - germany + australia glove.most_similar(positive=["berlin", "australia"], negative=["germany"]) # In[12]: # england - london + baghdad glove.most_similar(positive=["england", "baghdad"], negative=["london"]) # In[13]: # japan - yen + peso glove.most_similar(positive=["japan", "peso"], negative=["yen"]) # In[14]: # best - good + tall glove.most_similar(positive=["best", "tall"], negative=["good"]) # ## Looking under the hood # # Now that we are more familiar with the [`most_similar()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.most_similar) method, it is time to implement its functionality ourselves. # But first, we need to take a look at the different parts of the [`KeyedVectors`](https://radimrehurek.com/gensim/models/keyedvectors.html) object that we will need. # Obviously, we will need the vectors themselves. They are stored in the `vectors` attribute. # In[15]: glove.vectors.shape # As we can see above, `vectors` is a 2-dimensional matrix with 400,000 rows and 300 columns. # Each row corresponds to a 300-dimensional word embedding. These embeddings are not normalized, but normalized embeddings can be obtained using the [`get_normed_vectors()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.get_normed_vectors) method. # In[16]: normed_vectors = glove.get_normed_vectors() normed_vectors.shape # Now we need to map the words in the vocabulary to rows in the `vectors` matrix, and vice versa. # The [`KeyedVectors`](https://radimrehurek.com/gensim/models/keyedvectors.html) object has the attributes `index_to_key` and `key_to_index` which are a list of words and a dictionary of words to indices, respectively. # In[17]: #glove.index_to_key # In[18]: #glove.key_to_index # ## Word similarity from scratch # # Now we have everything we need to implement a `most_similar_words()` function that takes a word, the vector matrix, the `index_to_key` list, and the `key_to_index` dictionary. This function will return the 10 most similar words to the provided word, along with their similarity scores. 
# In[19]:

import numpy as np

def most_similar_words(word, vectors, index_to_key, key_to_index, topn=10):
    # retrieve word_id corresponding to given word
    word_id = key_to_index[word]
    # retrieve embedding for given word
    emb = vectors[word_id]
    # calculate similarities to all words in our vocabulary
    similarities = vectors @ emb
    # get word_ids in ascending order with respect to similarity score
    ids_ascending = similarities.argsort()
    # reverse word_ids
    ids_descending = ids_ascending[::-1]
    # get boolean array with element corresponding to word_id set to false
    mask = ids_descending != word_id
    # obtain new array of indices that doesn't contain word_id
    # (otherwise the most similar word to the argument would be the argument itself)
    ids_descending = ids_descending[mask]
    # get topn word_ids
    top_ids = ids_descending[:topn]
    # retrieve topn words with their corresponding similarity score
    top_words = [(index_to_key[i], similarities[i]) for i in top_ids]
    # return results
    return top_words

# Now let's try the same example that we used above: the most similar words to "cactus".

# In[20]:

vectors = glove.get_normed_vectors()
index_to_key = glove.index_to_key
key_to_index = glove.key_to_index

most_similar_words("cactus", vectors, index_to_key, key_to_index)

# ## Analogies from scratch
#
# The `most_similar_words()` function behaves as expected. Now let's implement a function to perform the analogy task. We will give it the very creative name `analogy`. This function will get two lists of words (one for positive words and one for negative words), just like the [`most_similar()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.most_similar) method we discussed above.

# In[21]:

from numpy.linalg import norm

def analogy(positive, negative, vectors, index_to_key, key_to_index, topn=10):
    # find ids for positive and negative words
    pos_ids = [key_to_index[w] for w in positive]
    neg_ids = [key_to_index[w] for w in negative]
    given_word_ids = pos_ids + neg_ids
    # get embeddings for positive and negative words
    pos_emb = vectors[pos_ids].sum(axis=0)
    neg_emb = vectors[neg_ids].sum(axis=0)
    # get embedding for analogy
    emb = pos_emb - neg_emb
    # normalize embedding
    emb = emb / norm(emb)
    # calculate similarities to all words in our vocabulary
    similarities = vectors @ emb
    # get word_ids in ascending order with respect to similarity score
    ids_ascending = similarities.argsort()
    # reverse word_ids
    ids_descending = ids_ascending[::-1]
    # get boolean array with elements corresponding to any of given_word_ids set to false
    given_words_mask = np.isin(ids_descending, given_word_ids, invert=True)
    # obtain new array of indices that doesn't contain any of the given_word_ids
    ids_descending = ids_descending[given_words_mask]
    # get topn word_ids
    top_ids = ids_descending[:topn]
    # retrieve topn words with their corresponding similarity score
    top_words = [(index_to_key[i], similarities[i]) for i in top_ids]
    # return results
    return top_words

# Let's try this function with the $\vec{king} - \vec{man} + \vec{woman} \approx \vec{queen}$ example we discussed above.

# In[22]:

positive = ["king", "woman"]
negative = ["man"]
vectors = glove.get_normed_vectors()
index_to_key = glove.index_to_key
key_to_index = glove.key_to_index

analogy(positive, negative, vectors, index_to_key, key_to_index)

# In[ ]:
#!/usr/bin/env python # coding: utf-8 # # Multiclass Text Classification with # # Feed-forward Neural Networks and Word Embeddings # First, we will do some initialization. # In[1]: import random import torch import numpy as np import pandas as pd from tqdm.notebook import tqdm # enable tqdm in pandas tqdm.pandas() # set to True to use the gpu (if there is one available) use_gpu = True # select device device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu') print(f'device: {device.type}') # random seed seed = 1234 # set random seed if seed is not None: print(f'random seed: {seed}') random.seed(seed) np.random.seed(seed) torch.manual_seed(seed) # We will be using the AG's News Topic Classification Dataset. # It is stored in two CSV files: `train.csv` and `test.csv`, as well as a `classes.txt` that stores the labels of the classes to predict. # # First, we will load the training dataset using [pandas](https://pandas.pydata.org/) and take a quick look at how the data. # In[2]: train_df = pd.read_csv('data/ag_news_csv/train.csv', header=None) train_df.columns = ['class index', 'title', 'description'] train_df # The dataset consists of 120,000 examples, each consisting of a class index, a title, and a description. # The class labels are distributed in a separated file. We will add the labels to the dataset so that we can interpret the data more easily. Note that the label indexes are one-based, so we need to subtract one to retrieve them from the list. # In[3]: labels = open('data/ag_news_csv/classes.txt').read().splitlines() classes = train_df['class index'].map(lambda i: labels[i-1]) train_df.insert(1, 'class', classes) train_df # Let's inspect how balanced our examples are by using a bar plot. # In[4]: pd.value_counts(train_df['class']).plot.bar() # The classes are evenly distributed. That's great! # # However, the text contains some spurious backslashes in some parts of the text. # They are meant to represent newlines in the original text. # An example can be seen below, between the words "dwindling" and "band". # In[5]: print(train_df.loc[0, 'description']) # We will replace the backslashes with spaces on the whole column using pandas replace method. # In[6]: train_df['text'] = train_df['title'].str.lower() + " " + train_df['description'].str.lower() train_df['text'] = train_df['text'].str.replace('\\', ' ', regex=False) train_df # Now we will proceed to tokenize the title and description columns using NLTK's word_tokenize(). # We will add a new column to our dataframe with the list of tokens. # In[7]: from nltk.tokenize import word_tokenize train_df['tokens'] = train_df['text'].progress_map(word_tokenize) train_df # Now we will load the GloVe word embeddings. # In[8]: from gensim.models import KeyedVectors glove = KeyedVectors.load_word2vec_format("glove.6B.300d.txt", no_header=True) glove.vectors.shape # The word embeddings have been pretrained in a different corpus, so it would be a good idea to estimate how good our tokenization matches the GloVe vocabulary. 
# In[9]: from collections import Counter def count_unknown_words(data, vocabulary): counter = Counter() for row in tqdm(data): counter.update(tok for tok in row if tok not in vocabulary) return counter # find out how many times each unknown token occurrs in the corpus c = count_unknown_words(train_df['tokens'], glove.key_to_index) # find the total number of tokens in the corpus total_tokens = train_df['tokens'].map(len).sum() # find some statistics about occurrences of unknown tokens unk_tokens = sum(c.values()) percent_unk = unk_tokens / total_tokens distinct_tokens = len(list(c)) print(f'total number of tokens: {total_tokens:,}') print(f'number of unknown tokens: {unk_tokens:,}') print(f'number of distinct unknown tokens: {distinct_tokens:,}') print(f'percentage of unkown tokens: {percent_unk:.2%}') print('top 50 unknown words:') for token, n in c.most_common(10): print(f'\t{n}\t{token}') # Glove embeddings seem to have a good coverage on this dataset -- only 1.25% of the tokens in the dataset are unknown, i.e., don't appear in the GloVe vocabulary. # # Still, we will need a way to handle these unknown tokens. # Our approach will be to add a new embedding to GloVe that will be used to represent them. # This new embedding will be initialized as the average of all the GloVe embeddings. # # We will also add another embedding, this one initialized to zeros, that will be used to pad the sequences of tokens so that they all have the same length. This will be useful when we train with mini-batches. # In[10]: # string values corresponding to the new embeddings unk_tok = '[UNK]' pad_tok = '[PAD]' # initialize the new embedding values unk_emb = glove.vectors.mean(axis=0) pad_emb = np.zeros(300) # add new embeddings to glove glove.add_vectors([unk_tok, pad_tok], [unk_emb, pad_emb]) # get token ids corresponding to the new embeddings unk_id = glove.key_to_index[unk_tok] pad_id = glove.key_to_index[pad_tok] unk_id, pad_id # In[11]: from sklearn.model_selection import train_test_split train_df, dev_df = train_test_split(train_df, train_size=0.8) train_df.reset_index(inplace=True) dev_df.reset_index(inplace=True) # We will now add a new column to our dataframe that will contain the padded sequences of token ids. # In[12]: threshold = 10 tokens = train_df['tokens'].explode().value_counts() vocabulary = set(tokens[tokens > threshold].index.tolist()) print(f'vocabulary size: {len(vocabulary):,}') # In[13]: # find the length of the longest list of tokens max_tokens = train_df['tokens'].map(len).max() # return unk_id for infrequent tokens too def get_id(tok): if tok in vocabulary: return glove.key_to_index.get(tok, unk_id) else: return unk_id # function that gets a list of tokens and returns a list of token ids, # with padding added accordingly def token_ids(tokens): tok_ids = [get_id(tok) for tok in tokens] pad_len = max_tokens - len(tok_ids) return tok_ids + [pad_id] * pad_len # add new column to the dataframe train_df['token ids'] = train_df['tokens'].progress_map(token_ids) train_df # In[14]: max_tokens = dev_df['tokens'].map(len).max() dev_df['token ids'] = dev_df['tokens'].progress_map(token_ids) dev_df # Now we will get a numpy 2-dimensional array corresponding to the token ids, # and a 1-dimensional array with the gold classes. Note that the classes are one-based (i.e., they start at one), # but we need them to be zero-based, so we need to subtract one from this array. 
# In[15]: from torch.utils.data import Dataset class MyDataset(Dataset): def __init__(self, x, y): self.x = x self.y = y def __len__(self): return len(self.y) def __getitem__(self, index): x = torch.tensor(self.x[index]) y = torch.tensor(self.y[index]) return x, y # Next, we construct our PyTorch model, which is a feed-forward neural network with two layers: # In[16]: from torch import nn import torch.nn.functional as F class Model(nn.Module): def __init__(self, vectors, pad_id, hidden_dim, output_dim, dropout): super().__init__() # embeddings must be a tensor if not torch.is_tensor(vectors): vectors = torch.tensor(vectors) # keep padding id self.padding_idx = pad_id # embedding layer self.embs = nn.Embedding.from_pretrained(vectors, padding_idx=pad_id) # feedforward layers self.layers = nn.Sequential( nn.Dropout(dropout), nn.Linear(vectors.shape[1], hidden_dim), nn.ReLU(), nn.Dropout(dropout), nn.Linear(hidden_dim, output_dim), ) def forward(self, x): # get boolean array with padding elements set to false not_padding = torch.isin(x, self.padding_idx, invert=True) # get lengths of examples (excluding padding) lengths = torch.count_nonzero(not_padding, axis=1) # get embeddings x = self.embs(x) # calculate means x = x.sum(dim=1) / lengths.unsqueeze(dim=1) # pass to rest of the model output = self.layers(x) # calculate softmax if we're not in training mode #if not self.training: # output = F.softmax(output, dim=1) return output # Next, we implement the training procedure. We compute the loss and accuracy on the development partition after each epoch. # In[17]: from torch import optim from torch.utils.data import DataLoader from sklearn.metrics import accuracy_score # hyperparameters lr = 1e-3 weight_decay = 0 batch_size = 500 shuffle = True n_epochs = 5 hidden_dim = 50 output_dim = len(labels) dropout = 0.1 vectors = glove.vectors # initialize the model, loss function, optimizer, and data-loader model = Model(vectors, pad_id, hidden_dim, output_dim, dropout).to(device) loss_func = nn.CrossEntropyLoss() optimizer = optim.Adam(model.parameters(), lr=lr, weight_decay=weight_decay) train_ds = MyDataset(train_df['token ids'], train_df['class index'] - 1) train_dl = DataLoader(train_ds, batch_size=batch_size, shuffle=shuffle) dev_ds = MyDataset(dev_df['token ids'], dev_df['class index'] - 1) dev_dl = DataLoader(dev_ds, batch_size=batch_size, shuffle=shuffle) train_loss = [] train_acc = [] dev_loss = [] dev_acc = [] # train the model for epoch in range(n_epochs): losses = [] gold = [] pred = [] model.train() for X, y_true in tqdm(train_dl, desc=f'epoch {epoch+1} (train)'): # clear gradients model.zero_grad() # send batch to right device X = X.to(device) y_true = y_true.to(device) # predict label scores y_pred = model(X) # compute loss loss = loss_func(y_pred, y_true) # accumulate for plotting losses.append(loss.detach().cpu().item()) gold.append(y_true.detach().cpu().numpy()) pred.append(np.argmax(y_pred.detach().cpu().numpy(), axis=1)) # backpropagate loss.backward() # optimize model parameters optimizer.step() train_loss.append(np.mean(losses)) train_acc.append(accuracy_score(np.concatenate(gold), np.concatenate(pred))) model.eval() with torch.no_grad(): losses = [] gold = [] pred = [] for X, y_true in tqdm(dev_dl, desc=f'epoch {epoch+1} (dev)'): X = X.to(device) y_true = y_true.to(device) y_pred = model(X) loss = loss_func(y_pred, y_true) losses.append(loss.cpu().item()) gold.append(y_true.cpu().numpy()) pred.append(np.argmax(y_pred.cpu().numpy(), axis=1)) dev_loss.append(np.mean(losses)) 
dev_acc.append(accuracy_score(np.concatenate(gold), np.concatenate(pred))) # Let's plot the loss and accuracy on dev: # In[18]: import matplotlib.pyplot as plt get_ipython().run_line_magic('matplotlib', 'inline') x = np.arange(n_epochs) + 1 plt.plot(x, train_loss) plt.plot(x, dev_loss) plt.legend(['train', 'dev']) plt.xlabel('epoch') plt.ylabel('loss') plt.grid(True) # In[19]: plt.plot(x, train_acc) plt.plot(x, dev_acc) plt.legend(['train', 'dev']) plt.xlabel('epoch') plt.ylabel('accuracy') plt.grid(True) # Next, we evaluate on the testing partition: # In[20]: # repeat all preprocessing done above, this time on the test set test_df = pd.read_csv('data/ag_news_csv/test.csv', header=None) test_df.columns = ['class index', 'title', 'description'] test_df['text'] = test_df['title'].str.lower() + " " + test_df['description'].str.lower() test_df['text'] = test_df['text'].str.replace('\\', ' ', regex=False) test_df['tokens'] = test_df['text'].progress_map(word_tokenize) max_tokens = dev_df['tokens'].map(len).max() test_df['token ids'] = test_df['tokens'].progress_map(token_ids) # In[21]: from sklearn.metrics import classification_report # set model to evaluation mode model.eval() dataset = MyDataset(test_df['token ids'], test_df['class index'] - 1) data_loader = DataLoader(dataset, batch_size=batch_size) y_pred = [] # don't store gradients with torch.no_grad(): for X, _ in tqdm(data_loader): X = X.to(device) # predict one class per example y = torch.argmax(model(X), dim=1) # convert tensor to numpy array (sending it back to the cpu if needed) y_pred.append(y.cpu().numpy()) # print results print(classification_report(dataset.y, np.concatenate(y_pred), target_names=labels))
9 Implementing Text Classification Using Word Embeddings

In the previous chapter we introduced word embeddings, which are real-valued vectors that encode semantic representations of words. We discussed how to learn them, and how they capture semantic information that makes them useful for downstream tasks. In this chapter we show how to use word embeddings that have been pretrained using a variant of the algorithm discussed in the previous chapter. We show how to load them, explore some of their characteristics, and show their application to a text classification task. As usual, the code for this chapter is available in our repository. It is organized into two notebooks: one corresponding to the explorations shown in the first half of this chapter (chap9_embeddings), and a second one in which we modify our previous classifier to use word embeddings (chap9_classification).

9.1 Pre-trained Word Embeddings

There are several algorithms for training word embeddings, including the original word2vec algorithm (Mikolov et al., 2013a), which we discussed in the previous chapter, GloVe (Pennington et al., 2014), and fastText (Bojanowski et al., 2017). They all provide the software for training the embeddings as well as pretrained word embeddings on their respective websites. In general, most open-domain word embeddings are trained on large corpora that cover a variety of topics such as Wikipedia1 and Gigaword.2 Commonly, these embeddings are freely distributed so that practitioners can use them in downstream tasks. We will use one such set of vectors in this chapter.

1 https://en.wikipedia.org/wiki/Wikipedia:Database_download
2 https://catalog.ldc.upenn.edu/LDC2011T07

Pretrained embeddings are usually distributed as a text file in which each line represents a word vector. The first element in the line is the word itself, and the rest of the elements are the vector components. This is usually referred to as the word2vec format. For example, Figure 9.1 shows the line in the glove.6B.50d.txt file (from the GloVe website) corresponding to the word house. This vector is represented by the word itself, followed by 50 floating-point numbers corresponding to the 50-dimensional vector.

house 0.60137 0.28521 -0.032038 -0.43026 0.74806 0.26223 -0.97361 0.078581 -0.57588 -1.188 -1.8507 -0.24887 0.055549 0.0086155 0.067951 0.40554 -0.073998 -0.21318 0.37167 -0.71791 1.2234 0.35546 -0.41537 -0.21931 -0.39661 -1.7831 -0.41507 0.29533 -0.41254 0.020096 2.7425 -0.9926 -0.71033 -0.46813 0.28265 -0.077639 0.3041 -0.06644 0.3951 -0.70747 -0.38894 0.23158 -0.49508 0.14612 -0.02314 0.56389 -0.86188 -1.0278 0.039922 0.20018
Figure 9.1 GloVe embedding corresponding to the word house, found in the GloVe file glove.6B.50d.txt. We have broken the vector into several lines for display purposes, but this is a single line in the text file.

Note that some embeddings files have a header line composed of two numbers: the number of vectors (i.e., the number of lines in the file) and the vector dimensionality. However, this is not always the case. For example, the original word2vec implementation includes this header line, but the more recent GloVe does not (probably because this information can be inferred from the content of the file). For the examples in the rest of the chapter, we will use the glove.6B.300d.txt embeddings that can be downloaded from the GloVe website.3 This file provides 400,000 word embeddings of 300 dimensions trained on texts
from Wikipedia 2014 and Gigaword 5.

We will begin our exploration of word embeddings using Gensim,4 a Python library that provides excellent support for loading and using word embeddings, among other more advanced features. As we can see, the embeddings have been loaded and assigned to the glove variable. Note that we had to specify that this file doesn't contain the header that is usually present in the word2vec format. The glove.vectors attribute contains a 2-dimensional NumPy array with 400,000 rows and 300 columns, each row corresponding to a word embedding.

3 https://nlp.stanford.edu/projects/glove/
4 https://radimrehurek.com/gensim/

9.1.1 Word Similarity

Gensim's KeyedVectors class provides a method called most_similar that receives a word, computes its cosine similarity to all other embeddings, and returns the topn most-similar words. By default, topn is set to 10. The example above shows the top 10 most-similar words to the word cactus, when using the 300-dimensional GloVe embeddings trained on Wikipedia and Gigaword. All ten most-similar words are related to cactus in different ways: cacti and cactuses are its plural forms; saguaro, peyote, opuntia, and prickly pear are types of cacti; and mesquite, shrubs, and succulents are other plants from arid climates. You can find more examples of word similarity queries in the Jupyter notebook that accompanies this chapter. Also, as an exercise, try loading a different set of embeddings trained on a different corpus (e.g., Twitter) to see if you obtain different results!

9.1.2 Word Analogies

As we discussed in the previous chapter, the semantic information encoded by word embeddings captures much more than word similarity. To surface this additional information, we will use word analogies represented using additional vector operations. For example, a well-known analogy that highlights gender information is: $\vec{king} - \vec{man} \approx \vec{queen} - \vec{woman}$,5 or, in plain language: "man is to king what woman is to queen." From this, it immediately follows that one can subtract the meaning of man and add the meaning of woman to obtain the definition of female royalty: $\vec{king} - \vec{man} + \vec{woman} \approx \vec{queen}$.
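For instance, the two analogy queries below (taken from the accompanying notebook, where glove is the loaded KeyedVectors object) ask Gensim for the words closest to king − man + woman and to japan − yen + peso, respectively:

# king - man + woman: the top result should be "queen"
glove.most_similar(positive=["king", "woman"], negative=["man"])

# japan - yen + peso: an analogy involving currencies
glove.most_similar(positive=["japan", "peso"], negative=["yen"])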
The same most_similar method we've been using can be repurposed to find word analogies such as the one mentioned above. To this end, two sets of words have to be provided to the most_similar method: a list of positive words that should be added, and a list of negative words that should be subtracted. For example, the first query shown above implements the left-hand side of the previous analogy, while the second shows how the embeddings have captured information about currencies. More examples are discussed in the Jupyter notebook.

5 A word with an arrow on top refers to the embedding vector corresponding to that word. Please see Section 1.4 for a summary of the notations used in this book.

9.1.3 Looking Under the Hood

Let us now understand how these queries are actually implemented. First, we need to know what components we need. Clearly, we need the embedding vectors themselves. They are stored in the vectors attribute of the KeyedVectors object. As we mentioned previously, this is a 2-dimensional NumPy array, each row corresponding to a word in the vocabulary. These embeddings are not normalized, but normalized embeddings can be obtained using the get_normed_vectors method. We also need to know the mapping between words and the matrix rows. The KeyedVectors object stores this mapping in a list of terms called index_to_key, and a term-to-index dictionary called key_to_index. Below we show only the first 5 terms to save space, but you can inspect the whole vocabulary in the Jupyter notebook.

9.1.4 Word Similarity from Scratch

Implementing the word similarity function ourselves is a good exercise to ensure that we understand how cosine similarity works, and to practice our NumPy skills. We will write a function called most_similar_words that will take a word, the embeddings matrix, the vocabulary in the form of the index_to_key list and key_to_index dictionary, and the number of similar words to return (which defaults to 10). The implementation of most_similar_words is straightforward. First, we find the word id for the given word, using the key_to_index dictionary. Then we retrieve the row from the vectors matrix that corresponds to that word. The next step is computing the cosine similarity between the word of interest and the rest of the vocabulary. Recall that the cosine similarity is equivalent to a dot product if the vectors are normalized. We use this equivalence by performing a matrix-vector multiplication between the word embedding and the embedding matrix using Python's at operator (denoted as @ in code). This means that we must pass the normalized embeddings as an argument to this function. Next, we need to sort the similarities while preserving the mapping to the words in the vocabulary. We achieve this using the argsort NumPy method, which returns the indices in sorted (ascending) order. Since we need them in descending order, the next step is to reverse this list of indices. Obviously, the most similar word to whichever word we're querying is the word itself, but that is not an interesting result, so we will remove it from the results. We do this by using NumPy's ability to index arrays using booleans. We first create a new array in which the position corresponding to the query word is set to False and every other element is set to True, and we use this boolean array to index the list of ids.
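A condensed version of this function, mirroring the implementation developed in the notebook that follows (vectors must hold the normalized embeddings), looks roughly like this:

import numpy as np

def most_similar_words(word, vectors, index_to_key, key_to_index, topn=10):
    word_id = key_to_index[word]            # row index of the query word
    emb = vectors[word_id]                  # its (normalized) embedding
    similarities = vectors @ emb            # cosine similarities via dot products
    ids_descending = similarities.argsort()[::-1]
    ids_descending = ids_descending[ids_descending != word_id]  # drop the query word
    return [(index_to_key[i], similarities[i]) for i in ids_descending[:topn]]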
#!/usr/bin/env python # coding: utf-8 # # Using Pre-trained Word Embeddings # # In this notebook we will show some operations on pre-trained word embeddings to gain an intuition about them. # # We will be using the pre-trained GloVe embeddings that can be found in the [official website](https://nlp.stanford.edu/projects/glove/). In particular, we will use the file `glove.6B.300d.txt` contained in this [zip file](https://nlp.stanford.edu/data/glove.6B.zip). # # We will first load the GloVe embeddings using [Gensim](https://radimrehurek.com/gensim/). Specifically, we will use [`KeyedVectors`](https://radimrehurek.com/gensim/models/keyedvectors.html)'s [`load_word2vec_format()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.load_word2vec_format) classmethod, which supports the original word2vec file format. # However, there is a difference in the file formats used by GloVe and word2vec, which is a header used by word2vec to indicate the number of embeddings and dimensions stored in the file. The file that stores the GloVe embeddings doesn't have this header, so we will have to address that when loading the embeddings. # # Loading the embeddings may take a little bit, so hang in there! # In[2]: from gensim.models import KeyedVectors fname = "glove.6B.300d.txt" glove = KeyedVectors.load_word2vec_format(fname, no_header=True) glove.vectors.shape # ## Word similarity # # One attribute of word embeddings that makes them useful is the ability to compare them using cosine similarity to find how similar they are. [`KeyedVectors`](https://radimrehurek.com/gensim/models/keyedvectors.html) objects provide a method called [`most_similar()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.most_similar) that we can use to find the closest words to a particular word of interest. By default, [`most_similar()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.most_similar) returns the 10 most similar words, but this can be changed using the `topn` parameter. # # Below we test this function using a few different words. # In[3]: # common noun glove.most_similar("cactus") # In[4]: # common noun glove.most_similar("cake") # In[5]: # adjective glove.most_similar("angry") # In[6]: # adverb glove.most_similar("quickly") # In[7]: # preposition glove.most_similar("between") # In[8]: # determiner glove.most_similar("the") # ## Word analogies # # Another characteristic of word embeddings is their ability to solve analogy problems. # The same [`most_similar()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.most_similar) method can be used for this task, by passing two lists of words: # a `positive` list with the words that should be added and a `negative` list with the words that should be subtracted. 
Using these arguments, the famous example $\vec{king} - \vec{man} + \vec{woman} \approx \vec{queen}$ can be executed as follows: # In[9]: # king - man + woman glove.most_similar(positive=["king", "woman"], negative=["man"]) # Here are a few other interesting analogies: # In[10]: # car - drive + fly glove.most_similar(positive=["car", "fly"], negative=["drive"]) # In[11]: # berlin - germany + australia glove.most_similar(positive=["berlin", "australia"], negative=["germany"]) # In[12]: # england - london + baghdad glove.most_similar(positive=["england", "baghdad"], negative=["london"]) # In[13]: # japan - yen + peso glove.most_similar(positive=["japan", "peso"], negative=["yen"]) # In[14]: # best - good + tall glove.most_similar(positive=["best", "tall"], negative=["good"]) # ## Looking under the hood # # Now that we are more familiar with the [`most_similar()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.most_similar) method, it is time to implement its functionality ourselves. # But first, we need to take a look at the different parts of the [`KeyedVectors`](https://radimrehurek.com/gensim/models/keyedvectors.html) object that we will need. # Obviously, we will need the vectors themselves. They are stored in the `vectors` attribute. # In[15]: glove.vectors.shape # As we can see above, `vectors` is a 2-dimensional matrix with 400,000 rows and 300 columns. # Each row corresponds to a 300-dimensional word embedding. These embeddings are not normalized, but normalized embeddings can be obtained using the [`get_normed_vectors()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.get_normed_vectors) method. # In[16]: normed_vectors = glove.get_normed_vectors() normed_vectors.shape # Now we need to map the words in the vocabulary to rows in the `vectors` matrix, and vice versa. # The [`KeyedVectors`](https://radimrehurek.com/gensim/models/keyedvectors.html) object has the attributes `index_to_key` and `key_to_index` which are a list of words and a dictionary of words to indices, respectively. # In[17]: #glove.index_to_key # In[18]: #glove.key_to_index # ## Word similarity from scratch # # Now we have everything we need to implement a `most_similar_words()` function that takes a word, the vector matrix, the `index_to_key` list, and the `key_to_index` dictionary. This function will return the 10 most similar words to the provided word, along with their similarity scores. 
# In[19]: import numpy as np def most_similar_words(word, vectors, index_to_key, key_to_index, topn=10): # retrieve word_id corresponding to given word word_id = key_to_index[word] # retrieve embedding for given word emb = vectors[word_id] # calculate similarities to all words in out vocabulary similarities = vectors @ emb # get word_ids in ascending order with respect to similarity score ids_ascending = similarities.argsort() # reverse word_ids ids_descending = ids_ascending[::-1] # get boolean array with element corresponding to word_id set to false mask = ids_descending != word_id # obtain new array of indices that doesn't contain word_id # (otherwise the most similar word to the argument would be the argument itself) ids_descending = ids_descending[mask] # get topn word_ids top_ids = ids_descending[:topn] # retrieve topn words with their corresponding similarity score top_words = [(index_to_key[i], similarities[i]) for i in top_ids] # return results return top_words # Now let's try the same example that we used above: the most similar words to "cactus". # In[20]: vectors = glove.get_normed_vectors() index_to_key = glove.index_to_key key_to_index = glove.key_to_index most_similar_words("cactus", vectors, index_to_key, key_to_index) # ## Analogies from scratch # # The `most_similar_words()` function behaves as expected. Now let's implement a function to perform the analogy task. We will give it the very creative name `analogy`. This function will get two lists of words (one for positive words and one for negative words), just like the [`most_similar()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.most_similar) method we discussed above. # In[21]: from numpy.linalg import norm def analogy(positive, negative, vectors, index_to_key, key_to_index, topn=10): # find ids for positive and negative words pos_ids = [key_to_index[w] for w in positive] neg_ids = [key_to_index[w] for w in negative] given_word_ids = pos_ids + neg_ids # get embeddings for positive and negative words pos_emb = vectors[pos_ids].sum(axis=0) neg_emb = vectors[neg_ids].sum(axis=0) # get embedding for analogy emb = pos_emb - neg_emb # normalize embedding emb = emb / norm(emb) # calculate similarities to all words in out vocabulary similarities = vectors @ emb # get word_ids in ascending order with respect to similarity score ids_ascending = similarities.argsort() # reverse word_ids ids_descending = ids_ascending[::-1] # get boolean array with element corresponding to any of given_word_ids set to false given_words_mask = np.isin(ids_descending, given_word_ids, invert=True) # obtain new array of indices that doesn't contain any of the given_word_ids ids_descending = ids_descending[given_words_mask] # get topn word_ids top_ids = ids_descending[:topn] # retrieve topn words with their corresponding similarity score top_words = [(index_to_key[i], similarities[i]) for i in top_ids] # return results return top_words # Let's try this function with the $\vec{king} - \vec{man} + \vec{woman} \approx \vec{queen}$ example we discussed above. # In[22]: positive = ["king", "woman"] negative = ["man"] vectors = glove.get_normed_vectors() index_to_key = glove.index_to_key key_to_index = glove.key_to_index analogy(positive, negative, vectors, index_to_key, key_to_index) # In[ ]:
5,670
5,703
17
chap09-18
chap09-18
9 Implementing Text Classification Using Word Embeddings In the previous chapter we introduced word embeddings, which are realvalued vectors that encode semantic representation of words. We discussed how to learn them, and how they capture semantic information that makes them useful for downstream tasks. In this chapter we show how to use word embeddings that have been pretrained using a variant of the algorithm discussed in the previous chapter. We show how to load them, explore some of their characteristics, and show their application for a text classification task. As usual, the code for this chapter is available in our repository. It is organized into two notebooks: one corresponding to the explorations shown in the first half of this chapter (chap9_embeddings), and a second one in which we modify our previous classifier to use word embeddings (chap9_classification). 9.1 Pre-trained Word Embeddings There are several algorithms for training word embeddings, including the original word2vec algorithm (Mikolov et al., 2013a) (which we discussed in the previous chapter), GloVe (Pennington et al., 2014), and fastText (Bojanowski et al., 2017). They all provide the software for training the embeddings as well as pretrained word embeddings on their respective websites. In general, most open-domain word embeddings are trained on large corpora that cover a variety of topics such as Wikipedia1 and Gigaword.2 Commonly, these embeddings are freely distributed so 1 https://en.wikipedia.org/wiki/Wikipedia:Database_download 2 https://catalog.ldc.upenn.edu/LDC2011T07 133 134 Implementing Text Classification Using Word Embeddings house 0.60137 0.28521 -0.032038 -0.43026 0.74806 0.26223 -0.97361 0.078581 -0.57588 -1.188 -1.8507 -0.24887 0.055549 0.0086155 0.067951 0.40554 -0.073998 -0.21318 0.37167 -0.71791 1.2234 0.35546 -0.41537 -0.21931 -0.39661 -1.7831 -0.41507 0.29533 -0.41254 0.020096 2.7425 -0.9926 -0.71033 -0.46813 0.28265 -0.077639 0.3041 -0.06644 0.3951 -0.70747 -0.38894 0.23158 -0.49508 0.14612 -0.02314 0.56389 -0.86188 -1.0278 0.039922 0.20018 Figure 9.1 GloVe embedding corresponding to the word house, found in the GloVe file glove.6B.50d.txt. We have broken the vector in several lines for display purposes, but this is a single line in the text file. that practitioners can use them in downstream tasks. We will use one such set of vectors in this chapter. Pretrained embeddings are usually distributed as a text file in which each line represents a word vector. The first element in the line is the word itself, and the rest of the elements are the vector components. This is usually referred to as the word2vec format. For example, Figure 9.1 shows the line in the glove.6B.50d.txt file (from the GloVe website) corresponding to the word house. This vector is represented by the word itself, followed by 50 floating-point numbers corresponding to the 50dimensional vector. Note that some embeddings files have a header line composed of two numbers: the number of vectors (i.e., the number of lines in the file), and the vector dimensionality. However, this is not always the case. For example, the original word2vec implementation includes this header line, but the more recent GloVe does not (probably because this information can be inferred from the content of the file). For the examples in the rest of the chapter, we will use the glove.6B.300d.txt embeddings that can be downloaded from the GloVe website.3 This file provides 400,000 word embeddings of 300-dimensions trained on texts
from Wikipedia 2014 and Gigaword 5. We will begin our exploration of word embeddings using Gensim,4 a Python library that provides excellent support for loading and using word embeddings, among other more advanced features. As we can see, the embeddings have been loaded and assigned to the glove variable. Note that we had to specify that this file doesn’t contain the header that is usually present in the word2vec format. The 3 https://nlp.stanford.edu/projects/glove/ 4 https://radimrehurek.com/gensim/ 9.1 Pre-trained Word Embeddings 135 glove.vectors attribute contains a 2-dimensional NumPy array with 400,000 rows and 300 columns, each row corresponding to a word embedding. 9.1.1 Word Similarity Gensim’s KeyedVectors class provides a method called most_similar that receives a word and computes its cosine similarity to all other embeddings, and returns the topn most-similar words. By default, topn is set to 10. The example above shows the top 10 most-similar words to the word cactus, when using the 300-dimension GloVe embeddings trained on Wikipedia and Gigaword. All ten most-similar words are related to cactus in different ways: cacti and cactuses are its plural forms; saguaro, peyote, opuntia, and prickly pear are types of cacti; and mesquite, shrubs, and succulents are other plants from arid climates. You can find more examples of word similarity queries in the Jupyter notebook that accompanies this chapter. Also, as an exercise, try loading a different set of embeddings trained with a different corpus (e.g., Twitter) to see if you obtain different results! 9.1.2 Word Analogies As we discussed in the previous chapter, the semantic information en- coded by word embeddings captures much more than word similar- ity. To surface this additional information, we will use word analogies represented using additional vector operations. For example, a well- ⃗
known analogy that highlights gender information is: king − m⃗an ≈ qu⃗een−wom⃗an,5or,inplainlanguage:“manistokingwhatwomanis to queen.” From this, it immediately follows that one can subtract the meaning of man and add the meaning of woman to obtain the definition ⃗ offemaleroyalty:king−m⃗an+wom⃗an≈qu⃗een.
 The same most_similar method we’ve been using can be repurposed to find word analogies such as the one mentioned above. To this end, two sets of words have to be provided to the most_similar method: a list of positive words that should be added, and a list of negative words 5 A word with an arrow on top refers to the embedding vector corresponding to that word. Please see Section 1.4 for a summary of the notations used in this book. 136 Implementing Text Classification Using Word Embeddings that should be subtracted. For example, the code below implements the left-hand side of the previous analogy: Another interesting analogy relation that shows how the embeddings have captured information about currencies is shown below. More examples are discussed in the Jupyter notebook. 9.1.3 Looking Under the Hood Let us understand now how these queries are actually implemented. First, we need to know what components we need. Clearly, we need the embedding vectors themselves. They are stored in the vectors attribute of the KeyedVectors object. As we mentioned previously, this is a 2-dimensional NumPy array, each row corresponding to a word in the vocabulary. These embeddings are not normalized, but normalized embeddings can be obtained using the get_normed_vectors method. We also need to know the mapping between words and the matrix rows. The KeyedVectors object stores this mapping in a list of terms called index_to_key, and a term-to-index dictionary called key_to_index. Below we show only the first 5 terms to save space, but you can inspect the whole vocabulary in the Jupyter notebook. 9.1.4 Word Similarity from Scratch Implementing the word similarity function ourselves is a good exercise to ensure that we understand how cosine similarity works, and to practice our NumPy skills. We will write a function called most_similar_words that will take a word, the embeddings matrix, the vocabulary in the form of the index_to_key list and key_to_index dictionary, and the number of similar words to return (defaults to 10). The implementation of most_similar_words is straightforward. First, we find the word id for the given word, using the key_to_index dictionary. Then we retrieve the row from the vectors matrix that corresponds to that word. The next step is computing the cosine similarity between the word of interest and the rest of the vocabulary. Recall that the cosine similarity is equivalent to a dot product if the vectors are normalized. We use this equivalence by performing a matrix-vector multiplication between the word embedding and the embedding matrix using Python’s at operator (denoted as @ in code). This means that we must pass the 9.2 Text Classification with Pretrained Word Embeddings 137 normalized embeddings as an argument to this function. Next, we need to sort the similarities preserving the mapping to the words in the vocabulary. We achieve this using the argsort NumPy method, which returns the indices in sorted (ascending) order. Since we need them in descending order, the next step is to reverse this list of indices. Obviously, the most similar word to whichever word we’re querying is the word itself, but that is not an interesting result, so we will remove it from the results. We do this by using NumPy’s ability to index arrays using booleans. We first create a new array in which the position corresponding to the query word is set to False and every other element is set to True, and we use this boolean array to index the list of ids. 
Lastly, we create a list of tuples of the form (word, similarity) for the topn words, and return the results. Now we will test our implementation of word similarity using the word cactus. You can compare the results to the ones obtained by KeyedVectors’s most_similar method. 9.1.5 Word Analogies from Scratch The implementation of the word analogy function is not that much different from our most_similar_word function above. The main difference between this function and most_similar_words is that now we have two lists of words that we need to combine into a single embedding. We first add the positive words into a single vector, and we do the same for the negative words. Then we subtract the negative vector from the positive one, and normalize the result. The similarity scores are computed the same way as before, but now we need to remove several words from the results, so this time we use NumPy’s isin function, which checks for any of the words in given_word_ids. We then package the results the same way we did before, and return them. ⃗ Nowlet’stryourimplementationwiththesameking−m⃗an+wom⃗an query we discussed previously. Please compare the results to the ones obtained by Gensim. 9.2 Text Classification with Pretrained Word Embeddings In this section we will continue using the AG News classification dataset introduced in previous chapters. Most of the data preparation is the 138 Implementing Text Classification Using Word Embeddings same, up to tokenization. However, we need to remember that the embeddings were trained on a different corpus, so it would be a good idea to estimate how well they cover the words AG News dataset. To achieve this, we load the embeddings just like we did previously. Then we count the tokens in our corpus that do not appear in the embeddings vocabulary, as well as the total number to tokens. We use these numbers to print some informative statistics such as the proportion of unknown tokens in the corpus. We also print the top ten unknown tokens. You can use the Jupyter notebook to explore this task further. Our analysis indicates that only 1.25% of the tokens are not accounted for in the embeddings vocabulary. Further, the most common unknown words seem to be URL fragments. This is encouraging. However, for more robustness, we will introduce a couple of special embeddings that are often needed when dealing with word embeddings. The first one is an embedding used to represent unknown words. A common strategy is to use the average of all the embeddings in the vocabulary for this purpose. The second embedding we will add will be used for padding. Padding is required when we want to train with (mini-)batches because the lengths of all the examples in a given batch have to match in order for the batch to be efficiently processed in parallel. The padding embedding consists only of zeros, which essentially excludes these virtual tokens from the forward/backward passes. None of these embeddings are included in the pretrained GloVe embeddings, but other pretrained embeddings may already include them, so it is a good idea to check if they are included with the embeddings we are using before adding them. The new embeddings were added at the end of embedding collection, so their ids are 400,000 and 400,001. Now we need to generate a list of token ids for each training example. Recall that we decided to ignore tokens that appear less than 10 times, so we need to replace those with [UNK] too, even if they appear in the embedding vocabulary. 
Next, we create a Dataset object from the padded lists of token ids. This one is even easier since the lists of token ids are ready. So all that is required is turning them into tensors. Lastly, we need to modify the model class to indicate that we now use embedding vectors. To this end, we will add an nn. Embedding layer that stores the embedding vectors for all words in the vocabulary. We will use this object to look up embeddings by their token ids. This layer will be initialized from a tensor containing the pretrained embeddings for the entire vocabulary. Also, the pad_id is specified when creating the 9.2 Text Classification with Pretrained Word Embeddings 139 embedding layer. When a nn. Embedding layer gets initialized using the from_pretrained method with other arguments set to default values, the embeddings are not updated during training. We will keep it that way for this example, but that could be changed by setting the freeze parameter to False. The rest of the layers are the same as in our previous example from Chapter 7, i.e., one intermediate layer and one output layer, with a nonlinearity (ReLU) between them. The only major difference is that now the input size of the intermediate layer is the size of one embedding (e.g., 300) instead of the size of the vocabulary like last time. This is because, as we explain below, the intermediate layer receives an average of the numerical representations of the words in the current text. The forward function of the Model class changes significantly. This time we are encoding the text as an average of the embeddings of all the words it contains. To compute the denominator of this average, we obtain the length of each text by counting how many of its words are not the virtual padding token. Then we sum all the embeddings and divide by the number of non-padding tokens. Adding all embeddings is safe, because padding embeddings are comprised of zeros. This process leaves us with a single embedding for the whole text, which is then passed to the rest of the layers. The training and evaluation steps are the same before. The results of this model on the AG News test partition are displayed below: Comparing these results with the ones obtained by the multilayer perceptron with explicit features in Chapter 7, we observe that on this particular task utilizing embeddings as features does not yield a performance improvement. Notably, this is a small dataset and a rather simplistic task where the presence of certain words is sufficient to distinguish the category of an article (e.g., the word basketball is highly indicative of the label Sports). Nevertheless, in other tasks where distinctions are more nuanced, or in which there is less likely to be word overlap between texts of interest, word embeddings do provide necessary signal. Additionally, when there are class imbalances, word embeddings can supplement underrepresented classes by bringing the external knowledge gained during their pretraining. 140 Implementing Text Classification Using Word Embeddings 9.3 Summary In this chapter we showed how to explore the semantic space encoded by word embeddings through word similarity and analogies, as well as one way to use them for text classification. At this point we have not taken into consideration the order in which the words appear, i.e., we averaged the embeddings for all the words in the text using a bag-ofwords representation of text. In subsequent chapters we will explore how to incorporate word order into the learned representations of text.
11,071
11,197
#!/usr/bin/env python # coding: utf-8 # # Multiclass Text Classification with # # Feed-forward Neural Networks and Word Embeddings # First, we will do some initialization. # In[1]: import random import torch import numpy as np import pandas as pd from tqdm.notebook import tqdm # enable tqdm in pandas tqdm.pandas() # set to True to use the gpu (if there is one available) use_gpu = True # select device device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu') print(f'device: {device.type}') # random seed seed = 1234 # set random seed if seed is not None: print(f'random seed: {seed}') random.seed(seed) np.random.seed(seed) torch.manual_seed(seed) # We will be using the AG's News Topic Classification Dataset. # It is stored in two CSV files: `train.csv` and `test.csv`, as well as a `classes.txt` that stores the labels of the classes to predict. # # First, we will load the training dataset using [pandas](https://pandas.pydata.org/) and take a quick look at how the data. # In[2]: train_df = pd.read_csv('data/ag_news_csv/train.csv', header=None) train_df.columns = ['class index', 'title', 'description'] train_df # The dataset consists of 120,000 examples, each consisting of a class index, a title, and a description. # The class labels are distributed in a separated file. We will add the labels to the dataset so that we can interpret the data more easily. Note that the label indexes are one-based, so we need to subtract one to retrieve them from the list. # In[3]: labels = open('data/ag_news_csv/classes.txt').read().splitlines() classes = train_df['class index'].map(lambda i: labels[i-1]) train_df.insert(1, 'class', classes) train_df # Let's inspect how balanced our examples are by using a bar plot. # In[4]: pd.value_counts(train_df['class']).plot.bar() # The classes are evenly distributed. That's great! # # However, the text contains some spurious backslashes in some parts of the text. # They are meant to represent newlines in the original text. # An example can be seen below, between the words "dwindling" and "band". # In[5]: print(train_df.loc[0, 'description']) # We will replace the backslashes with spaces on the whole column using pandas replace method. # In[6]: train_df['text'] = train_df['title'].str.lower() + " " + train_df['description'].str.lower() train_df['text'] = train_df['text'].str.replace('\\', ' ', regex=False) train_df # Now we will proceed to tokenize the title and description columns using NLTK's word_tokenize(). # We will add a new column to our dataframe with the list of tokens. # In[7]: from nltk.tokenize import word_tokenize train_df['tokens'] = train_df['text'].progress_map(word_tokenize) train_df # Now we will load the GloVe word embeddings. # In[8]: from gensim.models import KeyedVectors glove = KeyedVectors.load_word2vec_format("glove.6B.300d.txt", no_header=True) glove.vectors.shape # The word embeddings have been pretrained in a different corpus, so it would be a good idea to estimate how good our tokenization matches the GloVe vocabulary. 
# In[9]: from collections import Counter def count_unknown_words(data, vocabulary): counter = Counter() for row in tqdm(data): counter.update(tok for tok in row if tok not in vocabulary) return counter # find out how many times each unknown token occurrs in the corpus c = count_unknown_words(train_df['tokens'], glove.key_to_index) # find the total number of tokens in the corpus total_tokens = train_df['tokens'].map(len).sum() # find some statistics about occurrences of unknown tokens unk_tokens = sum(c.values()) percent_unk = unk_tokens / total_tokens distinct_tokens = len(list(c)) print(f'total number of tokens: {total_tokens:,}') print(f'number of unknown tokens: {unk_tokens:,}') print(f'number of distinct unknown tokens: {distinct_tokens:,}') print(f'percentage of unkown tokens: {percent_unk:.2%}') print('top 50 unknown words:') for token, n in c.most_common(10): print(f'\t{n}\t{token}') # Glove embeddings seem to have a good coverage on this dataset -- only 1.25% of the tokens in the dataset are unknown, i.e., don't appear in the GloVe vocabulary. # # Still, we will need a way to handle these unknown tokens. # Our approach will be to add a new embedding to GloVe that will be used to represent them. # This new embedding will be initialized as the average of all the GloVe embeddings. # # We will also add another embedding, this one initialized to zeros, that will be used to pad the sequences of tokens so that they all have the same length. This will be useful when we train with mini-batches. # In[10]: # string values corresponding to the new embeddings unk_tok = '[UNK]' pad_tok = '[PAD]' # initialize the new embedding values unk_emb = glove.vectors.mean(axis=0) pad_emb = np.zeros(300) # add new embeddings to glove glove.add_vectors([unk_tok, pad_tok], [unk_emb, pad_emb]) # get token ids corresponding to the new embeddings unk_id = glove.key_to_index[unk_tok] pad_id = glove.key_to_index[pad_tok] unk_id, pad_id # In[11]: from sklearn.model_selection import train_test_split train_df, dev_df = train_test_split(train_df, train_size=0.8) train_df.reset_index(inplace=True) dev_df.reset_index(inplace=True) # We will now add a new column to our dataframe that will contain the padded sequences of token ids. # In[12]: threshold = 10 tokens = train_df['tokens'].explode().value_counts() vocabulary = set(tokens[tokens > threshold].index.tolist()) print(f'vocabulary size: {len(vocabulary):,}') # In[13]: # find the length of the longest list of tokens max_tokens = train_df['tokens'].map(len).max() # return unk_id for infrequent tokens too def get_id(tok): if tok in vocabulary: return glove.key_to_index.get(tok, unk_id) else: return unk_id # function that gets a list of tokens and returns a list of token ids, # with padding added accordingly def token_ids(tokens): tok_ids = [get_id(tok) for tok in tokens] pad_len = max_tokens - len(tok_ids) return tok_ids + [pad_id] * pad_len # add new column to the dataframe train_df['token ids'] = train_df['tokens'].progress_map(token_ids) train_df # In[14]: max_tokens = dev_df['tokens'].map(len).max() dev_df['token ids'] = dev_df['tokens'].progress_map(token_ids) dev_df # Now we will get a numpy 2-dimensional array corresponding to the token ids, # and a 1-dimensional array with the gold classes. Note that the classes are one-based (i.e., they start at one), # but we need them to be zero-based, so we need to subtract one from this array. 
# In[15]: from torch.utils.data import Dataset class MyDataset(Dataset): def __init__(self, x, y): self.x = x self.y = y def __len__(self): return len(self.y) def __getitem__(self, index): x = torch.tensor(self.x[index]) y = torch.tensor(self.y[index]) return x, y # Next, we construct our PyTorch model, which is a feed-forward neural network with two layers: # In[16]: from torch import nn import torch.nn.functional as F class Model(nn.Module): def __init__(self, vectors, pad_id, hidden_dim, output_dim, dropout): super().__init__() # embeddings must be a tensor if not torch.is_tensor(vectors): vectors = torch.tensor(vectors) # keep padding id self.padding_idx = pad_id # embedding layer self.embs = nn.Embedding.from_pretrained(vectors, padding_idx=pad_id) # feedforward layers self.layers = nn.Sequential( nn.Dropout(dropout), nn.Linear(vectors.shape[1], hidden_dim), nn.ReLU(), nn.Dropout(dropout), nn.Linear(hidden_dim, output_dim), ) def forward(self, x): # get boolean array with padding elements set to false not_padding = torch.isin(x, self.padding_idx, invert=True) # get lengths of examples (excluding padding) lengths = torch.count_nonzero(not_padding, axis=1) # get embeddings x = self.embs(x) # calculate means x = x.sum(dim=1) / lengths.unsqueeze(dim=1) # pass to rest of the model output = self.layers(x) # calculate softmax if we're not in training mode #if not self.training: # output = F.softmax(output, dim=1) return output # Next, we implement the training procedure. We compute the loss and accuracy on the development partition after each epoch. # In[17]: from torch import optim from torch.utils.data import DataLoader from sklearn.metrics import accuracy_score # hyperparameters lr = 1e-3 weight_decay = 0 batch_size = 500 shuffle = True n_epochs = 5 hidden_dim = 50 output_dim = len(labels) dropout = 0.1 vectors = glove.vectors # initialize the model, loss function, optimizer, and data-loader model = Model(vectors, pad_id, hidden_dim, output_dim, dropout).to(device) loss_func = nn.CrossEntropyLoss() optimizer = optim.Adam(model.parameters(), lr=lr, weight_decay=weight_decay) train_ds = MyDataset(train_df['token ids'], train_df['class index'] - 1) train_dl = DataLoader(train_ds, batch_size=batch_size, shuffle=shuffle) dev_ds = MyDataset(dev_df['token ids'], dev_df['class index'] - 1) dev_dl = DataLoader(dev_ds, batch_size=batch_size, shuffle=shuffle) train_loss = [] train_acc = [] dev_loss = [] dev_acc = [] # train the model for epoch in range(n_epochs): losses = [] gold = [] pred = [] model.train() for X, y_true in tqdm(train_dl, desc=f'epoch {epoch+1} (train)'): # clear gradients model.zero_grad() # send batch to right device X = X.to(device) y_true = y_true.to(device) # predict label scores y_pred = model(X) # compute loss loss = loss_func(y_pred, y_true) # accumulate for plotting losses.append(loss.detach().cpu().item()) gold.append(y_true.detach().cpu().numpy()) pred.append(np.argmax(y_pred.detach().cpu().numpy(), axis=1)) # backpropagate loss.backward() # optimize model parameters optimizer.step() train_loss.append(np.mean(losses)) train_acc.append(accuracy_score(np.concatenate(gold), np.concatenate(pred))) model.eval() with torch.no_grad(): losses = [] gold = [] pred = [] for X, y_true in tqdm(dev_dl, desc=f'epoch {epoch+1} (dev)'): X = X.to(device) y_true = y_true.to(device) y_pred = model(X) loss = loss_func(y_pred, y_true) losses.append(loss.cpu().item()) gold.append(y_true.cpu().numpy()) pred.append(np.argmax(y_pred.cpu().numpy(), axis=1)) dev_loss.append(np.mean(losses)) 
dev_acc.append(accuracy_score(np.concatenate(gold), np.concatenate(pred))) # Let's plot the loss and accuracy on the train and dev partitions: # In[18]: import matplotlib.pyplot as plt get_ipython().run_line_magic('matplotlib', 'inline') x = np.arange(n_epochs) + 1 plt.plot(x, train_loss) plt.plot(x, dev_loss) plt.legend(['train', 'dev']) plt.xlabel('epoch') plt.ylabel('loss') plt.grid(True) # In[19]: plt.plot(x, train_acc) plt.plot(x, dev_acc) plt.legend(['train', 'dev']) plt.xlabel('epoch') plt.ylabel('accuracy') plt.grid(True) # Next, we evaluate on the testing partition: # In[20]: # repeat all preprocessing done above, this time on the test set test_df = pd.read_csv('data/ag_news_csv/test.csv', header=None) test_df.columns = ['class index', 'title', 'description'] test_df['text'] = test_df['title'].str.lower() + " " + test_df['description'].str.lower() test_df['text'] = test_df['text'].str.replace('\\', ' ', regex=False) test_df['tokens'] = test_df['text'].progress_map(word_tokenize) # recompute the padding length from the test tokens so that all test sequences are padded consistently max_tokens = test_df['tokens'].map(len).max() test_df['token ids'] = test_df['tokens'].progress_map(token_ids) # In[21]: from sklearn.metrics import classification_report # set model to evaluation mode model.eval() dataset = MyDataset(test_df['token ids'], test_df['class index'] - 1) data_loader = DataLoader(dataset, batch_size=batch_size) y_pred = [] # don't store gradients with torch.no_grad(): for X, _ in tqdm(data_loader): X = X.to(device) # predict one class per example y = torch.argmax(model(X), dim=1) # convert tensor to numpy array (sending it back to the cpu if needed) y_pred.append(y.cpu().numpy()) # print results print(classification_report(dataset.y, np.concatenate(y_pred), target_names=labels))
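As a quick sanity check beyond the aggregate metrics, here is a minimal sketch (not part of the original notebook) that classifies a single made-up headline with the trained model. It reuses word_tokenize(), token_ids(), labels, model, and device defined above; the headline string itself is purely hypothetical.

# Illustrative sketch: predict the topic of one hypothetical headline
# using the trained model and the helper functions defined earlier.
headline = "local team wins championship after dramatic overtime finish"  # made-up example
tokens = word_tokenize(headline.lower())
x = torch.tensor([token_ids(tokens)]).to(device)  # shape: (1, max_tokens)
model.eval()
with torch.no_grad():
    scores = model(x)
    pred = torch.argmax(scores, dim=1).item()
print(labels[pred])  # prints one of the four AG News labels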
3,402
3,564
18
chap09-19
chap09-19
9 Implementing Text Classification Using Word Embeddings In the previous chapter we introduced word embeddings, which are realvalued vectors that encode semantic representation of words. We discussed how to learn them, and how they capture semantic information that makes them useful for downstream tasks. In this chapter we show how to use word embeddings that have been pretrained using a variant of the algorithm discussed in the previous chapter. We show how to load them, explore some of their characteristics, and show their application for a text classification task. As usual, the code for this chapter is available in our repository. It is organized into two notebooks: one corresponding to the explorations shown in the first half of this chapter (chap9_embeddings), and a second one in which we modify our previous classifier to use word embeddings (chap9_classification). 9.1 Pre-trained Word Embeddings There are several algorithms for training word embeddings, including the original word2vec algorithm (Mikolov et al., 2013a) (which we discussed in the previous chapter), GloVe (Pennington et al., 2014), and fastText (Bojanowski et al., 2017). They all provide the software for training the embeddings as well as pretrained word embeddings on their respective websites. In general, most open-domain word embeddings are trained on large corpora that cover a variety of topics such as Wikipedia1 and Gigaword.2 Commonly, these embeddings are freely distributed so 1 https://en.wikipedia.org/wiki/Wikipedia:Database_download 2 https://catalog.ldc.upenn.edu/LDC2011T07 133 134 Implementing Text Classification Using Word Embeddings house 0.60137 0.28521 -0.032038 -0.43026 0.74806 0.26223 -0.97361 0.078581 -0.57588 -1.188 -1.8507 -0.24887 0.055549 0.0086155 0.067951 0.40554 -0.073998 -0.21318 0.37167 -0.71791 1.2234 0.35546 -0.41537 -0.21931 -0.39661 -1.7831 -0.41507 0.29533 -0.41254 0.020096 2.7425 -0.9926 -0.71033 -0.46813 0.28265 -0.077639 0.3041 -0.06644 0.3951 -0.70747 -0.38894 0.23158 -0.49508 0.14612 -0.02314 0.56389 -0.86188 -1.0278 0.039922 0.20018 Figure 9.1 GloVe embedding corresponding to the word house, found in the GloVe file glove.6B.50d.txt. We have broken the vector in several lines for display purposes, but this is a single line in the text file. that practitioners can use them in downstream tasks. We will use one such set of vectors in this chapter. Pretrained embeddings are usually distributed as a text file in which each line represents a word vector. The first element in the line is the word itself, and the rest of the elements are the vector components. This is usually referred to as the word2vec format. For example, Figure 9.1 shows the line in the glove.6B.50d.txt file (from the GloVe website) corresponding to the word house. This vector is represented by the word itself, followed by 50 floating-point numbers corresponding to the 50dimensional vector. Note that some embeddings files have a header line composed of two numbers: the number of vectors (i.e., the number of lines in the file), and the vector dimensionality. However, this is not always the case. For example, the original word2vec implementation includes this header line, but the more recent GloVe does not (probably because this information can be inferred from the content of the file). For the examples in the rest of the chapter, we will use the glove.6B.300d.txt embeddings that can be downloaded from the GloVe website.3 This file provides 400,000 word embeddings of 300-dimensions trained on texts
from Wikipedia 2014 and Gigaword 5. We will begin our exploration of word embeddings using Gensim,4 a Python library that provides excellent support for loading and using word embeddings, among other more advanced features. As we can see, the embeddings have been loaded and assigned to the glove variable. Note that we had to specify that this file doesn’t contain the header that is usually present in the word2vec format. The 3 https://nlp.stanford.edu/projects/glove/ 4 https://radimrehurek.com/gensim/ 9.1 Pre-trained Word Embeddings 135 glove.vectors attribute contains a 2-dimensional NumPy array with 400,000 rows and 300 columns, each row corresponding to a word embedding. 9.1.1 Word Similarity Gensim’s KeyedVectors class provides a method called most_similar that receives a word and computes its cosine similarity to all other embeddings, and returns the topn most-similar words. By default, topn is set to 10. The example above shows the top 10 most-similar words to the word cactus, when using the 300-dimension GloVe embeddings trained on Wikipedia and Gigaword. All ten most-similar words are related to cactus in different ways: cacti and cactuses are its plural forms; saguaro, peyote, opuntia, and prickly pear are types of cacti; and mesquite, shrubs, and succulents are other plants from arid climates. You can find more examples of word similarity queries in the Jupyter notebook that accompanies this chapter. Also, as an exercise, try loading a different set of embeddings trained with a different corpus (e.g., Twitter) to see if you obtain different results! 9.1.2 Word Analogies As we discussed in the previous chapter, the semantic information en- coded by word embeddings captures much more than word similar- ity. To surface this additional information, we will use word analogies represented using additional vector operations. For example, a well- ⃗
known analogy that highlights gender information is: $\vec{king} - \vec{man} \approx \vec{queen} - \vec{woman}$,5 or, in plain language: “man is to king what woman is to queen.” From this, it immediately follows that one can subtract the meaning of man and add the meaning of woman to obtain the definition of female royalty: $\vec{king} - \vec{man} + \vec{woman} \approx \vec{queen}$.
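For instance, the left-hand side of this analogy can be queried directly with Gensim, as the passage below explains in more detail. This is a minimal sketch that assumes the glove KeyedVectors object from the accompanying notebook has been loaded:

# Minimal sketch (assumes `glove` holds the GloVe KeyedVectors loaded in the
# accompanying notebook): king - man + woman
glove.most_similar(positive=["king", "woman"], negative=["man"])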
 The same most_similar method we’ve been using can be repurposed to find word analogies such as the one mentioned above. To this end, two sets of words have to be provided to the most_similar method: a list of positive words that should be added, and a list of negative words 5 A word with an arrow on top refers to the embedding vector corresponding to that word. Please see Section 1.4 for a summary of the notations used in this book. 136 Implementing Text Classification Using Word Embeddings that should be subtracted. For example, the code below implements the left-hand side of the previous analogy: Another interesting analogy relation that shows how the embeddings have captured information about currencies is shown below. More examples are discussed in the Jupyter notebook. 9.1.3 Looking Under the Hood Let us understand now how these queries are actually implemented. First, we need to know what components we need. Clearly, we need the embedding vectors themselves. They are stored in the vectors attribute of the KeyedVectors object. As we mentioned previously, this is a 2-dimensional NumPy array, each row corresponding to a word in the vocabulary. These embeddings are not normalized, but normalized embeddings can be obtained using the get_normed_vectors method. We also need to know the mapping between words and the matrix rows. The KeyedVectors object stores this mapping in a list of terms called index_to_key, and a term-to-index dictionary called key_to_index. Below we show only the first 5 terms to save space, but you can inspect the whole vocabulary in the Jupyter notebook. 9.1.4 Word Similarity from Scratch Implementing the word similarity function ourselves is a good exercise to ensure that we understand how cosine similarity works, and to practice our NumPy skills. We will write a function called most_similar_words that will take a word, the embeddings matrix, the vocabulary in the form of the index_to_key list and key_to_index dictionary, and the number of similar words to return (defaults to 10). The implementation of most_similar_words is straightforward. First, we find the word id for the given word, using the key_to_index dictionary. Then we retrieve the row from the vectors matrix that corresponds to that word. The next step is computing the cosine similarity between the word of interest and the rest of the vocabulary. Recall that the cosine similarity is equivalent to a dot product if the vectors are normalized. We use this equivalence by performing a matrix-vector multiplication between the word embedding and the embedding matrix using Python’s at operator (denoted as @ in code). This means that we must pass the 9.2 Text Classification with Pretrained Word Embeddings 137 normalized embeddings as an argument to this function. Next, we need to sort the similarities preserving the mapping to the words in the vocabulary. We achieve this using the argsort NumPy method, which returns the indices in sorted (ascending) order. Since we need them in descending order, the next step is to reverse this list of indices. Obviously, the most similar word to whichever word we’re querying is the word itself, but that is not an interesting result, so we will remove it from the results. We do this by using NumPy’s ability to index arrays using booleans. We first create a new array in which the position corresponding to the query word is set to False and every other element is set to True, and we use this boolean array to index the list of ids. 
Lastly, we create a list of tuples of the form (word, similarity) for the topn words, and return the results. Now we will test our implementation of word similarity using the word cactus. You can compare the results to the ones obtained by KeyedVectors’s most_similar method. 9.1.5 Word Analogies from Scratch The implementation of the word analogy function is not that much different from our most_similar_word function above. The main difference between this function and most_similar_words is that now we have two lists of words that we need to combine into a single embedding. We first add the positive words into a single vector, and we do the same for the negative words. Then we subtract the negative vector from the positive one, and normalize the result. The similarity scores are computed the same way as before, but now we need to remove several words from the results, so this time we use NumPy’s isin function, which checks for any of the words in given_word_ids. We then package the results the same way we did before, and return them. ⃗ Nowlet’stryourimplementationwiththesameking−m⃗an+wom⃗an query we discussed previously. Please compare the results to the ones obtained by Gensim. 9.2 Text Classification with Pretrained Word Embeddings In this section we will continue using the AG News classification dataset introduced in previous chapters. Most of the data preparation is the 138 Implementing Text Classification Using Word Embeddings same, up to tokenization. However, we need to remember that the embeddings were trained on a different corpus, so it would be a good idea to estimate how well they cover the words AG News dataset. To achieve this, we load the embeddings just like we did previously. Then we count the tokens in our corpus that do not appear in the embeddings vocabulary, as well as the total number to tokens. We use these numbers to print some informative statistics such as the proportion of unknown tokens in the corpus. We also print the top ten unknown tokens. You can use the Jupyter notebook to explore this task further. Our analysis indicates that only 1.25% of the tokens are not accounted for in the embeddings vocabulary. Further, the most common unknown words seem to be URL fragments. This is encouraging. However, for more robustness, we will introduce a couple of special embeddings that are often needed when dealing with word embeddings. The first one is an embedding used to represent unknown words. A common strategy is to use the average of all the embeddings in the vocabulary for this purpose. The second embedding we will add will be used for padding. Padding is required when we want to train with (mini-)batches because the lengths of all the examples in a given batch have to match in order for the batch to be efficiently processed in parallel. The padding embedding consists only of zeros, which essentially excludes these virtual tokens from the forward/backward passes. None of these embeddings are included in the pretrained GloVe embeddings, but other pretrained embeddings may already include them, so it is a good idea to check if they are included with the embeddings we are using before adding them. The new embeddings were added at the end of embedding collection, so their ids are 400,000 and 400,001. Now we need to generate a list of token ids for each training example. Recall that we decided to ignore tokens that appear less than 10 times, so we need to replace those with [UNK] too, even if they appear in the embedding vocabulary. 
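As a concrete illustration of this mapping (a hypothetical example, not from the original notebook; the actual ids depend on the GloVe vocabulary and the training split), consider a short token list containing a made-up out-of-vocabulary word:

# Assumes token_ids(), unk_id, pad_id, and max_tokens are defined as in the
# chap9_classification notebook.
example = ['the', 'stock', 'qwzxvy']  # 'qwzxvy' is a made-up token, mapped to unk_id
print(token_ids(example))             # known ids, then unk_id, then pad_id up to max_tokens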
Next, we create a Dataset object from the padded lists of token ids. This one is even easier since the lists of token ids are ready. So all that is required is turning them into tensors. Lastly, we need to modify the model class to indicate that we now use embedding vectors. To this end, we will add an nn. Embedding layer that stores the embedding vectors for all words in the vocabulary. We will use this object to look up embeddings by their token ids. This layer will be initialized from a tensor containing the pretrained embeddings for the entire vocabulary. Also, the pad_id is specified when creating the 9.2 Text Classification with Pretrained Word Embeddings 139 embedding layer. When a nn. Embedding layer gets initialized using the from_pretrained method with other arguments set to default values, the embeddings are not updated during training. We will keep it that way for this example, but that could be changed by setting the freeze parameter to False. The rest of the layers are the same as in our previous example from Chapter 7, i.e., one intermediate layer and one output layer, with a nonlinearity (ReLU) between them. The only major difference is that now the input size of the intermediate layer is the size of one embedding (e.g., 300) instead of the size of the vocabulary like last time. This is because, as we explain below, the intermediate layer receives an average of the numerical representations of the words in the current text. The forward function of the Model class changes significantly. This time we are encoding the text as an average of the embeddings of all the words it contains. To compute the denominator of this average, we obtain the length of each text by counting how many of its words are not the virtual padding token. Then we sum all the embeddings and divide by the number of non-padding tokens. Adding all embeddings is safe, because padding embeddings are comprised of zeros. This process leaves us with a single embedding for the whole text, which is then passed to the rest of the layers. The training and evaluation steps are the same before. The results of this model on the AG News test partition are displayed below: Comparing these results with the ones obtained by the multilayer perceptron with explicit features in Chapter 7, we observe that on this particular task utilizing embeddings as features does not yield a performance improvement. Notably, this is a small dataset and a rather simplistic task where the presence of certain words is sufficient to distinguish the category of an article (e.g., the word basketball is highly indicative of the label Sports). Nevertheless, in other tasks where distinctions are more nuanced, or in which there is less likely to be word overlap between texts of interest, word embeddings do provide necessary signal. Additionally, when there are class imbalances, word embeddings can supplement underrepresented classes by bringing the external knowledge gained during their pretraining. 140 Implementing Text Classification Using Word Embeddings 9.3 Summary In this chapter we showed how to explore the semantic space encoded by word embeddings through word similarity and analogies, as well as one way to use them for text classification. At this point we have not taken into consideration the order in which the words appear, i.e., we averaged the embeddings for all the words in the text using a bag-ofwords representation of text. In subsequent chapters we will explore how to incorporate word order into the learned representations of text.
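Before moving on, here is a small self-contained sketch (illustrative only; the vocabulary size, embedding dimension, and token ids below are made up) of the padding-aware embedding averaging described in Section 9.2, since this is the step of the forward pass that differs most from the classifier in Chapter 7:

import torch
import torch.nn as nn

pad_id = 0                                      # hypothetical padding token id
emb = nn.Embedding(10, 4, padding_idx=pad_id)   # tiny vocabulary of 10 tokens, 4-dimensional vectors
x = torch.tensor([[3, 5, 7, pad_id, pad_id],    # batch of two padded token-id sequences
                  [2, 4, pad_id, pad_id, pad_id]])
not_padding = x != pad_id                       # boolean mask of real (non-padding) tokens
lengths = not_padding.sum(dim=1, keepdim=True)  # number of real tokens per example
vectors = emb(x)                                # shape: (batch, sequence length, embedding dimension)
mean = vectors.sum(dim=1) / lengths             # safe: padding rows are all zeros
print(mean.shape)                               # torch.Size([2, 4])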
6,751
6,861
#!/usr/bin/env python # coding: utf-8 # # Using Pre-trained Word Embeddings # # In this notebook we will show some operations on pre-trained word embeddings to gain an intuition about them. # # We will be using the pre-trained GloVe embeddings that can be found in the [official website](https://nlp.stanford.edu/projects/glove/). In particular, we will use the file `glove.6B.300d.txt` contained in this [zip file](https://nlp.stanford.edu/data/glove.6B.zip). # # We will first load the GloVe embeddings using [Gensim](https://radimrehurek.com/gensim/). Specifically, we will use [`KeyedVectors`](https://radimrehurek.com/gensim/models/keyedvectors.html)'s [`load_word2vec_format()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.load_word2vec_format) classmethod, which supports the original word2vec file format. # However, there is a difference in the file formats used by GloVe and word2vec, which is a header used by word2vec to indicate the number of embeddings and dimensions stored in the file. The file that stores the GloVe embeddings doesn't have this header, so we will have to address that when loading the embeddings. # # Loading the embeddings may take a little bit, so hang in there! # In[2]: from gensim.models import KeyedVectors fname = "glove.6B.300d.txt" glove = KeyedVectors.load_word2vec_format(fname, no_header=True) glove.vectors.shape # ## Word similarity # # One attribute of word embeddings that makes them useful is the ability to compare them using cosine similarity to find how similar they are. [`KeyedVectors`](https://radimrehurek.com/gensim/models/keyedvectors.html) objects provide a method called [`most_similar()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.most_similar) that we can use to find the closest words to a particular word of interest. By default, [`most_similar()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.most_similar) returns the 10 most similar words, but this can be changed using the `topn` parameter. # # Below we test this function using a few different words. # In[3]: # common noun glove.most_similar("cactus") # In[4]: # common noun glove.most_similar("cake") # In[5]: # adjective glove.most_similar("angry") # In[6]: # adverb glove.most_similar("quickly") # In[7]: # preposition glove.most_similar("between") # In[8]: # determiner glove.most_similar("the") # ## Word analogies # # Another characteristic of word embeddings is their ability to solve analogy problems. # The same [`most_similar()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.most_similar) method can be used for this task, by passing two lists of words: # a `positive` list with the words that should be added and a `negative` list with the words that should be subtracted. 
Using these arguments, the famous example $\vec{king} - \vec{man} + \vec{woman} \approx \vec{queen}$ can be executed as follows: # In[9]: # king - man + woman glove.most_similar(positive=["king", "woman"], negative=["man"]) # Here are a few other interesting analogies: # In[10]: # car - drive + fly glove.most_similar(positive=["car", "fly"], negative=["drive"]) # In[11]: # berlin - germany + australia glove.most_similar(positive=["berlin", "australia"], negative=["germany"]) # In[12]: # england - london + baghdad glove.most_similar(positive=["england", "baghdad"], negative=["london"]) # In[13]: # japan - yen + peso glove.most_similar(positive=["japan", "peso"], negative=["yen"]) # In[14]: # best - good + tall glove.most_similar(positive=["best", "tall"], negative=["good"]) # ## Looking under the hood # # Now that we are more familiar with the [`most_similar()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.most_similar) method, it is time to implement its functionality ourselves. # But first, we need to take a look at the different parts of the [`KeyedVectors`](https://radimrehurek.com/gensim/models/keyedvectors.html) object that we will need. # Obviously, we will need the vectors themselves. They are stored in the `vectors` attribute. # In[15]: glove.vectors.shape # As we can see above, `vectors` is a 2-dimensional matrix with 400,000 rows and 300 columns. # Each row corresponds to a 300-dimensional word embedding. These embeddings are not normalized, but normalized embeddings can be obtained using the [`get_normed_vectors()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.get_normed_vectors) method. # In[16]: normed_vectors = glove.get_normed_vectors() normed_vectors.shape # Now we need to map the words in the vocabulary to rows in the `vectors` matrix, and vice versa. # The [`KeyedVectors`](https://radimrehurek.com/gensim/models/keyedvectors.html) object has the attributes `index_to_key` and `key_to_index` which are a list of words and a dictionary of words to indices, respectively. # In[17]: #glove.index_to_key # In[18]: #glove.key_to_index # ## Word similarity from scratch # # Now we have everything we need to implement a `most_similar_words()` function that takes a word, the vector matrix, the `index_to_key` list, and the `key_to_index` dictionary. This function will return the 10 most similar words to the provided word, along with their similarity scores. 
# In[19]: import numpy as np def most_similar_words(word, vectors, index_to_key, key_to_index, topn=10): # retrieve word_id corresponding to given word word_id = key_to_index[word] # retrieve embedding for given word emb = vectors[word_id] # calculate similarities to all words in out vocabulary similarities = vectors @ emb # get word_ids in ascending order with respect to similarity score ids_ascending = similarities.argsort() # reverse word_ids ids_descending = ids_ascending[::-1] # get boolean array with element corresponding to word_id set to false mask = ids_descending != word_id # obtain new array of indices that doesn't contain word_id # (otherwise the most similar word to the argument would be the argument itself) ids_descending = ids_descending[mask] # get topn word_ids top_ids = ids_descending[:topn] # retrieve topn words with their corresponding similarity score top_words = [(index_to_key[i], similarities[i]) for i in top_ids] # return results return top_words # Now let's try the same example that we used above: the most similar words to "cactus". # In[20]: vectors = glove.get_normed_vectors() index_to_key = glove.index_to_key key_to_index = glove.key_to_index most_similar_words("cactus", vectors, index_to_key, key_to_index) # ## Analogies from scratch # # The `most_similar_words()` function behaves as expected. Now let's implement a function to perform the analogy task. We will give it the very creative name `analogy`. This function will get two lists of words (one for positive words and one for negative words), just like the [`most_similar()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.most_similar) method we discussed above. # In[21]: from numpy.linalg import norm def analogy(positive, negative, vectors, index_to_key, key_to_index, topn=10): # find ids for positive and negative words pos_ids = [key_to_index[w] for w in positive] neg_ids = [key_to_index[w] for w in negative] given_word_ids = pos_ids + neg_ids # get embeddings for positive and negative words pos_emb = vectors[pos_ids].sum(axis=0) neg_emb = vectors[neg_ids].sum(axis=0) # get embedding for analogy emb = pos_emb - neg_emb # normalize embedding emb = emb / norm(emb) # calculate similarities to all words in out vocabulary similarities = vectors @ emb # get word_ids in ascending order with respect to similarity score ids_ascending = similarities.argsort() # reverse word_ids ids_descending = ids_ascending[::-1] # get boolean array with element corresponding to any of given_word_ids set to false given_words_mask = np.isin(ids_descending, given_word_ids, invert=True) # obtain new array of indices that doesn't contain any of the given_word_ids ids_descending = ids_descending[given_words_mask] # get topn word_ids top_ids = ids_descending[:topn] # retrieve topn words with their corresponding similarity score top_words = [(index_to_key[i], similarities[i]) for i in top_ids] # return results return top_words # Let's try this function with the $\vec{king} - \vec{man} + \vec{woman} \approx \vec{queen}$ example we discussed above. # In[22]: positive = ["king", "woman"] negative = ["man"] vectors = glove.get_normed_vectors() index_to_key = glove.index_to_key key_to_index = glove.key_to_index analogy(positive, negative, vectors, index_to_key, key_to_index) # In[ ]:
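As a small follow-up (not part of the original notebook), one way to double-check the from-scratch implementation is to compare a single cosine similarity against Gensim's built-in similarity() method. This sketch assumes the glove, vectors, and key_to_index variables defined in the cells above:

# Sanity check: the dot product of two normalized vectors should match
# KeyedVectors.similarity() up to floating-point error.
w1, w2 = "cactus", "cacti"
ours = vectors[key_to_index[w1]] @ vectors[key_to_index[w2]]
print(ours, glove.similarity(w1, w2))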
4,732
4,776
19
chap09-21
chap09-21
9 Implementing Text Classification Using Word Embeddings In the previous chapter we introduced word embeddings, which are realvalued vectors that encode semantic representation of words. We discussed how to learn them, and how they capture semantic information that makes them useful for downstream tasks. In this chapter we show how to use word embeddings that have been pretrained using a variant of the algorithm discussed in the previous chapter. We show how to load them, explore some of their characteristics, and show their application for a text classification task. As usual, the code for this chapter is available in our repository. It is organized into two notebooks: one corresponding to the explorations shown in the first half of this chapter (chap9_embeddings), and a second one in which we modify our previous classifier to use word embeddings (chap9_classification). 9.1 Pre-trained Word Embeddings There are several algorithms for training word embeddings, including the original word2vec algorithm (Mikolov et al., 2013a) (which we discussed in the previous chapter), GloVe (Pennington et al., 2014), and fastText (Bojanowski et al., 2017). They all provide the software for training the embeddings as well as pretrained word embeddings on their respective websites. In general, most open-domain word embeddings are trained on large corpora that cover a variety of topics such as Wikipedia1 and Gigaword.2 Commonly, these embeddings are freely distributed so 1 https://en.wikipedia.org/wiki/Wikipedia:Database_download 2 https://catalog.ldc.upenn.edu/LDC2011T07 133 134 Implementing Text Classification Using Word Embeddings house 0.60137 0.28521 -0.032038 -0.43026 0.74806 0.26223 -0.97361 0.078581 -0.57588 -1.188 -1.8507 -0.24887 0.055549 0.0086155 0.067951 0.40554 -0.073998 -0.21318 0.37167 -0.71791 1.2234 0.35546 -0.41537 -0.21931 -0.39661 -1.7831 -0.41507 0.29533 -0.41254 0.020096 2.7425 -0.9926 -0.71033 -0.46813 0.28265 -0.077639 0.3041 -0.06644 0.3951 -0.70747 -0.38894 0.23158 -0.49508 0.14612 -0.02314 0.56389 -0.86188 -1.0278 0.039922 0.20018 Figure 9.1 GloVe embedding corresponding to the word house, found in the GloVe file glove.6B.50d.txt. We have broken the vector in several lines for display purposes, but this is a single line in the text file. that practitioners can use them in downstream tasks. We will use one such set of vectors in this chapter. Pretrained embeddings are usually distributed as a text file in which each line represents a word vector. The first element in the line is the word itself, and the rest of the elements are the vector components. This is usually referred to as the word2vec format. For example, Figure 9.1 shows the line in the glove.6B.50d.txt file (from the GloVe website) corresponding to the word house. This vector is represented by the word itself, followed by 50 floating-point numbers corresponding to the 50dimensional vector. Note that some embeddings files have a header line composed of two numbers: the number of vectors (i.e., the number of lines in the file), and the vector dimensionality. However, this is not always the case. For example, the original word2vec implementation includes this header line, but the more recent GloVe does not (probably because this information can be inferred from the content of the file). For the examples in the rest of the chapter, we will use the glove.6B.300d.txt embeddings that can be downloaded from the GloVe website.3 This file provides 400,000 word embeddings of 300-dimensions trained on texts
from Wikipedia 2014 and Gigaword 5. We will begin our exploration of word embeddings using Gensim,4 a Python library that provides excellent support for loading and using word embeddings, among other more advanced features. As we can see, the embeddings have been loaded and assigned to the glove variable. Note that we had to specify that this file doesn't contain the header that is usually present in the word2vec format. 3 https://nlp.stanford.edu/projects/glove/ 4 https://radimrehurek.com/gensim/ The glove.vectors attribute contains a 2-dimensional NumPy array with 400,000 rows and 300 columns, each row corresponding to a word embedding. 9.1.1 Word Similarity Gensim's KeyedVectors class provides a method called most_similar that receives a word, computes its cosine similarity to all other embeddings, and returns the topn most-similar words. By default, topn is set to 10. The example above shows the top 10 most-similar words to the word cactus, when using the 300-dimensional GloVe embeddings trained on Wikipedia and Gigaword. All ten most-similar words are related to cactus in different ways: cacti and cactuses are its plural forms; saguaro, peyote, opuntia, and prickly pear are types of cacti; and mesquite, shrubs, and succulents are other plants from arid climates. You can find more examples of word similarity queries in the Jupyter notebook that accompanies this chapter. Also, as an exercise, try loading a different set of embeddings trained on a different corpus (e.g., Twitter) to see if you obtain different results! 9.1.2 Word Analogies As we discussed in the previous chapter, the semantic information encoded by word embeddings captures much more than word similarity. To surface this additional information, we will use word analogies represented using additional vector operations. For example, a well-known analogy that highlights gender information is: $\vec{king} - \vec{man} \approx \vec{queen} - \vec{woman}$,5 or, in plain language: "man is to king what woman is to queen." From this, it immediately follows that one can subtract the meaning of man and add the meaning of woman to obtain the definition of female royalty: $\vec{king} - \vec{man} + \vec{woman} \approx \vec{queen}$.
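The queries described above are implemented in the chap9_embeddings notebook; a minimal sketch of them, assuming the glove.6B.300d.txt file has been downloaded to the working directory, looks like this:

from gensim.models import KeyedVectors

# load the GloVe vectors; no_header=True because GloVe files lack the word2vec header line
glove = KeyedVectors.load_word2vec_format("glove.6B.300d.txt", no_header=True)
print(glove.vectors.shape)  # (400000, 300)

# word similarity: the ten nearest neighbors of "cactus" by cosine similarity
print(glove.most_similar("cactus"))

# word analogy: king - man + woman, expected to be close to queen
print(glove.most_similar(positive=["king", "woman"], negative=["man"]))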
The same most_similar method we've been using can be repurposed to find word analogies such as the one mentioned above. To this end, two sets of words have to be provided to the most_similar method: a list of positive words that should be added, and a list of negative words that should be subtracted.5 For example, the code below implements the left-hand side of the previous analogy: Another interesting analogy relation that shows how the embeddings have captured information about currencies is shown below. More examples are discussed in the Jupyter notebook. 5 A word with an arrow on top refers to the embedding vector corresponding to that word. Please see Section 1.4 for a summary of the notations used in this book. 9.1.3 Looking Under the Hood Let us understand now how these queries are actually implemented. First, we need to know what components we need. Clearly, we need the embedding vectors themselves. They are stored in the vectors attribute of the KeyedVectors object. As we mentioned previously, this is a 2-dimensional NumPy array, each row corresponding to a word in the vocabulary. These embeddings are not normalized, but normalized embeddings can be obtained using the get_normed_vectors method. We also need to know the mapping between words and the matrix rows. The KeyedVectors object stores this mapping in a list of terms called index_to_key, and a term-to-index dictionary called key_to_index. Below we show only the first 5 terms to save space, but you can inspect the whole vocabulary in the Jupyter notebook. 9.1.4 Word Similarity from Scratch Implementing the word similarity function ourselves is a good exercise to ensure that we understand how cosine similarity works, and to practice our NumPy skills. We will write a function called most_similar_words that will take a word, the embeddings matrix, the vocabulary in the form of the index_to_key list and key_to_index dictionary, and the number of similar words to return (defaults to 10). The implementation of most_similar_words is straightforward. First, we find the word id for the given word, using the key_to_index dictionary. Then we retrieve the row from the vectors matrix that corresponds to that word. The next step is computing the cosine similarity between the word of interest and the rest of the vocabulary. Recall that the cosine similarity is equivalent to a dot product if the vectors are normalized. We use this equivalence by performing a matrix-vector multiplication between the word embedding and the embedding matrix using Python's at operator (denoted as @ in code). This means that we must pass the normalized embeddings as an argument to this function. Next, we need to sort the similarities preserving the mapping to the words in the vocabulary. We achieve this using the argsort NumPy method, which returns the indices in sorted (ascending) order. Since we need them in descending order, the next step is to reverse this list of indices. Obviously, the most similar word to whichever word we're querying is the word itself, but that is not an interesting result, so we will remove it from the results. We do this by using NumPy's ability to index arrays using booleans. We first create a new array in which the position corresponding to the query word is set to False and every other element is set to True, and we use this boolean array to index the list of ids.
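A condensed sketch of the procedure just described (it mirrors the most_similar_words function in the accompanying notebook and assumes the normalized vectors obtained from get_normed_vectors are passed in) is:

import numpy as np

def most_similar_words(word, vectors, index_to_key, key_to_index, topn=10):
    word_id = key_to_index[word]          # row index of the query word
    emb = vectors[word_id]                # its (normalized) embedding
    similarities = vectors @ emb          # cosine similarities via a dot product
    ids_descending = similarities.argsort()[::-1]                # most similar first
    ids_descending = ids_descending[ids_descending != word_id]   # drop the query word itself
    return [(index_to_key[i], similarities[i]) for i in ids_descending[:topn]]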
Lastly, we create a list of tuples of the form (word, similarity) for the topn words, and return the results. Now we will test our implementation of word similarity using the word cactus. You can compare the results to the ones obtained by KeyedVectors's most_similar method. 9.1.5 Word Analogies from Scratch The implementation of the word analogy function is not that much different from our most_similar_words function above. The main difference between this function and most_similar_words is that now we have two lists of words that we need to combine into a single embedding. We first add the positive words into a single vector, and we do the same for the negative words. Then we subtract the negative vector from the positive one, and normalize the result. The similarity scores are computed the same way as before, but now we need to remove several words from the results, so this time we use NumPy's isin function, which checks for any of the words in given_word_ids. We then package the results the same way we did before, and return them. Now let's try our implementation with the same $\vec{king} - \vec{man} + \vec{woman}$ query we discussed previously. Please compare the results to the ones obtained by Gensim. 9.2 Text Classification with Pretrained Word Embeddings In this section we will continue using the AG News classification dataset introduced in previous chapters. Most of the data preparation is the same, up to tokenization. However, we need to remember that the embeddings were trained on a different corpus, so it would be a good idea to estimate how well they cover the words in the AG News dataset. To achieve this, we load the embeddings just like we did previously. Then we count the tokens in our corpus that do not appear in the embeddings vocabulary, as well as the total number of tokens. We use these numbers to print some informative statistics such as the proportion of unknown tokens in the corpus. We also print the top ten unknown tokens. You can use the Jupyter notebook to explore this task further. Our analysis indicates that only 1.25% of the tokens are not accounted for in the embeddings vocabulary. Further, the most common unknown words seem to be URL fragments. This is encouraging. However, for more robustness, we will introduce a couple of special embeddings that are often needed when dealing with word embeddings. The first one is an embedding used to represent unknown words. A common strategy is to use the average of all the embeddings in the vocabulary for this purpose. The second embedding we will add will be used for padding. Padding is required when we want to train with (mini-)batches because the lengths of all the examples in a given batch have to match in order for the batch to be efficiently processed in parallel. The padding embedding consists only of zeros, which essentially excludes these virtual tokens from the forward/backward passes. Neither of these embeddings is included in the pretrained GloVe embeddings, but other pretrained embeddings may already include them, so it is a good idea to check whether they are included with the embeddings we are using before adding them. The new embeddings are added at the end of the embedding collection, so their ids are 400,000 and 400,001. Now we need to generate a list of token ids for each training example. Recall that we decided to ignore tokens that appear fewer than 10 times, so we need to replace those with [UNK] too, even if they appear in the embedding vocabulary.
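A minimal sketch of the two special embeddings and of the token-to-id mapping described above (following the chap9_classification notebook; the glove and vocabulary variables are assumed to be already defined) is:

import numpy as np

unk_tok, pad_tok = '[UNK]', '[PAD]'
unk_emb = glove.vectors.mean(axis=0)  # unknown words: average of all embeddings
pad_emb = np.zeros(300)               # padding: all zeros, so it adds nothing to a sum
glove.add_vectors([unk_tok, pad_tok], [unk_emb, pad_emb])
unk_id = glove.key_to_index[unk_tok]  # 400,000
pad_id = glove.key_to_index[pad_tok]  # 400,001

def get_id(tok):
    # tokens below the frequency threshold map to [UNK],
    # even if they appear in the GloVe vocabulary
    if tok in vocabulary:
        return glove.key_to_index.get(tok, unk_id)
    return unk_id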
Next, we create a Dataset object from the padded lists of token ids. This one is even easier since the lists of token ids are ready, so all that is required is turning them into tensors. Lastly, we need to modify the model class to indicate that we now use embedding vectors. To this end, we will add an nn.Embedding layer that stores the embedding vectors for all words in the vocabulary. We will use this object to look up embeddings by their token ids. This layer will be initialized from a tensor containing the pretrained embeddings for the entire vocabulary. Also, the pad_id is specified when creating the embedding layer. When an nn.Embedding layer gets initialized using the from_pretrained method with other arguments set to default values, the embeddings are not updated during training. We will keep it that way for this example, but that could be changed by setting the freeze parameter to False. The rest of the layers are the same as in our previous example from Chapter 7, i.e., one intermediate layer and one output layer, with a nonlinearity (ReLU) between them. The only major difference is that now the input size of the intermediate layer is the size of one embedding (e.g., 300) instead of the size of the vocabulary like last time. This is because, as we explain below, the intermediate layer receives an average of the numerical representations of the words in the current text. The forward function of the Model class changes significantly. This time we are encoding the text as an average of the embeddings of all the words it contains. To compute the denominator of this average, we obtain the length of each text by counting how many of its words are not the virtual padding token. Then we sum all the embeddings and divide by the number of non-padding tokens. Adding all embeddings is safe because padding embeddings consist of zeros. This process leaves us with a single embedding for the whole text, which is then passed to the rest of the layers. The training and evaluation steps are the same as before. The results of this model on the AG News test partition are displayed below: Comparing these results with the ones obtained by the multilayer perceptron with explicit features in Chapter 7, we observe that on this particular task utilizing embeddings as features does not yield a performance improvement. Notably, this is a small dataset and a rather simplistic task, where the presence of certain words is sufficient to distinguish the category of an article (e.g., the word basketball is highly indicative of the label Sports). Nevertheless, in other tasks where distinctions are more nuanced, or in which there is less likely to be word overlap between texts of interest, word embeddings do provide necessary signal. Additionally, when there are class imbalances, word embeddings can supplement underrepresented classes by bringing in the external knowledge gained during their pretraining. 9.3 Summary In this chapter we showed how to explore the semantic space encoded by word embeddings through word similarity and analogies, as well as one way to use them for text classification. At this point we have not taken into consideration the order in which the words appear, i.e., we averaged the embeddings for all the words in the text using a bag-of-words representation of text. In subsequent chapters we will explore how to incorporate word order into the learned representations of text.
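Returning to the model described in Section 9.2, a condensed sketch of the embedding lookup and averaging forward pass (a simplified version of the Model class in the chap9_classification notebook; dropout is omitted, vectors is assumed to be a FloatTensor of pretrained embeddings, and hidden_dim and output_dim are illustrative) is:

import torch
from torch import nn

class Model(nn.Module):
    def __init__(self, vectors, pad_id, hidden_dim, output_dim):
        super().__init__()
        self.pad_id = pad_id
        # frozen pretrained embeddings; from_pretrained defaults to freeze=True
        self.embs = nn.Embedding.from_pretrained(vectors, padding_idx=pad_id)
        self.layers = nn.Sequential(
            nn.Linear(vectors.shape[1], hidden_dim),  # input size = embedding size
            nn.ReLU(),
            nn.Linear(hidden_dim, output_dim),
        )

    def forward(self, x):
        # number of real (non-padding) tokens in each example
        lengths = (x != self.pad_id).sum(dim=1, keepdim=True)
        # sum the embeddings and divide by the number of non-padding tokens;
        # padding embeddings are all zeros, so they do not affect the sum
        x = self.embs(x).sum(dim=1) / lengths
        return self.layers(x)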
10,015
10,100
#!/usr/bin/env python # coding: utf-8 # # Using Pre-trained Word Embeddings # # In this notebook we will show some operations on pre-trained word embeddings to gain an intuition about them. # # We will be using the pre-trained GloVe embeddings that can be found in the [official website](https://nlp.stanford.edu/projects/glove/). In particular, we will use the file `glove.6B.300d.txt` contained in this [zip file](https://nlp.stanford.edu/data/glove.6B.zip). # # We will first load the GloVe embeddings using [Gensim](https://radimrehurek.com/gensim/). Specifically, we will use [`KeyedVectors`](https://radimrehurek.com/gensim/models/keyedvectors.html)'s [`load_word2vec_format()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.load_word2vec_format) classmethod, which supports the original word2vec file format. # However, there is a difference in the file formats used by GloVe and word2vec, which is a header used by word2vec to indicate the number of embeddings and dimensions stored in the file. The file that stores the GloVe embeddings doesn't have this header, so we will have to address that when loading the embeddings. # # Loading the embeddings may take a little bit, so hang in there! # In[2]: from gensim.models import KeyedVectors fname = "glove.6B.300d.txt" glove = KeyedVectors.load_word2vec_format(fname, no_header=True) glove.vectors.shape # ## Word similarity # # One attribute of word embeddings that makes them useful is the ability to compare them using cosine similarity to find how similar they are. [`KeyedVectors`](https://radimrehurek.com/gensim/models/keyedvectors.html) objects provide a method called [`most_similar()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.most_similar) that we can use to find the closest words to a particular word of interest. By default, [`most_similar()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.most_similar) returns the 10 most similar words, but this can be changed using the `topn` parameter. # # Below we test this function using a few different words. # In[3]: # common noun glove.most_similar("cactus") # In[4]: # common noun glove.most_similar("cake") # In[5]: # adjective glove.most_similar("angry") # In[6]: # adverb glove.most_similar("quickly") # In[7]: # preposition glove.most_similar("between") # In[8]: # determiner glove.most_similar("the") # ## Word analogies # # Another characteristic of word embeddings is their ability to solve analogy problems. # The same [`most_similar()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.most_similar) method can be used for this task, by passing two lists of words: # a `positive` list with the words that should be added and a `negative` list with the words that should be subtracted. 
Using these arguments, the famous example $\vec{king} - \vec{man} + \vec{woman} \approx \vec{queen}$ can be executed as follows: # In[9]: # king - man + woman glove.most_similar(positive=["king", "woman"], negative=["man"]) # Here are a few other interesting analogies: # In[10]: # car - drive + fly glove.most_similar(positive=["car", "fly"], negative=["drive"]) # In[11]: # berlin - germany + australia glove.most_similar(positive=["berlin", "australia"], negative=["germany"]) # In[12]: # england - london + baghdad glove.most_similar(positive=["england", "baghdad"], negative=["london"]) # In[13]: # japan - yen + peso glove.most_similar(positive=["japan", "peso"], negative=["yen"]) # In[14]: # best - good + tall glove.most_similar(positive=["best", "tall"], negative=["good"]) # ## Looking under the hood # # Now that we are more familiar with the [`most_similar()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.most_similar) method, it is time to implement its functionality ourselves. # But first, we need to take a look at the different parts of the [`KeyedVectors`](https://radimrehurek.com/gensim/models/keyedvectors.html) object that we will need. # Obviously, we will need the vectors themselves. They are stored in the `vectors` attribute. # In[15]: glove.vectors.shape # As we can see above, `vectors` is a 2-dimensional matrix with 400,000 rows and 300 columns. # Each row corresponds to a 300-dimensional word embedding. These embeddings are not normalized, but normalized embeddings can be obtained using the [`get_normed_vectors()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.get_normed_vectors) method. # In[16]: normed_vectors = glove.get_normed_vectors() normed_vectors.shape # Now we need to map the words in the vocabulary to rows in the `vectors` matrix, and vice versa. # The [`KeyedVectors`](https://radimrehurek.com/gensim/models/keyedvectors.html) object has the attributes `index_to_key` and `key_to_index` which are a list of words and a dictionary of words to indices, respectively. # In[17]: #glove.index_to_key # In[18]: #glove.key_to_index # ## Word similarity from scratch # # Now we have everything we need to implement a `most_similar_words()` function that takes a word, the vector matrix, the `index_to_key` list, and the `key_to_index` dictionary. This function will return the 10 most similar words to the provided word, along with their similarity scores. 
# In[19]: import numpy as np def most_similar_words(word, vectors, index_to_key, key_to_index, topn=10): # retrieve word_id corresponding to given word word_id = key_to_index[word] # retrieve embedding for given word emb = vectors[word_id] # calculate similarities to all words in out vocabulary similarities = vectors @ emb # get word_ids in ascending order with respect to similarity score ids_ascending = similarities.argsort() # reverse word_ids ids_descending = ids_ascending[::-1] # get boolean array with element corresponding to word_id set to false mask = ids_descending != word_id # obtain new array of indices that doesn't contain word_id # (otherwise the most similar word to the argument would be the argument itself) ids_descending = ids_descending[mask] # get topn word_ids top_ids = ids_descending[:topn] # retrieve topn words with their corresponding similarity score top_words = [(index_to_key[i], similarities[i]) for i in top_ids] # return results return top_words # Now let's try the same example that we used above: the most similar words to "cactus". # In[20]: vectors = glove.get_normed_vectors() index_to_key = glove.index_to_key key_to_index = glove.key_to_index most_similar_words("cactus", vectors, index_to_key, key_to_index) # ## Analogies from scratch # # The `most_similar_words()` function behaves as expected. Now let's implement a function to perform the analogy task. We will give it the very creative name `analogy`. This function will get two lists of words (one for positive words and one for negative words), just like the [`most_similar()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.most_similar) method we discussed above. # In[21]: from numpy.linalg import norm def analogy(positive, negative, vectors, index_to_key, key_to_index, topn=10): # find ids for positive and negative words pos_ids = [key_to_index[w] for w in positive] neg_ids = [key_to_index[w] for w in negative] given_word_ids = pos_ids + neg_ids # get embeddings for positive and negative words pos_emb = vectors[pos_ids].sum(axis=0) neg_emb = vectors[neg_ids].sum(axis=0) # get embedding for analogy emb = pos_emb - neg_emb # normalize embedding emb = emb / norm(emb) # calculate similarities to all words in out vocabulary similarities = vectors @ emb # get word_ids in ascending order with respect to similarity score ids_ascending = similarities.argsort() # reverse word_ids ids_descending = ids_ascending[::-1] # get boolean array with element corresponding to any of given_word_ids set to false given_words_mask = np.isin(ids_descending, given_word_ids, invert=True) # obtain new array of indices that doesn't contain any of the given_word_ids ids_descending = ids_descending[given_words_mask] # get topn word_ids top_ids = ids_descending[:topn] # retrieve topn words with their corresponding similarity score top_words = [(index_to_key[i], similarities[i]) for i in top_ids] # return results return top_words # Let's try this function with the $\vec{king} - \vec{man} + \vec{woman} \approx \vec{queen}$ example we discussed above. # In[22]: positive = ["king", "woman"] negative = ["man"] vectors = glove.get_normed_vectors() index_to_key = glove.index_to_key key_to_index = glove.key_to_index analogy(positive, negative, vectors, index_to_key, key_to_index) # In[ ]:
7,806
7,886
21
chap09-22
chap09-22
11,907
11,965
#!/usr/bin/env python # coding: utf-8 # # Multiclass Text Classification with # # Feed-forward Neural Networks and Word Embeddings # First, we will do some initialization. # In[1]: import random import torch import numpy as np import pandas as pd from tqdm.notebook import tqdm # enable tqdm in pandas tqdm.pandas() # set to True to use the gpu (if there is one available) use_gpu = True # select device device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu') print(f'device: {device.type}') # random seed seed = 1234 # set random seed if seed is not None: print(f'random seed: {seed}') random.seed(seed) np.random.seed(seed) torch.manual_seed(seed) # We will be using the AG's News Topic Classification Dataset. # It is stored in two CSV files: `train.csv` and `test.csv`, as well as a `classes.txt` that stores the labels of the classes to predict. # # First, we will load the training dataset using [pandas](https://pandas.pydata.org/) and take a quick look at how the data. # In[2]: train_df = pd.read_csv('data/ag_news_csv/train.csv', header=None) train_df.columns = ['class index', 'title', 'description'] train_df # The dataset consists of 120,000 examples, each consisting of a class index, a title, and a description. # The class labels are distributed in a separated file. We will add the labels to the dataset so that we can interpret the data more easily. Note that the label indexes are one-based, so we need to subtract one to retrieve them from the list. # In[3]: labels = open('data/ag_news_csv/classes.txt').read().splitlines() classes = train_df['class index'].map(lambda i: labels[i-1]) train_df.insert(1, 'class', classes) train_df # Let's inspect how balanced our examples are by using a bar plot. # In[4]: pd.value_counts(train_df['class']).plot.bar() # The classes are evenly distributed. That's great! # # However, the text contains some spurious backslashes in some parts of the text. # They are meant to represent newlines in the original text. # An example can be seen below, between the words "dwindling" and "band". # In[5]: print(train_df.loc[0, 'description']) # We will replace the backslashes with spaces on the whole column using pandas replace method. # In[6]: train_df['text'] = train_df['title'].str.lower() + " " + train_df['description'].str.lower() train_df['text'] = train_df['text'].str.replace('\\', ' ', regex=False) train_df # Now we will proceed to tokenize the title and description columns using NLTK's word_tokenize(). # We will add a new column to our dataframe with the list of tokens. # In[7]: from nltk.tokenize import word_tokenize train_df['tokens'] = train_df['text'].progress_map(word_tokenize) train_df # Now we will load the GloVe word embeddings. # In[8]: from gensim.models import KeyedVectors glove = KeyedVectors.load_word2vec_format("glove.6B.300d.txt", no_header=True) glove.vectors.shape # The word embeddings have been pretrained in a different corpus, so it would be a good idea to estimate how good our tokenization matches the GloVe vocabulary. 
# In[9]: from collections import Counter def count_unknown_words(data, vocabulary): counter = Counter() for row in tqdm(data): counter.update(tok for tok in row if tok not in vocabulary) return counter # find out how many times each unknown token occurrs in the corpus c = count_unknown_words(train_df['tokens'], glove.key_to_index) # find the total number of tokens in the corpus total_tokens = train_df['tokens'].map(len).sum() # find some statistics about occurrences of unknown tokens unk_tokens = sum(c.values()) percent_unk = unk_tokens / total_tokens distinct_tokens = len(list(c)) print(f'total number of tokens: {total_tokens:,}') print(f'number of unknown tokens: {unk_tokens:,}') print(f'number of distinct unknown tokens: {distinct_tokens:,}') print(f'percentage of unkown tokens: {percent_unk:.2%}') print('top 50 unknown words:') for token, n in c.most_common(10): print(f'\t{n}\t{token}') # Glove embeddings seem to have a good coverage on this dataset -- only 1.25% of the tokens in the dataset are unknown, i.e., don't appear in the GloVe vocabulary. # # Still, we will need a way to handle these unknown tokens. # Our approach will be to add a new embedding to GloVe that will be used to represent them. # This new embedding will be initialized as the average of all the GloVe embeddings. # # We will also add another embedding, this one initialized to zeros, that will be used to pad the sequences of tokens so that they all have the same length. This will be useful when we train with mini-batches. # In[10]: # string values corresponding to the new embeddings unk_tok = '[UNK]' pad_tok = '[PAD]' # initialize the new embedding values unk_emb = glove.vectors.mean(axis=0) pad_emb = np.zeros(300) # add new embeddings to glove glove.add_vectors([unk_tok, pad_tok], [unk_emb, pad_emb]) # get token ids corresponding to the new embeddings unk_id = glove.key_to_index[unk_tok] pad_id = glove.key_to_index[pad_tok] unk_id, pad_id # In[11]: from sklearn.model_selection import train_test_split train_df, dev_df = train_test_split(train_df, train_size=0.8) train_df.reset_index(inplace=True) dev_df.reset_index(inplace=True) # We will now add a new column to our dataframe that will contain the padded sequences of token ids. # In[12]: threshold = 10 tokens = train_df['tokens'].explode().value_counts() vocabulary = set(tokens[tokens > threshold].index.tolist()) print(f'vocabulary size: {len(vocabulary):,}') # In[13]: # find the length of the longest list of tokens max_tokens = train_df['tokens'].map(len).max() # return unk_id for infrequent tokens too def get_id(tok): if tok in vocabulary: return glove.key_to_index.get(tok, unk_id) else: return unk_id # function that gets a list of tokens and returns a list of token ids, # with padding added accordingly def token_ids(tokens): tok_ids = [get_id(tok) for tok in tokens] pad_len = max_tokens - len(tok_ids) return tok_ids + [pad_id] * pad_len # add new column to the dataframe train_df['token ids'] = train_df['tokens'].progress_map(token_ids) train_df # In[14]: max_tokens = dev_df['tokens'].map(len).max() dev_df['token ids'] = dev_df['tokens'].progress_map(token_ids) dev_df # Now we will get a numpy 2-dimensional array corresponding to the token ids, # and a 1-dimensional array with the gold classes. Note that the classes are one-based (i.e., they start at one), # but we need them to be zero-based, so we need to subtract one from this array. 
# In[15]: from torch.utils.data import Dataset class MyDataset(Dataset): def __init__(self, x, y): self.x = x self.y = y def __len__(self): return len(self.y) def __getitem__(self, index): x = torch.tensor(self.x[index]) y = torch.tensor(self.y[index]) return x, y # Next, we construct our PyTorch model, which is a feed-forward neural network with two layers: # In[16]: from torch import nn import torch.nn.functional as F class Model(nn.Module): def __init__(self, vectors, pad_id, hidden_dim, output_dim, dropout): super().__init__() # embeddings must be a tensor if not torch.is_tensor(vectors): vectors = torch.tensor(vectors) # keep padding id self.padding_idx = pad_id # embedding layer self.embs = nn.Embedding.from_pretrained(vectors, padding_idx=pad_id) # feedforward layers self.layers = nn.Sequential( nn.Dropout(dropout), nn.Linear(vectors.shape[1], hidden_dim), nn.ReLU(), nn.Dropout(dropout), nn.Linear(hidden_dim, output_dim), ) def forward(self, x): # get boolean array with padding elements set to false not_padding = torch.isin(x, self.padding_idx, invert=True) # get lengths of examples (excluding padding) lengths = torch.count_nonzero(not_padding, axis=1) # get embeddings x = self.embs(x) # calculate means x = x.sum(dim=1) / lengths.unsqueeze(dim=1) # pass to rest of the model output = self.layers(x) # calculate softmax if we're not in training mode #if not self.training: # output = F.softmax(output, dim=1) return output # Next, we implement the training procedure. We compute the loss and accuracy on the development partition after each epoch. # In[17]: from torch import optim from torch.utils.data import DataLoader from sklearn.metrics import accuracy_score # hyperparameters lr = 1e-3 weight_decay = 0 batch_size = 500 shuffle = True n_epochs = 5 hidden_dim = 50 output_dim = len(labels) dropout = 0.1 vectors = glove.vectors # initialize the model, loss function, optimizer, and data-loader model = Model(vectors, pad_id, hidden_dim, output_dim, dropout).to(device) loss_func = nn.CrossEntropyLoss() optimizer = optim.Adam(model.parameters(), lr=lr, weight_decay=weight_decay) train_ds = MyDataset(train_df['token ids'], train_df['class index'] - 1) train_dl = DataLoader(train_ds, batch_size=batch_size, shuffle=shuffle) dev_ds = MyDataset(dev_df['token ids'], dev_df['class index'] - 1) dev_dl = DataLoader(dev_ds, batch_size=batch_size, shuffle=shuffle) train_loss = [] train_acc = [] dev_loss = [] dev_acc = [] # train the model for epoch in range(n_epochs): losses = [] gold = [] pred = [] model.train() for X, y_true in tqdm(train_dl, desc=f'epoch {epoch+1} (train)'): # clear gradients model.zero_grad() # send batch to right device X = X.to(device) y_true = y_true.to(device) # predict label scores y_pred = model(X) # compute loss loss = loss_func(y_pred, y_true) # accumulate for plotting losses.append(loss.detach().cpu().item()) gold.append(y_true.detach().cpu().numpy()) pred.append(np.argmax(y_pred.detach().cpu().numpy(), axis=1)) # backpropagate loss.backward() # optimize model parameters optimizer.step() train_loss.append(np.mean(losses)) train_acc.append(accuracy_score(np.concatenate(gold), np.concatenate(pred))) model.eval() with torch.no_grad(): losses = [] gold = [] pred = [] for X, y_true in tqdm(dev_dl, desc=f'epoch {epoch+1} (dev)'): X = X.to(device) y_true = y_true.to(device) y_pred = model(X) loss = loss_func(y_pred, y_true) losses.append(loss.cpu().item()) gold.append(y_true.cpu().numpy()) pred.append(np.argmax(y_pred.cpu().numpy(), axis=1)) dev_loss.append(np.mean(losses)) 
dev_acc.append(accuracy_score(np.concatenate(gold), np.concatenate(pred))) # Let's plot the loss and accuracy on dev: # In[18]: import matplotlib.pyplot as plt get_ipython().run_line_magic('matplotlib', 'inline') x = np.arange(n_epochs) + 1 plt.plot(x, train_loss) plt.plot(x, dev_loss) plt.legend(['train', 'dev']) plt.xlabel('epoch') plt.ylabel('loss') plt.grid(True) # In[19]: plt.plot(x, train_acc) plt.plot(x, dev_acc) plt.legend(['train', 'dev']) plt.xlabel('epoch') plt.ylabel('accuracy') plt.grid(True) # Next, we evaluate on the testing partition: # In[20]: # repeat all preprocessing done above, this time on the test set test_df = pd.read_csv('data/ag_news_csv/test.csv', header=None) test_df.columns = ['class index', 'title', 'description'] test_df['text'] = test_df['title'].str.lower() + " " + test_df['description'].str.lower() test_df['text'] = test_df['text'].str.replace('\\', ' ', regex=False) test_df['tokens'] = test_df['text'].progress_map(word_tokenize) max_tokens = dev_df['tokens'].map(len).max() test_df['token ids'] = test_df['tokens'].progress_map(token_ids) # In[21]: from sklearn.metrics import classification_report # set model to evaluation mode model.eval() dataset = MyDataset(test_df['token ids'], test_df['class index'] - 1) data_loader = DataLoader(dataset, batch_size=batch_size) y_pred = [] # don't store gradients with torch.no_grad(): for X, _ in tqdm(data_loader): X = X.to(device) # predict one class per example y = torch.argmax(model(X), dim=1) # convert tensor to numpy array (sending it back to the cpu if needed) y_pred.append(y.cpu().numpy()) # print results print(classification_report(dataset.y, np.concatenate(y_pred), target_names=labels))
4,840
4,864
22
chap09-23
chap09-23
9 Implementing Text Classification Using Word Embeddings In the previous chapter we introduced word embeddings, which are realvalued vectors that encode semantic representation of words. We discussed how to learn them, and how they capture semantic information that makes them useful for downstream tasks. In this chapter we show how to use word embeddings that have been pretrained using a variant of the algorithm discussed in the previous chapter. We show how to load them, explore some of their characteristics, and show their application for a text classification task. As usual, the code for this chapter is available in our repository. It is organized into two notebooks: one corresponding to the explorations shown in the first half of this chapter (chap9_embeddings), and a second one in which we modify our previous classifier to use word embeddings (chap9_classification). 9.1 Pre-trained Word Embeddings There are several algorithms for training word embeddings, including the original word2vec algorithm (Mikolov et al., 2013a) (which we discussed in the previous chapter), GloVe (Pennington et al., 2014), and fastText (Bojanowski et al., 2017). They all provide the software for training the embeddings as well as pretrained word embeddings on their respective websites. In general, most open-domain word embeddings are trained on large corpora that cover a variety of topics such as Wikipedia1 and Gigaword.2 Commonly, these embeddings are freely distributed so 1 https://en.wikipedia.org/wiki/Wikipedia:Database_download 2 https://catalog.ldc.upenn.edu/LDC2011T07 133 134 Implementing Text Classification Using Word Embeddings house 0.60137 0.28521 -0.032038 -0.43026 0.74806 0.26223 -0.97361 0.078581 -0.57588 -1.188 -1.8507 -0.24887 0.055549 0.0086155 0.067951 0.40554 -0.073998 -0.21318 0.37167 -0.71791 1.2234 0.35546 -0.41537 -0.21931 -0.39661 -1.7831 -0.41507 0.29533 -0.41254 0.020096 2.7425 -0.9926 -0.71033 -0.46813 0.28265 -0.077639 0.3041 -0.06644 0.3951 -0.70747 -0.38894 0.23158 -0.49508 0.14612 -0.02314 0.56389 -0.86188 -1.0278 0.039922 0.20018 Figure 9.1 GloVe embedding corresponding to the word house, found in the GloVe file glove.6B.50d.txt. We have broken the vector in several lines for display purposes, but this is a single line in the text file. that practitioners can use them in downstream tasks. We will use one such set of vectors in this chapter. Pretrained embeddings are usually distributed as a text file in which each line represents a word vector. The first element in the line is the word itself, and the rest of the elements are the vector components. This is usually referred to as the word2vec format. For example, Figure 9.1 shows the line in the glove.6B.50d.txt file (from the GloVe website) corresponding to the word house. This vector is represented by the word itself, followed by 50 floating-point numbers corresponding to the 50dimensional vector. Note that some embeddings files have a header line composed of two numbers: the number of vectors (i.e., the number of lines in the file), and the vector dimensionality. However, this is not always the case. For example, the original word2vec implementation includes this header line, but the more recent GloVe does not (probably because this information can be inferred from the content of the file). For the examples in the rest of the chapter, we will use the glove.6B.300d.txt embeddings that can be downloaded from the GloVe website.3 This file provides 400,000 word embeddings of 300-dimensions trained on texts
from Wikipedia 2014 and Gigaword 5. We will begin our exploration of word embeddings using Gensim,4 a Python library that provides excellent support for loading and using word embeddings, among other more advanced features. As we can see, the embeddings have been loaded and assigned to the glove variable. Note that we had to specify that this file doesn’t contain the header that is usually present in the word2vec format. The 3 https://nlp.stanford.edu/projects/glove/ 4 https://radimrehurek.com/gensim/ 9.1 Pre-trained Word Embeddings 135 glove.vectors attribute contains a 2-dimensional NumPy array with 400,000 rows and 300 columns, each row corresponding to a word embedding. 9.1.1 Word Similarity Gensim’s KeyedVectors class provides a method called most_similar that receives a word and computes its cosine similarity to all other embeddings, and returns the topn most-similar words. By default, topn is set to 10. The example above shows the top 10 most-similar words to the word cactus, when using the 300-dimension GloVe embeddings trained on Wikipedia and Gigaword. All ten most-similar words are related to cactus in different ways: cacti and cactuses are its plural forms; saguaro, peyote, opuntia, and prickly pear are types of cacti; and mesquite, shrubs, and succulents are other plants from arid climates. You can find more examples of word similarity queries in the Jupyter notebook that accompanies this chapter. Also, as an exercise, try loading a different set of embeddings trained with a different corpus (e.g., Twitter) to see if you obtain different results! 9.1.2 Word Analogies As we discussed in the previous chapter, the semantic information en- coded by word embeddings captures much more than word similar- ity. To surface this additional information, we will use word analogies represented using additional vector operations. For example, a well- ⃗
known analogy that highlights gender information is: king − m⃗an ≈ qu⃗een−wom⃗an,5or,inplainlanguage:“manistokingwhatwomanis to queen.” From this, it immediately follows that one can subtract the meaning of man and add the meaning of woman to obtain the definition ⃗ offemaleroyalty:king−m⃗an+wom⃗an≈qu⃗een.
 The same most_similar method we’ve been using can be repurposed to find word analogies such as the one mentioned above. To this end, two sets of words have to be provided to the most_similar method: a list of positive words that should be added, and a list of negative words 5 A word with an arrow on top refers to the embedding vector corresponding to that word. Please see Section 1.4 for a summary of the notations used in this book. 136 Implementing Text Classification Using Word Embeddings that should be subtracted. For example, the code below implements the left-hand side of the previous analogy: Another interesting analogy relation that shows how the embeddings have captured information about currencies is shown below. More examples are discussed in the Jupyter notebook. 9.1.3 Looking Under the Hood Let us understand now how these queries are actually implemented. First, we need to know what components we need. Clearly, we need the embedding vectors themselves. They are stored in the vectors attribute of the KeyedVectors object. As we mentioned previously, this is a 2-dimensional NumPy array, each row corresponding to a word in the vocabulary. These embeddings are not normalized, but normalized embeddings can be obtained using the get_normed_vectors method. We also need to know the mapping between words and the matrix rows. The KeyedVectors object stores this mapping in a list of terms called index_to_key, and a term-to-index dictionary called key_to_index. Below we show only the first 5 terms to save space, but you can inspect the whole vocabulary in the Jupyter notebook. 9.1.4 Word Similarity from Scratch Implementing the word similarity function ourselves is a good exercise to ensure that we understand how cosine similarity works, and to practice our NumPy skills. We will write a function called most_similar_words that will take a word, the embeddings matrix, the vocabulary in the form of the index_to_key list and key_to_index dictionary, and the number of similar words to return (defaults to 10). The implementation of most_similar_words is straightforward. First, we find the word id for the given word, using the key_to_index dictionary. Then we retrieve the row from the vectors matrix that corresponds to that word. The next step is computing the cosine similarity between the word of interest and the rest of the vocabulary. Recall that the cosine similarity is equivalent to a dot product if the vectors are normalized. We use this equivalence by performing a matrix-vector multiplication between the word embedding and the embedding matrix using Python’s at operator (denoted as @ in code). This means that we must pass the 9.2 Text Classification with Pretrained Word Embeddings 137 normalized embeddings as an argument to this function. Next, we need to sort the similarities preserving the mapping to the words in the vocabulary. We achieve this using the argsort NumPy method, which returns the indices in sorted (ascending) order. Since we need them in descending order, the next step is to reverse this list of indices. Obviously, the most similar word to whichever word we’re querying is the word itself, but that is not an interesting result, so we will remove it from the results. We do this by using NumPy’s ability to index arrays using booleans. We first create a new array in which the position corresponding to the query word is set to False and every other element is set to True, and we use this boolean array to index the list of ids. 
Lastly, we create a list of tuples of the form (word, similarity) for the topn words, and return the results. Now we can test our implementation of word similarity using the word cactus. You can compare the results to the ones obtained by KeyedVectors's most_similar method.

9.1.5 Word Analogies from Scratch

The implementation of the word analogy function is not that much different from our most_similar_words function above. The main difference is that now we have two lists of words that we need to combine into a single embedding. We first add the positive words into a single vector, and we do the same for the negative words. Then we subtract the negative vector from the positive one, and normalize the result. The similarity scores are computed the same way as before, but now we need to remove several words (all of the query words) from the results, so this time we use NumPy's isin function, which checks, for each candidate id, whether it is one of the given_word_ids. We then package the results the same way we did before, and return them. Now let's try our implementation with the same $\vec{king} - \vec{man} + \vec{woman}$ query we discussed previously. Please compare the results to the ones obtained by Gensim; the complete analogy function is included in the accompanying notebook.

9.2 Text Classification with Pretrained Word Embeddings

In this section we will continue using the AG News classification dataset introduced in previous chapters. Most of the data preparation is the same, up to tokenization. However, we need to remember that the embeddings were trained on a different corpus, so it would be a good idea to estimate how well they cover the words in the AG News dataset. To achieve this, we load the embeddings just like we did previously. Then we count the tokens in our corpus that do not appear in the embeddings vocabulary, as well as the total number of tokens. We use these numbers to print some informative statistics, such as the proportion of unknown tokens in the corpus. We also print the top ten unknown tokens. You can use the Jupyter notebook to explore this task further. Our analysis indicates that only 1.25% of the tokens are not accounted for in the embeddings vocabulary. Further, the most common unknown words seem to be URL fragments. This is encouraging.

However, for more robustness, we will introduce a couple of special embeddings that are often needed when dealing with word embeddings. The first one is an embedding used to represent unknown words. A common strategy is to use the average of all the embeddings in the vocabulary for this purpose. The second embedding we will add is used for padding. Padding is required when we want to train with (mini-)batches, because the lengths of all the examples in a given batch have to match in order for the batch to be processed efficiently in parallel. The padding embedding consists only of zeros, which essentially excludes these virtual tokens from the forward/backward passes. Neither of these embeddings is included in the pretrained GloVe embeddings, but other pretrained embeddings may already include them, so it is a good idea to check whether they are present before adding them. We add the new embeddings at the end of the embedding matrix, so their ids are 400,000 and 400,001 (a sketch of this step is shown below).

Now we need to generate a list of token ids for each training example. Recall that we decided to ignore tokens that appear fewer than 10 times, so we need to replace those with [UNK] too, even if they appear in the embedding vocabulary.
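Since the chap9_classification notebook is not reproduced in this document, the following is only a minimal sketch of how the two special embeddings could be added; it assumes the glove KeyedVectors object loaded earlier, and the token names [UNK] and [PAD] are illustrative choices rather than the book's exact identifiers.

import numpy as np

# embedding used for unknown words: the average of all pretrained embeddings
unk_emb = glove.vectors.mean(axis=0)
# embedding used for padding: all zeros, so padded positions contribute nothing
pad_emb = np.zeros(glove.vectors.shape[1])

# append both vectors at the end of the embedding matrix;
# with 400,000 pretrained vectors, their ids become 400,000 and 400,001
embeddings = np.vstack([glove.vectors, unk_emb, pad_emb])
unk_id = len(glove.vectors)       # 400,000
pad_id = len(glove.vectors) + 1   # 400,001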
Next, we create a Dataset object from the padded lists of token ids. This one is even easier than before: since the lists of token ids are ready, all that is required is turning them into tensors.

Lastly, we need to modify the model class to indicate that we now use embedding vectors. To this end, we add an nn.Embedding layer that stores the embedding vectors for all words in the vocabulary. We use this object to look up embeddings by their token ids. This layer is initialized from a tensor containing the pretrained embeddings for the entire vocabulary. Also, the pad_id is specified when creating the embedding layer. When an nn.Embedding layer is initialized using the from_pretrained method with the other arguments set to their default values, the embeddings are not updated during training. We will keep it that way for this example, but that could be changed by setting the freeze parameter to False. The rest of the layers are the same as in our previous example from Chapter 7, i.e., one intermediate layer and one output layer, with a nonlinearity (ReLU) between them. The only major difference is that now the input size of the intermediate layer is the size of one embedding (e.g., 300) instead of the size of the vocabulary, as before. This is because, as we explain below, the intermediate layer receives an average of the numerical representations of the words in the current text.

The forward function of the Model class changes significantly. This time we encode the text as an average of the embeddings of all the words it contains. To compute the denominator of this average, we obtain the length of each text by counting how many of its words are not the virtual padding token. Then we sum all the embeddings and divide by the number of non-padding tokens. Adding all the embeddings is safe, because the padding embedding consists entirely of zeros. This process leaves us with a single embedding for the whole text, which is then passed to the rest of the layers.
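The classification notebook itself is not reproduced here, so the listing below is only a rough sketch of how such a model could be written in PyTorch. It assumes embeddings is the extended matrix and pad_id the padding id from the earlier sketch; the class and parameter names are illustrative rather than the book's exact implementation.

import torch
from torch import nn

class Model(nn.Module):
    def __init__(self, embeddings, pad_id, hidden_dim, num_classes):
        super().__init__()
        # embedding table initialized from the pretrained vectors;
        # from_pretrained keeps the weights frozen unless freeze=False is passed
        self.embedding = nn.Embedding.from_pretrained(
            torch.tensor(embeddings, dtype=torch.float),
            padding_idx=pad_id,
        )
        emb_dim = self.embedding.embedding_dim
        self.hidden = nn.Linear(emb_dim, hidden_dim)
        self.relu = nn.ReLU()
        self.output = nn.Linear(hidden_dim, num_classes)
        self.pad_id = pad_id

    def forward(self, token_ids):
        # token_ids: (batch_size, max_len), padded with pad_id
        embs = self.embedding(token_ids)                    # (batch, max_len, emb_dim)
        # number of real (non-padding) tokens in each example
        lengths = (token_ids != self.pad_id).sum(dim=1, keepdim=True)
        # summing is safe because the padding embedding is all zeros
        avg = embs.sum(dim=1) / lengths                     # (batch, emb_dim)
        return self.output(self.relu(self.hidden(avg)))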
The training and evaluation steps are the same as before. The results of this model on the AG News test partition can be reproduced with the chap9_classification notebook. Comparing these results with the ones obtained by the multilayer perceptron with explicit features in Chapter 7, we observe that on this particular task utilizing embeddings as features does not yield a performance improvement. Notably, this is a small dataset and a rather simplistic task, where the presence of certain words is sufficient to distinguish the category of an article (e.g., the word basketball is highly indicative of the label Sports). Nevertheless, in other tasks where distinctions are more nuanced, or in which there is less likely to be word overlap between texts of interest, word embeddings do provide a necessary signal. Additionally, when there are class imbalances, word embeddings can supplement underrepresented classes by bringing in the external knowledge gained during their pretraining.

9.3 Summary

In this chapter we showed how to explore the semantic space encoded by word embeddings through word similarity and analogies, as well as one way to use them for text classification. At this point we have not taken into consideration the order in which words appear, i.e., we averaged the embeddings of all the words in the text, using a bag-of-words representation. In subsequent chapters we will explore how to incorporate word order into the learned representations of text.
#!/usr/bin/env python
# coding: utf-8

# # Using Pre-trained Word Embeddings
#
# In this notebook we will show some operations on pre-trained word embeddings to gain an intuition about them.
#
# We will be using the pre-trained GloVe embeddings that can be found in the [official website](https://nlp.stanford.edu/projects/glove/). In particular, we will use the file `glove.6B.300d.txt` contained in this [zip file](https://nlp.stanford.edu/data/glove.6B.zip).
#
# We will first load the GloVe embeddings using [Gensim](https://radimrehurek.com/gensim/). Specifically, we will use [`KeyedVectors`](https://radimrehurek.com/gensim/models/keyedvectors.html)'s [`load_word2vec_format()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.load_word2vec_format) classmethod, which supports the original word2vec file format.
# However, there is a difference in the file formats used by GloVe and word2vec, which is a header used by word2vec to indicate the number of embeddings and dimensions stored in the file. The file that stores the GloVe embeddings doesn't have this header, so we will have to address that when loading the embeddings.
#
# Loading the embeddings may take a little bit, so hang in there!

# In[2]:

from gensim.models import KeyedVectors

fname = "glove.6B.300d.txt"
glove = KeyedVectors.load_word2vec_format(fname, no_header=True)
glove.vectors.shape

# ## Word similarity
#
# One attribute of word embeddings that makes them useful is the ability to compare them using cosine similarity to find how similar they are. [`KeyedVectors`](https://radimrehurek.com/gensim/models/keyedvectors.html) objects provide a method called [`most_similar()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.most_similar) that we can use to find the closest words to a particular word of interest. By default, [`most_similar()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.most_similar) returns the 10 most similar words, but this can be changed using the `topn` parameter.
#
# Below we test this function using a few different words.

# In[3]:

# common noun
glove.most_similar("cactus")

# In[4]:

# common noun
glove.most_similar("cake")

# In[5]:

# adjective
glove.most_similar("angry")

# In[6]:

# adverb
glove.most_similar("quickly")

# In[7]:

# preposition
glove.most_similar("between")

# In[8]:

# determiner
glove.most_similar("the")

# ## Word analogies
#
# Another characteristic of word embeddings is their ability to solve analogy problems.
# The same [`most_similar()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.most_similar) method can be used for this task, by passing two lists of words:
# a `positive` list with the words that should be added and a `negative` list with the words that should be subtracted.
# Using these arguments, the famous example $\vec{king} - \vec{man} + \vec{woman} \approx \vec{queen}$ can be executed as follows:

# In[9]:

# king - man + woman
glove.most_similar(positive=["king", "woman"], negative=["man"])

# Here are a few other interesting analogies:

# In[10]:

# car - drive + fly
glove.most_similar(positive=["car", "fly"], negative=["drive"])

# In[11]:

# berlin - germany + australia
glove.most_similar(positive=["berlin", "australia"], negative=["germany"])

# In[12]:

# england - london + baghdad
glove.most_similar(positive=["england", "baghdad"], negative=["london"])

# In[13]:

# japan - yen + peso
glove.most_similar(positive=["japan", "peso"], negative=["yen"])

# In[14]:

# best - good + tall
glove.most_similar(positive=["best", "tall"], negative=["good"])

# ## Looking under the hood
#
# Now that we are more familiar with the [`most_similar()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.most_similar) method, it is time to implement its functionality ourselves.
# But first, we need to take a look at the different parts of the [`KeyedVectors`](https://radimrehurek.com/gensim/models/keyedvectors.html) object that we will need.
# Obviously, we will need the vectors themselves. They are stored in the `vectors` attribute.

# In[15]:

glove.vectors.shape

# As we can see above, `vectors` is a 2-dimensional matrix with 400,000 rows and 300 columns.
# Each row corresponds to a 300-dimensional word embedding. These embeddings are not normalized, but normalized embeddings can be obtained using the [`get_normed_vectors()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.get_normed_vectors) method.

# In[16]:

normed_vectors = glove.get_normed_vectors()
normed_vectors.shape

# Now we need to map the words in the vocabulary to rows in the `vectors` matrix, and vice versa.
# The [`KeyedVectors`](https://radimrehurek.com/gensim/models/keyedvectors.html) object has the attributes `index_to_key` and `key_to_index` which are a list of words and a dictionary of words to indices, respectively.

# In[17]:

#glove.index_to_key

# In[18]:

#glove.key_to_index

# ## Word similarity from scratch
#
# Now we have everything we need to implement a `most_similar_words()` function that takes a word, the vector matrix, the `index_to_key` list, and the `key_to_index` dictionary. This function will return the 10 most similar words to the provided word, along with their similarity scores.
# In[19]:

import numpy as np

def most_similar_words(word, vectors, index_to_key, key_to_index, topn=10):
    # retrieve word_id corresponding to given word
    word_id = key_to_index[word]
    # retrieve embedding for given word
    emb = vectors[word_id]
    # calculate similarities to all words in our vocabulary
    similarities = vectors @ emb
    # get word_ids in ascending order with respect to similarity score
    ids_ascending = similarities.argsort()
    # reverse word_ids
    ids_descending = ids_ascending[::-1]
    # get boolean array with element corresponding to word_id set to False
    mask = ids_descending != word_id
    # obtain new array of indices that doesn't contain word_id
    # (otherwise the most similar word to the argument would be the argument itself)
    ids_descending = ids_descending[mask]
    # get topn word_ids
    top_ids = ids_descending[:topn]
    # retrieve topn words with their corresponding similarity score
    top_words = [(index_to_key[i], similarities[i]) for i in top_ids]
    # return results
    return top_words

# Now let's try the same example that we used above: the most similar words to "cactus".

# In[20]:

vectors = glove.get_normed_vectors()
index_to_key = glove.index_to_key
key_to_index = glove.key_to_index

most_similar_words("cactus", vectors, index_to_key, key_to_index)

# ## Analogies from scratch
#
# The `most_similar_words()` function behaves as expected. Now let's implement a function to perform the analogy task. We will give it the very creative name `analogy`. This function will get two lists of words (one for positive words and one for negative words), just like the [`most_similar()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.most_similar) method we discussed above.

# In[21]:

from numpy.linalg import norm

def analogy(positive, negative, vectors, index_to_key, key_to_index, topn=10):
    # find ids for positive and negative words
    pos_ids = [key_to_index[w] for w in positive]
    neg_ids = [key_to_index[w] for w in negative]
    given_word_ids = pos_ids + neg_ids
    # get embeddings for positive and negative words
    pos_emb = vectors[pos_ids].sum(axis=0)
    neg_emb = vectors[neg_ids].sum(axis=0)
    # get embedding for analogy
    emb = pos_emb - neg_emb
    # normalize embedding
    emb = emb / norm(emb)
    # calculate similarities to all words in our vocabulary
    similarities = vectors @ emb
    # get word_ids in ascending order with respect to similarity score
    ids_ascending = similarities.argsort()
    # reverse word_ids
    ids_descending = ids_ascending[::-1]
    # get boolean array with elements corresponding to any of given_word_ids set to False
    given_words_mask = np.isin(ids_descending, given_word_ids, invert=True)
    # obtain new array of indices that doesn't contain any of the given_word_ids
    ids_descending = ids_descending[given_words_mask]
    # get topn word_ids
    top_ids = ids_descending[:topn]
    # retrieve topn words with their corresponding similarity score
    top_words = [(index_to_key[i], similarities[i]) for i in top_ids]
    # return results
    return top_words

# Let's try this function with the $\vec{king} - \vec{man} + \vec{woman} \approx \vec{queen}$ example we discussed above.

# In[22]:

positive = ["king", "woman"]
negative = ["man"]
vectors = glove.get_normed_vectors()
index_to_key = glove.index_to_key
key_to_index = glove.key_to_index

analogy(positive, negative, vectors, index_to_key, key_to_index)

# In[ ]:
1,269
1,308
23
chap09-24
chap09-24
9 Implementing Text Classification Using Word Embeddings In the previous chapter we introduced word embeddings, which are realvalued vectors that encode semantic representation of words. We discussed how to learn them, and how they capture semantic information that makes them useful for downstream tasks. In this chapter we show how to use word embeddings that have been pretrained using a variant of the algorithm discussed in the previous chapter. We show how to load them, explore some of their characteristics, and show their application for a text classification task. As usual, the code for this chapter is available in our repository. It is organized into two notebooks: one corresponding to the explorations shown in the first half of this chapter (chap9_embeddings), and a second one in which we modify our previous classifier to use word embeddings (chap9_classification). 9.1 Pre-trained Word Embeddings There are several algorithms for training word embeddings, including the original word2vec algorithm (Mikolov et al., 2013a) (which we discussed in the previous chapter), GloVe (Pennington et al., 2014), and fastText (Bojanowski et al., 2017). They all provide the software for training the embeddings as well as pretrained word embeddings on their respective websites. In general, most open-domain word embeddings are trained on large corpora that cover a variety of topics such as Wikipedia1 and Gigaword.2 Commonly, these embeddings are freely distributed so 1 https://en.wikipedia.org/wiki/Wikipedia:Database_download 2 https://catalog.ldc.upenn.edu/LDC2011T07 133 134 Implementing Text Classification Using Word Embeddings house 0.60137 0.28521 -0.032038 -0.43026 0.74806 0.26223 -0.97361 0.078581 -0.57588 -1.188 -1.8507 -0.24887 0.055549 0.0086155 0.067951 0.40554 -0.073998 -0.21318 0.37167 -0.71791 1.2234 0.35546 -0.41537 -0.21931 -0.39661 -1.7831 -0.41507 0.29533 -0.41254 0.020096 2.7425 -0.9926 -0.71033 -0.46813 0.28265 -0.077639 0.3041 -0.06644 0.3951 -0.70747 -0.38894 0.23158 -0.49508 0.14612 -0.02314 0.56389 -0.86188 -1.0278 0.039922 0.20018 Figure 9.1 GloVe embedding corresponding to the word house, found in the GloVe file glove.6B.50d.txt. We have broken the vector in several lines for display purposes, but this is a single line in the text file. that practitioners can use them in downstream tasks. We will use one such set of vectors in this chapter. Pretrained embeddings are usually distributed as a text file in which each line represents a word vector. The first element in the line is the word itself, and the rest of the elements are the vector components. This is usually referred to as the word2vec format. For example, Figure 9.1 shows the line in the glove.6B.50d.txt file (from the GloVe website) corresponding to the word house. This vector is represented by the word itself, followed by 50 floating-point numbers corresponding to the 50dimensional vector. Note that some embeddings files have a header line composed of two numbers: the number of vectors (i.e., the number of lines in the file), and the vector dimensionality. However, this is not always the case. For example, the original word2vec implementation includes this header line, but the more recent GloVe does not (probably because this information can be inferred from the content of the file). For the examples in the rest of the chapter, we will use the glove.6B.300d.txt embeddings that can be downloaded from the GloVe website.3 This file provides 400,000 word embeddings of 300-dimensions trained on texts
from Wikipedia 2014 and Gigaword 5. We will begin our exploration of word embeddings using Gensim,4 a Python library that provides excellent support for loading and using word embeddings, among other more advanced features. As we can see, the embeddings have been loaded and assigned to the glove variable. Note that we had to specify that this file doesn’t contain the header that is usually present in the word2vec format. The 3 https://nlp.stanford.edu/projects/glove/ 4 https://radimrehurek.com/gensim/ 9.1 Pre-trained Word Embeddings 135 glove.vectors attribute contains a 2-dimensional NumPy array with 400,000 rows and 300 columns, each row corresponding to a word embedding. 9.1.1 Word Similarity Gensim’s KeyedVectors class provides a method called most_similar that receives a word and computes its cosine similarity to all other embeddings, and returns the topn most-similar words. By default, topn is set to 10. The example above shows the top 10 most-similar words to the word cactus, when using the 300-dimension GloVe embeddings trained on Wikipedia and Gigaword. All ten most-similar words are related to cactus in different ways: cacti and cactuses are its plural forms; saguaro, peyote, opuntia, and prickly pear are types of cacti; and mesquite, shrubs, and succulents are other plants from arid climates. You can find more examples of word similarity queries in the Jupyter notebook that accompanies this chapter. Also, as an exercise, try loading a different set of embeddings trained with a different corpus (e.g., Twitter) to see if you obtain different results! 9.1.2 Word Analogies As we discussed in the previous chapter, the semantic information en- coded by word embeddings captures much more than word similar- ity. To surface this additional information, we will use word analogies represented using additional vector operations. For example, a well- ⃗
known analogy that highlights gender information is: king − m⃗an ≈ qu⃗een−wom⃗an,5or,inplainlanguage:“manistokingwhatwomanis to queen.” From this, it immediately follows that one can subtract the meaning of man and add the meaning of woman to obtain the definition ⃗ offemaleroyalty:king−m⃗an+wom⃗an≈qu⃗een.
 The same most_similar method we’ve been using can be repurposed to find word analogies such as the one mentioned above. To this end, two sets of words have to be provided to the most_similar method: a list of positive words that should be added, and a list of negative words 5 A word with an arrow on top refers to the embedding vector corresponding to that word. Please see Section 1.4 for a summary of the notations used in this book. 136 Implementing Text Classification Using Word Embeddings that should be subtracted. For example, the code below implements the left-hand side of the previous analogy: Another interesting analogy relation that shows how the embeddings have captured information about currencies is shown below. More examples are discussed in the Jupyter notebook. 9.1.3 Looking Under the Hood Let us understand now how these queries are actually implemented. First, we need to know what components we need. Clearly, we need the embedding vectors themselves. They are stored in the vectors attribute of the KeyedVectors object. As we mentioned previously, this is a 2-dimensional NumPy array, each row corresponding to a word in the vocabulary. These embeddings are not normalized, but normalized embeddings can be obtained using the get_normed_vectors method. We also need to know the mapping between words and the matrix rows. The KeyedVectors object stores this mapping in a list of terms called index_to_key, and a term-to-index dictionary called key_to_index. Below we show only the first 5 terms to save space, but you can inspect the whole vocabulary in the Jupyter notebook. 9.1.4 Word Similarity from Scratch Implementing the word similarity function ourselves is a good exercise to ensure that we understand how cosine similarity works, and to practice our NumPy skills. We will write a function called most_similar_words that will take a word, the embeddings matrix, the vocabulary in the form of the index_to_key list and key_to_index dictionary, and the number of similar words to return (defaults to 10). The implementation of most_similar_words is straightforward. First, we find the word id for the given word, using the key_to_index dictionary. Then we retrieve the row from the vectors matrix that corresponds to that word. The next step is computing the cosine similarity between the word of interest and the rest of the vocabulary. Recall that the cosine similarity is equivalent to a dot product if the vectors are normalized. We use this equivalence by performing a matrix-vector multiplication between the word embedding and the embedding matrix using Python’s at operator (denoted as @ in code). This means that we must pass the 9.2 Text Classification with Pretrained Word Embeddings 137 normalized embeddings as an argument to this function. Next, we need to sort the similarities preserving the mapping to the words in the vocabulary. We achieve this using the argsort NumPy method, which returns the indices in sorted (ascending) order. Since we need them in descending order, the next step is to reverse this list of indices. Obviously, the most similar word to whichever word we’re querying is the word itself, but that is not an interesting result, so we will remove it from the results. We do this by using NumPy’s ability to index arrays using booleans. We first create a new array in which the position corresponding to the query word is set to False and every other element is set to True, and we use this boolean array to index the list of ids. 
Lastly, we create a list of tuples of the form (word, similarity) for the topn words, and return the results. Now we will test our implementation of word similarity using the word cactus. You can compare the results to the ones obtained by KeyedVectors’s most_similar method. 9.1.5 Word Analogies from Scratch The implementation of the word analogy function is not that much different from our most_similar_word function above. The main difference between this function and most_similar_words is that now we have two lists of words that we need to combine into a single embedding. We first add the positive words into a single vector, and we do the same for the negative words. Then we subtract the negative vector from the positive one, and normalize the result. The similarity scores are computed the same way as before, but now we need to remove several words from the results, so this time we use NumPy’s isin function, which checks for any of the words in given_word_ids. We then package the results the same way we did before, and return them. ⃗ Nowlet’stryourimplementationwiththesameking−m⃗an+wom⃗an query we discussed previously. Please compare the results to the ones obtained by Gensim. 9.2 Text Classification with Pretrained Word Embeddings In this section we will continue using the AG News classification dataset introduced in previous chapters. Most of the data preparation is the 138 Implementing Text Classification Using Word Embeddings same, up to tokenization. However, we need to remember that the embeddings were trained on a different corpus, so it would be a good idea to estimate how well they cover the words AG News dataset. To achieve this, we load the embeddings just like we did previously. Then we count the tokens in our corpus that do not appear in the embeddings vocabulary, as well as the total number to tokens. We use these numbers to print some informative statistics such as the proportion of unknown tokens in the corpus. We also print the top ten unknown tokens. You can use the Jupyter notebook to explore this task further. Our analysis indicates that only 1.25% of the tokens are not accounted for in the embeddings vocabulary. Further, the most common unknown words seem to be URL fragments. This is encouraging. However, for more robustness, we will introduce a couple of special embeddings that are often needed when dealing with word embeddings. The first one is an embedding used to represent unknown words. A common strategy is to use the average of all the embeddings in the vocabulary for this purpose. The second embedding we will add will be used for padding. Padding is required when we want to train with (mini-)batches because the lengths of all the examples in a given batch have to match in order for the batch to be efficiently processed in parallel. The padding embedding consists only of zeros, which essentially excludes these virtual tokens from the forward/backward passes. None of these embeddings are included in the pretrained GloVe embeddings, but other pretrained embeddings may already include them, so it is a good idea to check if they are included with the embeddings we are using before adding them. The new embeddings were added at the end of embedding collection, so their ids are 400,000 and 400,001. Now we need to generate a list of token ids for each training example. Recall that we decided to ignore tokens that appear less than 10 times, so we need to replace those with [UNK] too, even if they appear in the embedding vocabulary. 
Next, we create a Dataset object from the padded lists of token ids. This one is even easier since the lists of token ids are ready. So all that is required is turning them into tensors. Lastly, we need to modify the model class to indicate that we now use embedding vectors. To this end, we will add an nn. Embedding layer that stores the embedding vectors for all words in the vocabulary. We will use this object to look up embeddings by their token ids. This layer will be initialized from a tensor containing the pretrained embeddings for the entire vocabulary. Also, the pad_id is specified when creating the 9.2 Text Classification with Pretrained Word Embeddings 139 embedding layer. When a nn. Embedding layer gets initialized using the from_pretrained method with other arguments set to default values, the embeddings are not updated during training. We will keep it that way for this example, but that could be changed by setting the freeze parameter to False. The rest of the layers are the same as in our previous example from Chapter 7, i.e., one intermediate layer and one output layer, with a nonlinearity (ReLU) between them. The only major difference is that now the input size of the intermediate layer is the size of one embedding (e.g., 300) instead of the size of the vocabulary like last time. This is because, as we explain below, the intermediate layer receives an average of the numerical representations of the words in the current text. The forward function of the Model class changes significantly. This time we are encoding the text as an average of the embeddings of all the words it contains. To compute the denominator of this average, we obtain the length of each text by counting how many of its words are not the virtual padding token. Then we sum all the embeddings and divide by the number of non-padding tokens. Adding all embeddings is safe, because padding embeddings are comprised of zeros. This process leaves us with a single embedding for the whole text, which is then passed to the rest of the layers. The training and evaluation steps are the same before. The results of this model on the AG News test partition are displayed below: Comparing these results with the ones obtained by the multilayer perceptron with explicit features in Chapter 7, we observe that on this particular task utilizing embeddings as features does not yield a performance improvement. Notably, this is a small dataset and a rather simplistic task where the presence of certain words is sufficient to distinguish the category of an article (e.g., the word basketball is highly indicative of the label Sports). Nevertheless, in other tasks where distinctions are more nuanced, or in which there is less likely to be word overlap between texts of interest, word embeddings do provide necessary signal. Additionally, when there are class imbalances, word embeddings can supplement underrepresented classes by bringing the external knowledge gained during their pretraining. 140 Implementing Text Classification Using Word Embeddings 9.3 Summary In this chapter we showed how to explore the semantic space encoded by word embeddings through word similarity and analogies, as well as one way to use them for text classification. At this point we have not taken into consideration the order in which the words appear, i.e., we averaged the embeddings for all the words in the text using a bag-ofwords representation of text. In subsequent chapters we will explore how to incorporate word order into the learned representations of text.
8,013
8,087
#!/usr/bin/env python # coding: utf-8 # # Using Pre-trained Word Embeddings # # In this notebook we will show some operations on pre-trained word embeddings to gain an intuition about them. # # We will be using the pre-trained GloVe embeddings that can be found in the [official website](https://nlp.stanford.edu/projects/glove/). In particular, we will use the file `glove.6B.300d.txt` contained in this [zip file](https://nlp.stanford.edu/data/glove.6B.zip). # # We will first load the GloVe embeddings using [Gensim](https://radimrehurek.com/gensim/). Specifically, we will use [`KeyedVectors`](https://radimrehurek.com/gensim/models/keyedvectors.html)'s [`load_word2vec_format()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.load_word2vec_format) classmethod, which supports the original word2vec file format. # However, there is a difference in the file formats used by GloVe and word2vec, which is a header used by word2vec to indicate the number of embeddings and dimensions stored in the file. The file that stores the GloVe embeddings doesn't have this header, so we will have to address that when loading the embeddings. # # Loading the embeddings may take a little bit, so hang in there! # In[2]: from gensim.models import KeyedVectors fname = "glove.6B.300d.txt" glove = KeyedVectors.load_word2vec_format(fname, no_header=True) glove.vectors.shape # ## Word similarity # # One attribute of word embeddings that makes them useful is the ability to compare them using cosine similarity to find how similar they are. [`KeyedVectors`](https://radimrehurek.com/gensim/models/keyedvectors.html) objects provide a method called [`most_similar()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.most_similar) that we can use to find the closest words to a particular word of interest. By default, [`most_similar()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.most_similar) returns the 10 most similar words, but this can be changed using the `topn` parameter. # # Below we test this function using a few different words. # In[3]: # common noun glove.most_similar("cactus") # In[4]: # common noun glove.most_similar("cake") # In[5]: # adjective glove.most_similar("angry") # In[6]: # adverb glove.most_similar("quickly") # In[7]: # preposition glove.most_similar("between") # In[8]: # determiner glove.most_similar("the") # ## Word analogies # # Another characteristic of word embeddings is their ability to solve analogy problems. # The same [`most_similar()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.most_similar) method can be used for this task, by passing two lists of words: # a `positive` list with the words that should be added and a `negative` list with the words that should be subtracted. 
Using these arguments, the famous example $\vec{king} - \vec{man} + \vec{woman} \approx \vec{queen}$ can be executed as follows: # In[9]: # king - man + woman glove.most_similar(positive=["king", "woman"], negative=["man"]) # Here are a few other interesting analogies: # In[10]: # car - drive + fly glove.most_similar(positive=["car", "fly"], negative=["drive"]) # In[11]: # berlin - germany + australia glove.most_similar(positive=["berlin", "australia"], negative=["germany"]) # In[12]: # england - london + baghdad glove.most_similar(positive=["england", "baghdad"], negative=["london"]) # In[13]: # japan - yen + peso glove.most_similar(positive=["japan", "peso"], negative=["yen"]) # In[14]: # best - good + tall glove.most_similar(positive=["best", "tall"], negative=["good"]) # ## Looking under the hood # # Now that we are more familiar with the [`most_similar()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.most_similar) method, it is time to implement its functionality ourselves. # But first, we need to take a look at the different parts of the [`KeyedVectors`](https://radimrehurek.com/gensim/models/keyedvectors.html) object that we will need. # Obviously, we will need the vectors themselves. They are stored in the `vectors` attribute. # In[15]: glove.vectors.shape # As we can see above, `vectors` is a 2-dimensional matrix with 400,000 rows and 300 columns. # Each row corresponds to a 300-dimensional word embedding. These embeddings are not normalized, but normalized embeddings can be obtained using the [`get_normed_vectors()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.get_normed_vectors) method. # In[16]: normed_vectors = glove.get_normed_vectors() normed_vectors.shape # Now we need to map the words in the vocabulary to rows in the `vectors` matrix, and vice versa. # The [`KeyedVectors`](https://radimrehurek.com/gensim/models/keyedvectors.html) object has the attributes `index_to_key` and `key_to_index` which are a list of words and a dictionary of words to indices, respectively. # In[17]: #glove.index_to_key # In[18]: #glove.key_to_index # ## Word similarity from scratch # # Now we have everything we need to implement a `most_similar_words()` function that takes a word, the vector matrix, the `index_to_key` list, and the `key_to_index` dictionary. This function will return the 10 most similar words to the provided word, along with their similarity scores. 
# In[19]: import numpy as np def most_similar_words(word, vectors, index_to_key, key_to_index, topn=10): # retrieve word_id corresponding to given word word_id = key_to_index[word] # retrieve embedding for given word emb = vectors[word_id] # calculate similarities to all words in out vocabulary similarities = vectors @ emb # get word_ids in ascending order with respect to similarity score ids_ascending = similarities.argsort() # reverse word_ids ids_descending = ids_ascending[::-1] # get boolean array with element corresponding to word_id set to false mask = ids_descending != word_id # obtain new array of indices that doesn't contain word_id # (otherwise the most similar word to the argument would be the argument itself) ids_descending = ids_descending[mask] # get topn word_ids top_ids = ids_descending[:topn] # retrieve topn words with their corresponding similarity score top_words = [(index_to_key[i], similarities[i]) for i in top_ids] # return results return top_words # Now let's try the same example that we used above: the most similar words to "cactus". # In[20]: vectors = glove.get_normed_vectors() index_to_key = glove.index_to_key key_to_index = glove.key_to_index most_similar_words("cactus", vectors, index_to_key, key_to_index) # ## Analogies from scratch # # The `most_similar_words()` function behaves as expected. Now let's implement a function to perform the analogy task. We will give it the very creative name `analogy`. This function will get two lists of words (one for positive words and one for negative words), just like the [`most_similar()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.most_similar) method we discussed above. # In[21]: from numpy.linalg import norm def analogy(positive, negative, vectors, index_to_key, key_to_index, topn=10): # find ids for positive and negative words pos_ids = [key_to_index[w] for w in positive] neg_ids = [key_to_index[w] for w in negative] given_word_ids = pos_ids + neg_ids # get embeddings for positive and negative words pos_emb = vectors[pos_ids].sum(axis=0) neg_emb = vectors[neg_ids].sum(axis=0) # get embedding for analogy emb = pos_emb - neg_emb # normalize embedding emb = emb / norm(emb) # calculate similarities to all words in out vocabulary similarities = vectors @ emb # get word_ids in ascending order with respect to similarity score ids_ascending = similarities.argsort() # reverse word_ids ids_descending = ids_ascending[::-1] # get boolean array with element corresponding to any of given_word_ids set to false given_words_mask = np.isin(ids_descending, given_word_ids, invert=True) # obtain new array of indices that doesn't contain any of the given_word_ids ids_descending = ids_descending[given_words_mask] # get topn word_ids top_ids = ids_descending[:topn] # retrieve topn words with their corresponding similarity score top_words = [(index_to_key[i], similarities[i]) for i in top_ids] # return results return top_words # Let's try this function with the $\vec{king} - \vec{man} + \vec{woman} \approx \vec{queen}$ example we discussed above. # In[22]: positive = ["king", "woman"] negative = ["man"] vectors = glove.get_normed_vectors() index_to_key = glove.index_to_key key_to_index = glove.key_to_index analogy(positive, negative, vectors, index_to_key, key_to_index) # In[ ]:
5,743
5,770
24
chap09-25
chap09-25
9 Implementing Text Classification Using Word Embeddings In the previous chapter we introduced word embeddings, which are realvalued vectors that encode semantic representation of words. We discussed how to learn them, and how they capture semantic information that makes them useful for downstream tasks. In this chapter we show how to use word embeddings that have been pretrained using a variant of the algorithm discussed in the previous chapter. We show how to load them, explore some of their characteristics, and show their application for a text classification task. As usual, the code for this chapter is available in our repository. It is organized into two notebooks: one corresponding to the explorations shown in the first half of this chapter (chap9_embeddings), and a second one in which we modify our previous classifier to use word embeddings (chap9_classification). 9.1 Pre-trained Word Embeddings There are several algorithms for training word embeddings, including the original word2vec algorithm (Mikolov et al., 2013a) (which we discussed in the previous chapter), GloVe (Pennington et al., 2014), and fastText (Bojanowski et al., 2017). They all provide the software for training the embeddings as well as pretrained word embeddings on their respective websites. In general, most open-domain word embeddings are trained on large corpora that cover a variety of topics such as Wikipedia1 and Gigaword.2 Commonly, these embeddings are freely distributed so 1 https://en.wikipedia.org/wiki/Wikipedia:Database_download 2 https://catalog.ldc.upenn.edu/LDC2011T07 133 134 Implementing Text Classification Using Word Embeddings house 0.60137 0.28521 -0.032038 -0.43026 0.74806 0.26223 -0.97361 0.078581 -0.57588 -1.188 -1.8507 -0.24887 0.055549 0.0086155 0.067951 0.40554 -0.073998 -0.21318 0.37167 -0.71791 1.2234 0.35546 -0.41537 -0.21931 -0.39661 -1.7831 -0.41507 0.29533 -0.41254 0.020096 2.7425 -0.9926 -0.71033 -0.46813 0.28265 -0.077639 0.3041 -0.06644 0.3951 -0.70747 -0.38894 0.23158 -0.49508 0.14612 -0.02314 0.56389 -0.86188 -1.0278 0.039922 0.20018 Figure 9.1 GloVe embedding corresponding to the word house, found in the GloVe file glove.6B.50d.txt. We have broken the vector in several lines for display purposes, but this is a single line in the text file. that practitioners can use them in downstream tasks. We will use one such set of vectors in this chapter. Pretrained embeddings are usually distributed as a text file in which each line represents a word vector. The first element in the line is the word itself, and the rest of the elements are the vector components. This is usually referred to as the word2vec format. For example, Figure 9.1 shows the line in the glove.6B.50d.txt file (from the GloVe website) corresponding to the word house. This vector is represented by the word itself, followed by 50 floating-point numbers corresponding to the 50dimensional vector. Note that some embeddings files have a header line composed of two numbers: the number of vectors (i.e., the number of lines in the file), and the vector dimensionality. However, this is not always the case. For example, the original word2vec implementation includes this header line, but the more recent GloVe does not (probably because this information can be inferred from the content of the file). For the examples in the rest of the chapter, we will use the glove.6B.300d.txt embeddings that can be downloaded from the GloVe website.3 This file provides 400,000 word embeddings of 300-dimensions trained on texts
from Wikipedia 2014 and Gigaword 5. We will begin our exploration of word embeddings using Gensim,4 a Python library that provides excellent support for loading and using word embeddings, among other more advanced features. As we can see, the embeddings have been loaded and assigned to the glove variable. Note that we had to specify that this file doesn’t contain the header that is usually present in the word2vec format. The 3 https://nlp.stanford.edu/projects/glove/ 4 https://radimrehurek.com/gensim/ 9.1 Pre-trained Word Embeddings 135 glove.vectors attribute contains a 2-dimensional NumPy array with 400,000 rows and 300 columns, each row corresponding to a word embedding. 9.1.1 Word Similarity Gensim’s KeyedVectors class provides a method called most_similar that receives a word and computes its cosine similarity to all other embeddings, and returns the topn most-similar words. By default, topn is set to 10. The example above shows the top 10 most-similar words to the word cactus, when using the 300-dimension GloVe embeddings trained on Wikipedia and Gigaword. All ten most-similar words are related to cactus in different ways: cacti and cactuses are its plural forms; saguaro, peyote, opuntia, and prickly pear are types of cacti; and mesquite, shrubs, and succulents are other plants from arid climates. You can find more examples of word similarity queries in the Jupyter notebook that accompanies this chapter. Also, as an exercise, try loading a different set of embeddings trained with a different corpus (e.g., Twitter) to see if you obtain different results! 9.1.2 Word Analogies As we discussed in the previous chapter, the semantic information en- coded by word embeddings captures much more than word similar- ity. To surface this additional information, we will use word analogies represented using additional vector operations. For example, a well- ⃗
known analogy that highlights gender information is: king − m⃗an ≈ qu⃗een−wom⃗an,5or,inplainlanguage:“manistokingwhatwomanis to queen.” From this, it immediately follows that one can subtract the meaning of man and add the meaning of woman to obtain the definition ⃗ offemaleroyalty:king−m⃗an+wom⃗an≈qu⃗een.
 The same most_similar method we’ve been using can be repurposed to find word analogies such as the one mentioned above. To this end, two sets of words have to be provided to the most_similar method: a list of positive words that should be added, and a list of negative words 5 A word with an arrow on top refers to the embedding vector corresponding to that word. Please see Section 1.4 for a summary of the notations used in this book. 136 Implementing Text Classification Using Word Embeddings that should be subtracted. For example, the code below implements the left-hand side of the previous analogy: Another interesting analogy relation that shows how the embeddings have captured information about currencies is shown below. More examples are discussed in the Jupyter notebook. 9.1.3 Looking Under the Hood Let us understand now how these queries are actually implemented. First, we need to know what components we need. Clearly, we need the embedding vectors themselves. They are stored in the vectors attribute of the KeyedVectors object. As we mentioned previously, this is a 2-dimensional NumPy array, each row corresponding to a word in the vocabulary. These embeddings are not normalized, but normalized embeddings can be obtained using the get_normed_vectors method. We also need to know the mapping between words and the matrix rows. The KeyedVectors object stores this mapping in a list of terms called index_to_key, and a term-to-index dictionary called key_to_index. Below we show only the first 5 terms to save space, but you can inspect the whole vocabulary in the Jupyter notebook. 9.1.4 Word Similarity from Scratch Implementing the word similarity function ourselves is a good exercise to ensure that we understand how cosine similarity works, and to practice our NumPy skills. We will write a function called most_similar_words that will take a word, the embeddings matrix, the vocabulary in the form of the index_to_key list and key_to_index dictionary, and the number of similar words to return (defaults to 10). The implementation of most_similar_words is straightforward. First, we find the word id for the given word, using the key_to_index dictionary. Then we retrieve the row from the vectors matrix that corresponds to that word. The next step is computing the cosine similarity between the word of interest and the rest of the vocabulary. Recall that the cosine similarity is equivalent to a dot product if the vectors are normalized. We use this equivalence by performing a matrix-vector multiplication between the word embedding and the embedding matrix using Python’s at operator (denoted as @ in code). This means that we must pass the 9.2 Text Classification with Pretrained Word Embeddings 137 normalized embeddings as an argument to this function. Next, we need to sort the similarities preserving the mapping to the words in the vocabulary. We achieve this using the argsort NumPy method, which returns the indices in sorted (ascending) order. Since we need them in descending order, the next step is to reverse this list of indices. Obviously, the most similar word to whichever word we’re querying is the word itself, but that is not an interesting result, so we will remove it from the results. We do this by using NumPy’s ability to index arrays using booleans. We first create a new array in which the position corresponding to the query word is set to False and every other element is set to True, and we use this boolean array to index the list of ids. 
Lastly, we create a list of tuples of the form (word, similarity) for the topn words, and return the results. Now we will test our implementation of word similarity using the word cactus. You can compare the results to the ones obtained by KeyedVectors’s most_similar method. 9.1.5 Word Analogies from Scratch The implementation of the word analogy function is not that much different from our most_similar_word function above. The main difference between this function and most_similar_words is that now we have two lists of words that we need to combine into a single embedding. We first add the positive words into a single vector, and we do the same for the negative words. Then we subtract the negative vector from the positive one, and normalize the result. The similarity scores are computed the same way as before, but now we need to remove several words from the results, so this time we use NumPy’s isin function, which checks for any of the words in given_word_ids. We then package the results the same way we did before, and return them. ⃗ Nowlet’stryourimplementationwiththesameking−m⃗an+wom⃗an query we discussed previously. Please compare the results to the ones obtained by Gensim. 9.2 Text Classification with Pretrained Word Embeddings In this section we will continue using the AG News classification dataset introduced in previous chapters. Most of the data preparation is the 138 Implementing Text Classification Using Word Embeddings same, up to tokenization. However, we need to remember that the embeddings were trained on a different corpus, so it would be a good idea to estimate how well they cover the words AG News dataset. To achieve this, we load the embeddings just like we did previously. Then we count the tokens in our corpus that do not appear in the embeddings vocabulary, as well as the total number to tokens. We use these numbers to print some informative statistics such as the proportion of unknown tokens in the corpus. We also print the top ten unknown tokens. You can use the Jupyter notebook to explore this task further. Our analysis indicates that only 1.25% of the tokens are not accounted for in the embeddings vocabulary. Further, the most common unknown words seem to be URL fragments. This is encouraging. However, for more robustness, we will introduce a couple of special embeddings that are often needed when dealing with word embeddings. The first one is an embedding used to represent unknown words. A common strategy is to use the average of all the embeddings in the vocabulary for this purpose. The second embedding we will add will be used for padding. Padding is required when we want to train with (mini-)batches because the lengths of all the examples in a given batch have to match in order for the batch to be efficiently processed in parallel. The padding embedding consists only of zeros, which essentially excludes these virtual tokens from the forward/backward passes. None of these embeddings are included in the pretrained GloVe embeddings, but other pretrained embeddings may already include them, so it is a good idea to check if they are included with the embeddings we are using before adding them. The new embeddings were added at the end of embedding collection, so their ids are 400,000 and 400,001. Now we need to generate a list of token ids for each training example. Recall that we decided to ignore tokens that appear less than 10 times, so we need to replace those with [UNK] too, even if they appear in the embedding vocabulary. 
Next, we create a Dataset object from the padded lists of token ids. This one is even easier since the lists of token ids are ready. So all that is required is turning them into tensors. Lastly, we need to modify the model class to indicate that we now use embedding vectors. To this end, we will add an nn. Embedding layer that stores the embedding vectors for all words in the vocabulary. We will use this object to look up embeddings by their token ids. This layer will be initialized from a tensor containing the pretrained embeddings for the entire vocabulary. Also, the pad_id is specified when creating the 9.2 Text Classification with Pretrained Word Embeddings 139 embedding layer. When a nn. Embedding layer gets initialized using the from_pretrained method with other arguments set to default values, the embeddings are not updated during training. We will keep it that way for this example, but that could be changed by setting the freeze parameter to False. The rest of the layers are the same as in our previous example from Chapter 7, i.e., one intermediate layer and one output layer, with a nonlinearity (ReLU) between them. The only major difference is that now the input size of the intermediate layer is the size of one embedding (e.g., 300) instead of the size of the vocabulary like last time. This is because, as we explain below, the intermediate layer receives an average of the numerical representations of the words in the current text. The forward function of the Model class changes significantly. This time we are encoding the text as an average of the embeddings of all the words it contains. To compute the denominator of this average, we obtain the length of each text by counting how many of its words are not the virtual padding token. Then we sum all the embeddings and divide by the number of non-padding tokens. Adding all embeddings is safe, because padding embeddings are comprised of zeros. This process leaves us with a single embedding for the whole text, which is then passed to the rest of the layers. The training and evaluation steps are the same before. The results of this model on the AG News test partition are displayed below: Comparing these results with the ones obtained by the multilayer perceptron with explicit features in Chapter 7, we observe that on this particular task utilizing embeddings as features does not yield a performance improvement. Notably, this is a small dataset and a rather simplistic task where the presence of certain words is sufficient to distinguish the category of an article (e.g., the word basketball is highly indicative of the label Sports). Nevertheless, in other tasks where distinctions are more nuanced, or in which there is less likely to be word overlap between texts of interest, word embeddings do provide necessary signal. Additionally, when there are class imbalances, word embeddings can supplement underrepresented classes by bringing the external knowledge gained during their pretraining. 140 Implementing Text Classification Using Word Embeddings 9.3 Summary In this chapter we showed how to explore the semantic space encoded by word embeddings through word similarity and analogies, as well as one way to use them for text classification. At this point we have not taken into consideration the order in which the words appear, i.e., we averaged the embeddings for all the words in the text using a bag-ofwords representation of text. In subsequent chapters we will explore how to incorporate word order into the learned representations of text.
9,647
9,764
#!/usr/bin/env python
# coding: utf-8

# # Using Pre-trained Word Embeddings
#
# In this notebook we will show some operations on pre-trained word embeddings to gain an intuition about them.
#
# We will be using the pre-trained GloVe embeddings that can be found in the [official website](https://nlp.stanford.edu/projects/glove/). In particular, we will use the file `glove.6B.300d.txt` contained in this [zip file](https://nlp.stanford.edu/data/glove.6B.zip).
#
# We will first load the GloVe embeddings using [Gensim](https://radimrehurek.com/gensim/). Specifically, we will use [`KeyedVectors`](https://radimrehurek.com/gensim/models/keyedvectors.html)'s [`load_word2vec_format()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.load_word2vec_format) classmethod, which supports the original word2vec file format.
# However, there is a difference in the file formats used by GloVe and word2vec, which is a header used by word2vec to indicate the number of embeddings and dimensions stored in the file. The file that stores the GloVe embeddings doesn't have this header, so we will have to address that when loading the embeddings.
#
# Loading the embeddings may take a little bit, so hang in there!

# In[2]:

from gensim.models import KeyedVectors

fname = "glove.6B.300d.txt"
glove = KeyedVectors.load_word2vec_format(fname, no_header=True)
glove.vectors.shape


# ## Word similarity
#
# One attribute of word embeddings that makes them useful is the ability to compare them using cosine similarity to find how similar they are. [`KeyedVectors`](https://radimrehurek.com/gensim/models/keyedvectors.html) objects provide a method called [`most_similar()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.most_similar) that we can use to find the closest words to a particular word of interest. By default, [`most_similar()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.most_similar) returns the 10 most similar words, but this can be changed using the `topn` parameter.
#
# Below we test this function using a few different words.

# In[3]:

# common noun
glove.most_similar("cactus")


# In[4]:

# common noun
glove.most_similar("cake")


# In[5]:

# adjective
glove.most_similar("angry")


# In[6]:

# adverb
glove.most_similar("quickly")


# In[7]:

# preposition
glove.most_similar("between")


# In[8]:

# determiner
glove.most_similar("the")


# ## Word analogies
#
# Another characteristic of word embeddings is their ability to solve analogy problems.
# The same [`most_similar()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.most_similar) method can be used for this task, by passing two lists of words:
# a `positive` list with the words that should be added and a `negative` list with the words that should be subtracted.
# Using these arguments, the famous example $\vec{king} - \vec{man} + \vec{woman} \approx \vec{queen}$ can be executed as follows:

# In[9]:

# king - man + woman
glove.most_similar(positive=["king", "woman"], negative=["man"])


# Here are a few other interesting analogies:

# In[10]:

# car - drive + fly
glove.most_similar(positive=["car", "fly"], negative=["drive"])


# In[11]:

# berlin - germany + australia
glove.most_similar(positive=["berlin", "australia"], negative=["germany"])


# In[12]:

# england - london + baghdad
glove.most_similar(positive=["england", "baghdad"], negative=["london"])


# In[13]:

# japan - yen + peso
glove.most_similar(positive=["japan", "peso"], negative=["yen"])


# In[14]:

# best - good + tall
glove.most_similar(positive=["best", "tall"], negative=["good"])


# ## Looking under the hood
#
# Now that we are more familiar with the [`most_similar()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.most_similar) method, it is time to implement its functionality ourselves.
# But first, we need to take a look at the different parts of the [`KeyedVectors`](https://radimrehurek.com/gensim/models/keyedvectors.html) object that we will need.
# Obviously, we will need the vectors themselves. They are stored in the `vectors` attribute.

# In[15]:

glove.vectors.shape


# As we can see above, `vectors` is a 2-dimensional matrix with 400,000 rows and 300 columns.
# Each row corresponds to a 300-dimensional word embedding. These embeddings are not normalized, but normalized embeddings can be obtained using the [`get_normed_vectors()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.get_normed_vectors) method.

# In[16]:

normed_vectors = glove.get_normed_vectors()
normed_vectors.shape


# Now we need to map the words in the vocabulary to rows in the `vectors` matrix, and vice versa.
# The [`KeyedVectors`](https://radimrehurek.com/gensim/models/keyedvectors.html) object has the attributes `index_to_key` and `key_to_index` which are a list of words and a dictionary of words to indices, respectively.

# In[17]:

#glove.index_to_key


# In[18]:

#glove.key_to_index


# ## Word similarity from scratch
#
# Now we have everything we need to implement a `most_similar_words()` function that takes a word, the vector matrix, the `index_to_key` list, and the `key_to_index` dictionary. This function will return the 10 most similar words to the provided word, along with their similarity scores.

# In[19]:

import numpy as np

def most_similar_words(word, vectors, index_to_key, key_to_index, topn=10):
    # retrieve word_id corresponding to given word
    word_id = key_to_index[word]
    # retrieve embedding for given word
    emb = vectors[word_id]
    # calculate similarities to all words in our vocabulary
    similarities = vectors @ emb
    # get word_ids in ascending order with respect to similarity score
    ids_ascending = similarities.argsort()
    # reverse word_ids
    ids_descending = ids_ascending[::-1]
    # get boolean array with element corresponding to word_id set to false
    mask = ids_descending != word_id
    # obtain new array of indices that doesn't contain word_id
    # (otherwise the most similar word to the argument would be the argument itself)
    ids_descending = ids_descending[mask]
    # get topn word_ids
    top_ids = ids_descending[:topn]
    # retrieve topn words with their corresponding similarity score
    top_words = [(index_to_key[i], similarities[i]) for i in top_ids]
    # return results
    return top_words


# Now let's try the same example that we used above: the most similar words to "cactus".

# In[20]:

vectors = glove.get_normed_vectors()
index_to_key = glove.index_to_key
key_to_index = glove.key_to_index
most_similar_words("cactus", vectors, index_to_key, key_to_index)


# ## Analogies from scratch
#
# The `most_similar_words()` function behaves as expected. Now let's implement a function to perform the analogy task. We will give it the very creative name `analogy`. This function will get two lists of words (one for positive words and one for negative words), just like the [`most_similar()`](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.KeyedVectors.most_similar) method we discussed above.

# In[21]:

from numpy.linalg import norm

def analogy(positive, negative, vectors, index_to_key, key_to_index, topn=10):
    # find ids for positive and negative words
    pos_ids = [key_to_index[w] for w in positive]
    neg_ids = [key_to_index[w] for w in negative]
    given_word_ids = pos_ids + neg_ids
    # get embeddings for positive and negative words
    pos_emb = vectors[pos_ids].sum(axis=0)
    neg_emb = vectors[neg_ids].sum(axis=0)
    # get embedding for analogy
    emb = pos_emb - neg_emb
    # normalize embedding
    emb = emb / norm(emb)
    # calculate similarities to all words in our vocabulary
    similarities = vectors @ emb
    # get word_ids in ascending order with respect to similarity score
    ids_ascending = similarities.argsort()
    # reverse word_ids
    ids_descending = ids_ascending[::-1]
    # get boolean array with element corresponding to any of given_word_ids set to false
    given_words_mask = np.isin(ids_descending, given_word_ids, invert=True)
    # obtain new array of indices that doesn't contain any of the given_word_ids
    ids_descending = ids_descending[given_words_mask]
    # get topn word_ids
    top_ids = ids_descending[:topn]
    # retrieve topn words with their corresponding similarity score
    top_words = [(index_to_key[i], similarities[i]) for i in top_ids]
    # return results
    return top_words


# Let's try this function with the $\vec{king} - \vec{man} + \vec{woman} \approx \vec{queen}$ example we discussed above.

# In[22]:

positive = ["king", "woman"]
negative = ["man"]
vectors = glove.get_normed_vectors()
index_to_key = glove.index_to_key
key_to_index = glove.key_to_index
analogy(positive, negative, vectors, index_to_key, key_to_index)


# In[ ]:
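As a quick follow-up usage example, the same function can be probed with other queries. The query below is our own illustrative choice and is not part of the original notebook; the returned words depend entirely on the embeddings:

# one more analogy as a sanity check: paris - france + italy
positive = ["paris", "italy"]
negative = ["france"]
analogy(positive, negative, vectors, index_to_key, key_to_index, topn=5)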
7,370
7,449
25
chap15-0
chap15-0
15 Implementing Encoder-decoder Methods In this chapter we implement a machine translation application as an example of an encoder-decoder task. In particular, we build on pre-trained encoder-decoder transformer models, which exist in the Hugging Face library for a wide variety of language pairs. We first show how to use one of these models out-of-the-box to perform translation for one of the language pairs it has been exposed to during pre-training: English to Romanian. Afterwards, we fine-tune the model on a new language combination that it has not seen before: Romanian to English. In both use cases, we use the T5 encoder-decoder model, which has been pre-trained for several tasks, including machine translation (Raffel et al., 2020). Please see Chapter 16 for a description of T5's pre-training process. The data for this task comes from the WMT 2016 dataset (Bojar et al., 2016), which consists of English sentences aligned pairwise to German, Czech, Russian, Finnish, Romanian, and Turkish. In this chapter we only use the English-Romanian texts (in both directions). 15.1 Translating English to Romanian As a first example, we use T5 to translate from English to Romanian, which is one of the language pairs it has been exposed to during pre-training. The code discussed in this section is available in the notebook chap15_translation_en_to_ro. Even though in this exercise we are not fine-tuning the model, we still need to define a few hyperparameters to frame the task and help the model understand how to work with the data: The above settings indicate that we use the t5-small model, a smaller T5 variant, to minimize the amount of memory required. The source_lang and target_lang variables define the direction of translation, i.e., from English to Romanian. To keep our computing requirements small, we limit the length of our input and output. That is, English text longer than max_source_length tokens will be truncated. Further, we limit our generated Romanian text to max_target_length. We chose a maximum target length of 128 tokens to limit the computational cost incurred during text generation (recall that the text is generated one token at a time). The T5 models are trained to support multiple tasks such as translation and summarization (please see Chapter 16 for details). Thus, during training and inference, the user must specify which task the model should perform using a text prefix. Here we use the prefix "translate English to Romanian: " to indicate that the input text is in English and should be translated to Romanian. Next, we load the model and the corresponding tokenizer, and move them to the GPU if one is available: We use the datasets library to load our translation dataset. Note that the first time one calls load_dataset() the dataset will be downloaded automatically from the Hugging Face repository (1: https://huggingface.co/datasets/wmt16). The load_dataset() function takes a dataset name and configuration, which in our case are wmt16 and ro-en, respectively. Since in this example we are only evaluating the model, we only load the test partition (or split) of the dataset: The dataset consists of a single column called translation. Each element in this column is a dictionary that contains the aligned pair. The dictionary keys are the abbreviated language names and the values are the corresponding sentences. An example of one of these dictionaries is shown below: We encapsulate the logic for translating the English text into Romanian in a function called translate().
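For reference, the hyperparameter cell referred to above ("The above settings indicate…") contains the following values, reproduced here from the chap15_translation_en_to_ro notebook whose complete code is included below:

transformer_name = 't5-small'
source_lang = 'en'
target_lang = 'ro'
max_source_length = 1024
max_target_length = 128
task_prefix = 'translate English to Romanian: '
num_beams = 1
batch_size = 100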
Inside this function, for a batch of aligned pairs, we select the English sentence as our input, and prepend the task prefix. Then we tokenize these inputs, including the prefix, specifying that sentences longer than max_source_length should be truncated, the batch should be padded, and the tokenizer should return PyTorch tensors. Once the tokenizer output has been moved to the GPU, we pass it to the model's generate() method. This is the first time we have seen this method, because only decoder and encoder-decoder models support it. This method generates an output sequence by predicting one token at a time, stopping when either the end-of-sequence token is produced or when the sequence reaches a maximum length. Several generation techniques are supported, such as beam search, in which several alternate translations are maintained by the model so that it is able to select an overall best translation from several options. For efficiency purposes, we use a greedy approach, which chooses the best token at each step of the generation. This is equivalent to using a beam search with a beam of size one. Since the model generates its predictions as a sequence of token ids, we need to convert them back into the corresponding tokens to be able to read the translated text. We do this using the tokenizer's batch_decode() method. Finally, we return the gold and predicted Romanian sentences in a dictionary: Next, we apply our translate() function to our Dataset to translate all the sentences. The result is a table with 1,999 rows and two columns, reference and prediction, holding the gold and the generated Romanian sentences, respectively; for example, the reference "Șeful ONU declară că nu există soluții militar..." is paired with the prediction "eful ONU declară că nu există o soluţie milita...". We evaluate the quality of these translations using the BLEU metric, which we introduced in Chapter 14. To this end, we load an existing implementation of BLEU from the datasets library as a Metric object (2: https://huggingface.co/docs/datasets/v2.4.0/en/package_reference/main_classes#datasets.Metric). Metric objects have a method called add(), which is used to accumulate the predictions and gold labels, one example at a time. After accumulating all examples, the compute() method returns the results of the evaluation. Note that for each predicted sentence, BLEU expects a list of reference sentences (as there are often many correct ways of translating a given text). Since we only have one reference, we wrap it in a list before passing it to the metric: The score corresponds to the BLEU score. The rest of the items correspond to the components required to compute the score.
That is, the counts, totals, and precisions correspond to the counts, totals, and precisions for 1-, 2-, 3-, and 4-grams. The bp is the brevity penalty. The sys_len and ref_len correspond to the prediction and reference lengths. The above BLEU score of 25.2% is slightly lower than the state of the art, but we are being penalized by the peculiarities of diacritic usage in Romanian characters. For example, the letters ș and ț (corresponding to the sounds sh and ts in English) are usually spelled with a comma below the characters s and t, which is the standard imposed by the Romanian Academy. However, in “the wild” these characters are often written using a cedilla instead of a comma, e.g., ţ instead of ț (or, using the names of these Unicode characters, LATIN SMALL LETTER T WITH CEDILLA instead of LATIN SMALL LETTER T WITH COMMA BELOW). Further, some of these characters with diacritics are often omitted altogether in the T5 output. The T5 output shown above contains an example for each of these two situations (e.g., soluţi(e) instead of soluți(i), and eful instead of Șeful). To avoid being penalized at scoring time for these arbitrary discrepancies, post-processing scripts are sometimes used to normalize diacritic usage (3: https://github.com/huggingface/transformers/blob/main/examples/legacy/seq2seq/romanian_postprocessing.md). Usage of such post-processing scripts can improve the BLEU score substantially. However, this is beyond the scope of this chapter. 15.2 Implementation of Greedy Generation To gain a better intuition of how the encoder-decoder model generates its output sequence, we show below an implementation of the greedy version of the generate() method used above. This function takes as an argument a single English text (i.e., no batching) and returns the corresponding Romanian text: This function interacts directly with the encoder and decoder components of the T5 model, so we must construct the input for both. The encoder's input is constructed by prepending the task prefix to the English text and tokenizing it. On the other hand, the decoder's input is constructed incrementally by accumulating the tokens predicted so far in order to predict the next token in the sequence. At the beginning, before any tokens are predicted, the decoder's input is initialized with a single token that corresponds to the beginning of the sequence. We retrieve this token, called decoder_start_token_id, from the model's configuration object. The tokens are predicted one at a time, until the model produces eos_token_id, which indicates that the sequence is finished. However, in case the model does not produce this end-of-sequence token within a reasonable number of steps, we also enforce a maximum number of predicted tokens, determined by the max_target_length parameter we defined previously. The T5 model's forward() method, called indirectly through its __call__() method, takes the inputs for both the encoder and the decoder. The output returned by this method corresponds to all the tokens in the decoder's input plus an extra one: the newly predicted token. To select the best prediction, we retrieve the logits from the output and select the logits corresponding to the last token in the sequence (recall that the output shape is (batch size, sequence length, vocabulary size)). From these selected logits, we use argmax() to select the token id corresponding to the highest-scoring vocabulary item.
We append this new token id to the decoder's input, and repeat the process until we encounter the end-of-sequence token or the decoded text reaches the maximum length. Once we are finished generating token ids, we retrieve the corresponding text by calling the tokenizer's decode() method. This method is identical to the batch_decode() method we used previously, except that it only decodes a single example. Below is a usage example for the greedy_translation() function: 15.3 Fine-tuning Romanian to English Translation In this section, we fine-tune a T5 model on the translation of Romanian to English, a language pair that was not included in the T5 pre-training. To confirm that this data was not included in pre-training, we evaluated the performance of the vanilla t5-small model on the translation from Romanian to English using code equivalent to the code discussed in the previous section (see the chap15_translation_ro_to_en notebook). The resulting BLEU score was only 3.2%, which is substantially lower than the score we obtained when translating English to Romanian (25.2%). Note that the transformers library includes scripts to fine-tune a translation model directly from the command line (4: https://github.com/huggingface/transformers/tree/main/examples/pytorch/translation). For didactic purposes, we will not use these scripts in this section, but instead write the fine-tuning code explicitly. For this exercise, we continue using the WMT16 dataset, but this time
we load the train and validation splits. We employ the same t5-small model that we used previously. The code from the last section to load
the model, tokenizer, and dataset does not need to change for this use case, so we do not repeat it here. However, as before, the complete code is available in a Jupyter notebook (chap15_translation_ro_to_en_finetune). We begin by tokenizing the source (Romanian) and target language (English) texts. As in the last section, we need to prepend the task prefix to the source texts prior to tokenizing. This time, since we are translating in the opposite direction, we use the prefix "translate Romanian to English: ", and we prepend it to the Romanian text. Each call to the tokenizer with a batch of texts produces input_ids and an attention_mask. This output is what we need for the Romanian text, which will serve as the input to the model. To generate the labels, i.e., the correct translated tokens, we use the input_ids corresponding to the English text. Recall that "labels" is the default key name expected by trainers in Hugging Face. We apply our tokenize() function to both the train and validation splits. The result is a tokenized dataset with 610,320 rows and three columns: input_ids, attention_mask, and labels. Each input_ids sequence begins with the token ids of the task prefix (13959, 3871, 29, 12, 1566, 10, ...), the attention_mask entries contain ones for all non-padded positions, and the labels hold the token ids of the corresponding English sentences. Recall that in order to construct a trainer, we need a data collator for batching, a function to compute the metrics of interest, and a TrainingArguments object. In this section, we use a data collator called DataCollatorForSeq2Seq, which is included in the transformers library specifically for sequence-to-sequence models. The collator pads the batches using the label_pad_token_id, which we have set to −100, as we did in Chapter 13 (this is the default ignore_index value used by CrossEntropyLoss): The compute_metrics() function computes the BLEU score. It uses the tokenizer to decode the token ids into text, for both the predicted and gold labels, ignoring padding: We use the Seq2SeqTrainingArguments class, which adds the predict_with_generate parameter to the regular TrainingArguments class. This is needed to indicate that the trainer should use the generate() method for inference in order to compute the metrics (BLEU in this case): Finally, we construct the trainer using the Seq2SeqTrainer class, which is a subclass of Trainer that adds the ability to compute scores such as BLEU during training by calling generate() during evaluation: Fine-tuning a translation model takes considerably longer than training or fine-tuning the models we have developed so far in this book. To account for this, here we add support for resuming training from a checkpoint, i.e., a model that was saved after training on a number of examples. Similar to how one can resume a video game, this allows one to pick up from the last "save point," in case training was interrupted and needs to be resumed: When calling the trainer's train() method, we either provide a model checkpoint or None. In the former case, the trainer will continue training from the provided checkpoint. In the latter case, the trainer will begin training from scratch. Once the training has completed, we save the trained model and tokenizer using the trainer's save_model() method into the output directory: We then compute and save the metrics corresponding to the training partition. This is not required, but it is helpful to keep a record of the model's performance on the training data. Note that the metrics do not automatically include the number of examples in the training partition, so we add them explicitly: Next, we evaluate our final model on the validation data and save the corresponding metrics. These metrics indicate that our BLEU score on the validation data is 35.2%, which is evidence that fine-tuning has helped dramatically: Lastly, we save a model card into our output directory. A model card is akin to an automatically-generated README file that includes information about the model used, the data, settings used, and performance throughout the training process. This file is helpful for reproducibility as it contains all of this key information in one place. These cards are often uploaded to the Hugging Face Hub together with the model itself (5: we do not discuss the model uploading process here; please see the documentation on model sharing at https://huggingface.co/docs/transformers/v4.14.1/model_sharing). 15.4 Using a Previously Saved Model Models that have been saved locally can be loaded using the same from_pretrained() methods we have used before. In particular, instead of providing a model name, we provide the path to the local directory where the model is stored, using the local_files_only parameter to indicate that we want to load the model from the local file system instead of downloading it from the Hugging Face Hub (make sure you use an output directory that is valid on your machine!); a minimal sketch of this loading step is included after the summary below. Once our fine-tuned model is loaded, we use it the same way as before. That is, we use the translate() function to generate translations for our test partition. Then we use the BLEU metric to score this output. From this metric, we obtain the final BLEU score of 33.4%, which is markedly better than our initial score (i.e., without fine-tuning) of 3.2%! The code corresponding to this section is available in the notebook chap15_translation_ro_to_en_finetuned. 15.5 Summary In this chapter we used a complete encoder-decoder transformer network to implement a machine translation application. Importantly, transformers with a decoder component have a generate() method that simplifies the generation process and provides multiple options for decoding.
We encourage you to explore these options! For example, try comparing the quality of the output with the resources required to produce it (e.g., runtime overhead) when the size of the search beam increases. Additionally, we saw how to fine-tune an encoder-decoder model on a new language pair that it has not seen during its pre-training. This exercise included using checkpoints to support resuming training in case of unexpected interruptions, saving our fine-tuned model, and loading it for later use.
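As promised in Section 15.4 above, here is a minimal sketch of loading the fine-tuned model from a local directory. The path shown is the placeholder used in the fine-tuning notebook and must be replaced with a directory that is valid on your machine; once loaded, the model and tokenizer are used exactly as before (e.g., with the translate() function from the first notebook in this chapter):

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# placeholder path: use the output_dir that was passed to the trainer during fine-tuning
output_dir = '/media/data2/t5-translation-example'

# local_files_only=True loads from the local file system instead of the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained(output_dir, local_files_only=True)
model = AutoModelForSeq2SeqLM.from_pretrained(output_dir, local_files_only=True)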
4,780
5,004
#!/usr/bin/env python
# coding: utf-8

# # Machine Translation from English (En) to Romanian (Ro)
# # Using the T5 Transformer without Fine-tuning

# Some initialization:

# In[1]:

import torch
import numpy as np
from transformers import set_seed

# set to True to use the gpu (if there is one available)
use_gpu = True

# select device
device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu')
print(f'device: {device.type}')

# random seed
seed = 42

# set random seed
if seed is not None:
    print(f'random seed: {seed}')
    set_seed(seed)


# In[2]:

transformer_name = 't5-small'
source_lang = 'en'
target_lang = 'ro'
max_source_length = 1024
max_target_length = 128
task_prefix = 'translate English to Romanian: '
num_beams = 1
batch_size = 100


# Load tokenizer and pre-trained model:

# In[3]:

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained(transformer_name)
model = AutoModelForSeq2SeqLM.from_pretrained(transformer_name)
model = model.to(device)


# Load dataset from HuggingFace:

# In[4]:

from datasets import load_dataset

test_ds = load_dataset('wmt16', 'ro-en', split='test')
test_ds


# In[5]:

test_ds['translation'][0]


# Implement the `translate` method and apply it to this dataset:

# In[6]:

def translate(batch):
    # get source language examples and prepend task prefix
    inputs = [x[source_lang] for x in batch["translation"]]
    inputs = [task_prefix + x for x in inputs]
    # tokenize inputs
    encoded = tokenizer(
        inputs,
        max_length=max_source_length,
        truncation=True,
        padding=True,
        return_tensors='pt',
    )
    # move data to gpu if needed
    input_ids = encoded.input_ids.to(device)
    attention_mask = encoded.attention_mask.to(device)
    # generate translated sentences
    output = model.generate(
        input_ids=input_ids,
        attention_mask=attention_mask,
        num_beams=num_beams,
        max_length=max_target_length,
    )
    # generate predicted sentences from predicted token ids
    decoded = tokenizer.batch_decode(
        output,
        skip_special_tokens=True,
    )
    # get gold sentences in target language
    targets = [x[target_lang] for x in batch["translation"]]
    # return gold and predicted sentences
    return {
        'reference': targets,
        'prediction': decoded,
    }


# In[7]:

results = test_ds.map(
    translate,
    batched=True,
    batch_size=batch_size,
    remove_columns=test_ds.column_names,
)
results.to_pandas()


# Now evaluate the quality of translations using the BLEU metric:

# In[8]:

from datasets import load_metric

metric = load_metric('sacrebleu')

for r in results:
    prediction = r['prediction']
    reference = [r['reference']]
    metric.add(prediction=prediction, reference=reference)

metric.compute()


# An example of greedy decoding for individual texts:

# In[9]:

def greedy_translation(text):
    # prepend task prefix
    text = task_prefix + text
    # tokenize input
    encoded = tokenizer(
        text,
        max_length=max_source_length,
        truncation=True,
        return_tensors='pt',
    )
    # encoder input ids
    encoder_input_ids = encoded.input_ids.to(device)
    # decoder input ids, initialized with start token id
    start = model.config.decoder_start_token_id
    decoder_input_ids = torch.LongTensor([[start]]).to(device)
    # generate tokens, one at a time
    for _ in range(max_target_length):
        # get model predictions
        output = model(
            encoder_input_ids,
            decoder_input_ids=decoder_input_ids,
        )
        # get logits for last token
        next_token_logits = output.logits[0, -1, :]
        # select most probable token
        next_token_id = torch.argmax(next_token_logits)
        # append new token to decoder_input_ids
        output_id = torch.LongTensor([[next_token_id]]).to(device)
        decoder_input_ids = torch.cat([decoder_input_ids, output_id], dim=-1)
        # if predicted token is the end of sequence, stop iterating
        if next_token_id == tokenizer.eos_token_id:
            break
    # return text corresponding to predicted token ids
    return tokenizer.decode(decoder_input_ids[0], skip_special_tokens=True)


# In[10]:

greedy_translation("this is a test")
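The notebook above sets num_beams = 1, i.e., greedy decoding. As a small illustration of the exploration suggested in this chapter's summary, the snippet below translates a single sentence with a beam of size 5; the beam size and the example sentence are arbitrary choices, and the snippet reuses the model, tokenizer, device, task_prefix, and max_target_length objects defined in the notebook above:

# translate one sentence with beam search instead of greedy decoding
text = task_prefix + "this is a test"
encoded = tokenizer(text, return_tensors='pt').to(device)
output = model.generate(
    input_ids=encoded.input_ids,
    attention_mask=encoded.attention_mask,
    num_beams=5,            # keep 5 candidate translations during decoding
    early_stopping=True,    # stop once all beams have produced the end-of-sequence token
    max_length=max_target_length,
)
print(tokenizer.batch_decode(output, skip_special_tokens=True)[0])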
2,097
2,191
0
chap15-1
chap15-1
15,290
15,435
#!/usr/bin/env python
# coding: utf-8

# # Machine Translation from Ro to En
# # Using the T5 Transformer with Fine-tuning

# Some initialization:

# In[1]:

import torch
import numpy as np
from transformers import set_seed

# random seed
seed = 42

# set random seed
if seed is not None:
    print(f'random seed: {seed}')
    set_seed(seed)


# In[2]:

transformer_name = 't5-small'
dataset_name = 'wmt16'
dataset_config_name = 'ro-en'
source_lang = 'ro'
target_lang = 'en'
max_source_length = 1024
max_target_length = 128
task_prefix = 'translate Romanian to English: '
batch_size = 4
label_pad_token_id = -100
save_steps = 25_000
num_beams = 1
learning_rate = 1e-3
num_train_epochs = 3
output_dir = '/media/data2/t5-translation-example'  # make sure this is a valid path on your machine!


# Load dataset from HuggingFace:

# In[3]:

from datasets import load_dataset

wmt16 = load_dataset(dataset_name, dataset_config_name)


# Load tokenizer and pre-trained model:

# In[4]:

from transformers import AutoConfig, AutoTokenizer, AutoModelForSeq2SeqLM

config = AutoConfig.from_pretrained(transformer_name)
tokenizer = AutoTokenizer.from_pretrained(transformer_name)
model = AutoModelForSeq2SeqLM.from_pretrained(transformer_name, config=config)


# Tokenize the texts in the dataset:

# In[5]:

def tokenize(batch):
    # get source sentences and prepend task prefix
    sources = [x[source_lang] for x in batch["translation"]]
    sources = [task_prefix + x for x in sources]
    # tokenize source sentences
    output = tokenizer(
        sources,
        max_length=max_source_length,
        truncation=True,
    )
    # get target sentences
    targets = [x[target_lang] for x in batch["translation"]]
    # tokenize target sentences
    labels = tokenizer(
        targets,
        max_length=max_target_length,
        truncation=True,
    )
    # add targets to output
    output["labels"] = labels["input_ids"]
    return output


# In[6]:

train_dataset = wmt16['train']
eval_dataset = wmt16['validation']

column_names = train_dataset.column_names

train_dataset = train_dataset.map(
    tokenize,
    batched=True,
    remove_columns=column_names,
)
eval_dataset = eval_dataset.map(
    tokenize,
    batched=True,
    remove_columns=column_names,
)


# In[7]:

train_dataset.to_pandas()


# Create `Trainer` object and train:

# In[8]:

from transformers import DataCollatorForSeq2Seq

data_collator = DataCollatorForSeq2Seq(
    tokenizer,
    model=model,
    label_pad_token_id=label_pad_token_id,
)


# In[9]:

from datasets import load_metric

metric = load_metric('sacrebleu')

def compute_metrics(eval_preds):
    preds, labels = eval_preds
    # get text for predictions
    predictions = tokenizer.batch_decode(
        preds,
        skip_special_tokens=True,
    )
    # replace -100 in labels with pad token
    labels = np.where(
        labels != -100,
        labels,
        tokenizer.pad_token_id,
    )
    # get text for gold labels
    references = tokenizer.batch_decode(
        labels,
        skip_special_tokens=True,
    )
    # metric expects list of references for each prediction
    references = [[ref] for ref in references]
    # compute bleu score
    results = metric.compute(
        predictions=predictions,
        references=references,
    )
    results = {'bleu': results['score']}
    return results


# In[10]:

from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir=output_dir,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    save_steps=save_steps,
    predict_with_generate=True,
    evaluation_strategy='steps',
    eval_steps=save_steps,
    learning_rate=learning_rate,
    num_train_epochs=num_train_epochs,
)


# In[11]:

from transformers import Seq2SeqTrainer

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    tokenizer=tokenizer,
    data_collator=data_collator,
    compute_metrics=compute_metrics,
)


# In[12]:

import os
from transformers.trainer_utils import get_last_checkpoint

last_checkpoint = None
if os.path.isdir(output_dir):
    last_checkpoint = get_last_checkpoint(output_dir)
    if last_checkpoint is not None:
        print(f'Checkpoint detected, resuming training at {last_checkpoint}.')


# In[13]:

train_result = trainer.train(resume_from_checkpoint=last_checkpoint)
trainer.save_model()


# In[14]:

metrics = train_result.metrics
metrics['train_samples'] = len(train_dataset)

trainer.log_metrics('train', metrics)
trainer.save_metrics('train', metrics)
trainer.save_state()


# Now evaluate:

# In[15]:

# https://discuss.huggingface.co/t/evaluation-results-metric-during-training-is-different-from-the-evaluation-results-at-the-end/15401
metrics = trainer.evaluate(
    max_length=max_target_length,
    num_beams=num_beams,
    metric_key_prefix='eval',
)
metrics['eval_samples'] = len(eval_dataset)

trainer.log_metrics('eval', metrics)
trainer.save_metrics('eval', metrics)


# Create a model card with metadata about this model:

# In[16]:

kwargs = {
    'finetuned_from': transformer_name,
    'tasks': 'translation',
    'dataset_tags': dataset_name,
    'dataset_args': dataset_config_name,
    'dataset': f'{dataset_name} {dataset_config_name}',
    'language': [source_lang, target_lang],
}
trainer.create_model_card(**kwargs)


# In[ ]:
15 Implementing Encoder-decoder Methods

In this chapter we implement a machine translation application as an example of an encoder-decoder task. In particular, we build on pre-trained encoder-decoder transformer models, which exist in the Hugging Face library for a wide variety of language pairs. We first show how to use one of these models out-of-the-box to perform translation for one of the language pairs it has been exposed to during pre-training: English to Romanian. Afterwards, we fine-tune the model on a new language combination that it has not seen before: Romanian to English. In both use cases, we use the T5 encoder-decoder model, which has been pre-trained for several tasks, including machine translation (Raffel et al., 2020). Please see Chapter 16 for a description of T5's pre-training process. The data for this task comes from the WMT 2016 dataset (Bojar et al., 2016), which consists of English sentences aligned pairwise to German, Czech, Russian, Finnish, Romanian, and Turkish. In this chapter we only use the English-Romanian texts (in both directions).

15.1 Translating English to Romanian

As a first example, we use T5 to translate from English to Romanian, which is one of the language pairs it has been exposed to during pre-training. The code discussed in this section is available in the notebook chap15_translation_en_to_ro. Even though in this exercise we are not fine-tuning the model, we still need to define a few hyperparameters to frame the task and help the model understand how to work with the data. These settings indicate that we use the t5-small model, a smaller T5 variant, to minimize the amount of memory required. The source_lang and target_lang variables define the direction of translation, i.e., from English to Romanian. To keep our computing requirements small, we limit the length of our input and output. That is, English text longer than max_source_length tokens will be truncated. Further, we limit our generated Romanian text to max_target_length. We chose a maximum target length of 128 tokens to limit the computational cost incurred during text generation (recall that the text is generated one token at a time). The T5 models are trained to support multiple tasks, such as translation and summarization (please see Chapter 16 for details). Thus, during training and inference, the user must specify which task the model should perform using a text prefix. Here we use the prefix "translate English to Romanian: " to indicate that the input text is in English and should be translated to Romanian. Next, we load the model and the corresponding tokenizer, and move them to the GPU if one is available. We use the datasets library to load our translation dataset. Note that the first time one calls load_dataset() the dataset will be downloaded automatically from the Hugging Face repository.1 The load_dataset() function takes a dataset name and configuration, which in our case are wmt16 and ro-en, respectively. Since in this example we are only evaluating the model, we only load the test partition (or split) of the dataset. The dataset consists of a single column called translation. Each element in this column is a dictionary that contains the aligned pair. The dictionary keys are the abbreviated language names and the values are the corresponding sentences.

1 https://huggingface.co/datasets/wmt16

The settings, the loading steps, and an example of one of these dictionaries are condensed in the sketch below:
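This sketch is condensed from the chap15_translation_en_to_ro notebook reproduced later in this chapter; the device selection is simplified slightly relative to the notebook:

import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from datasets import load_dataset

# hyperparameters that frame the translation task
transformer_name = 't5-small'
source_lang = 'en'
target_lang = 'ro'
max_source_length = 1024
max_target_length = 128
task_prefix = 'translate English to Romanian: '
num_beams = 1
batch_size = 100

# use the GPU if one is available
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# load the tokenizer and the pre-trained model, and move the model to the device
tokenizer = AutoTokenizer.from_pretrained(transformer_name)
model = AutoModelForSeq2SeqLM.from_pretrained(transformer_name).to(device)

# load only the test split of the WMT16 Romanian-English dataset
test_ds = load_dataset('wmt16', 'ro-en', split='test')

# inspect one aligned pair: a dictionary keyed by the abbreviated language names
print(test_ds['translation'][0])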
We encapsulate the logic for translating the English text into Romanian in a function called translate(). Inside this function, for a batch of aligned pairs, we select the English sentence as our input and prepend the task prefix. Then we tokenize these inputs, including the prefix, specifying that sentences longer than max_source_length should be truncated, that the batch should be padded, and that the tokenizer should return PyTorch tensors. Once the tokenizer output has been moved to the GPU, we pass it to the model's generate() method. This is the first time we have seen this method, because only decoder and encoder-decoder models support it. This method generates an output sequence by predicting one token at a time, stopping when either the end-of-sequence token is produced or when the sequence reaches a maximum length. Several generation techniques are supported, such as beam search, in which several alternate translations are maintained by the model so that it is able to select an overall best translation from several options. For efficiency purposes, we use a greedy approach, which chooses the best token at each step of the generation. This is equivalent to using a beam search with a beam of size one. Since the model generates its predictions as a sequence of token ids, we need to convert them back into the corresponding tokens to be able to read the translated text. We do this using the tokenizer's batch_decode() method. Finally, we return the gold and predicted Romanian sentences in a dictionary. Next, we apply our translate() function to our Dataset to translate all the sentences. Converted to a table, the result has 1,999 rows and two columns (reference and prediction); for example:

reference: Șeful ONU declară că nu există soluții militar...
prediction: eful ONU declară că nu există o soluţie milita...

reference: Șeful ONU a solicitat din nou tuturor părților...
prediction: eful U.N. a cerut din nou tuturor partidelor, ...

We evaluate the quality of these translations using the BLEU metric, which we introduced in Chapter 14. To this end, we load an existing implementation of BLEU from the datasets library as a Metric object.2 Metric objects have a method called add(), which is used to accumulate the predictions and gold labels, one example at a time. After accumulating all examples, the compute() method returns the results of the evaluation. Note that for each predicted sentence, BLEU expects a list of reference sentences (as there are often many correct ways of translating a given text). Since we only have one reference, we wrap it in a list before passing it to the metric.

2 https://huggingface.co/docs/datasets/v2.4.0/en/package_reference/main_classes#datasets.Metric

The translation and evaluation steps are condensed in the sketch below:
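The following sketch is condensed from the chap15_translation_en_to_ro notebook reproduced later in this chapter; it assumes the tokenizer, model, device, and hyperparameters defined in the previous sketch:

from datasets import load_metric

def translate(batch):
    # prepend the task prefix to each English sentence in the batch
    inputs = [task_prefix + x[source_lang] for x in batch['translation']]
    # tokenize the inputs, truncating and padding as needed
    encoded = tokenizer(
        inputs,
        max_length=max_source_length,
        truncation=True,
        padding=True,
        return_tensors='pt',
    )
    # generate translations with greedy decoding (a beam of size one)
    output = model.generate(
        input_ids=encoded.input_ids.to(device),
        attention_mask=encoded.attention_mask.to(device),
        num_beams=num_beams,
        max_length=max_target_length,
    )
    # convert the predicted token ids back into text
    decoded = tokenizer.batch_decode(output, skip_special_tokens=True)
    # return the gold and predicted Romanian sentences
    return {
        'reference': [x[target_lang] for x in batch['translation']],
        'prediction': decoded,
    }

# translate the whole test partition in batches
results = test_ds.map(
    translate,
    batched=True,
    batch_size=batch_size,
    remove_columns=test_ds.column_names,
)

# accumulate predictions and single-element lists of references, then score
metric = load_metric('sacrebleu')
for r in results:
    metric.add(prediction=r['prediction'], reference=[r['reference']])
metric.compute()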
The score corresponds to the BLEU score. The rest of the items correspond to the components required to compute the score. That is, the counts, totals, and precisions correspond to the counts, totals, and precisions for 1-, 2-, 3-, and 4-grams. The bp is the brevity penalty. The sys_len and ref_len correspond to the prediction and reference lengths. The above BLEU score of 25.2% is slightly lower than the state of the art, but we are being penalized by the peculiarities of diacritic usage in Romanian characters. For example, the letters ș and ț (corresponding to the sounds sh and ts in English) are usually spelled with a comma below the characters s and t, which is the standard imposed by the Romanian Academy. However, in "the wild" these characters are often written using a cedilla instead of a comma, e.g., ţ instead of ț (or, using the names of these Unicode characters, LATIN SMALL LETTER T WITH CEDILLA instead of LATIN SMALL LETTER T WITH COMMA BELOW). Further, some of these characters with diacritics are often omitted altogether in the T5 output. The T5 output shown earlier contains an example of each of these two situations (e.g., soluţi(e) instead of soluți(i), and eful instead of Șeful). To avoid being penalized at scoring time for these arbitrary discrepancies, post-processing scripts are sometimes used to normalize diacritic usage.3 Usage of such post-processing scripts can improve the BLEU score substantially. However, this is beyond the scope of this chapter.

3 https://github.com/huggingface/transformers/blob/main/examples/legacy/seq2seq/romanian_postprocessing.md

15.2 Implementation of Greedy Generation

To gain a better intuition of how the encoder-decoder model generates its output sequence, we show below an implementation of the greedy version of the generate() method used above. This function takes as an argument a single English text (i.e., no batching) and returns the corresponding Romanian text:
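The following version is condensed from the greedy_translation() function in the chap15_translation_en_to_ro notebook reproduced later in this chapter:

def greedy_translation(text):
    # tokenize the source text, with the task prefix prepended
    encoded = tokenizer(
        task_prefix + text,
        max_length=max_source_length,
        truncation=True,
        return_tensors='pt',
    )
    encoder_input_ids = encoded.input_ids.to(device)
    # initialize the decoder input with the start-of-sequence token
    start = model.config.decoder_start_token_id
    decoder_input_ids = torch.LongTensor([[start]]).to(device)
    # predict one token at a time, up to max_target_length tokens
    for _ in range(max_target_length):
        output = model(encoder_input_ids, decoder_input_ids=decoder_input_ids)
        # keep only the logits for the last position in the sequence
        next_token_logits = output.logits[0, -1, :]
        # greedily select the highest-scoring vocabulary item
        next_token_id = torch.argmax(next_token_logits)
        # append the new token to the decoder input
        output_id = torch.LongTensor([[next_token_id]]).to(device)
        decoder_input_ids = torch.cat([decoder_input_ids, output_id], dim=-1)
        # stop when the end-of-sequence token is produced
        if next_token_id == tokenizer.eos_token_id:
            break
    # convert the generated token ids back into text
    return tokenizer.decode(decoder_input_ids[0], skip_special_tokens=True)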
This function interacts directly with the encoder and decoder components of the T5 model, so we must construct the input for both. The encoder's input is constructed by prepending the task prefix to the English text and tokenizing it. On the other hand, the decoder's input is constructed incrementally by accumulating the tokens predicted so far in order to predict the next token in the sequence. At the beginning, before any tokens are predicted, the decoder's input is initialized with a single token that corresponds to the beginning of the sequence. We retrieve this token, called decoder_start_token_id, from the model's configuration object. The tokens are predicted one at a time, until the model produces eos_token_id, which indicates that the sequence is finished. However, in case the model does not produce this end-of-sequence token within a reasonable number of steps, we also enforce a maximum number of predicted tokens, determined by the max_target_length parameter we defined previously. The T5 model's forward() method, called indirectly through its __call__() method, takes the inputs for both the encoder and the decoder. The output returned by this method corresponds to all the tokens in the decoder's input plus an extra one: the newly predicted token. To select the best prediction, we retrieve the logits from the output and select the logits corresponding to the last token in the sequence (recall that the output shape is (batch size, sequence length, vocabulary size)). From these selected logits, we use argmax() to select the token id corresponding to the highest-scoring vocabulary item. We append this new token id to the decoder's input, and repeat the process until we encounter the end-of-sequence token or the decoded text reaches the maximum length. Once we are finished generating token ids, we retrieve the corresponding text by calling the tokenizer's decode() method. This method is identical to the batch_decode() method we used previously, except that it only decodes a single example. A usage example for the greedy_translation() function appears at the end of the chap15_translation_en_to_ro notebook: greedy_translation("this is a test").

15.3 Fine-tuning Romanian to English Translation

In this section, we fine-tune a T5 model on the translation of Romanian to English, a language pair that was not included in the T5 pre-training. To confirm that this data was not included in pre-training, we evaluated the performance of the vanilla t5-small model on the translation from Romanian to English using code equivalent to the code discussed in the previous section (see the chap15_translation_ro_to_en notebook). The resulting BLEU score was only 3.2%, which is substantially lower than the score we obtained when translating English to Romanian (25.2%). Note that the transformers library includes scripts to fine-tune a translation model directly from the command line.4 For didactic purposes, we will not use these scripts in this section, but instead write the fine-tuning code explicitly.

4 https://github.com/huggingface/transformers/tree/main/examples/pytorch/translation

For this exercise, we continue using the WMT16 dataset, but this time
we load the train and validation splits. We employ the same t5-small model that we used previously. The code from the last section to load
the model, tokenizer, and dataset does not need to change for this use
case, so we do not repeat it here. However, as before, the complete code is available in a Jupyter notebook (chap15_translation_ro_to_en_finetune). We begin by tokenizing the source (Romanian) and target (English) texts. As in the last section, we need to prepend the task prefix to the source texts prior to tokenizing. This time, since we are translating in the opposite direction, we use the prefix "translate Romanian to English: ", and we prepend it to the Romanian text. Each call to the tokenizer with a batch of texts produces input_ids and an attention_mask. This output is what we need for the Romanian text, which will serve as the input to the model. To generate the labels, i.e., the correct translated tokens, we use the input_ids corresponding to the English text. Recall that "labels" is the default key name expected by trainers in Hugging Face. We apply our tokenize() function to both the train and validation splits. The result is a table with 610,320 rows and three columns (input_ids, attention_mask, and labels); for example, the first row contains the input_ids [13959, 3871, 29, 12, 1566, 10, 4961, 106, 204...], an attention_mask consisting of ones, and the labels [19428, 13, 12876, 10, 217, 13687, 7, 1]. Recall that in order to construct a trainer, we need a data collator for batching, a function to compute the metrics of interest, and a TrainingArguments object. In this section, we use a data collator called DataCollatorForSeq2Seq, which is included in the transformers library specifically for sequence-to-sequence models. The collator pads the batches using the label_pad_token_id, which we have set to −100, as we did in Chapter 13 (this is the default ignore_index value used by CrossEntropyLoss). The compute_metrics() function computes the BLEU score. It uses the tokenizer to decode the token ids into text, for both the predicted and gold labels, ignoring padding. We use the Seq2SeqTrainingArguments class, which adds the predict_with_generate parameter to the regular TrainingArguments class. This is needed to indicate that the trainer should use the generate() method for inference in order to compute the metrics (BLEU in this case):
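The corresponding cell from the fine-tuning notebook included with this chapter (chap15_translation_ro_to_en_finetune):

from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir=output_dir,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    save_steps=save_steps,
    predict_with_generate=True,
    evaluation_strategy='steps',
    eval_steps=save_steps,
    learning_rate=learning_rate,
    num_train_epochs=num_train_epochs,
)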
Finally, we construct the trainer using the Seq2SeqTrainer class, which is a subclass of Trainer that adds the ability to compute scores such as BLEU during training by calling generate() during evaluation. Fine-tuning a translation model takes considerably longer than training or fine-tuning the models we have developed so far in this book. To account for this, here we add support for resuming training from a checkpoint, i.e., a model that was saved after training on a number of examples. Similar to how one can resume a video game, this allows one to pick up from the last "save point," in case training was interrupted and needs to be resumed. When calling the trainer's train() method, we either provide a model checkpoint or None. In the former case, the trainer will continue training from the provided checkpoint. In the latter case, the trainer will begin training from scratch. Once the training has completed, we save the trained model and tokenizer into the output directory using the trainer's save_model() method. We then compute and save the metrics corresponding to the training partition. This is not required, but it is helpful to keep a record of the model's performance on the training data. Note that the metrics do not automatically include the number of examples in the training partition, so we add them explicitly. Next, we evaluate our final model on the validation data and save the corresponding metrics. These metrics indicate that our BLEU score on the validation data is 35.2%, which is evidence that fine-tuning has helped dramatically. Lastly, we save a model card into our output directory. A model card is akin to an automatically generated README file that includes information about the model used, the data, the settings used, and performance throughout the training process. This file is helpful for reproducibility, as it contains all of this key information in one place. These cards are often uploaded to the Hugging Face Hub together with the model itself.5

5 We do not discuss the model uploading process here. Please see the documentation on model sharing at: https://huggingface.co/docs/transformers/v4.14.1/model_sharing.

15.4 Using a Previously Saved Model

Models that have been saved locally can be loaded using the same from_pretrained() methods we have used before. In particular, instead of providing a model name, we provide the path to the local directory where the model is stored, using the local_files_only parameter to indicate that we want to load the model from the local file system instead of downloading it from the Hugging Face Hub (make sure you use an output directory that is valid on your machine!). Once our fine-tuned model is loaded, we use it the same way as before. That is, we use the translate() function to generate translations for our test partition. Then we use the BLEU metric to score this output. From this metric, we obtain a final BLEU score of 33.4%, which is markedly better than our initial score (i.e., without fine-tuning) of 3.2%! The code corresponding to this section is available in the notebook chap15_translation_ro_to_en_finetuned.

15.5 Summary

In this chapter we used a complete encoder-decoder transformer network to implement a machine translation application. Importantly, transformers with a decoder component have a generate() method that simplifies the generation process and provides multiple options for decoding.
We encourage you to explore these options! For example, try comparing the quality of the output with the resources required to produce it (e.g., runtime overhead) when the size of the search beam increases. Additionally, we saw how to fine-tune an encoder-decoder model on a new language pair that it has not seen during its pre-training. This exercise included using checkpoints to support resuming training in case of unexpected interruptions, saving our fine-tuned model, and loading it for later use.
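One way to run the beam-size comparison suggested above is sketched below. It assumes the model, tokenizer, device, task_prefix, and max_target_length variables defined in the chap15_translation_en_to_ro notebook reproduced below; translate_with_beams and the timing loop are illustrative additions, not part of the book's notebooks:

import time

def translate_with_beams(text, num_beams):
    # tokenize the source text with the task prefix and move it to the device
    encoded = tokenizer(task_prefix + text, return_tensors='pt').to(device)
    start = time.time()
    # generate with the requested beam size (a beam of one is greedy decoding)
    output = model.generate(**encoded, num_beams=num_beams, max_length=max_target_length)
    elapsed = time.time() - start
    return tokenizer.batch_decode(output, skip_special_tokens=True)[0], elapsed

for k in (1, 4, 8):
    translation, seconds = translate_with_beams("this is a test", num_beams=k)
    print(f'beam size {k}: {seconds:.2f}s -> {translation}')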
#!/usr/bin/env python
# coding: utf-8

# # Machine Translation from English (En) to Romanian (Ro)
# # Using the T5 Transformer without Fine-tuning

# Some initialization:

# In[1]:

import torch
import numpy as np
from transformers import set_seed

# set to True to use the gpu (if there is one available)
use_gpu = True

# select device
device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu')
print(f'device: {device.type}')

# random seed
seed = 42

# set random seed
if seed is not None:
    print(f'random seed: {seed}')
    set_seed(seed)

# In[2]:

transformer_name = 't5-small'
source_lang = 'en'
target_lang = 'ro'
max_source_length = 1024
max_target_length = 128
task_prefix = 'translate English to Romanian: '
num_beams = 1
batch_size = 100

# Load tokenizer and pre-trained model:

# In[3]:

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained(transformer_name)
model = AutoModelForSeq2SeqLM.from_pretrained(transformer_name)
model = model.to(device)

# Load dataset from HuggingFace:

# In[4]:

from datasets import load_dataset

test_ds = load_dataset('wmt16', 'ro-en', split='test')
test_ds

# In[5]:

test_ds['translation'][0]

# Implement the `translate` method and apply on this dataset:

# In[6]:

def translate(batch):
    # get source language examples and prepend task prefix
    inputs = [x[source_lang] for x in batch["translation"]]
    inputs = [task_prefix + x for x in inputs]
    # tokenize inputs
    encoded = tokenizer(
        inputs,
        max_length=max_source_length,
        truncation=True,
        padding=True,
        return_tensors='pt',
    )
    # move data to gpu if needed
    input_ids = encoded.input_ids.to(device)
    attention_mask = encoded.attention_mask.to(device)
    # generate translated sentences
    output = model.generate(
        input_ids=input_ids,
        attention_mask=attention_mask,
        num_beams=num_beams,
        max_length=max_target_length,
    )
    # generate predicted sentences from predicted token ids
    decoded = tokenizer.batch_decode(
        output,
        skip_special_tokens=True,
    )
    # get gold sentences in target language
    targets = [x[target_lang] for x in batch["translation"]]
    # return gold and predicted sentences
    return {
        'reference': targets,
        'prediction': decoded,
    }

# In[7]:

results = test_ds.map(
    translate,
    batched=True,
    batch_size=batch_size,
    remove_columns=test_ds.column_names,
)
results.to_pandas()

# Now evaluate the quality of translations using the BLEU metric:

# In[8]:

from datasets import load_metric

metric = load_metric('sacrebleu')

for r in results:
    prediction = r['prediction']
    reference = [r['reference']]
    metric.add(prediction=prediction, reference=reference)

metric.compute()

# An example of greedy decoding for individual texts:

# In[9]:

def greedy_translation(text):
    # prepend task prefix
    text = task_prefix + text
    # tokenize input
    encoded = tokenizer(
        text,
        max_length=max_source_length,
        truncation=True,
        return_tensors='pt',
    )
    # encoder input ids
    encoder_input_ids = encoded.input_ids.to(device)
    # decoder input ids, initialized with start token id
    start = model.config.decoder_start_token_id
    decoder_input_ids = torch.LongTensor([[start]]).to(device)
    # generate tokens, one at a time
    for _ in range(max_target_length):
        # get model predictions
        output = model(
            encoder_input_ids,
            decoder_input_ids=decoder_input_ids,
        )
        # get logits for last token
        next_token_logits = output.logits[0, -1, :]
        # select most probable token
        next_token_id = torch.argmax(next_token_logits)
        # append new token to decoder_input_ids
        output_id = torch.LongTensor([[next_token_id]]).to(device)
        decoder_input_ids = torch.cat([decoder_input_ids, output_id], dim=-1)
        # if predicted token is the end of sequence, stop iterating
        if next_token_id == tokenizer.eos_token_id:
            break
    # return text corresponding to predicted token ids
    return tokenizer.decode(decoder_input_ids[0], skip_special_tokens=True)

# In[10]:

greedy_translation("this is a test")
15 Implementing Encoder-decoder Methods In this chapter we implement a machine translation application as an example of an encoder-decoder task. In particular, we build on pre-trained encoder-decoder transformer models, which exist in the Hugging Face library for a wide variety of language pairs. We first show how to use one of these models out-of-the-box to perform translation for one of the language pairs it has been exposed to during pre-training: English to Romanian. Afterwards, we fine-tune the model to a new language combination that is has not seen before: Romanian to English. In both use cases, we use the T5 encoder-decoder model, which has been pre-trained for several tasks, including machine translation (Raffel et al., 2020). Please see Chapter 16 for a description of T5’s pre-training process. The data for this task comes from the WMT 2016 dataset (Bojar et al., 2016), which consists of English sentences aligned pairwise to German, Czech, Russian, Finnish, Romanian, and Turkish. In this chapter we only use the English-Romanian texts (in both directions). 15.1 Translating English to Romanian As a first example, we use T5 to translate from English to Romanian, which is one of the language pairs it has been exposed to during pretraining. The code discussed in this section is available in the notebook chap15_translation_en_to_ro. Even though in this exercise we are not fine-tuning the model, we still need to define a few hyper parameters to frame the task and help the model understand how to work with the data: The above settings indicate that we use the t5-small model, a smaller T5 variant, to minimize the amount of memory required. The source_lang 212 15.1 Translating English to Romanian 213 and target_lang variables define the direction of translation, i.e., from English to Romanian. To keep our computing requirements small, we limit the length of our input and output. That is, English text longer than max_source_length tokens will be truncated. Further, we limit our generated Romanian text to max_target_length. We chose a maximum target length of 128 tokens to limit the computational cost incurred during text generation (recall that the text is generated one token at a time). The T5 models are trained to support multiple tasks such as translation and summarization (please see Chapter 16 for details). Thus, during training and inference, the user must specify which task the model should perform using a text prefix. Here we use the prefix "translate English to Romanian: " to indicate that the input text is in English and should be translated to Romanian. Next, we load the model and the corresponding tokenizer, and move them to the GPU if one is available: We use the datasets library to load our translation dataset. Note that the first time one calls load_dataset() the dataset will be downloaded automatically from the Hugging Face repository.1 The load_dataset() function takes a dataset name and configuration, which in our case are wmt16 and ro-en, respectively. Since in this example we are only evaluating the model, we only load the test partition (or split) of the dataset: The dataset consists of a single column called translation. Each element in this column is a dictionary that contains the aligned pair. The dictionary keys are the abbreviated language names and the values are the corresponding sentences. An example of one of these dictionaries is shown below: We encapsulate the logic for translating the English text into Romanian in a function called translate(). 
Inside this function, for a batch of aligned pairs, we select the English sentence as our input, and prepend the task prefix. Then we tokenize these inputs, including the prefix, specifying that sentences longer than max_source_length should be truncated, the batch should be padded, and the tokenizer should return PyTorch tensors. Once the tokenizer output has been moved to the GPU, we pass it to the model’s generate() method. This is the first time we have seen this method, because only decoder and encoder-decoder models support it. This method generates an output sequence by predicting one token 1 https://huggingface.co/datasets/wmt16 214 Implementing Encoder-decoder Methods at a time, stopping when either the end-of-sequence token is produced or when the sequence reaches a maximum length. Several generation techniques are supported, such as beam search, in which several alternate translations are maintained by the model so that it is able to select an overall best translation from several options. For efficiency purposes, we use a greedy approach, which chooses the best token at each step of the generation. This is equivalent to using a beam search with a beam of size one. Since the model generates its predictions as a sequence of token ids, we need to convert them back into the corresponding tokens to be able to read the translated text. We do this using the tokenizer’s batch_decode() method. Finally, we return the gold and predicted Romanian sentences in a dictionary: Next, we apply our translate() function to our Dataset to translate all the sentences: reference Șeful ONU declară că nu există soluții militar... Șeful ONU a solicitat din nou tuturor părților... Ban și-a exprimat regretul că divizările în co... Nu sunt bani puțini. La sfârșitul mandatului voi face un raport cu ... "Să spună un parlamentar că nu-i ajung banii e... 1999 rows × 2 columns prediction eful ONU declară că nu există o soluţie milita... eful U.N. a cerut din nou tuturor partidelor, ... El şi-a exprimat regretul că diviziunile din c... Banii sunt suficienţi. La sfârşitul biroului voi raporta tot ceea ce ... "A spune că un parlamentar nu are suficienţi b... 1994 1995 1996 1997 1998 0 1 2 3 4 ... Secretarul General Ban Ki-moon afirmă că răspu... Secretarul General Ban Ki-moon declară că răsp... Ban a declarat miercuri în cadrul unei conferi... Ban a declarat la o conferinţă de presă susţin... ... ... Uneori mi-e rușine să ridic banii de la casierie. Uneori mi-e ruşine să iau banii de la biroul c... S-a întâmplat să ridic într-o lună și 30.000 d... Într-o lună am adunat 30 000 de lei cu ramburs... We evaluate the quality of these translations using the BLEU metric, which we introduced in Chapter 14. To this end, we load an existing implementation of BLEU from the datasets library as a Metric object.2 Metric objects have a method called add(), which is used to accumulate the predictions and gold labels, one example at a time. After accumulating all examples, the compute() method returns the results of the evaluation. Note that for each predicted sentence, BLEU expects a list of reference sentences (as there are often many correct ways of translat- 2 https://huggingface.co/docs/datasets/v2.4.0/en/package_reference/main_ classes#datasets. Metric 15.2 Implementation of Greedy Generation 215 ing a given text). Since we only have one reference, we wrap it in a list before passing it to the metric: The score corresponds to the BLEU score. The rest of the items correspond to the components required to compute the score. 
That is, the counts, totals, and precisions correspond to the counts, totals, and precisions for 1-, 2-, 3-, and 4-grams. The bp is the brevity penalty. The sys_len and ref_len correspond to the predictions and reference lengths. The above BLEU score of 25.2% is slightly lower than the state of the art, but we are being penalized by the peculiarities of diacritic usage in Romanian characters. For example, the letters ș and ț (corresponding to the sounds sh and ts in English) are usually spelled with a comma below the characters s and t, which is the standard imposed by the Romanian Academy. However, in “the wild” these characters are often written using a cedilla instead of a comma, e.g., ţ instead of ț (or, using the names of these Unicode characters, LATIN SMALL LETTER T WITH CEDILLA instead of LATIN SMALL LETTER T WITH COMMA BELOW). Further, some of these characters with diacritics are often omitted altogether in the T5 output. The T5 output below contains an example for each of these two situations (e.g., soluţi(e) instead of soluți(i), and eful instead of Șeful): To avoid being penalized at scoring time for these arbitrary discrepancies, post-processing scripts are sometimes used to normalize diacritic usage.3 Usage of such post-processing scripts can improve the BLEU score substantially. However, this is beyond the scope of this chapter. 15.2 Implementation of Greedy Generation To gain a better intuition of how the encoder-decoder model generates its output sequence, we show below an implementation of the greedy version of the generate() method used above. This function takes as an argument a single English text (i.e., no batching) and returns the corresponding Romanian text: This function interacts directly with the encoder and decoder components of the T5 model, so we must construct the input for both. The encoder’s input is constructed by prepending the task prefix to the English text and tokenizing it. On the other hand, the decoder’s input is constructed incrementally by accumulating the tokens predicted so far 3 https://github.com/huggingface/transformers/blob/main/examples/legacy/ seq2seq/romanian_postprocessing.md 216 Implementing Encoder-decoder Methods in order to predict the next token in the sequence. At the beginning, before any tokens are predicted, the decoder’s input is initialized with a single token that corresponds to the beginning of the sequence. We retrieve this token, called decoder_start_token_id, from the model’s configuration object. The tokens are predicted one at a time, until the model produces eos_token_id, which indicates that the sequence is finished. However, in case the model does not produce this end-of-sequence token within a reasonable number of steps, we also enforce a maximum number of predicted tokens, determined by the max_target_length parameter we defined previously. The T5 model’s forward() method, called indirectly through its __call__()) method, takes the inputs for both the encoder and the decoder. The output returned by this method corresponds to all the tokens in the decoder’s input plus an extra one: the newly predicted token. To select the best prediction, we retrieve the logits from the output and select the logits corresponding to the last token in the sequence (recall that the output shape is (batch size, sequence length, vocabulary size)). From these selected logits, we use the argmax() to select the token id corresponding to the highest-scoring vocabulary item. 
We append this new token id to the decoder’s input, and repeat the process until we encounter the end-of-sequence token or the decoded text reaches the maximum length. Once we are finished generating token ids, we retrieve the corresponding text by calling the tokenizer’s decode() method. This method is identical to the batch_decode() method we used previously, except that it only decodes a single example. Below is a usage example for the greedy_translation() function: 15.3 Fine-tuning Romanian to English Translation In this section, we fine-tune a T5 model on the translation of Romanian to English, a language pair that was not included in the T5 pre-training. To confirm that this data was not included in pre-training, we evaluated the performance of the vanilla t5-small model on the translation from Romanian to English using code equivalent to the code discussed in the previous section (see the chap15_translation_ro_to_en notebook). The resulting BLEU score was only 3.2%, which is substantially lower than the score we obtained when translating English to Romanian (25.2%). Note that the transformers library includes scripts to fine-tune a translation model directly from the command line.4 For didactic purposes, we will not use these scripts in this section, but instead write the fine-tuning code explicitly. For this exercise, we continue using the WMT16 dataset, but this time
we load the train and validation splits. We employ the same t5-small model that we used previously. The code from the last section to load
the model, tokenizer, and dataset does not need to change for this use-
case, so we do not repeat it here. However, as before, the complete code is available in a Jupyter notebook (chap15_translation_ro_to_en_finetune). We begin by tokenizing the source (Romanian) and target language (English) texts. As in the last section, we need to prepend the task prefix to the source texts prior to tokenizing. This time, since we are translating in the opposite direction, we use the prefix "translate Romanian to English: ", and we prepend it to the Romanian text. Each call to the tokenizer with a batch of texts produces input_ids and an attention_mask. This output is what we need for the Romanian text, which will serve as the input to the model. To generate the labels, i.e., the correct translated tokens, we use the input_ids corresponding to the English text. Recall that "labels" is the default key name expected by trainers in Hugging Face. We apply our tokenize() function to both the train and validation splits: 4 https://github.com/huggingface/transformers/tree/main/examples/pytorch/ translation 218 Implementing Encoder-decoder Methods input_ids [13959, 3871, 29, 12, 1566, 10, 4961, 106, 204... [13959, 3871, 29, 12, 1566, 10, 4961, 106, 204... [13959, 3871, 29, 12, 1566, 10, 374, 6225, 49,... [13959, 3871, 29, 12, 1566, 10, 4540, 4031, 9,... [13959, 3871, 29, 12, 1566, 10, 2262, 900, 17,... [13959, 3871, 29, 12, 1566, 10, 18420, 83, 362... attention_mask labels [19428, 13, 12876, 10, 217, 13687, 7, 1] [19428, 13, 12876, 10, 217, 13687, 7, 1] [11167, 7, 1204, 10, 217, 13687, 7, 1] [4540, 4031, 9, 7, 1672, 7, 2262, 900, 17, 38,... [2262, 900, 17, 641, 65, 46, 3761, 6, 1069, 31... [3625, 32, 5788, 35, 15, 3844, 31, 7, 3, 16143... 0 1 2 3 4 ... 610315 610316 610317 610318 610319 [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... [13959, 3871, 29, 12, 1566, 10, 5085, [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 5840, 49... 1, 1, 1, ... [13959, 3871, 29, 12, 1566, 10, 781, [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 8750, 9, ... 1, 1, 1, ... ... ... [13959, 3871, 29, 12, 1566, 10, 2364, [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 4540, 40... 1, 1, 1, ... 610320 rows × 3 columns [13959, 3871, 29, 12, 1566, 10, 3, [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 25882, 759,... 1, 1, 1, ... [2276, 8843, 138, 13, 13687, 7, 13, 1767, 3823... [781, 2420, 13, 17500, 10, 217, 13687, 7, 1] [242, 4540, 4031, 9, 7, 6, 8, 516, 65, 66, 8, ... [9810, 157, 31, 7, 516, 92, 3088, 21, 46, 3839... ... Recall that in order to construct a trainer, we need a data collator for batching, a function to compute the metrics of interest, and a TrainingArguments object. In this section, we use a data collator called DataCollatorForSeq2Seq, which is included in the transformers library specifically for sequence-to-sequence models. The collator pads the batches using the label_pad_token_id, which we have set to −100, as we did in Chapter 13 (this is the default ignore_index value used by CrossEntropyLoss): The compute_metrics() function computes the BLEU score. It uses the tokenizer to decode the token ids into text, for both the predicted and gold labels, ignoring padding: We use the Seq2SeqTrainingArguments class, which adds the predict_with_generate parameter to the regular TrainingArguments class. This is needed to in-
dicate that the trainer should use the generate() method for inference
in order to compute the metrics (BLUE in this case): Finally, we construct the trainer using the Seq2SeqTrainer class, which is a subclass of Trainer that adds the ability to compute scores such as BLEU during training by calling generate() during evaluation: Fine-tuning a translation model takes considerably longer than training or fine-tuning the models we have developed so far in this book. To account for this, here we add support for resuming training from a checkpoint, i.e., a model that was saved after training on a number of 15.4 Using a Previously Saved Model 219 examples. Similar to how one can resume a video game, this allows one to pick up from the last “save point,” in case training was interrupted and needs to be resumed: When calling the trainer’s train() method, we either provide a model checkpoint or None. In the former case, the trainer will continue training from the provided checkpoint. In the latter case, the trainer will begin training from scratch. Once the training has completed, we save the trained model and tokenizer using the trainer’s save_model() method into the output directory: We then compute and save the metrics corresponding to the training partition. This is not required, but it is helpful to keep a record of the model’s performance on the training data. Note that the metrics do not automatically include the number of examples in the training partition, so we add them explicitly: Next, we evaluate our final model on the validation data and save the corresponding metrics. These metrics indicate that our BLEU score on the validation data is 35.2%, which is evidence that fine-tuning has helped dramatically: Lastly, we save a model card into our output directory. A model card is akin to an automatically-generated README file that includes information about the model used, the data, settings used, and performance throughout the training process. This file is helpful for reproducibility as it contains all of this key information in one place. These cards are often uploaded to the Hugging Face Hub together with the model itself.5 15.4 Using a Previously Saved Model Models that have been saved locally can be loaded using the same from_pretrained() methods we have used before. In particular, instead of providing a model name, we provide the path to the local directory where the model is stored, using the local_files_only parameter to indicate that we want to load the model from the local file system instead of downloading it from the Hugging Face Hub (Make sure you use an output directory that is valid on your machine!): Once our fine-tuned model is loaded, we use it the same way as before. That is, we use the translate() function to generate translations 5 We do not discuss the model uploading process here. Please see the documentation on model sharing at: https://huggingface.co/docs/transformers/v4.14.1/model_sharing. 220 Implementing Encoder-decoder Methods for our test partition. Then we use the BLEU metric to score this output. From this metric, we obtain the final BLEU score of 33.4%, which is markedly better than our initial score (i.e., without fine-tuning) of 3.2%! The code corresponding to this section is available in the notebook chap15_translation_ro_to_en_finetuned. 15.5 Summary In this chapter we used a complete encoder-decoder transformer network to implement a machine translation application. Importantly, transformers with a decoder component have a generate() method that simplifies the generation process and provides multiple options for decoding. 
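As an illustration, assuming the tokenized input_ids and attention_mask from the translate() function above, switching from greedy decoding to beam search is just a matter of changing num_beams (the value 4 below is an arbitrary choice):

# greedy decoding, equivalent to beam search with a beam of size one
greedy_output = model.generate(
    input_ids=input_ids,
    attention_mask=attention_mask,
    num_beams=1,
    max_length=max_target_length,
)

# beam search with four beams: slower, but it may find better translations
beam_output = model.generate(
    input_ids=input_ids,
    attention_mask=attention_mask,
    num_beams=4,
    max_length=max_target_length,
)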
We encourage you to explore these options! For example, try comparing the quality of the output with the resources required to produce it (e.g., runtime overhead) when the size of the search beam increases. Additionally, we saw how to fine-tune an encoder-decoder model on a new language pair that it has not seen during its pre-training. This exercise included using checkpoints to support resuming training in case of unexpected interruptions, saving our fine-tuned model, and loading it for later use.
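As a companion to Section 15.4, the following minimal sketch loads a locally saved model; the output_dir value is only a placeholder, and device is the torch device defined during initialization:

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

output_dir = '/path/to/t5-translation-example'  # use a valid path on your machine!
tokenizer = AutoTokenizer.from_pretrained(output_dir, local_files_only=True)
model = AutoModelForSeq2SeqLM.from_pretrained(output_dir, local_files_only=True)
model = model.to(device)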
3,908
3,959
#!/usr/bin/env python
# coding: utf-8

# # Machine Translation from English (En) to Romanian (Ro)
# # Using the T5 Transformer without Fine-tuning

# Some initialization:

# In[1]:

import torch
import numpy as np
from transformers import set_seed

# set to True to use the gpu (if there is one available)
use_gpu = True

# select device
device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu')
print(f'device: {device.type}')

# random seed
seed = 42

# set random seed
if seed is not None:
    print(f'random seed: {seed}')
    set_seed(seed)

# In[2]:

transformer_name = 't5-small'
source_lang = 'en'
target_lang = 'ro'
max_source_length = 1024
max_target_length = 128
task_prefix = 'translate English to Romanian: '
num_beams = 1
batch_size = 100

# Load tokenizer and pre-trained model:

# In[3]:

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained(transformer_name)
model = AutoModelForSeq2SeqLM.from_pretrained(transformer_name)
model = model.to(device)

# Load dataset from HuggingFace:

# In[4]:

from datasets import load_dataset

test_ds = load_dataset('wmt16', 'ro-en', split='test')
test_ds

# In[5]:

test_ds['translation'][0]

# Implement the `translate` method and apply on this dataset:

# In[6]:

def translate(batch):
    # get source language examples and prepend task prefix
    inputs = [x[source_lang] for x in batch["translation"]]
    inputs = [task_prefix + x for x in inputs]
    # tokenize inputs
    encoded = tokenizer(
        inputs,
        max_length=max_source_length,
        truncation=True,
        padding=True,
        return_tensors='pt',
    )
    # move data to gpu if needed
    input_ids = encoded.input_ids.to(device)
    attention_mask = encoded.attention_mask.to(device)
    # generate translated sentences
    output = model.generate(
        input_ids=input_ids,
        attention_mask=attention_mask,
        num_beams=num_beams,
        max_length=max_target_length,
    )
    # generate predicted sentences from predicted token ids
    decoded = tokenizer.batch_decode(
        output,
        skip_special_tokens=True,
    )
    # get gold sentences in target language
    targets = [x[target_lang] for x in batch["translation"]]
    # return gold and predicted sentences
    return {
        'reference': targets,
        'prediction': decoded,
    }

# In[7]:

results = test_ds.map(
    translate,
    batched=True,
    batch_size=batch_size,
    remove_columns=test_ds.column_names,
)
results.to_pandas()

# Now evaluate the quality of translations using the BLEU metric:

# In[8]:

from datasets import load_metric

metric = load_metric('sacrebleu')

for r in results:
    prediction = r['prediction']
    reference = [r['reference']]
    metric.add(prediction=prediction, reference=reference)

metric.compute()

# An example of greedy decoding for individual texts:

# In[9]:

def greedy_translation(text):
    # prepend task prefix
    text = task_prefix + text
    # tokenize input
    encoded = tokenizer(
        text,
        max_length=max_source_length,
        truncation=True,
        return_tensors='pt',
    )
    # encoder input ids
    encoder_input_ids = encoded.input_ids.to(device)
    # decoder input ids, initialized with start token id
    start = model.config.decoder_start_token_id
    decoder_input_ids = torch.LongTensor([[start]]).to(device)
    # generate tokens, one at a time
    for _ in range(max_target_length):
        # get model predictions
        output = model(
            encoder_input_ids,
            decoder_input_ids=decoder_input_ids,
        )
        # get logits for last token
        next_token_logits = output.logits[0, -1, :]
        # select most probable token
        next_token_id = torch.argmax(next_token_logits)
        # append new token to decoder_input_ids
        output_id = torch.LongTensor([[next_token_id]]).to(device)
        decoder_input_ids = torch.cat([decoder_input_ids, output_id], dim=-1)
        # if predicted token is the end of sequence, stop iterating
        if next_token_id == tokenizer.eos_token_id:
            break
    # return text corresponding to predicted token ids
    return tokenizer.decode(decoder_input_ids[0], skip_special_tokens=True)

# In[10]:

greedy_translation("this is a test")
1,721
1,821
4
chap15-5
chap15-5
15 Implementing Encoder-decoder Methods In this chapter we implement a machine translation application as an example of an encoder-decoder task. In particular, we build on pre-trained encoder-decoder transformer models, which exist in the Hugging Face library for a wide variety of language pairs. We first show how to use one of these models out-of-the-box to perform translation for one of the language pairs it has been exposed to during pre-training: English to Romanian. Afterwards, we fine-tune the model to a new language combination that is has not seen before: Romanian to English. In both use cases, we use the T5 encoder-decoder model, which has been pre-trained for several tasks, including machine translation (Raffel et al., 2020). Please see Chapter 16 for a description of T5’s pre-training process. The data for this task comes from the WMT 2016 dataset (Bojar et al., 2016), which consists of English sentences aligned pairwise to German, Czech, Russian, Finnish, Romanian, and Turkish. In this chapter we only use the English-Romanian texts (in both directions). 15.1 Translating English to Romanian As a first example, we use T5 to translate from English to Romanian, which is one of the language pairs it has been exposed to during pretraining. The code discussed in this section is available in the notebook chap15_translation_en_to_ro. Even though in this exercise we are not fine-tuning the model, we still need to define a few hyper parameters to frame the task and help the model understand how to work with the data: The above settings indicate that we use the t5-small model, a smaller T5 variant, to minimize the amount of memory required. The source_lang 212 15.1 Translating English to Romanian 213 and target_lang variables define the direction of translation, i.e., from English to Romanian. To keep our computing requirements small, we limit the length of our input and output. That is, English text longer than max_source_length tokens will be truncated. Further, we limit our generated Romanian text to max_target_length. We chose a maximum target length of 128 tokens to limit the computational cost incurred during text generation (recall that the text is generated one token at a time). The T5 models are trained to support multiple tasks such as translation and summarization (please see Chapter 16 for details). Thus, during training and inference, the user must specify which task the model should perform using a text prefix. Here we use the prefix "translate English to Romanian: " to indicate that the input text is in English and should be translated to Romanian. Next, we load the model and the corresponding tokenizer, and move them to the GPU if one is available: We use the datasets library to load our translation dataset. Note that the first time one calls load_dataset() the dataset will be downloaded automatically from the Hugging Face repository.1 The load_dataset() function takes a dataset name and configuration, which in our case are wmt16 and ro-en, respectively. Since in this example we are only evaluating the model, we only load the test partition (or split) of the dataset: The dataset consists of a single column called translation. Each element in this column is a dictionary that contains the aligned pair. The dictionary keys are the abbreviated language names and the values are the corresponding sentences. An example of one of these dictionaries is shown below: We encapsulate the logic for translating the English text into Romanian in a function called translate(). 
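A condensed version of that function, as it appears in the accompanying notebook, is shown below (it relies on the tokenizer, model, device, and the hyperparameters defined above):

def translate(batch):
    # get source language examples and prepend task prefix
    inputs = [task_prefix + x[source_lang] for x in batch['translation']]
    # tokenize inputs, truncating and padding as needed
    encoded = tokenizer(
        inputs,
        max_length=max_source_length,
        truncation=True,
        padding=True,
        return_tensors='pt',
    )
    # generate translated sentences (greedy decoding when num_beams is 1)
    output = model.generate(
        input_ids=encoded.input_ids.to(device),
        attention_mask=encoded.attention_mask.to(device),
        num_beams=num_beams,
        max_length=max_target_length,
    )
    # convert predicted token ids back into text
    decoded = tokenizer.batch_decode(output, skip_special_tokens=True)
    # return gold and predicted sentences
    return {
        'reference': [x[target_lang] for x in batch['translation']],
        'prediction': decoded,
    }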
Inside this function, for a batch of aligned pairs, we select the English sentence as our input, and prepend the task prefix. Then we tokenize these inputs, including the prefix, specifying that sentences longer than max_source_length should be truncated, the batch should be padded, and the tokenizer should return PyTorch tensors. Once the tokenizer output has been moved to the GPU, we pass it to the model’s generate() method. This is the first time we have seen this method, because only decoder and encoder-decoder models support it. This method generates an output sequence by predicting one token 1 https://huggingface.co/datasets/wmt16 214 Implementing Encoder-decoder Methods at a time, stopping when either the end-of-sequence token is produced or when the sequence reaches a maximum length. Several generation techniques are supported, such as beam search, in which several alternate translations are maintained by the model so that it is able to select an overall best translation from several options. For efficiency purposes, we use a greedy approach, which chooses the best token at each step of the generation. This is equivalent to using a beam search with a beam of size one. Since the model generates its predictions as a sequence of token ids, we need to convert them back into the corresponding tokens to be able to read the translated text. We do this using the tokenizer’s batch_decode() method. Finally, we return the gold and predicted Romanian sentences in a dictionary: Next, we apply our translate() function to our Dataset to translate all the sentences: reference Șeful ONU declară că nu există soluții militar... Șeful ONU a solicitat din nou tuturor părților... Ban și-a exprimat regretul că divizările în co... Nu sunt bani puțini. La sfârșitul mandatului voi face un raport cu ... "Să spună un parlamentar că nu-i ajung banii e... 1999 rows × 2 columns prediction eful ONU declară că nu există o soluţie milita... eful U.N. a cerut din nou tuturor partidelor, ... El şi-a exprimat regretul că diviziunile din c... Banii sunt suficienţi. La sfârşitul biroului voi raporta tot ceea ce ... "A spune că un parlamentar nu are suficienţi b... 1994 1995 1996 1997 1998 0 1 2 3 4 ... Secretarul General Ban Ki-moon afirmă că răspu... Secretarul General Ban Ki-moon declară că răsp... Ban a declarat miercuri în cadrul unei conferi... Ban a declarat la o conferinţă de presă susţin... ... ... Uneori mi-e rușine să ridic banii de la casierie. Uneori mi-e ruşine să iau banii de la biroul c... S-a întâmplat să ridic într-o lună și 30.000 d... Într-o lună am adunat 30 000 de lei cu ramburs... We evaluate the quality of these translations using the BLEU metric, which we introduced in Chapter 14. To this end, we load an existing implementation of BLEU from the datasets library as a Metric object.2 Metric objects have a method called add(), which is used to accumulate the predictions and gold labels, one example at a time. After accumulating all examples, the compute() method returns the results of the evaluation. Note that for each predicted sentence, BLEU expects a list of reference sentences (as there are often many correct ways of translat- 2 https://huggingface.co/docs/datasets/v2.4.0/en/package_reference/main_ classes#datasets. Metric 15.2 Implementation of Greedy Generation 215 ing a given text). Since we only have one reference, we wrap it in a list before passing it to the metric: The score corresponds to the BLEU score. The rest of the items correspond to the components required to compute the score. 
That is, the counts, totals, and precisions correspond to the counts, totals, and precisions for 1-, 2-, 3-, and 4-grams. The bp is the brevity penalty. The sys_len and ref_len correspond to the predictions and reference lengths. The above BLEU score of 25.2% is slightly lower than the state of the art, but we are being penalized by the peculiarities of diacritic usage in Romanian characters. For example, the letters ș and ț (corresponding to the sounds sh and ts in English) are usually spelled with a comma below the characters s and t, which is the standard imposed by the Romanian Academy. However, in “the wild” these characters are often written using a cedilla instead of a comma, e.g., ţ instead of ț (or, using the names of these Unicode characters, LATIN SMALL LETTER T WITH CEDILLA instead of LATIN SMALL LETTER T WITH COMMA BELOW). Further, some of these characters with diacritics are often omitted altogether in the T5 output. The T5 output below contains an example for each of these two situations (e.g., soluţi(e) instead of soluți(i), and eful instead of Șeful): To avoid being penalized at scoring time for these arbitrary discrepancies, post-processing scripts are sometimes used to normalize diacritic usage.3 Usage of such post-processing scripts can improve the BLEU score substantially. However, this is beyond the scope of this chapter. 15.2 Implementation of Greedy Generation To gain a better intuition of how the encoder-decoder model generates its output sequence, we show below an implementation of the greedy version of the generate() method used above. This function takes as an argument a single English text (i.e., no batching) and returns the corresponding Romanian text: This function interacts directly with the encoder and decoder components of the T5 model, so we must construct the input for both. The encoder’s input is constructed by prepending the task prefix to the English text and tokenizing it. On the other hand, the decoder’s input is constructed incrementally by accumulating the tokens predicted so far 3 https://github.com/huggingface/transformers/blob/main/examples/legacy/ seq2seq/romanian_postprocessing.md 216 Implementing Encoder-decoder Methods in order to predict the next token in the sequence. At the beginning, before any tokens are predicted, the decoder’s input is initialized with a single token that corresponds to the beginning of the sequence. We retrieve this token, called decoder_start_token_id, from the model’s configuration object. The tokens are predicted one at a time, until the model produces eos_token_id, which indicates that the sequence is finished. However, in case the model does not produce this end-of-sequence token within a reasonable number of steps, we also enforce a maximum number of predicted tokens, determined by the max_target_length parameter we defined previously. The T5 model’s forward() method, called indirectly through its __call__()) method, takes the inputs for both the encoder and the decoder. The output returned by this method corresponds to all the tokens in the decoder’s input plus an extra one: the newly predicted token. To select the best prediction, we retrieve the logits from the output and select the logits corresponding to the last token in the sequence (recall that the output shape is (batch size, sequence length, vocabulary size)). From these selected logits, we use the argmax() to select the token id corresponding to the highest-scoring vocabulary item. 
We append this new token id to the decoder’s input, and repeat the process until we encounter the end-of-sequence token or the decoded text reaches the maximum length. Once we are finished generating token ids, we retrieve the corresponding text by calling the tokenizer’s decode() method. This method is identical to the batch_decode() method we used previously, except that it only decodes a single example. Below is an usage example for the greedy_translation() function: 15.3 Fine-tuning Romanian to English Translation In this section, we fine-tune a T5 model on the translation of Romanian to English, a language pair that was not included in the T5 pre-training. To confirm that this data was not included in pre-training, we evaluated the performance of the vanilla t5-small model on the translation from Romanian to English using code equivalent to the code discussed in the previous section (see the chap15_translation_ro_to_en notebook). The resulting BLEU score was only 3.2%, which is substantially lower than the score we obtained when translating English to Romanian (25.2%). 15.3 Fine-tuning Romanian to English Translation 217 Note that the transformers library includes scripts to fine-tune a translation model directly from the command line.4 For didactic purposes, we will not use these scripts in this section, but instead write the fine-tuning code explicitly. For this exercise, we continue using the WMT16 dataset, but this time
we load the train and validation splits. We employ the same t5-small model that we used previously. The code from the last section to load
the model, tokenizer, and dataset does not need to change for this use-
case, so we do not repeat it here. However, as before, the complete code is available in a Jupyter notebook (chap15_translation_ro_to_en_finetune). We begin by tokenizing the source (Romanian) and target language (English) texts. As in the last section, we need to prepend the task prefix to the source texts prior to tokenizing. This time, since we are translating in the opposite direction, we use the prefix "translate Romanian to English: ", and we prepend it to the Romanian text. Each call to the tokenizer with a batch of texts produces input_ids and an attention_mask. This output is what we need for the Romanian text, which will serve as the input to the model. To generate the labels, i.e., the correct translated tokens, we use the input_ids corresponding to the English text. Recall that "labels" is the default key name expected by trainers in Hugging Face. We apply our tokenize() function to both the train and validation splits: 4 https://github.com/huggingface/transformers/tree/main/examples/pytorch/ translation 218 Implementing Encoder-decoder Methods input_ids [13959, 3871, 29, 12, 1566, 10, 4961, 106, 204... [13959, 3871, 29, 12, 1566, 10, 4961, 106, 204... [13959, 3871, 29, 12, 1566, 10, 374, 6225, 49,... [13959, 3871, 29, 12, 1566, 10, 4540, 4031, 9,... [13959, 3871, 29, 12, 1566, 10, 2262, 900, 17,... [13959, 3871, 29, 12, 1566, 10, 18420, 83, 362... attention_mask labels [19428, 13, 12876, 10, 217, 13687, 7, 1] [19428, 13, 12876, 10, 217, 13687, 7, 1] [11167, 7, 1204, 10, 217, 13687, 7, 1] [4540, 4031, 9, 7, 1672, 7, 2262, 900, 17, 38,... [2262, 900, 17, 641, 65, 46, 3761, 6, 1069, 31... [3625, 32, 5788, 35, 15, 3844, 31, 7, 3, 16143... 0 1 2 3 4 ... 610315 610316 610317 610318 610319 [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... [13959, 3871, 29, 12, 1566, 10, 5085, [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 5840, 49... 1, 1, 1, ... [13959, 3871, 29, 12, 1566, 10, 781, [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 8750, 9, ... 1, 1, 1, ... ... ... [13959, 3871, 29, 12, 1566, 10, 2364, [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 4540, 40... 1, 1, 1, ... 610320 rows × 3 columns [13959, 3871, 29, 12, 1566, 10, 3, [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 25882, 759,... 1, 1, 1, ... [2276, 8843, 138, 13, 13687, 7, 13, 1767, 3823... [781, 2420, 13, 17500, 10, 217, 13687, 7, 1] [242, 4540, 4031, 9, 7, 6, 8, 516, 65, 66, 8, ... [9810, 157, 31, 7, 516, 92, 3088, 21, 46, 3839... ... Recall that in order to construct a trainer, we need a data collator for batching, a function to compute the metrics of interest, and a TrainingArguments object. In this section, we use a data collator called DataCollatorForSeq2Seq, which is included in the transformers library specifically for sequence-to-sequence models. The collator pads the batches using the label_pad_token_id, which we have set to −100, as we did in Chapter 13 (this is the default ignore_index value used by CrossEntropyLoss): The compute_metrics() function computes the BLEU score. It uses the tokenizer to decode the token ids into text, for both the predicted and gold labels, ignoring padding: We use the Seq2SeqTrainingArguments class, which adds the predict_with_generate parameter to the regular TrainingArguments class. This is needed to in-
dicate that the trainer should use the generate() method for inference
in order to compute the metrics (BLUE in this case): Finally, we construct the trainer using the Seq2SeqTrainer class, which is a subclass of Trainer that adds the ability to compute scores such as BLEU during training by calling generate() during evaluation: Fine-tuning a translation model takes considerably longer than training or fine-tuning the models we have developed so far in this book. To account for this, here we add support for resuming training from a checkpoint, i.e., a model that was saved after training on a number of 15.4 Using a Previously Saved Model 219 examples. Similar to how one can resume a video game, this allows one to pick up from the last “save point,” in case training was interrupted and needs to be resumed: When calling the trainer’s train() method, we either provide a model checkpoint or None. In the former case, the trainer will continue training from the provided checkpoint. In the latter case, the trainer will begin training from scratch. Once the training has completed, we save the trained model and tokenizer using the trainer’s save_model() method into the output directory: We then compute and save the metrics corresponding to the training partition. This is not required, but it is helpful to keep a record of the model’s performance on the training data. Note that the metrics do not automatically include the number of examples in the training partition, so we add them explicitly: Next, we evaluate our final model on the validation data and save the corresponding metrics. These metrics indicate that our BLEU score on the validation data is 35.2%, which is evidence that fine-tuning has helped dramatically: Lastly, we save a model card into our output directory. A model card is akin to an automatically-generated README file that includes information about the model used, the data, settings used, and performance throughout the training process. This file is helpful for reproducibility as it contains all of this key information in one place. These cards are often uploaded to the Hugging Face Hub together with the model itself.5 15.4 Using a Previously Saved Model Models that have been saved locally can be loaded using the same from_pretrained() methods we have used before. In particular, instead of providing a model name, we provide the path to the local directory where the model is stored, using the local_files_only parameter to indicate that we want to load the model from the local file system instead of downloading it from the Hugging Face Hub (Make sure you use an output directory that is valid on your machine!): Once our fine-tuned model is loaded, we use it the same way as before. That is, we use the translate() function to generate translations 5 We do not discuss the model uploading process here. Please see the documentation on model sharing at: https://huggingface.co/docs/transformers/v4.14.1/model_sharing. 220 Implementing Encoder-decoder Methods for our test partition. Then we use the BLEU metric to score this output. From this metric, we obtain the final BLEU score of 33.4%, which is markedly better than our initial score (i.e., without fine-tuning) of 3.2%! The code corresponding to this section is available in the notebook chap15_translation_ro_to_en_finetuned. 15.5 Summary In this chapter we used a complete encoder-decoder transformer network to implement a machine translation application. Importantly, transformers with a decoder component have a generate() method that simplifies the generation process and provides multiple options for decoding. 
We encourage you to explore these options! For example, try comparing the quality of the output with the resources required to produce it (e.g., runtime overhead) when the size of the search beam increases. Additionally, we saw how to fine-tune an encoder-decoder model on a new language pair that it has not seen during its pre-training. This exercise included using checkpoints to support resuming training in case of unexpected interruptions, saving our fine-tuned model, and loading it for later use.
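As a small sketch of the checkpoint logic described in Section 15.3 (assuming a Seq2SeqTrainer instance named trainer has already been constructed, and that last_checkpoint holds either a checkpoint path or None):

# resume from a previously saved checkpoint if one is available,
# otherwise start fine-tuning from scratch
last_checkpoint = None  # e.g., 'output_dir/checkpoint-5000', or None

train_result = trainer.train(resume_from_checkpoint=last_checkpoint)

# save the fine-tuned model and tokenizer to the output directory
trainer.save_model()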
18,161
18,623
#!/usr/bin/env python # coding: utf-8 # # Load and Use a Previously-trained Ro-to-En T5 Model # Some initialization: # In[1]: import torch import numpy as np from transformers import set_seed # set to True to use the gpu (if there is one available) use_gpu = True # select device device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu') print(f'device: {device.type}') # random seed seed = 42 # set random seed if seed is not None: print(f'random seed: {seed}') set_seed(seed) # In[2]: source_lang = 'ro' target_lang = 'en' max_source_length = 1024 max_target_length = 128 task_prefix = 'translate Romanian to English: ' num_beams = 1 batch_size = 100 # Load the tokenizer and model from the location where you save them: # In[3]: from transformers import AutoTokenizer, AutoModelForSeq2SeqLM output_dir = '/media/data2/t5-translation-example' # make sure this is a valid path on your machine! tokenizer = AutoTokenizer.from_pretrained(output_dir, local_files_only=True) model = AutoModelForSeq2SeqLM.from_pretrained(output_dir, local_files_only=True) model = model.to(device) # Load just the test partition of the dataset from HuggingFace: # In[4]: from datasets import load_dataset test_ds = load_dataset('wmt16', 'ro-en', split='test') test_ds # In[5]: test_ds['translation'][0] # Implement the `translate` method and apply it to the test partition: # In[6]: def translate(batch): # get source language examples and prepend task prefix inputs = [x[source_lang] for x in batch["translation"]] inputs = [task_prefix + x for x in inputs] # tokenize inputs encoded = tokenizer( inputs, max_length=max_source_length, truncation=True, padding=True, return_tensors='pt', ) # move data to gpu if needed input_ids = encoded.input_ids.to(device) attention_mask = encoded.attention_mask.to(device) # generate translated sentences output = model.generate( input_ids=input_ids, attention_mask=attention_mask, num_beams=num_beams, max_length=max_target_length, ) # generate predicted sentences from predicted token ids decoded = tokenizer.batch_decode( output, skip_special_tokens=True, ) # get gold sentences in target language targets = [x[target_lang] for x in batch["translation"]] # return gold and predicted sentences return { 'reference': targets, 'prediction': decoded, } # In[7]: results = test_ds.map( translate, batched=True, batch_size=batch_size, remove_columns=test_ds.column_names, ) results.to_pandas() # Compute the BLEU score: # In[8]: from datasets import load_metric metric = load_metric('sacrebleu') for r in results: prediction = r['prediction'] reference = [r['reference']] metric.add(prediction=prediction, reference=reference) metric.compute()
845
1,104
5
chap15-6
chap15-6
15 Implementing Encoder-decoder Methods In this chapter we implement a machine translation application as an example of an encoder-decoder task. In particular, we build on pre-trained encoder-decoder transformer models, which exist in the Hugging Face library for a wide variety of language pairs. We first show how to use one of these models out-of-the-box to perform translation for one of the language pairs it has been exposed to during pre-training: English to Romanian. Afterwards, we fine-tune the model to a new language combination that is has not seen before: Romanian to English. In both use cases, we use the T5 encoder-decoder model, which has been pre-trained for several tasks, including machine translation (Raffel et al., 2020). Please see Chapter 16 for a description of T5’s pre-training process. The data for this task comes from the WMT 2016 dataset (Bojar et al., 2016), which consists of English sentences aligned pairwise to German, Czech, Russian, Finnish, Romanian, and Turkish. In this chapter we only use the English-Romanian texts (in both directions). 15.1 Translating English to Romanian As a first example, we use T5 to translate from English to Romanian, which is one of the language pairs it has been exposed to during pretraining. The code discussed in this section is available in the notebook chap15_translation_en_to_ro. Even though in this exercise we are not fine-tuning the model, we still need to define a few hyper parameters to frame the task and help the model understand how to work with the data: The above settings indicate that we use the t5-small model, a smaller T5 variant, to minimize the amount of memory required. The source_lang 212 15.1 Translating English to Romanian 213 and target_lang variables define the direction of translation, i.e., from English to Romanian. To keep our computing requirements small, we limit the length of our input and output. That is, English text longer than max_source_length tokens will be truncated. Further, we limit our generated Romanian text to max_target_length. We chose a maximum target length of 128 tokens to limit the computational cost incurred during text generation (recall that the text is generated one token at a time). The T5 models are trained to support multiple tasks such as translation and summarization (please see Chapter 16 for details). Thus, during training and inference, the user must specify which task the model should perform using a text prefix. Here we use the prefix "translate English to Romanian: " to indicate that the input text is in English and should be translated to Romanian. Next, we load the model and the corresponding tokenizer, and move them to the GPU if one is available: We use the datasets library to load our translation dataset. Note that the first time one calls load_dataset() the dataset will be downloaded automatically from the Hugging Face repository.1 The load_dataset() function takes a dataset name and configuration, which in our case are wmt16 and ro-en, respectively. Since in this example we are only evaluating the model, we only load the test partition (or split) of the dataset: The dataset consists of a single column called translation. Each element in this column is a dictionary that contains the aligned pair. The dictionary keys are the abbreviated language names and the values are the corresponding sentences. An example of one of these dictionaries is shown below: We encapsulate the logic for translating the English text into Romanian in a function called translate(). 
Inside this function, for a batch of aligned pairs, we select the English sentence as our input, and prepend the task prefix. Then we tokenize these inputs, including the prefix, specifying that sentences longer than max_source_length should be truncated, the batch should be padded, and the tokenizer should return PyTorch tensors. Once the tokenizer output has been moved to the GPU, we pass it to the model’s generate() method. This is the first time we have seen this method, because only decoder and encoder-decoder models support it. This method generates an output sequence by predicting one token 1 https://huggingface.co/datasets/wmt16 214 Implementing Encoder-decoder Methods at a time, stopping when either the end-of-sequence token is produced or when the sequence reaches a maximum length. Several generation techniques are supported, such as beam search, in which several alternate translations are maintained by the model so that it is able to select an overall best translation from several options. For efficiency purposes, we use a greedy approach, which chooses the best token at each step of the generation. This is equivalent to using a beam search with a beam of size one. Since the model generates its predictions as a sequence of token ids, we need to convert them back into the corresponding tokens to be able to read the translated text. We do this using the tokenizer’s batch_decode() method. Finally, we return the gold and predicted Romanian sentences in a dictionary: Next, we apply our translate() function to our Dataset to translate all the sentences: reference Șeful ONU declară că nu există soluții militar... Șeful ONU a solicitat din nou tuturor părților... Ban și-a exprimat regretul că divizările în co... Nu sunt bani puțini. La sfârșitul mandatului voi face un raport cu ... "Să spună un parlamentar că nu-i ajung banii e... 1999 rows × 2 columns prediction eful ONU declară că nu există o soluţie milita... eful U.N. a cerut din nou tuturor partidelor, ... El şi-a exprimat regretul că diviziunile din c... Banii sunt suficienţi. La sfârşitul biroului voi raporta tot ceea ce ... "A spune că un parlamentar nu are suficienţi b... 1994 1995 1996 1997 1998 0 1 2 3 4 ... Secretarul General Ban Ki-moon afirmă că răspu... Secretarul General Ban Ki-moon declară că răsp... Ban a declarat miercuri în cadrul unei conferi... Ban a declarat la o conferinţă de presă susţin... ... ... Uneori mi-e rușine să ridic banii de la casierie. Uneori mi-e ruşine să iau banii de la biroul c... S-a întâmplat să ridic într-o lună și 30.000 d... Într-o lună am adunat 30 000 de lei cu ramburs... We evaluate the quality of these translations using the BLEU metric, which we introduced in Chapter 14. To this end, we load an existing implementation of BLEU from the datasets library as a Metric object.2 Metric objects have a method called add(), which is used to accumulate the predictions and gold labels, one example at a time. After accumulating all examples, the compute() method returns the results of the evaluation. Note that for each predicted sentence, BLEU expects a list of reference sentences (as there are often many correct ways of translat- 2 https://huggingface.co/docs/datasets/v2.4.0/en/package_reference/main_ classes#datasets. Metric 15.2 Implementation of Greedy Generation 215 ing a given text). Since we only have one reference, we wrap it in a list before passing it to the metric: The score corresponds to the BLEU score. The rest of the items correspond to the components required to compute the score. 
That is, the counts, totals, and precisions correspond to the counts, totals, and precisions for 1-, 2-, 3-, and 4-grams. The bp is the brevity penalty. The sys_len and ref_len correspond to the predictions and reference lengths. The above BLEU score of 25.2% is slightly lower than the state of the art, but we are being penalized by the peculiarities of diacritic usage in Romanian characters. For example, the letters ș and ț (corresponding to the sounds sh and ts in English) are usually spelled with a comma below the characters s and t, which is the standard imposed by the Romanian Academy. However, in “the wild” these characters are often written using a cedilla instead of a comma, e.g., ţ instead of ț (or, using the names of these Unicode characters, LATIN SMALL LETTER T WITH CEDILLA instead of LATIN SMALL LETTER T WITH COMMA BELOW). Further, some of these characters with diacritics are often omitted altogether in the T5 output. The T5 output below contains an example for each of these two situations (e.g., soluţi(e) instead of soluți(i), and eful instead of Șeful): To avoid being penalized at scoring time for these arbitrary discrepancies, post-processing scripts are sometimes used to normalize diacritic usage.3 Usage of such post-processing scripts can improve the BLEU score substantially. However, this is beyond the scope of this chapter. 15.2 Implementation of Greedy Generation To gain a better intuition of how the encoder-decoder model generates its output sequence, we show below an implementation of the greedy version of the generate() method used above. This function takes as an argument a single English text (i.e., no batching) and returns the corresponding Romanian text: This function interacts directly with the encoder and decoder components of the T5 model, so we must construct the input for both. The encoder’s input is constructed by prepending the task prefix to the English text and tokenizing it. On the other hand, the decoder’s input is constructed incrementally by accumulating the tokens predicted so far 3 https://github.com/huggingface/transformers/blob/main/examples/legacy/ seq2seq/romanian_postprocessing.md 216 Implementing Encoder-decoder Methods in order to predict the next token in the sequence. At the beginning, before any tokens are predicted, the decoder’s input is initialized with a single token that corresponds to the beginning of the sequence. We retrieve this token, called decoder_start_token_id, from the model’s configuration object. The tokens are predicted one at a time, until the model produces eos_token_id, which indicates that the sequence is finished. However, in case the model does not produce this end-of-sequence token within a reasonable number of steps, we also enforce a maximum number of predicted tokens, determined by the max_target_length parameter we defined previously. The T5 model’s forward() method, called indirectly through its __call__()) method, takes the inputs for both the encoder and the decoder. The output returned by this method corresponds to all the tokens in the decoder’s input plus an extra one: the newly predicted token. To select the best prediction, we retrieve the logits from the output and select the logits corresponding to the last token in the sequence (recall that the output shape is (batch size, sequence length, vocabulary size)). From these selected logits, we use the argmax() to select the token id corresponding to the highest-scoring vocabulary item. 
We append this new token id to the decoder’s input, and repeat the process until we encounter the end-of-sequence token or the decoded text reaches the maximum length. Once we are finished generating token ids, we retrieve the corresponding text by calling the tokenizer’s decode() method. This method is identical to the batch_decode() method we used previously, except that it only decodes a single example. Below is an usage example for the greedy_translation() function: 15.3 Fine-tuning Romanian to English Translation In this section, we fine-tune a T5 model on the translation of Romanian to English, a language pair that was not included in the T5 pre-training. To confirm that this data was not included in pre-training, we evaluated the performance of the vanilla t5-small model on the translation from Romanian to English using code equivalent to the code discussed in the previous section (see the chap15_translation_ro_to_en notebook). The resulting BLEU score was only 3.2%, which is substantially lower than the score we obtained when translating English to Romanian (25.2%). 15.3 Fine-tuning Romanian to English Translation 217 Note that the transformers library includes scripts to fine-tune a translation model directly from the command line.4 For didactic purposes, we will not use these scripts in this section, but instead write the fine-tuning code explicitly. For this exercise, we continue using the WMT16 dataset, but this time
we load the train and validation splits. We employ the same t5-small model that we used previously. The code from the last section to load
the model, tokenizer, and dataset does not need to change for this use-
case, so we do not repeat it here. However, as before, the complete code is available in a Jupyter notebook (chap15_translation_ro_to_en_finetune). We begin by tokenizing the source (Romanian) and target language (English) texts. As in the last section, we need to prepend the task prefix to the source texts prior to tokenizing. This time, since we are translating in the opposite direction, we use the prefix "translate Romanian to English: ", and we prepend it to the Romanian text. Each call to the tokenizer with a batch of texts produces input_ids and an attention_mask. This output is what we need for the Romanian text, which will serve as the input to the model. To generate the labels, i.e., the correct translated tokens, we use the input_ids corresponding to the English text. Recall that "labels" is the default key name expected by trainers in Hugging Face. We apply our tokenize() function to both the train and validation splits: 4 https://github.com/huggingface/transformers/tree/main/examples/pytorch/ translation 218 Implementing Encoder-decoder Methods input_ids [13959, 3871, 29, 12, 1566, 10, 4961, 106, 204... [13959, 3871, 29, 12, 1566, 10, 4961, 106, 204... [13959, 3871, 29, 12, 1566, 10, 374, 6225, 49,... [13959, 3871, 29, 12, 1566, 10, 4540, 4031, 9,... [13959, 3871, 29, 12, 1566, 10, 2262, 900, 17,... [13959, 3871, 29, 12, 1566, 10, 18420, 83, 362... attention_mask labels [19428, 13, 12876, 10, 217, 13687, 7, 1] [19428, 13, 12876, 10, 217, 13687, 7, 1] [11167, 7, 1204, 10, 217, 13687, 7, 1] [4540, 4031, 9, 7, 1672, 7, 2262, 900, 17, 38,... [2262, 900, 17, 641, 65, 46, 3761, 6, 1069, 31... [3625, 32, 5788, 35, 15, 3844, 31, 7, 3, 16143... 0 1 2 3 4 ... 610315 610316 610317 610318 610319 [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... [13959, 3871, 29, 12, 1566, 10, 5085, [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 5840, 49... 1, 1, 1, ... [13959, 3871, 29, 12, 1566, 10, 781, [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 8750, 9, ... 1, 1, 1, ... ... ... [13959, 3871, 29, 12, 1566, 10, 2364, [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 4540, 40... 1, 1, 1, ... 610320 rows × 3 columns [13959, 3871, 29, 12, 1566, 10, 3, [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 25882, 759,... 1, 1, 1, ... [2276, 8843, 138, 13, 13687, 7, 13, 1767, 3823... [781, 2420, 13, 17500, 10, 217, 13687, 7, 1] [242, 4540, 4031, 9, 7, 6, 8, 516, 65, 66, 8, ... [9810, 157, 31, 7, 516, 92, 3088, 21, 46, 3839... ... Recall that in order to construct a trainer, we need a data collator for batching, a function to compute the metrics of interest, and a TrainingArguments object. In this section, we use a data collator called DataCollatorForSeq2Seq, which is included in the transformers library specifically for sequence-to-sequence models. The collator pads the batches using the label_pad_token_id, which we have set to −100, as we did in Chapter 13 (this is the default ignore_index value used by CrossEntropyLoss): The compute_metrics() function computes the BLEU score. It uses the tokenizer to decode the token ids into text, for both the predicted and gold labels, ignoring padding: We use the Seq2SeqTrainingArguments class, which adds the predict_with_generate parameter to the regular TrainingArguments class. This is needed to in-
dicate that the trainer should use the generate() method for inference
in order to compute the metrics (BLUE in this case): Finally, we construct the trainer using the Seq2SeqTrainer class, which is a subclass of Trainer that adds the ability to compute scores such as BLEU during training by calling generate() during evaluation: Fine-tuning a translation model takes considerably longer than training or fine-tuning the models we have developed so far in this book. To account for this, here we add support for resuming training from a checkpoint, i.e., a model that was saved after training on a number of 15.4 Using a Previously Saved Model 219 examples. Similar to how one can resume a video game, this allows one to pick up from the last “save point,” in case training was interrupted and needs to be resumed: When calling the trainer’s train() method, we either provide a model checkpoint or None. In the former case, the trainer will continue training from the provided checkpoint. In the latter case, the trainer will begin training from scratch. Once the training has completed, we save the trained model and tokenizer using the trainer’s save_model() method into the output directory: We then compute and save the metrics corresponding to the training partition. This is not required, but it is helpful to keep a record of the model’s performance on the training data. Note that the metrics do not automatically include the number of examples in the training partition, so we add them explicitly: Next, we evaluate our final model on the validation data and save the corresponding metrics. These metrics indicate that our BLEU score on the validation data is 35.2%, which is evidence that fine-tuning has helped dramatically: Lastly, we save a model card into our output directory. A model card is akin to an automatically-generated README file that includes information about the model used, the data, settings used, and performance throughout the training process. This file is helpful for reproducibility as it contains all of this key information in one place. These cards are often uploaded to the Hugging Face Hub together with the model itself.5 15.4 Using a Previously Saved Model Models that have been saved locally can be loaded using the same from_pretrained() methods we have used before. In particular, instead of providing a model name, we provide the path to the local directory where the model is stored, using the local_files_only parameter to indicate that we want to load the model from the local file system instead of downloading it from the Hugging Face Hub (Make sure you use an output directory that is valid on your machine!): Once our fine-tuned model is loaded, we use it the same way as before. That is, we use the translate() function to generate translations 5 We do not discuss the model uploading process here. Please see the documentation on model sharing at: https://huggingface.co/docs/transformers/v4.14.1/model_sharing. 220 Implementing Encoder-decoder Methods for our test partition. Then we use the BLEU metric to score this output. From this metric, we obtain the final BLEU score of 33.4%, which is markedly better than our initial score (i.e., without fine-tuning) of 3.2%! The code corresponding to this section is available in the notebook chap15_translation_ro_to_en_finetuned. 15.5 Summary In this chapter we used a complete encoder-decoder transformer network to implement a machine translation application. Importantly, transformers with a decoder component have a generate() method that simplifies the generation process and provides multiple options for decoding. 
We encourage you to explore these options! For example, try comparing the quality of the output with the resources required to produce it (e.g., runtime overhead) when the size of the search beam increases. Additionally, we saw how to fine-tune an encoder-decoder model on a new language pair that it has not seen during its pre-training. This exercise included using checkpoints to support resuming training in case of unexpected interruptions, saving our fine-tuned model, and loading it for later use.
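For reference, here is a rough sketch of the trainer setup described in Section 15.3; the training hyperparameter values below are placeholders, and train_dataset, eval_dataset, output_dir, and compute_metrics are assumed to be defined as discussed above:

from transformers import (
    DataCollatorForSeq2Seq,
    Seq2SeqTrainingArguments,
    Seq2SeqTrainer,
)

# pad batches, using -100 for label padding so those positions are ignored by the loss
data_collator = DataCollatorForSeq2Seq(
    tokenizer,
    model=model,
    label_pad_token_id=-100,
)

training_args = Seq2SeqTrainingArguments(
    output_dir=output_dir,
    per_device_train_batch_size=4,   # placeholder value
    per_device_eval_batch_size=4,    # placeholder value
    predict_with_generate=True,      # use generate() when evaluating
    evaluation_strategy='epoch',
    save_strategy='epoch',
    num_train_epochs=1,              # placeholder value
)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    data_collator=data_collator,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
)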
1,847
2,011
#!/usr/bin/env python # coding: utf-8 # # Machine Translation from English (En) to Romanian (Ro) # # Using the T5 Transformer without Fine-tuning # Some initialization: # In[1]: import torch import numpy as np from transformers import set_seed # set to True to use the gpu (if there is one available) use_gpu = True # select device device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu') print(f'device: {device.type}') # random seed seed = 42 # set random seed if seed is not None: print(f'random seed: {seed}') set_seed(seed) # In[2]: transformer_name = 't5-small' source_lang = 'en' target_lang = 'ro' max_source_length = 1024 max_target_length = 128 task_prefix = 'translate English to Romanian: ' num_beams = 1 batch_size = 100 # Load tokenizer and pre-trained model: # In[3]: from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained(transformer_name) model = AutoModelForSeq2SeqLM.from_pretrained(transformer_name) model = model.to(device) # Load dataset from HuggingFace: # In[4]: from datasets import load_dataset test_ds = load_dataset('wmt16', 'ro-en', split='test') test_ds # In[5]: test_ds['translation'][0] # Implement the `translate` method and apply on this dataset: # In[6]: def translate(batch): # get source language examples and prepend task prefix inputs = [x[source_lang] for x in batch["translation"]] inputs = [task_prefix + x for x in inputs] # tokenize inputs encoded = tokenizer( inputs, max_length=max_source_length, truncation=True, padding=True, return_tensors='pt', ) # move data to gpu if needed input_ids = encoded.input_ids.to(device) attention_mask = encoded.attention_mask.to(device) # generate translated sentences output = model.generate( input_ids=input_ids, attention_mask=attention_mask, num_beams=num_beams, max_length=max_target_length, ) # generate predicted sentences from predicted token ids decoded = tokenizer.batch_decode( output, skip_special_tokens=True, ) # get gold sentences in target language targets = [x[target_lang] for x in batch["translation"]] # return gold and predicted sentences return { 'reference': targets, 'prediction': decoded, } # In[7]: results = test_ds.map( translate, batched=True, batch_size=batch_size, remove_columns=test_ds.column_names, ) results.to_pandas() # Now evaluate the quality of translations using the BLEU metric: # In[8]: from datasets import load_metric metric = load_metric('sacrebleu') for r in results: prediction = r['prediction'] reference = [r['reference']] metric.add(prediction=prediction, reference=reference) metric.compute() # An example of greedy decoding for individual texts: # In[9]: def greedy_translation(text): # prepend task prefix text = task_prefix + text # tokenize input encoded = tokenizer( text, max_length=max_source_length, truncation=True, return_tensors='pt', ) # encoder input ids encoder_input_ids = encoded.input_ids.to(device) # decoder input ids, initialized with start token id start = model.config.decoder_start_token_id decoder_input_ids = torch.LongTensor([[start]]).to(device) # generate tokens, one at a time for _ in range(max_target_length): # get model predictions output = model( encoder_input_ids, decoder_input_ids=decoder_input_ids, ) # get logits for last token next_token_logits = output.logits[0, -1, :] # select most probable token next_token_id = torch.argmax(next_token_logits) # append new token to decoder_input_ids output_id = torch.LongTensor([[next_token_id]]).to(device) decoder_input_ids = torch.cat([decoder_input_ids, 
output_id], dim=-1) # if predicted token is the end of sequence, stop iterating if next_token_id == tokenizer.eos_token_id: break # return text corresponding to predicted token ids return tokenizer.decode(decoder_input_ids[0], skip_special_tokens=True) # In[10]: greedy_translation("this is a test")
653
678
6
chap15-7
chap15-7
15 Implementing Encoder-decoder Methods In this chapter we implement a machine translation application as an example of an encoder-decoder task. In particular, we build on pre-trained encoder-decoder transformer models, which exist in the Hugging Face library for a wide variety of language pairs. We first show how to use one of these models out-of-the-box to perform translation for one of the language pairs it has been exposed to during pre-training: English to Romanian. Afterwards, we fine-tune the model on a new language combination that it has not seen before: Romanian to English. In both use cases, we use the T5 encoder-decoder model, which has been pre-trained for several tasks, including machine translation (Raffel et al., 2020). Please see Chapter 16 for a description of T5’s pre-training process. The data for this task comes from the WMT 2016 dataset (Bojar et al., 2016), which consists of English sentences aligned pairwise to German, Czech, Russian, Finnish, Romanian, and Turkish. In this chapter we only use the English-Romanian texts (in both directions). 15.1 Translating English to Romanian As a first example, we use T5 to translate from English to Romanian, which is one of the language pairs it has been exposed to during pre-training. The code discussed in this section is available in the notebook chap15_translation_en_to_ro. Even though in this exercise we are not fine-tuning the model, we still need to define a few hyperparameters to frame the task and help the model understand how to work with the data. These settings indicate that we use the t5-small model, a smaller T5 variant, to minimize the amount of memory required. The source_lang and target_lang variables define the direction of translation, i.e., from English to Romanian. To keep our computing requirements small, we limit the length of our input and output. That is, English text longer than max_source_length tokens will be truncated. Further, we limit our generated Romanian text to max_target_length. We chose a maximum target length of 128 tokens to limit the computational cost incurred during text generation (recall that the text is generated one token at a time). The T5 models are trained to support multiple tasks such as translation and summarization (please see Chapter 16 for details). Thus, during training and inference, the user must specify which task the model should perform using a text prefix. Here we use the prefix "translate English to Romanian: " to indicate that the input text is in English and should be translated to Romanian. Next, we load the model and the corresponding tokenizer, and move them to the GPU if one is available. We use the datasets library to load our translation dataset. Note that the first time one calls load_dataset() the dataset will be downloaded automatically from the Hugging Face repository (https://huggingface.co/datasets/wmt16). The load_dataset() function takes a dataset name and configuration, which in our case are wmt16 and ro-en, respectively. Since in this example we are only evaluating the model, we only load the test partition (or split) of the dataset. The dataset consists of a single column called translation. Each element in this column is a dictionary that contains the aligned pair. The dictionary keys are the abbreviated language names and the values are the corresponding sentences. An example of one of these dictionaries can be inspected with test_ds['translation'][0]. We encapsulate the logic for translating the English text into Romanian in a function called translate().
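The settings and listings referenced in this section appear in the chap15_translation_en_to_ro notebook; the following is a condensed sketch of that setup and of the translate() function, with variable names and values taken from the notebook:

import torch
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# hyperparameters used throughout this section
transformer_name = 't5-small'
source_lang = 'en'
target_lang = 'ro'
max_source_length = 1024
max_target_length = 128
task_prefix = 'translate English to Romanian: '
num_beams = 1
batch_size = 100

# load tokenizer and pre-trained model; move the model to the GPU if available
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
tokenizer = AutoTokenizer.from_pretrained(transformer_name)
model = AutoModelForSeq2SeqLM.from_pretrained(transformer_name).to(device)

# load only the test split of the English-Romanian WMT16 data
test_ds = load_dataset('wmt16', 'ro-en', split='test')

def translate(batch):
    # select the English side of each aligned pair and prepend the task prefix
    inputs = [task_prefix + x[source_lang] for x in batch['translation']]
    # tokenize, truncating long sentences, padding the batch, and returning PyTorch tensors
    encoded = tokenizer(
        inputs,
        max_length=max_source_length,
        truncation=True,
        padding=True,
        return_tensors='pt',
    )
    # generate translations; a beam of size one is equivalent to greedy decoding
    output = model.generate(
        input_ids=encoded.input_ids.to(device),
        attention_mask=encoded.attention_mask.to(device),
        num_beams=num_beams,
        max_length=max_target_length,
    )
    # convert the predicted token ids back into text
    decoded = tokenizer.batch_decode(output, skip_special_tokens=True)
    # return the gold and predicted Romanian sentences
    return {
        'reference': [x[target_lang] for x in batch['translation']],
        'prediction': decoded,
    }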
Inside this function, for a batch of aligned pairs, we select the English sentence as our input, and prepend the task prefix. Then we tokenize these inputs, including the prefix, specifying that sentences longer than max_source_length should be truncated, the batch should be padded, and the tokenizer should return PyTorch tensors. Once the tokenizer output has been moved to the GPU, we pass it to the model’s generate() method. This is the first time we have seen this method, because only decoder and encoder-decoder models support it. This method generates an output sequence by predicting one token at a time, stopping when either the end-of-sequence token is produced or when the sequence reaches a maximum length. Several generation techniques are supported, such as beam search, in which several alternate translations are maintained by the model so that it is able to select an overall best translation from several options. For efficiency purposes, we use a greedy approach, which chooses the best token at each step of the generation. This is equivalent to using a beam search with a beam of size one. Since the model generates its predictions as a sequence of token ids, we need to convert them back into the corresponding tokens to be able to read the translated text. We do this using the tokenizer’s batch_decode() method. Finally, we return the gold and predicted Romanian sentences in a dictionary. Next, we apply our translate() function to our Dataset to translate all the sentences. The result is a new dataset with two columns, reference and prediction, and 1,999 rows (one per test sentence); for example, the gold reference “Șeful ONU declară că nu există soluții militar...” is predicted as “eful ONU declară că nu există o soluţie milita...”. We evaluate the quality of these translations using the BLEU metric, which we introduced in Chapter 14. To this end, we load an existing implementation of BLEU from the datasets library as a Metric object (https://huggingface.co/docs/datasets/v2.4.0/en/package_reference/main_classes#datasets.Metric). Metric objects have a method called add(), which is used to accumulate the predictions and gold labels, one example at a time. After accumulating all examples, the compute() method returns the results of the evaluation. Note that for each predicted sentence, BLEU expects a list of reference sentences (as there are often many correct ways of translating a given text). Since we only have one reference, we wrap it in a list before passing it to the metric:
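Condensed from the same notebook, the translation of the full test set and the BLEU evaluation look roughly as follows (the sacrebleu implementation of BLEU is loaded through the datasets library, as in the notebook):

from datasets import load_metric

# apply translate() to the whole test set in batches
results = test_ds.map(
    translate,
    batched=True,
    batch_size=batch_size,
    remove_columns=test_ds.column_names,
)

# sacrebleu implementation of BLEU, wrapped as a datasets Metric object
metric = load_metric('sacrebleu')

# accumulate one example at a time; BLEU expects a list of references
# per prediction, so we wrap our single reference in a list
for r in results:
    metric.add(prediction=r['prediction'], reference=[r['reference']])

metric.compute()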
The score corresponds to the BLEU score. The rest of the items correspond to the components required to compute the score. That is, the counts, totals, and precisions correspond to the counts, totals, and precisions for 1-, 2-, 3-, and 4-grams. The bp is the brevity penalty. The sys_len and ref_len correspond to the prediction and reference lengths. The above BLEU score of 25.2% is slightly lower than the state of the art, but we are being penalized by the peculiarities of diacritic usage in Romanian characters. For example, the letters ș and ț (corresponding to the sounds sh and ts in English) are usually spelled with a comma below the characters s and t, which is the standard imposed by the Romanian Academy. However, in “the wild” these characters are often written using a cedilla instead of a comma, e.g., ţ instead of ț (or, using the names of these Unicode characters, LATIN SMALL LETTER T WITH CEDILLA instead of LATIN SMALL LETTER T WITH COMMA BELOW). Further, some of these characters with diacritics are often omitted altogether in the T5 output. The first T5 prediction shown above contains an example of each of these two situations (e.g., soluţi(e) instead of soluți(i), and eful instead of Șeful). To avoid being penalized at scoring time for these arbitrary discrepancies, post-processing scripts are sometimes used to normalize diacritic usage (see, e.g., https://github.com/huggingface/transformers/blob/main/examples/legacy/seq2seq/romanian_postprocessing.md). Usage of such post-processing scripts can improve the BLEU score substantially. However, this is beyond the scope of this chapter. 15.2 Implementation of Greedy Generation To gain a better intuition of how the encoder-decoder model generates its output sequence, we show below an implementation of the greedy version of the generate() method used above. This function takes as an argument a single English text (i.e., no batching) and returns the corresponding Romanian text. This function interacts directly with the encoder and decoder components of the T5 model, so we must construct the input for both. The encoder’s input is constructed by prepending the task prefix to the English text and tokenizing it. On the other hand, the decoder’s input is constructed incrementally by accumulating the tokens predicted so far in order to predict the next token in the sequence. At the beginning, before any tokens are predicted, the decoder’s input is initialized with a single token that corresponds to the beginning of the sequence. We retrieve this token, called decoder_start_token_id, from the model’s configuration object. The tokens are predicted one at a time, until the model produces eos_token_id, which indicates that the sequence is finished. However, in case the model does not produce this end-of-sequence token within a reasonable number of steps, we also enforce a maximum number of predicted tokens, determined by the max_target_length parameter we defined previously. The T5 model’s forward() method, called indirectly through its __call__() method, takes the inputs for both the encoder and the decoder. The output returned by this method corresponds to all the tokens in the decoder’s input plus an extra one: the newly predicted token. To select the best prediction, we retrieve the logits from the output and select the logits corresponding to the last token in the sequence (recall that the output shape is (batch size, sequence length, vocabulary size)). From these selected logits, we use argmax() to select the token id corresponding to the highest-scoring vocabulary item. We append this new token id to the decoder’s input, and repeat the process until we encounter the end-of-sequence token or the decoded text reaches the maximum length.
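A condensed version of this greedy_translation() function, following the implementation in the chap15_translation_en_to_ro notebook, is sketched below:

def greedy_translation(text):
    # build the encoder input: task prefix + English text, tokenized
    encoded = tokenizer(
        task_prefix + text,
        max_length=max_source_length,
        truncation=True,
        return_tensors='pt',
    )
    encoder_input_ids = encoded.input_ids.to(device)
    # the decoder input starts with the decoder_start_token_id alone
    decoder_input_ids = torch.LongTensor(
        [[model.config.decoder_start_token_id]]).to(device)
    for _ in range(max_target_length):
        # forward pass over the encoder and decoder inputs
        output = model(encoder_input_ids, decoder_input_ids=decoder_input_ids)
        # logits for the last position; pick the highest-scoring token id
        next_token_id = torch.argmax(output.logits[0, -1, :])
        # append the new token to the decoder input
        next_token = torch.LongTensor([[next_token_id]]).to(device)
        decoder_input_ids = torch.cat([decoder_input_ids, next_token], dim=-1)
        # stop once the end-of-sequence token has been produced
        if next_token_id == tokenizer.eos_token_id:
            break
    # convert the accumulated token ids back into text
    return tokenizer.decode(decoder_input_ids[0], skip_special_tokens=True)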
Once we are finished generating token ids, we retrieve the corresponding text by calling the tokenizer’s decode() method. This method is identical to the batch_decode() method we used previously, except that it only decodes a single example. Below is a usage example for the greedy_translation() function:
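For instance, calling it on a short English sentence (the same call used in the notebook):

greedy_translation("this is a test")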
15.3 Fine-tuning Romanian to English Translation In this section, we fine-tune a T5 model on the translation of Romanian to English, a language pair that was not included in the T5 pre-training. To confirm that this data was not included in pre-training, we evaluated the performance of the vanilla t5-small model on the translation from Romanian to English using code equivalent to the code discussed in the previous section (see the chap15_translation_ro_to_en notebook). The resulting BLEU score was only 3.2%, which is substantially lower than the score we obtained when translating English to Romanian (25.2%). Note that the transformers library includes scripts to fine-tune a translation model directly from the command line (https://github.com/huggingface/transformers/tree/main/examples/pytorch/translation). For didactic purposes, we will not use these scripts in this section, but instead write the fine-tuning code explicitly. For this exercise, we continue using the WMT16 dataset, but this time we load the train and validation splits. We employ the same t5-small model that we used previously. The code from the last section to load the model, tokenizer, and dataset does not need to change for this use case, so we do not repeat it here. However, as before, the complete code is available in a Jupyter notebook (chap15_translation_ro_to_en_finetune). We begin by tokenizing the source (Romanian) and target language (English) texts. As in the last section, we need to prepend the task prefix to the source texts prior to tokenizing. This time, since we are translating in the opposite direction, we use the prefix "translate Romanian to English: ", and we prepend it to the Romanian text. Each call to the tokenizer with a batch of texts produces input_ids and an attention_mask. This output is what we need for the Romanian text, which will serve as the input to the model. To generate the labels, i.e., the correct translated tokens, we use the input_ids corresponding to the English text. Recall that "labels" is the default key name expected by trainers in Hugging Face. We apply our tokenize() function to both the train and validation splits:
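A condensed sketch of this tokenization step, following the chap15_translation_ro_to_en_finetune notebook (it reuses the tokenizer loaded as before, and the variable names and values below are the ones defined in that notebook):

from datasets import load_dataset

# settings from the fine-tuning notebook
source_lang = 'ro'
target_lang = 'en'
task_prefix = 'translate Romanian to English: '
max_source_length = 1024
max_target_length = 128

wmt16 = load_dataset('wmt16', 'ro-en')

def tokenize(batch):
    # prepend the task prefix to the Romanian source sentences
    sources = [task_prefix + x[source_lang] for x in batch['translation']]
    # tokenizing the sources yields input_ids and attention_mask
    output = tokenizer(sources, max_length=max_source_length, truncation=True)
    # the labels are the input_ids of the tokenized English targets
    targets = [x[target_lang] for x in batch['translation']]
    labels = tokenizer(targets, max_length=max_target_length, truncation=True)
    output['labels'] = labels['input_ids']
    return output

# tokenize the train and validation splits, dropping the original columns
column_names = wmt16['train'].column_names
train_dataset = wmt16['train'].map(tokenize, batched=True, remove_columns=column_names)
eval_dataset = wmt16['validation'].map(tokenize, batched=True, remove_columns=column_names)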
The resulting tokenized training set has three columns, input_ids, attention_mask, and labels, and 610,320 rows. Recall that in order to construct a trainer, we need a data collator for batching, a function to compute the metrics of interest, and a TrainingArguments object. In this section, we use a data collator called DataCollatorForSeq2Seq, which is included in the transformers library specifically for sequence-to-sequence models. The collator pads the batches using the label_pad_token_id, which we have set to −100, as we did in Chapter 13 (this is the default ignore_index value used by CrossEntropyLoss). The compute_metrics() function computes the BLEU score. It uses the tokenizer to decode the token ids into text, for both the predicted and gold labels, ignoring padding. We use the Seq2SeqTrainingArguments class, which adds the predict_with_generate parameter to the regular TrainingArguments class. This is needed to indicate that the trainer should use the generate() method for inference in order to compute the metrics (BLEU in this case). Finally, we construct the trainer using the Seq2SeqTrainer class, which is a subclass of Trainer that adds the ability to compute scores such as BLEU during training by calling generate() during evaluation:
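Sketched from the same fine-tuning notebook, the collator, metric function, training arguments, and trainer can be put together roughly as follows (the hyperparameter values shown are the ones used in the notebook):

import numpy as np
from datasets import load_metric
from transformers import (DataCollatorForSeq2Seq, Seq2SeqTrainingArguments,
                          Seq2SeqTrainer)

# hyperparameters from the fine-tuning notebook
batch_size = 4
label_pad_token_id = -100
save_steps = 25_000
learning_rate = 1e-3
num_train_epochs = 3
output_dir = '/media/data2/t5-translation-example'  # use a valid path on your machine!

# pad batches, using -100 for the labels so padded positions are ignored by the loss
data_collator = DataCollatorForSeq2Seq(
    tokenizer, model=model, label_pad_token_id=label_pad_token_id)

metric = load_metric('sacrebleu')

def compute_metrics(eval_preds):
    preds, labels = eval_preds
    # decode the predicted token ids into text
    predictions = tokenizer.batch_decode(preds, skip_special_tokens=True)
    # replace the -100 padding in the labels before decoding them
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
    # sacrebleu expects a list of references for each prediction
    references = [[ref] for ref in decoded_labels]
    results = metric.compute(predictions=predictions, references=references)
    return {'bleu': results['score']}

training_args = Seq2SeqTrainingArguments(
    output_dir=output_dir,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    save_steps=save_steps,
    predict_with_generate=True,   # call generate() when evaluating
    evaluation_strategy='steps',
    eval_steps=save_steps,
    learning_rate=learning_rate,
    num_train_epochs=num_train_epochs,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    tokenizer=tokenizer,
    data_collator=data_collator,
    compute_metrics=compute_metrics,
)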
Fine-tuning a translation model takes considerably longer than training or fine-tuning the models we have developed so far in this book. To account for this, here we add support for resuming training from a checkpoint, i.e., a model that was saved after training on a number of examples. Similar to how one can resume a video game, this allows one to pick up from the last “save point,” in case training was interrupted and needs to be resumed:
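A minimal sketch of this checkpointing logic, taken from the fine-tuning notebook:

import os
from transformers.trainer_utils import get_last_checkpoint

# resume from the most recent checkpoint in output_dir, if one exists
last_checkpoint = None
if os.path.isdir(output_dir):
    last_checkpoint = get_last_checkpoint(output_dir)

# train (from scratch if last_checkpoint is None), then save model and tokenizer
train_result = trainer.train(resume_from_checkpoint=last_checkpoint)
trainer.save_model()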
When calling the trainer’s train() method, we either provide a model checkpoint or None. In the former case, the trainer will continue training from the provided checkpoint. In the latter case, the trainer will begin training from scratch. Once the training has completed, we save the trained model and tokenizer using the trainer’s save_model() method into the output directory. We then compute and save the metrics corresponding to the training partition. This is not required, but it is helpful to keep a record of the model’s performance on the training data. Note that the metrics do not automatically include the number of examples in the training partition, so we add them explicitly. Next, we evaluate our final model on the validation data and save the corresponding metrics. These metrics indicate that our BLEU score on the validation data is 35.2%, which is evidence that fine-tuning has helped dramatically. Lastly, we save a model card into our output directory. A model card is akin to an automatically-generated README file that includes information about the model used, the data, settings used, and performance throughout the training process. This file is helpful for reproducibility as it contains all of this key information in one place. These cards are often uploaded to the Hugging Face Hub together with the model itself (we do not discuss the model uploading process here; please see the documentation on model sharing at https://huggingface.co/docs/transformers/v4.14.1/model_sharing). 15.4 Using a Previously Saved Model Models that have been saved locally can be loaded using the same from_pretrained() methods we have used before. In particular, instead of providing a model name, we provide the path to the local directory where the model is stored, using the local_files_only parameter to indicate that we want to load the model from the local file system instead of downloading it from the Hugging Face Hub (make sure you use an output directory that is valid on your machine!). Once our fine-tuned model is loaded, we use it the same way as before. That is, we use the translate() function to generate translations for our test partition. Then we use the BLEU metric to score this output. From this metric, we obtain the final BLEU score of 33.4%, which is markedly better than our initial score (i.e., without fine-tuning) of 3.2%! The code corresponding to this section is available in the notebook chap15_translation_ro_to_en_finetuned. 15.5 Summary In this chapter we used a complete encoder-decoder transformer network to implement a machine translation application. Importantly, transformers with a decoder component have a generate() method that simplifies the generation process and provides multiple options for decoding.
We encourage you to explore these options! For example, try comparing the quality of the output with the resources required to produce it (e.g., runtime overhead) when the size of the search beam increases. Additionally, we saw how to fine-tune an encoder-decoder model on a new language pair that it has not seen during its pre-training. This exercise included using checkpoints to support resuming training in case of unexpected interruptions, saving our fine-tuned model, and loading it for later use.
17,694
17,934
#!/usr/bin/env python # coding: utf-8 # # Machine Translation from Ro to En # # Using the T5 Transformer with Fine-tuning # Some initialization: # In[1]: import torch import numpy as np from transformers import set_seed # random seed seed = 42 # set random seed if seed is not None: print(f'random seed: {seed}') set_seed(seed) # In[2]: transformer_name = 't5-small' dataset_name = 'wmt16' dataset_config_name = 'ro-en' source_lang = 'ro' target_lang = 'en' max_source_length = 1024 max_target_length = 128 task_prefix = 'translate Romanian to English: ' batch_size = 4 label_pad_token_id = -100 save_steps = 25_000 num_beams = 1 learning_rate = 1e-3 num_train_epochs = 3 output_dir = '/media/data2/t5-translation-example' # make sure this is a valid path on your machine! # Load dataset from HuggingFace: # In[3]: from datasets import load_dataset wmt16 = load_dataset(dataset_name, dataset_config_name) # Load tokenizer and pre-trained model: # In[4]: from transformers import AutoConfig, AutoTokenizer, AutoModelForSeq2SeqLM config = AutoConfig.from_pretrained(transformer_name) tokenizer = AutoTokenizer.from_pretrained(transformer_name) model = AutoModelForSeq2SeqLM.from_pretrained(transformer_name, config=config) # Tokenize the texts in the dataset: # In[5]: def tokenize(batch): # get source sentences and prepend task prefix sources = [x[source_lang] for x in batch["translation"]] sources = [task_prefix + x for x in sources] # tokenize source sentences output = tokenizer( sources, max_length=max_source_length, truncation=True, ) # get target sentences targets = [x[target_lang] for x in batch["translation"]] # tokenize target sentences labels = tokenizer( targets, max_length=max_target_length, truncation=True, ) # add targets to output output["labels"] = labels["input_ids"] return output # In[6]: train_dataset = wmt16['train'] eval_dataset = wmt16['validation'] column_names = train_dataset.column_names train_dataset = train_dataset.map( tokenize, batched=True, remove_columns=column_names, ) eval_dataset = eval_dataset.map( tokenize, batched=True, remove_columns=column_names, ) # In[7]: train_dataset.to_pandas() # Create `Trainer` object and train: # In[8]: from transformers import DataCollatorForSeq2Seq data_collator = DataCollatorForSeq2Seq( tokenizer, model=model, label_pad_token_id=label_pad_token_id, ) # In[9]: from datasets import load_metric metric = load_metric('sacrebleu') def compute_metrics(eval_preds): preds, labels = eval_preds # get text for predictions predictions = tokenizer.batch_decode( preds, skip_special_tokens=True, ) # replace -100 in labels with pad token labels = np.where( labels != -100, labels, tokenizer.pad_token_id, ) # get text for gold labels references = tokenizer.batch_decode( labels, skip_special_tokens=True, ) # metric expects list of references for each prediction references = [[ref] for ref in references] # compute bleu score results = metric.compute( predictions=predictions, references=references, ) results = {'bleu': results['score']} return results # In[10]: from transformers import Seq2SeqTrainingArguments training_args = Seq2SeqTrainingArguments( output_dir=output_dir, per_device_train_batch_size=batch_size, per_device_eval_batch_size=batch_size, save_steps=save_steps, predict_with_generate=True, evaluation_strategy='steps', eval_steps=save_steps, learning_rate=learning_rate, num_train_epochs=num_train_epochs, ) # In[11]: from transformers import Seq2SeqTrainer trainer = Seq2SeqTrainer( model=model, args=training_args, train_dataset=train_dataset, 
eval_dataset=eval_dataset, tokenizer=tokenizer, data_collator=data_collator, compute_metrics=compute_metrics, ) # In[12]: import os from transformers.trainer_utils import get_last_checkpoint last_checkpoint = None if os.path.isdir(output_dir): last_checkpoint = get_last_checkpoint(output_dir) if last_checkpoint is not None: print(f'Checkpoint detected, resuming training at {last_checkpoint}.') # In[13]: train_result = trainer.train(resume_from_checkpoint=last_checkpoint) trainer.save_model() # In[14]: metrics = train_result.metrics metrics['train_samples'] = len(train_dataset) trainer.log_metrics('train', metrics) trainer.save_metrics('train', metrics) trainer.save_state() # Now evaluate: # In[15]: # https://discuss.huggingface.co/t/evaluation-results-metric-during-training-is-different-from-the-evaluation-results-at-the-end/15401 metrics = trainer.evaluate( max_length=max_target_length, num_beams=num_beams, metric_key_prefix='eval', ) metrics['eval_samples'] = len(eval_dataset) trainer.log_metrics('eval', metrics) trainer.save_metrics('eval', metrics) # Create a model card with meta data about this model: # In[16]: kwargs = { 'finetuned_from': transformer_name, 'tasks': 'translation', 'dataset_tags': dataset_name, 'dataset_args': dataset_config_name, 'dataset': f'{dataset_name} {dataset_config_name}', 'language': [source_lang, target_lang], } trainer.create_model_card(**kwargs) # In[ ]:
5,145
5,437
7
chap15-8
chap15-8
12,139
12,249
874
930
8
chap15-9
chap15-9
15 Implementing Encoder-decoder Methods In this chapter we implement a machine translation application as an example of an encoder-decoder task. In particular, we build on pre-trained encoder-decoder transformer models, which exist in the Hugging Face library for a wide variety of language pairs. We first show how to use one of these models out-of-the-box to perform translation for one of the language pairs it has been exposed to during pre-training: English to Romanian. Afterwards, we fine-tune the model to a new language combination that is has not seen before: Romanian to English. In both use cases, we use the T5 encoder-decoder model, which has been pre-trained for several tasks, including machine translation (Raffel et al., 2020). Please see Chapter 16 for a description of T5’s pre-training process. The data for this task comes from the WMT 2016 dataset (Bojar et al., 2016), which consists of English sentences aligned pairwise to German, Czech, Russian, Finnish, Romanian, and Turkish. In this chapter we only use the English-Romanian texts (in both directions). 15.1 Translating English to Romanian As a first example, we use T5 to translate from English to Romanian, which is one of the language pairs it has been exposed to during pretraining. The code discussed in this section is available in the notebook chap15_translation_en_to_ro. Even though in this exercise we are not fine-tuning the model, we still need to define a few hyper parameters to frame the task and help the model understand how to work with the data: The above settings indicate that we use the t5-small model, a smaller T5 variant, to minimize the amount of memory required. The source_lang 212 15.1 Translating English to Romanian 213 and target_lang variables define the direction of translation, i.e., from English to Romanian. To keep our computing requirements small, we limit the length of our input and output. That is, English text longer than max_source_length tokens will be truncated. Further, we limit our generated Romanian text to max_target_length. We chose a maximum target length of 128 tokens to limit the computational cost incurred during text generation (recall that the text is generated one token at a time). The T5 models are trained to support multiple tasks such as translation and summarization (please see Chapter 16 for details). Thus, during training and inference, the user must specify which task the model should perform using a text prefix. Here we use the prefix "translate English to Romanian: " to indicate that the input text is in English and should be translated to Romanian. Next, we load the model and the corresponding tokenizer, and move them to the GPU if one is available: We use the datasets library to load our translation dataset. Note that the first time one calls load_dataset() the dataset will be downloaded automatically from the Hugging Face repository.1 The load_dataset() function takes a dataset name and configuration, which in our case are wmt16 and ro-en, respectively. Since in this example we are only evaluating the model, we only load the test partition (or split) of the dataset: The dataset consists of a single column called translation. Each element in this column is a dictionary that contains the aligned pair. The dictionary keys are the abbreviated language names and the values are the corresponding sentences. An example of one of these dictionaries is shown below: We encapsulate the logic for translating the English text into Romanian in a function called translate(). 
Inside this function, for a batch of aligned pairs, we select the English sentence as our input, and prepend the task prefix. Then we tokenize these inputs, including the prefix, specifying that sentences longer than max_source_length should be truncated, the batch should be padded, and the tokenizer should return PyTorch tensors. Once the tokenizer output has been moved to the GPU, we pass it to the model’s generate() method. This is the first time we have seen this method, because only decoder and encoder-decoder models support it. This method generates an output sequence by predicting one token 1 https://huggingface.co/datasets/wmt16 214 Implementing Encoder-decoder Methods at a time, stopping when either the end-of-sequence token is produced or when the sequence reaches a maximum length. Several generation techniques are supported, such as beam search, in which several alternate translations are maintained by the model so that it is able to select an overall best translation from several options. For efficiency purposes, we use a greedy approach, which chooses the best token at each step of the generation. This is equivalent to using a beam search with a beam of size one. Since the model generates its predictions as a sequence of token ids, we need to convert them back into the corresponding tokens to be able to read the translated text. We do this using the tokenizer’s batch_decode() method. Finally, we return the gold and predicted Romanian sentences in a dictionary: Next, we apply our translate() function to our Dataset to translate all the sentences: reference Șeful ONU declară că nu există soluții militar... Șeful ONU a solicitat din nou tuturor părților... Ban și-a exprimat regretul că divizările în co... Nu sunt bani puțini. La sfârșitul mandatului voi face un raport cu ... "Să spună un parlamentar că nu-i ajung banii e... 1999 rows × 2 columns prediction eful ONU declară că nu există o soluţie milita... eful U.N. a cerut din nou tuturor partidelor, ... El şi-a exprimat regretul că diviziunile din c... Banii sunt suficienţi. La sfârşitul biroului voi raporta tot ceea ce ... "A spune că un parlamentar nu are suficienţi b... 1994 1995 1996 1997 1998 0 1 2 3 4 ... Secretarul General Ban Ki-moon afirmă că răspu... Secretarul General Ban Ki-moon declară că răsp... Ban a declarat miercuri în cadrul unei conferi... Ban a declarat la o conferinţă de presă susţin... ... ... Uneori mi-e rușine să ridic banii de la casierie. Uneori mi-e ruşine să iau banii de la biroul c... S-a întâmplat să ridic într-o lună și 30.000 d... Într-o lună am adunat 30 000 de lei cu ramburs... We evaluate the quality of these translations using the BLEU metric, which we introduced in Chapter 14. To this end, we load an existing implementation of BLEU from the datasets library as a Metric object.2 Metric objects have a method called add(), which is used to accumulate the predictions and gold labels, one example at a time. After accumulating all examples, the compute() method returns the results of the evaluation. Note that for each predicted sentence, BLEU expects a list of reference sentences (as there are often many correct ways of translat- 2 https://huggingface.co/docs/datasets/v2.4.0/en/package_reference/main_ classes#datasets. Metric 15.2 Implementation of Greedy Generation 215 ing a given text). Since we only have one reference, we wrap it in a list before passing it to the metric: The score corresponds to the BLEU score. The rest of the items correspond to the components required to compute the score. 
In the dictionary returned by compute(), score corresponds to the BLEU score. The rest of the items correspond to the components required to compute the score. That is, the counts, totals, and precisions correspond to the counts, totals, and precisions for 1-, 2-, 3-, and 4-grams. The bp is the brevity penalty. The sys_len and ref_len correspond to the prediction and reference lengths. The resulting BLEU score of 25.2% is slightly lower than the state of the art, but we are being penalized by the peculiarities of diacritic usage in Romanian characters. For example, the letters ș and ț (corresponding to the sounds sh and ts in English) are usually spelled with a comma below the characters s and t, which is the standard imposed by the Romanian Academy. However, in "the wild" these characters are often written using a cedilla instead of a comma, e.g., ţ instead of ț (or, using the names of these Unicode characters, LATIN SMALL LETTER T WITH CEDILLA instead of LATIN SMALL LETTER T WITH COMMA BELOW). Further, some of these characters with diacritics are often omitted altogether in the T5 output. The example prediction shown above contains an instance of each of these two situations (e.g., soluţi(e) instead of soluți(i), and eful instead of Șeful). To avoid being penalized at scoring time for these arbitrary discrepancies, post-processing scripts are sometimes used to normalize diacritic usage (see, e.g., https://github.com/huggingface/transformers/blob/main/examples/legacy/seq2seq/romanian_postprocessing.md). Usage of such post-processing scripts can improve the BLEU score substantially. However, this is beyond the scope of this chapter.

15.2 Implementation of Greedy Generation

To gain a better intuition of how the encoder-decoder model generates its output sequence, we show in this section an implementation of the greedy version of the generate() method used above, in a function called greedy_translation(). This function takes as an argument a single English text (i.e., no batching) and returns the corresponding Romanian text. It interacts directly with the encoder and decoder components of the T5 model, so we must construct the input for both. The encoder's input is constructed by prepending the task prefix to the English text and tokenizing it. On the other hand, the decoder's input is constructed incrementally by accumulating the tokens predicted so far in order to predict the next token in the sequence. At the beginning, before any tokens are predicted, the decoder's input is initialized with a single token that corresponds to the beginning of the sequence. We retrieve this token, called decoder_start_token_id, from the model's configuration object. The tokens are predicted one at a time, until the model produces eos_token_id, which indicates that the sequence is finished. However, in case the model does not produce this end-of-sequence token within a reasonable number of steps, we also enforce a maximum number of predicted tokens, determined by the max_target_length parameter we defined previously. The T5 model's forward() method, called indirectly through its __call__() method, takes the inputs for both the encoder and the decoder. The output returned by this method corresponds to all the tokens in the decoder's input plus an extra one: the newly predicted token. To select the best prediction, we retrieve the logits from the output and select the logits corresponding to the last token in the sequence (recall that the output shape is (batch size, sequence length, vocabulary size)). From these selected logits, we use argmax() to select the token id corresponding to the highest-scoring vocabulary item. We append this new token id to the decoder's input, and repeat the process until we encounter the end-of-sequence token or the decoded text reaches the maximum length. Once we are finished generating token ids, we retrieve the corresponding text by calling the tokenizer's decode() method. This method is identical to the batch_decode() method we used previously, except that it only decodes a single example. The complete greedy_translation() function is shown below, followed by a usage example.
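Here is greedy_translation(), lightly condensed from the chap15_translation_en_to_ro notebook, together with a short usage example:

def greedy_translation(text):
    # prepend the task prefix and tokenize the input
    encoded = tokenizer(
        task_prefix + text,
        max_length=max_source_length,
        truncation=True,
        return_tensors='pt',
    )
    # encoder input ids
    encoder_input_ids = encoded.input_ids.to(device)
    # decoder input ids, initialized with the start token id
    start = model.config.decoder_start_token_id
    decoder_input_ids = torch.LongTensor([[start]]).to(device)
    # generate tokens, one at a time
    for _ in range(max_target_length):
        # get model predictions for the current decoder input
        output = model(
            encoder_input_ids,
            decoder_input_ids=decoder_input_ids,
        )
        # get the logits for the last token and select the most probable token id
        next_token_logits = output.logits[0, -1, :]
        next_token_id = torch.argmax(next_token_logits)
        # append the new token id to the decoder input
        output_id = torch.LongTensor([[next_token_id]]).to(device)
        decoder_input_ids = torch.cat([decoder_input_ids, output_id], dim=-1)
        # if the predicted token is the end of sequence, stop iterating
        if next_token_id == tokenizer.eos_token_id:
            break
    # return the text corresponding to the predicted token ids
    return tokenizer.decode(decoder_input_ids[0], skip_special_tokens=True)

greedy_translation("this is a test")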
15.3 Fine-tuning Romanian to English Translation

In this section, we fine-tune a T5 model on the translation of Romanian to English, a language pair that was not included in the T5 pre-training. To confirm that this data was not included in pre-training, we evaluated the performance of the vanilla t5-small model on the translation from Romanian to English using code equivalent to the code discussed in the previous section (see the chap15_translation_ro_to_en notebook). The resulting BLEU score was only 3.2%, which is substantially lower than the score we obtained when translating English to Romanian (25.2%). Note that the transformers library includes scripts to fine-tune a translation model directly from the command line (https://github.com/huggingface/transformers/tree/main/examples/pytorch/translation). For didactic purposes, we will not use these scripts in this section, but instead write the fine-tuning code explicitly. For this exercise, we continue using the WMT16 dataset, but this time
we load the train and validation splits. We employ the same t5-small model that we used previously. The code from the last section to load
the model, tokenizer, and dataset does not need to change for this use-
case, so we do not repeat it here. However, as before, the complete code is available in a Jupyter notebook (chap15_translation_ro_to_en_finetune).

We begin by tokenizing the source (Romanian) and target language (English) texts. As in the last section, we need to prepend the task prefix to the source texts prior to tokenizing. This time, since we are translating in the opposite direction, we use the prefix "translate Romanian to English: ", and we prepend it to the Romanian text. Each call to the tokenizer with a batch of texts produces input_ids and an attention_mask. This output is what we need for the Romanian text, which will serve as the input to the model. To generate the labels, i.e., the correct translated tokens, we use the input_ids corresponding to the English text. Recall that "labels" is the default key name expected by trainers in Hugging Face. We apply our tokenize() function to both the train and validation splits, as shown below.
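The tokenize() function and its application to the two splits, condensed from the chap15_translation_ro_to_en_finetune notebook (the tokenizer and the length limits are the ones defined earlier; only the task prefix and the translation direction change):

from datasets import load_dataset

# load the full English-Romanian WMT16 dataset (train, validation, and test splits)
wmt16 = load_dataset('wmt16', 'ro-en')

source_lang, target_lang = 'ro', 'en'
task_prefix = 'translate Romanian to English: '

def tokenize(batch):
    # get source (Romanian) sentences and prepend the task prefix
    sources = [task_prefix + x[source_lang] for x in batch['translation']]
    # tokenize source sentences; this produces input_ids and attention_mask
    output = tokenizer(
        sources,
        max_length=max_source_length,
        truncation=True,
    )
    # tokenize target (English) sentences and use their input_ids as labels
    targets = [x[target_lang] for x in batch['translation']]
    labels = tokenizer(
        targets,
        max_length=max_target_length,
        truncation=True,
    )
    output['labels'] = labels['input_ids']
    return output

# apply to the train and validation splits, dropping the original columns
column_names = wmt16['train'].column_names
train_dataset = wmt16['train'].map(
    tokenize,
    batched=True,
    remove_columns=column_names,
)
eval_dataset = wmt16['validation'].map(
    tokenize,
    batched=True,
    remove_columns=column_names,
)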
The resulting tokenized datasets contain three columns: input_ids and attention_mask for the Romanian input, and labels holding the token ids of the English translation (the tokenized training split has 610,320 rows).

Recall that in order to construct a trainer, we need a data collator for batching, a function to compute the metrics of interest, and a TrainingArguments object. In this section, we use a data collator called DataCollatorForSeq2Seq, which is included in the transformers library specifically for sequence-to-sequence models. The collator pads the batches using the label_pad_token_id, which we have set to −100, as we did in Chapter 13 (this is the default ignore_index value used by CrossEntropyLoss). The compute_metrics() function computes the BLEU score. It uses the tokenizer to decode the token ids into text, for both the predicted and gold labels, ignoring padding. We use the Seq2SeqTrainingArguments class, which adds the predict_with_generate parameter to the regular TrainingArguments class. This is needed to indicate that the trainer should use the generate() method for inference in order to compute the metrics (BLEU in this case). Finally, we construct the trainer using the Seq2SeqTrainer class, which is a subclass of Trainer that adds the ability to compute scores such as BLEU during training by calling generate() during evaluation. These components are assembled below.
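Condensed from the chap15_translation_ro_to_en_finetune notebook; the hyperparameter values (batch size 4, learning rate 1e-3, three epochs, saving and evaluating every 25,000 steps, and the output directory) come from that notebook:

import numpy as np
from datasets import load_metric
from transformers import (
    DataCollatorForSeq2Seq,
    Seq2SeqTrainingArguments,
    Seq2SeqTrainer,
)

label_pad_token_id = -100
output_dir = '/media/data2/t5-translation-example'  # use a valid path on your machine!

# data collator that pads inputs and pads labels with -100 so they are ignored by the loss
data_collator = DataCollatorForSeq2Seq(
    tokenizer,
    model=model,
    label_pad_token_id=label_pad_token_id,
)

metric = load_metric('sacrebleu')

def compute_metrics(eval_preds):
    preds, labels = eval_preds
    # decode predicted token ids into text
    predictions = tokenizer.batch_decode(preds, skip_special_tokens=True)
    # replace -100 in the labels with the pad token id before decoding
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    references = tokenizer.batch_decode(labels, skip_special_tokens=True)
    # the metric expects a list of references for each prediction
    references = [[ref] for ref in references]
    results = metric.compute(predictions=predictions, references=references)
    return {'bleu': results['score']}

training_args = Seq2SeqTrainingArguments(
    output_dir=output_dir,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    save_steps=25_000,
    predict_with_generate=True,
    evaluation_strategy='steps',
    eval_steps=25_000,
    learning_rate=1e-3,
    num_train_epochs=3,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    tokenizer=tokenizer,
    data_collator=data_collator,
    compute_metrics=compute_metrics,
)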
Fine-tuning a translation model takes considerably longer than training or fine-tuning the models we have developed so far in this book. To account for this, here we add support for resuming training from a checkpoint, i.e., a model that was saved after training on a number of examples. Similar to how one can resume a video game, this allows one to pick up from the last "save point," in case training was interrupted and needs to be resumed. When calling the trainer's train() method, we either provide a model checkpoint or None. In the former case, the trainer will continue training from the provided checkpoint. In the latter case, the trainer will begin training from scratch. Once the training has completed, we save the trained model and tokenizer using the trainer's save_model() method into the output directory (see the chap15_translation_ro_to_en_finetune notebook for the corresponding cells). We then compute and save the metrics corresponding to the training partition. This is not required, but it is helpful to keep a record of the model's performance on the training data. Note that the metrics do not automatically include the number of examples in the training partition, so we add them explicitly. Next, we evaluate our final model on the validation data and save the corresponding metrics. These metrics indicate that our BLEU score on the validation data is 35.2%, which is evidence that fine-tuning has helped dramatically. Lastly, we save a model card into our output directory. A model card is akin to an automatically-generated README file that includes information about the model used, the data, settings used, and performance throughout the training process. This file is helpful for reproducibility as it contains all of this key information in one place. These cards are often uploaded to the Hugging Face Hub together with the model itself (we do not discuss the model uploading process here; please see the documentation on model sharing at https://huggingface.co/docs/transformers/v4.14.1/model_sharing).

15.4 Using a Previously Saved Model

Models that have been saved locally can be loaded using the same from_pretrained() methods we have used before. In particular, instead of providing a model name, we provide the path to the local directory where the model is stored, using the local_files_only parameter to indicate that we want to load the model from the local file system instead of downloading it from the Hugging Face Hub (make sure you use an output directory that is valid on your machine!). Once our fine-tuned model is loaded, we use it the same way as before. That is, we use the translate() function to generate translations for our test partition. Then we use the BLEU metric to score this output. From this metric, we obtain the final BLEU score of 33.4%, which is markedly better than our initial score (i.e., without fine-tuning) of 3.2%! The code corresponding to this section is available in the notebook chap15_translation_ro_to_en_finetuned; the loading step is sketched below.
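A minimal sketch of that loading step, assuming the same output_dir used during fine-tuning above and the device defined earlier (see the chap15_translation_ro_to_en_finetuned notebook for the full version):

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# path where the fine-tuned model was saved by trainer.save_model()
output_dir = '/media/data2/t5-translation-example'  # use a valid path on your machine!

# load from the local file system instead of the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained(output_dir, local_files_only=True)
model = AutoModelForSeq2SeqLM.from_pretrained(output_dir, local_files_only=True)
model = model.to(device)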
15.5 Summary

In this chapter we used a complete encoder-decoder transformer network to implement a machine translation application. Importantly, transformers with a decoder component have a generate() method that simplifies the generation process and provides multiple options for decoding. We encourage you to explore these options! For example, try comparing the quality of the output with the resources required to produce it (e.g., runtime overhead) when the size of the search beam increases; a possible starting point is sketched below. Additionally, we saw how to fine-tune an encoder-decoder model on a new language pair that it has not seen during its pre-training. This exercise included using checkpoints to support resuming training in case of unexpected interruptions, saving our fine-tuned model, and loading it for later use.
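As a hypothetical starting point for the beam-size exercise (this snippet is not part of the accompanying notebooks), one could time generate() for different values of num_beams, reusing the Section 15.1 setup (model, tokenizer, device, task_prefix, and max_target_length):

import time

sample = task_prefix + "this is a test"
encoded = tokenizer(sample, return_tensors='pt')
input_ids = encoded.input_ids.to(device)
attention_mask = encoded.attention_mask.to(device)

# compare greedy decoding (beam of size one) with a wider beam
for beams in [1, 5]:
    start = time.time()
    output = model.generate(
        input_ids=input_ids,
        attention_mask=attention_mask,
        num_beams=beams,
        max_length=max_target_length,
    )
    elapsed = time.time() - start
    translation = tokenizer.batch_decode(output, skip_special_tokens=True)[0]
    print(f'num_beams={beams} time={elapsed:.2f}s {translation}')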
#!/usr/bin/env python
# coding: utf-8

# # Machine Translation from English (En) to Romanian (Ro)
# # Using the T5 Transformer without Fine-tuning

# Some initialization:

# In[1]:
import torch
import numpy as np
from transformers import set_seed

# set to True to use the gpu (if there is one available)
use_gpu = True

# select device
device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu')
print(f'device: {device.type}')

# random seed
seed = 42

# set random seed
if seed is not None:
    print(f'random seed: {seed}')
    set_seed(seed)

# In[2]:
transformer_name = 't5-small'
source_lang = 'en'
target_lang = 'ro'
max_source_length = 1024
max_target_length = 128
task_prefix = 'translate English to Romanian: '
num_beams = 1
batch_size = 100

# Load tokenizer and pre-trained model:

# In[3]:
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained(transformer_name)
model = AutoModelForSeq2SeqLM.from_pretrained(transformer_name)
model = model.to(device)

# Load dataset from HuggingFace:

# In[4]:
from datasets import load_dataset

test_ds = load_dataset('wmt16', 'ro-en', split='test')
test_ds

# In[5]:
test_ds['translation'][0]

# Implement the `translate` method and apply on this dataset:

# In[6]:
def translate(batch):
    # get source language examples and prepend task prefix
    inputs = [x[source_lang] for x in batch["translation"]]
    inputs = [task_prefix + x for x in inputs]
    # tokenize inputs
    encoded = tokenizer(
        inputs,
        max_length=max_source_length,
        truncation=True,
        padding=True,
        return_tensors='pt',
    )
    # move data to gpu if needed
    input_ids = encoded.input_ids.to(device)
    attention_mask = encoded.attention_mask.to(device)
    # generate translated sentences
    output = model.generate(
        input_ids=input_ids,
        attention_mask=attention_mask,
        num_beams=num_beams,
        max_length=max_target_length,
    )
    # generate predicted sentences from predicted token ids
    decoded = tokenizer.batch_decode(
        output,
        skip_special_tokens=True,
    )
    # get gold sentences in target language
    targets = [x[target_lang] for x in batch["translation"]]
    # return gold and predicted sentences
    return {
        'reference': targets,
        'prediction': decoded,
    }

# In[7]:
results = test_ds.map(
    translate,
    batched=True,
    batch_size=batch_size,
    remove_columns=test_ds.column_names,
)
results.to_pandas()

# Now evaluate the quality of translations using the BLEU metric:

# In[8]:
from datasets import load_metric

metric = load_metric('sacrebleu')

for r in results:
    prediction = r['prediction']
    reference = [r['reference']]
    metric.add(prediction=prediction, reference=reference)

metric.compute()

# An example of greedy decoding for individual texts:

# In[9]:
def greedy_translation(text):
    # prepend task prefix
    text = task_prefix + text
    # tokenize input
    encoded = tokenizer(
        text,
        max_length=max_source_length,
        truncation=True,
        return_tensors='pt',
    )
    # encoder input ids
    encoder_input_ids = encoded.input_ids.to(device)
    # decoder input ids, initialized with start token id
    start = model.config.decoder_start_token_id
    decoder_input_ids = torch.LongTensor([[start]]).to(device)
    # generate tokens, one at a time
    for _ in range(max_target_length):
        # get model predictions
        output = model(
            encoder_input_ids,
            decoder_input_ids=decoder_input_ids,
        )
        # get logits for last token
        next_token_logits = output.logits[0, -1, :]
        # select most probable token
        next_token_id = torch.argmax(next_token_logits)
        # append new token to decoder_input_ids
        output_id = torch.LongTensor([[next_token_id]]).to(device)
        decoder_input_ids = torch.cat([decoder_input_ids, output_id], dim=-1)
        # if predicted token is the end of sequence, stop iterating
        if next_token_id == tokenizer.eos_token_id:
            break
    # return text corresponding to predicted token ids
    return tokenizer.decode(decoder_input_ids[0], skip_special_tokens=True)

# In[10]:
greedy_translation("this is a test")
#!/usr/bin/env python
# coding: utf-8

# # Machine Translation from Ro to En
# # Using the T5 Transformer with Fine-tuning

# Some initialization:

# In[1]:
import torch
import numpy as np
from transformers import set_seed

# random seed
seed = 42

# set random seed
if seed is not None:
    print(f'random seed: {seed}')
    set_seed(seed)

# In[2]:
transformer_name = 't5-small'
dataset_name = 'wmt16'
dataset_config_name = 'ro-en'
source_lang = 'ro'
target_lang = 'en'
max_source_length = 1024
max_target_length = 128
task_prefix = 'translate Romanian to English: '
batch_size = 4
label_pad_token_id = -100
save_steps = 25_000
num_beams = 1
learning_rate = 1e-3
num_train_epochs = 3
output_dir = '/media/data2/t5-translation-example'  # make sure this is a valid path on your machine!

# Load dataset from HuggingFace:

# In[3]:
from datasets import load_dataset

wmt16 = load_dataset(dataset_name, dataset_config_name)

# Load tokenizer and pre-trained model:

# In[4]:
from transformers import AutoConfig, AutoTokenizer, AutoModelForSeq2SeqLM

config = AutoConfig.from_pretrained(transformer_name)
tokenizer = AutoTokenizer.from_pretrained(transformer_name)
model = AutoModelForSeq2SeqLM.from_pretrained(transformer_name, config=config)

# Tokenize the texts in the dataset:

# In[5]:
def tokenize(batch):
    # get source sentences and prepend task prefix
    sources = [x[source_lang] for x in batch["translation"]]
    sources = [task_prefix + x for x in sources]
    # tokenize source sentences
    output = tokenizer(
        sources,
        max_length=max_source_length,
        truncation=True,
    )
    # get target sentences
    targets = [x[target_lang] for x in batch["translation"]]
    # tokenize target sentences
    labels = tokenizer(
        targets,
        max_length=max_target_length,
        truncation=True,
    )
    # add targets to output
    output["labels"] = labels["input_ids"]
    return output

# In[6]:
train_dataset = wmt16['train']
eval_dataset = wmt16['validation']

column_names = train_dataset.column_names

train_dataset = train_dataset.map(
    tokenize,
    batched=True,
    remove_columns=column_names,
)
eval_dataset = eval_dataset.map(
    tokenize,
    batched=True,
    remove_columns=column_names,
)

# In[7]:
train_dataset.to_pandas()

# Create `Trainer` object and train:

# In[8]:
from transformers import DataCollatorForSeq2Seq

data_collator = DataCollatorForSeq2Seq(
    tokenizer,
    model=model,
    label_pad_token_id=label_pad_token_id,
)

# In[9]:
from datasets import load_metric

metric = load_metric('sacrebleu')

def compute_metrics(eval_preds):
    preds, labels = eval_preds
    # get text for predictions
    predictions = tokenizer.batch_decode(
        preds,
        skip_special_tokens=True,
    )
    # replace -100 in labels with pad token
    labels = np.where(
        labels != -100,
        labels,
        tokenizer.pad_token_id,
    )
    # get text for gold labels
    references = tokenizer.batch_decode(
        labels,
        skip_special_tokens=True,
    )
    # metric expects list of references for each prediction
    references = [[ref] for ref in references]
    # compute bleu score
    results = metric.compute(
        predictions=predictions,
        references=references,
    )
    results = {'bleu': results['score']}
    return results

# In[10]:
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir=output_dir,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    save_steps=save_steps,
    predict_with_generate=True,
    evaluation_strategy='steps',
    eval_steps=save_steps,
    learning_rate=learning_rate,
    num_train_epochs=num_train_epochs,
)

# In[11]:
from transformers import Seq2SeqTrainer

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    tokenizer=tokenizer,
    data_collator=data_collator,
    compute_metrics=compute_metrics,
)

# In[12]:
import os
from transformers.trainer_utils import get_last_checkpoint

last_checkpoint = None
if os.path.isdir(output_dir):
    last_checkpoint = get_last_checkpoint(output_dir)
if last_checkpoint is not None:
    print(f'Checkpoint detected, resuming training at {last_checkpoint}.')

# In[13]:
train_result = trainer.train(resume_from_checkpoint=last_checkpoint)
trainer.save_model()

# In[14]:
metrics = train_result.metrics
metrics['train_samples'] = len(train_dataset)
trainer.log_metrics('train', metrics)
trainer.save_metrics('train', metrics)
trainer.save_state()

# Now evaluate:

# In[15]:
# https://discuss.huggingface.co/t/evaluation-results-metric-during-training-is-different-from-the-evaluation-results-at-the-end/15401
metrics = trainer.evaluate(
    max_length=max_target_length,
    num_beams=num_beams,
    metric_key_prefix='eval',
)
metrics['eval_samples'] = len(eval_dataset)
trainer.log_metrics('eval', metrics)
trainer.save_metrics('eval', metrics)

# Create a model card with meta data about this model:

# In[16]:
kwargs = {
    'finetuned_from': transformer_name,
    'tasks': 'translation',
    'dataset_tags': dataset_name,
    'dataset_args': dataset_config_name,
    'dataset': f'{dataset_name} {dataset_config_name}',
    'language': [source_lang, target_lang],
}
trainer.create_model_card(**kwargs)

# In[ ]:
3,847
4,075
10
chap15-11
chap15-11
15 Implementing Encoder-decoder Methods In this chapter we implement a machine translation application as an example of an encoder-decoder task. In particular, we build on pre-trained encoder-decoder transformer models, which exist in the Hugging Face library for a wide variety of language pairs. We first show how to use one of these models out-of-the-box to perform translation for one of the language pairs it has been exposed to during pre-training: English to Romanian. Afterwards, we fine-tune the model to a new language combination that is has not seen before: Romanian to English. In both use cases, we use the T5 encoder-decoder model, which has been pre-trained for several tasks, including machine translation (Raffel et al., 2020). Please see Chapter 16 for a description of T5’s pre-training process. The data for this task comes from the WMT 2016 dataset (Bojar et al., 2016), which consists of English sentences aligned pairwise to German, Czech, Russian, Finnish, Romanian, and Turkish. In this chapter we only use the English-Romanian texts (in both directions). 15.1 Translating English to Romanian As a first example, we use T5 to translate from English to Romanian, which is one of the language pairs it has been exposed to during pretraining. The code discussed in this section is available in the notebook chap15_translation_en_to_ro. Even though in this exercise we are not fine-tuning the model, we still need to define a few hyper parameters to frame the task and help the model understand how to work with the data: The above settings indicate that we use the t5-small model, a smaller T5 variant, to minimize the amount of memory required. The source_lang 212 15.1 Translating English to Romanian 213 and target_lang variables define the direction of translation, i.e., from English to Romanian. To keep our computing requirements small, we limit the length of our input and output. That is, English text longer than max_source_length tokens will be truncated. Further, we limit our generated Romanian text to max_target_length. We chose a maximum target length of 128 tokens to limit the computational cost incurred during text generation (recall that the text is generated one token at a time). The T5 models are trained to support multiple tasks such as translation and summarization (please see Chapter 16 for details). Thus, during training and inference, the user must specify which task the model should perform using a text prefix. Here we use the prefix "translate English to Romanian: " to indicate that the input text is in English and should be translated to Romanian. Next, we load the model and the corresponding tokenizer, and move them to the GPU if one is available: We use the datasets library to load our translation dataset. Note that the first time one calls load_dataset() the dataset will be downloaded automatically from the Hugging Face repository.1 The load_dataset() function takes a dataset name and configuration, which in our case are wmt16 and ro-en, respectively. Since in this example we are only evaluating the model, we only load the test partition (or split) of the dataset: The dataset consists of a single column called translation. Each element in this column is a dictionary that contains the aligned pair. The dictionary keys are the abbreviated language names and the values are the corresponding sentences. An example of one of these dictionaries is shown below: We encapsulate the logic for translating the English text into Romanian in a function called translate(). 
Inside this function, for a batch of aligned pairs, we select the English sentence as our input, and prepend the task prefix. Then we tokenize these inputs, including the prefix, specifying that sentences longer than max_source_length should be truncated, the batch should be padded, and the tokenizer should return PyTorch tensors. Once the tokenizer output has been moved to the GPU, we pass it to the model’s generate() method. This is the first time we have seen this method, because only decoder and encoder-decoder models support it. This method generates an output sequence by predicting one token 1 https://huggingface.co/datasets/wmt16 214 Implementing Encoder-decoder Methods at a time, stopping when either the end-of-sequence token is produced or when the sequence reaches a maximum length. Several generation techniques are supported, such as beam search, in which several alternate translations are maintained by the model so that it is able to select an overall best translation from several options. For efficiency purposes, we use a greedy approach, which chooses the best token at each step of the generation. This is equivalent to using a beam search with a beam of size one. Since the model generates its predictions as a sequence of token ids, we need to convert them back into the corresponding tokens to be able to read the translated text. We do this using the tokenizer’s batch_decode() method. Finally, we return the gold and predicted Romanian sentences in a dictionary: Next, we apply our translate() function to our Dataset to translate all the sentences: reference Șeful ONU declară că nu există soluții militar... Șeful ONU a solicitat din nou tuturor părților... Ban și-a exprimat regretul că divizările în co... Nu sunt bani puțini. La sfârșitul mandatului voi face un raport cu ... "Să spună un parlamentar că nu-i ajung banii e... 1999 rows × 2 columns prediction eful ONU declară că nu există o soluţie milita... eful U.N. a cerut din nou tuturor partidelor, ... El şi-a exprimat regretul că diviziunile din c... Banii sunt suficienţi. La sfârşitul biroului voi raporta tot ceea ce ... "A spune că un parlamentar nu are suficienţi b... 1994 1995 1996 1997 1998 0 1 2 3 4 ... Secretarul General Ban Ki-moon afirmă că răspu... Secretarul General Ban Ki-moon declară că răsp... Ban a declarat miercuri în cadrul unei conferi... Ban a declarat la o conferinţă de presă susţin... ... ... Uneori mi-e rușine să ridic banii de la casierie. Uneori mi-e ruşine să iau banii de la biroul c... S-a întâmplat să ridic într-o lună și 30.000 d... Într-o lună am adunat 30 000 de lei cu ramburs... We evaluate the quality of these translations using the BLEU metric, which we introduced in Chapter 14. To this end, we load an existing implementation of BLEU from the datasets library as a Metric object.2 Metric objects have a method called add(), which is used to accumulate the predictions and gold labels, one example at a time. After accumulating all examples, the compute() method returns the results of the evaluation. Note that for each predicted sentence, BLEU expects a list of reference sentences (as there are often many correct ways of translat- 2 https://huggingface.co/docs/datasets/v2.4.0/en/package_reference/main_ classes#datasets. Metric 15.2 Implementation of Greedy Generation 215 ing a given text). Since we only have one reference, we wrap it in a list before passing it to the metric: The score corresponds to the BLEU score. The rest of the items correspond to the components required to compute the score. 
That is, the counts, totals, and precisions correspond to the counts, totals, and precisions for 1-, 2-, 3-, and 4-grams. The bp is the brevity penalty. The sys_len and ref_len correspond to the predictions and reference lengths. The above BLEU score of 25.2% is slightly lower than the state of the art, but we are being penalized by the peculiarities of diacritic usage in Romanian characters. For example, the letters ș and ț (corresponding to the sounds sh and ts in English) are usually spelled with a comma below the characters s and t, which is the standard imposed by the Romanian Academy. However, in “the wild” these characters are often written using a cedilla instead of a comma, e.g., ţ instead of ț (or, using the names of these Unicode characters, LATIN SMALL LETTER T WITH CEDILLA instead of LATIN SMALL LETTER T WITH COMMA BELOW). Further, some of these characters with diacritics are often omitted altogether in the T5 output. The T5 output below contains an example for each of these two situations (e.g., soluţi(e) instead of soluți(i), and eful instead of Șeful): To avoid being penalized at scoring time for these arbitrary discrepancies, post-processing scripts are sometimes used to normalize diacritic usage.3 Usage of such post-processing scripts can improve the BLEU score substantially. However, this is beyond the scope of this chapter. 15.2 Implementation of Greedy Generation To gain a better intuition of how the encoder-decoder model generates its output sequence, we show below an implementation of the greedy version of the generate() method used above. This function takes as an argument a single English text (i.e., no batching) and returns the corresponding Romanian text: This function interacts directly with the encoder and decoder components of the T5 model, so we must construct the input for both. The encoder’s input is constructed by prepending the task prefix to the English text and tokenizing it. On the other hand, the decoder’s input is constructed incrementally by accumulating the tokens predicted so far 3 https://github.com/huggingface/transformers/blob/main/examples/legacy/ seq2seq/romanian_postprocessing.md 216 Implementing Encoder-decoder Methods in order to predict the next token in the sequence. At the beginning, before any tokens are predicted, the decoder’s input is initialized with a single token that corresponds to the beginning of the sequence. We retrieve this token, called decoder_start_token_id, from the model’s configuration object. The tokens are predicted one at a time, until the model produces eos_token_id, which indicates that the sequence is finished. However, in case the model does not produce this end-of-sequence token within a reasonable number of steps, we also enforce a maximum number of predicted tokens, determined by the max_target_length parameter we defined previously. The T5 model’s forward() method, called indirectly through its __call__()) method, takes the inputs for both the encoder and the decoder. The output returned by this method corresponds to all the tokens in the decoder’s input plus an extra one: the newly predicted token. To select the best prediction, we retrieve the logits from the output and select the logits corresponding to the last token in the sequence (recall that the output shape is (batch size, sequence length, vocabulary size)). From these selected logits, we use the argmax() to select the token id corresponding to the highest-scoring vocabulary item. 
We append this new token id to the decoder’s input, and repeat the process until we encounter the end-of-sequence token or the decoded text reaches the maximum length. Once we are finished generating token ids, we retrieve the corresponding text by calling the tokenizer’s decode() method. This method is identical to the batch_decode() method we used previously, except that it only decodes a single example. Below is an usage example for the greedy_translation() function: 15.3 Fine-tuning Romanian to English Translation In this section, we fine-tune a T5 model on the translation of Romanian to English, a language pair that was not included in the T5 pre-training. To confirm that this data was not included in pre-training, we evaluated the performance of the vanilla t5-small model on the translation from Romanian to English using code equivalent to the code discussed in the previous section (see the chap15_translation_ro_to_en notebook). The resulting BLEU score was only 3.2%, which is substantially lower than the score we obtained when translating English to Romanian (25.2%). 15.3 Fine-tuning Romanian to English Translation 217 Note that the transformers library includes scripts to fine-tune a translation model directly from the command line.4 For didactic purposes, we will not use these scripts in this section, but instead write the fine-tuning code explicitly. For this exercise, we continue using the WMT16 dataset, but this time
we load the train and validation splits. We employ the same t5-small model that we used previously. The code from the last section to load
the model, tokenizer, and dataset does not need to change for this use-
case, so we do not repeat it here. However, as before, the complete code is available in a Jupyter notebook (chap15_translation_ro_to_en_finetune). We begin by tokenizing the source (Romanian) and target language (English) texts. As in the last section, we need to prepend the task prefix to the source texts prior to tokenizing. This time, since we are translating in the opposite direction, we use the prefix "translate Romanian to English: ", and we prepend it to the Romanian text. Each call to the tokenizer with a batch of texts produces input_ids and an attention_mask. This output is what we need for the Romanian text, which will serve as the input to the model. To generate the labels, i.e., the correct translated tokens, we use the input_ids corresponding to the English text. Recall that "labels" is the default key name expected by trainers in Hugging Face. We apply our tokenize() function to both the train and validation splits: 4 https://github.com/huggingface/transformers/tree/main/examples/pytorch/ translation 218 Implementing Encoder-decoder Methods input_ids [13959, 3871, 29, 12, 1566, 10, 4961, 106, 204... [13959, 3871, 29, 12, 1566, 10, 4961, 106, 204... [13959, 3871, 29, 12, 1566, 10, 374, 6225, 49,... [13959, 3871, 29, 12, 1566, 10, 4540, 4031, 9,... [13959, 3871, 29, 12, 1566, 10, 2262, 900, 17,... [13959, 3871, 29, 12, 1566, 10, 18420, 83, 362... attention_mask labels [19428, 13, 12876, 10, 217, 13687, 7, 1] [19428, 13, 12876, 10, 217, 13687, 7, 1] [11167, 7, 1204, 10, 217, 13687, 7, 1] [4540, 4031, 9, 7, 1672, 7, 2262, 900, 17, 38,... [2262, 900, 17, 641, 65, 46, 3761, 6, 1069, 31... [3625, 32, 5788, 35, 15, 3844, 31, 7, 3, 16143... 0 1 2 3 4 ... 610315 610316 610317 610318 610319 [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... [13959, 3871, 29, 12, 1566, 10, 5085, [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 5840, 49... 1, 1, 1, ... [13959, 3871, 29, 12, 1566, 10, 781, [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 8750, 9, ... 1, 1, 1, ... ... ... [13959, 3871, 29, 12, 1566, 10, 2364, [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 4540, 40... 1, 1, 1, ... 610320 rows × 3 columns [13959, 3871, 29, 12, 1566, 10, 3, [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 25882, 759,... 1, 1, 1, ... [2276, 8843, 138, 13, 13687, 7, 13, 1767, 3823... [781, 2420, 13, 17500, 10, 217, 13687, 7, 1] [242, 4540, 4031, 9, 7, 6, 8, 516, 65, 66, 8, ... [9810, 157, 31, 7, 516, 92, 3088, 21, 46, 3839... ... Recall that in order to construct a trainer, we need a data collator for batching, a function to compute the metrics of interest, and a TrainingArguments object. In this section, we use a data collator called DataCollatorForSeq2Seq, which is included in the transformers library specifically for sequence-to-sequence models. The collator pads the batches using the label_pad_token_id, which we have set to −100, as we did in Chapter 13 (this is the default ignore_index value used by CrossEntropyLoss): The compute_metrics() function computes the BLEU score. It uses the tokenizer to decode the token ids into text, for both the predicted and gold labels, ignoring padding: We use the Seq2SeqTrainingArguments class, which adds the predict_with_generate parameter to the regular TrainingArguments class. This is needed to in-
We use the Seq2SeqTrainingArguments class, which adds the predict_with_generate parameter to the regular TrainingArguments class. This is needed to indicate that the trainer should use the generate() method for inference in order to compute the metrics (BLEU in this case).

Finally, we construct the trainer using the Seq2SeqTrainer class, which is a subclass of Trainer that adds the ability to compute scores such as BLEU during training by calling generate() during evaluation:
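Both the training arguments and the trainer construction are reproduced below from the chapter notebook (the hyper-parameters such as batch_size, save_steps, learning_rate, and output_dir are set in its initialization cells):

from transformers import Seq2SeqTrainingArguments, Seq2SeqTrainer

training_args = Seq2SeqTrainingArguments(
    output_dir=output_dir,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    save_steps=save_steps,
    predict_with_generate=True,
    evaluation_strategy='steps',
    eval_steps=save_steps,
    learning_rate=learning_rate,
    num_train_epochs=num_train_epochs,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    tokenizer=tokenizer,
    data_collator=data_collator,
    compute_metrics=compute_metrics,
)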
Fine-tuning a translation model takes considerably longer than training or fine-tuning the models we have developed so far in this book. To account for this, here we add support for resuming training from a checkpoint, i.e., a model that was saved after training on a number of examples. Similar to how one can resume a video game, this allows one to pick up from the last “save point,” in case training was interrupted and needs to be resumed. (The code for this step and for the remaining steps in this section appears in the complete notebook listing at the end of the chapter.)

When calling the trainer’s train() method, we either provide a model checkpoint or None. In the former case, the trainer continues training from the provided checkpoint; in the latter, it starts training from scratch. Once training has completed, we save the trained model and tokenizer into the output directory using the trainer’s save_model() method.

We then compute and save the metrics corresponding to the training partition. This is not required, but it is helpful to keep a record of the model’s performance on the training data. Note that the metrics do not automatically include the number of examples in the training partition, so we add it explicitly.

Next, we evaluate our final model on the validation data and save the corresponding metrics. These metrics indicate that our BLEU score on the validation data is 35.2%, which is evidence that fine-tuning has helped dramatically.

Lastly, we save a model card into our output directory. A model card is akin to an automatically-generated README file that includes information about the model used, the data, the settings used, and the performance throughout the training process. This file is helpful for reproducibility, as it contains all of this key information in one place. These cards are often uploaded to the Hugging Face Hub together with the model itself.5

5 We do not discuss the model uploading process here. Please see the documentation on model sharing at: https://huggingface.co/docs/transformers/v4.14.1/model_sharing.

15.4 Using a Previously Saved Model

Models that have been saved locally can be loaded using the same from_pretrained() methods we have used before. In particular, instead of providing a model name, we provide the path to the local directory where the model is stored, and we set the local_files_only parameter to indicate that we want to load the model from the local file system rather than downloading it from the Hugging Face Hub (make sure you use an output directory that is valid on your machine!).

Once our fine-tuned model is loaded, we use it the same way as before. That is, we use the translate() function to generate translations for our test partition, and we then use the BLEU metric to score this output. From this metric, we obtain a final BLEU score of 33.4%, which is markedly better than our initial score (i.e., without fine-tuning) of 3.2%! The code corresponding to this section is available in the notebook chap15_translation_ro_to_en_finetuned.
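As an illustration, here is a minimal sketch of this loading step. It assumes the same output_dir path used during fine-tuning, and it omits the translate() helper and the BLEU evaluation discussed earlier in the chapter:

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# directory where trainer.save_model() stored the fine-tuned model and tokenizer;
# adjust this to a path that is valid on your machine
output_dir = '/media/data2/t5-translation-example'

# load the fine-tuned model and tokenizer from the local file system,
# without contacting the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained(output_dir, local_files_only=True)
model = AutoModelForSeq2SeqLM.from_pretrained(output_dir, local_files_only=True)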
15.5 Summary

In this chapter we used a complete encoder-decoder transformer network to implement a machine translation application. Importantly, transformers with a decoder component have a generate() method that simplifies the generation process and provides multiple options for decoding. We encourage you to explore these options! For example, try comparing the quality of the output with the resources required to produce it (e.g., runtime overhead) as the size of the search beam increases. Additionally, we saw how to fine-tune an encoder-decoder model on a new language pair that it had not seen during pre-training. This exercise included using checkpoints to support resuming training in case of unexpected interruptions, saving our fine-tuned model, and loading it for later use.
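As a starting point for this exploration, the following sketch (our own illustration, not part of the chapter notebook) assumes the fine-tuned model, tokenizer, task_prefix, and max_target_length from this chapter, and uses a made-up Romanian sentence. It reports the wall-clock time and the output produced for several beam sizes:

import time

# hypothetical Romanian input sentence, prefixed with the task prefix
text = task_prefix + 'Acesta este un exemplu.'
inputs = tokenizer(text, return_tensors='pt')

for beams in [1, 2, 4, 8]:
    start = time.perf_counter()
    # decode with the given beam size
    output_ids = model.generate(
        **inputs,
        num_beams=beams,
        max_length=max_target_length,
    )
    elapsed = time.perf_counter() - start
    translation = tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0]
    print(f'num_beams={beams}  time={elapsed:.2f}s  translation={translation}')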
#!/usr/bin/env python
# coding: utf-8

# # Machine Translation from Ro to En
# # Using the T5 Transformer with Fine-tuning

# Some initialization:

# In[1]:

import torch
import numpy as np
from transformers import set_seed

# random seed
seed = 42

# set random seed
if seed is not None:
    print(f'random seed: {seed}')
    set_seed(seed)

# In[2]:

transformer_name = 't5-small'
dataset_name = 'wmt16'
dataset_config_name = 'ro-en'
source_lang = 'ro'
target_lang = 'en'
max_source_length = 1024
max_target_length = 128
task_prefix = 'translate Romanian to English: '
batch_size = 4
label_pad_token_id = -100
save_steps = 25_000
num_beams = 1
learning_rate = 1e-3
num_train_epochs = 3
output_dir = '/media/data2/t5-translation-example'  # make sure this is a valid path on your machine!

# Load dataset from HuggingFace:

# In[3]:

from datasets import load_dataset

wmt16 = load_dataset(dataset_name, dataset_config_name)

# Load tokenizer and pre-trained model:

# In[4]:

from transformers import AutoConfig, AutoTokenizer, AutoModelForSeq2SeqLM

config = AutoConfig.from_pretrained(transformer_name)
tokenizer = AutoTokenizer.from_pretrained(transformer_name)
model = AutoModelForSeq2SeqLM.from_pretrained(transformer_name, config=config)

# Tokenize the texts in the dataset:

# In[5]:

def tokenize(batch):
    # get source sentences and prepend task prefix
    sources = [x[source_lang] for x in batch["translation"]]
    sources = [task_prefix + x for x in sources]
    # tokenize source sentences
    output = tokenizer(
        sources,
        max_length=max_source_length,
        truncation=True,
    )
    # get target sentences
    targets = [x[target_lang] for x in batch["translation"]]
    # tokenize target sentences
    labels = tokenizer(
        targets,
        max_length=max_target_length,
        truncation=True,
    )
    # add targets to output
    output["labels"] = labels["input_ids"]
    return output

# In[6]:

train_dataset = wmt16['train']
eval_dataset = wmt16['validation']

column_names = train_dataset.column_names

train_dataset = train_dataset.map(
    tokenize,
    batched=True,
    remove_columns=column_names,
)
eval_dataset = eval_dataset.map(
    tokenize,
    batched=True,
    remove_columns=column_names,
)

# In[7]:

train_dataset.to_pandas()

# Create `Trainer` object and train:

# In[8]:

from transformers import DataCollatorForSeq2Seq

data_collator = DataCollatorForSeq2Seq(
    tokenizer,
    model=model,
    label_pad_token_id=label_pad_token_id,
)

# In[9]:

from datasets import load_metric

metric = load_metric('sacrebleu')

def compute_metrics(eval_preds):
    preds, labels = eval_preds
    # get text for predictions
    predictions = tokenizer.batch_decode(
        preds,
        skip_special_tokens=True,
    )
    # replace -100 in labels with pad token
    labels = np.where(
        labels != -100,
        labels,
        tokenizer.pad_token_id,
    )
    # get text for gold labels
    references = tokenizer.batch_decode(
        labels,
        skip_special_tokens=True,
    )
    # metric expects list of references for each prediction
    references = [[ref] for ref in references]
    # compute bleu score
    results = metric.compute(
        predictions=predictions,
        references=references,
    )
    results = {'bleu': results['score']}
    return results

# In[10]:

from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir=output_dir,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    save_steps=save_steps,
    predict_with_generate=True,
    evaluation_strategy='steps',
    eval_steps=save_steps,
    learning_rate=learning_rate,
    num_train_epochs=num_train_epochs,
)

# In[11]:

from transformers import Seq2SeqTrainer

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    tokenizer=tokenizer,
    data_collator=data_collator,
    compute_metrics=compute_metrics,
)

# In[12]:

import os
from transformers.trainer_utils import get_last_checkpoint

last_checkpoint = None
if os.path.isdir(output_dir):
    last_checkpoint = get_last_checkpoint(output_dir)
    if last_checkpoint is not None:
        print(f'Checkpoint detected, resuming training at {last_checkpoint}.')

# In[13]:

train_result = trainer.train(resume_from_checkpoint=last_checkpoint)
trainer.save_model()

# In[14]:

metrics = train_result.metrics
metrics['train_samples'] = len(train_dataset)
trainer.log_metrics('train', metrics)
trainer.save_metrics('train', metrics)
trainer.save_state()

# Now evaluate:

# In[15]:

# https://discuss.huggingface.co/t/evaluation-results-metric-during-training-is-different-from-the-evaluation-results-at-the-end/15401
metrics = trainer.evaluate(
    max_length=max_target_length,
    num_beams=num_beams,
    metric_key_prefix='eval',
)
metrics['eval_samples'] = len(eval_dataset)
trainer.log_metrics('eval', metrics)
trainer.save_metrics('eval', metrics)

# Create a model card with meta data about this model:

# In[16]:

kwargs = {
    'finetuned_from': transformer_name,
    'tasks': 'translation',
    'dataset_tags': dataset_name,
    'dataset_args': dataset_config_name,
    'dataset': f'{dataset_name} {dataset_config_name}',
    'language': [source_lang, target_lang],
}
trainer.create_model_card(**kwargs)

# In[ ]: