Dataset schema: id (string, lengths 8 to 9); chunk_id (string, lengths 8 to 9); text (string, 6 classes); start_text (int64, 235 to 36k); stop_text (int64, 559 to 36.1k); code (string, 14 classes); start_code (int64, 356 to 8.04k); stop_code (int64, 386 to 8.58k); __index_level_0__ (int64, 0 to 35).
id: chap15-12, chunk_id: chap15-12
15 Implementing Encoder-decoder Methods

In this chapter we implement a machine translation application as an example of an encoder-decoder task. In particular, we build on pre-trained encoder-decoder transformer models, which exist in the Hugging Face library for a wide variety of language pairs. We first show how to use one of these models out-of-the-box to perform translation for one of the language pairs it has been exposed to during pre-training: English to Romanian. Afterwards, we fine-tune the model on a new language combination that it has not seen before: Romanian to English. In both use cases, we use the T5 encoder-decoder model, which has been pre-trained for several tasks, including machine translation (Raffel et al., 2020). Please see Chapter 16 for a description of T5's pre-training process. The data for this task comes from the WMT 2016 dataset (Bojar et al., 2016), which consists of English sentences aligned pairwise to German, Czech, Russian, Finnish, Romanian, and Turkish. In this chapter we only use the English-Romanian texts (in both directions).

15.1 Translating English to Romanian

As a first example, we use T5 to translate from English to Romanian, which is one of the language pairs it has been exposed to during pre-training. The code discussed in this section is available in the notebook chap15_translation_en_to_ro. Even though in this exercise we are not fine-tuning the model, we still need to define a few hyperparameters to frame the task and help the model understand how to work with the data (see the sketch at the end of this section). These settings indicate that we use the t5-small model, a smaller T5 variant, to minimize the amount of memory required. The source_lang and target_lang variables define the direction of translation, i.e., from English to Romanian. To keep our computing requirements small, we limit the length of our input and output. That is, English text longer than max_source_length tokens will be truncated. Further, we limit our generated Romanian text to max_target_length. We chose a maximum target length of 128 tokens to limit the computational cost incurred during text generation (recall that the text is generated one token at a time). The T5 models are trained to support multiple tasks such as translation and summarization (please see Chapter 16 for details). Thus, during training and inference, the user must specify which task the model should perform using a text prefix. Here we use the prefix "translate English to Romanian: " to indicate that the input text is in English and should be translated to Romanian.

Next, we load the model and the corresponding tokenizer, and move them to the GPU if one is available. We use the datasets library to load our translation dataset. Note that the first time one calls load_dataset() the dataset will be downloaded automatically from the Hugging Face repository (https://huggingface.co/datasets/wmt16). The load_dataset() function takes a dataset name and configuration, which in our case are wmt16 and ro-en, respectively. Since in this example we are only evaluating the model, we only load the test partition (or split) of the dataset. The dataset consists of a single column called translation. Each element in this column is a dictionary that contains the aligned pair. The dictionary keys are the abbreviated language names and the values are the corresponding sentences. An example of one of these dictionaries is shown below:
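The following sketch condenses the corresponding setup cells of the chap15_translation_en_to_ro notebook, which is reproduced in full later in this document:

import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from datasets import load_dataset

# hyperparameters that frame the task
transformer_name = 't5-small'
source_lang = 'en'
target_lang = 'ro'
max_source_length = 1024
max_target_length = 128
task_prefix = 'translate English to Romanian: '
num_beams = 1
batch_size = 100

# load tokenizer and pre-trained model, moving the model to the GPU if available
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
tokenizer = AutoTokenizer.from_pretrained(transformer_name)
model = AutoModelForSeq2SeqLM.from_pretrained(transformer_name).to(device)

# load only the test split of the WMT16 Romanian-English dataset
test_ds = load_dataset('wmt16', 'ro-en', split='test')

# inspect one aligned pair, a dictionary with 'en' and 'ro' keys
print(test_ds['translation'][0])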
We encapsulate the logic for translating the English text into Romanian in a function called translate(). Inside this function, for a batch of aligned pairs, we select the English sentence as our input, and prepend the task prefix. Then we tokenize these inputs, including the prefix, specifying that sentences longer than max_source_length should be truncated, the batch should be padded, and the tokenizer should return PyTorch tensors. Once the tokenizer output has been moved to the GPU, we pass it to the model's generate() method. This is the first time we have seen this method, because only decoder and encoder-decoder models support it. This method generates an output sequence by predicting one token at a time, stopping when either the end-of-sequence token is produced or when the sequence reaches a maximum length. Several generation techniques are supported, such as beam search, in which several alternate translations are maintained by the model so that it is able to select an overall best translation from several options. For efficiency purposes, we use a greedy approach, which chooses the best token at each step of the generation. This is equivalent to using a beam search with a beam of size one. Since the model generates its predictions as a sequence of token ids, we need to convert them back into the corresponding tokens to be able to read the translated text. We do this using the tokenizer's batch_decode() method. Finally, we return the gold and predicted Romanian sentences in a dictionary.

Next, we apply our translate() function to our Dataset to translate all the sentences. The result is a table with 1,999 rows and two columns, pairing each gold reference with the corresponding prediction. The displayed (truncated) rows include the following pairs:

reference:  Șeful ONU declară că nu există soluții militar...
prediction: eful ONU declară că nu există o soluţie milita...

reference:  Șeful ONU a solicitat din nou tuturor părților...
prediction: eful U.N. a cerut din nou tuturor partidelor, ...

reference:  Ban și-a exprimat regretul că divizările în co...
prediction: El şi-a exprimat regretul că diviziunile din c...

reference:  Secretarul General Ban Ki-moon afirmă că răspu...
prediction: Secretarul General Ban Ki-moon declară că răsp...

reference:  Ban a declarat miercuri în cadrul unei conferi...
prediction: Ban a declarat la o conferinţă de presă susţin...

...

reference:  Uneori mi-e rușine să ridic banii de la casierie.
prediction: Uneori mi-e ruşine să iau banii de la biroul c...

reference:  S-a întâmplat să ridic într-o lună și 30.000 d...
prediction: Într-o lună am adunat 30 000 de lei cu ramburs...

reference:  Nu sunt bani puțini.
prediction: Banii sunt suficienţi.

reference:  La sfârșitul mandatului voi face un raport cu ...
prediction: La sfârşitul biroului voi raporta tot ceea ce ...

reference:  "Să spună un parlamentar că nu-i ajung banii e...
prediction: "A spune că un parlamentar nu are suficienţi b...

(1999 rows × 2 columns)

We evaluate the quality of these translations using the BLEU metric, which we introduced in Chapter 14. To this end, we load an existing implementation of BLEU from the datasets library as a Metric object (https://huggingface.co/docs/datasets/v2.4.0/en/package_reference/main_classes#datasets.Metric). Metric objects have a method called add(), which is used to accumulate the predictions and gold labels, one example at a time. After accumulating all examples, the compute() method returns the results of the evaluation. Note that for each predicted sentence, BLEU expects a list of reference sentences (as there are often many correct ways of translating a given text). Since we only have one reference, we wrap it in a list before passing it to the metric:
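The translate() function, its application to the dataset, and the BLEU evaluation loop described above are sketched below, condensed from the chap15_translation_en_to_ro notebook reproduced later in this document (it continues from the objects defined in the previous sketch):

from datasets import load_metric

def translate(batch):
    # select the English side of each aligned pair and prepend the task prefix
    inputs = [task_prefix + x[source_lang] for x in batch['translation']]
    # tokenize with truncation and padding, returning PyTorch tensors
    encoded = tokenizer(
        inputs,
        max_length=max_source_length,
        truncation=True,
        padding=True,
        return_tensors='pt',
    )
    # greedy generation (beam of size one), capped at max_target_length tokens
    output = model.generate(
        input_ids=encoded.input_ids.to(device),
        attention_mask=encoded.attention_mask.to(device),
        num_beams=num_beams,
        max_length=max_target_length,
    )
    # convert predicted token ids back into text
    decoded = tokenizer.batch_decode(output, skip_special_tokens=True)
    # return gold and predicted Romanian sentences
    return {
        'reference': [x[target_lang] for x in batch['translation']],
        'prediction': decoded,
    }

results = test_ds.map(
    translate,
    batched=True,
    batch_size=batch_size,
    remove_columns=test_ds.column_names,
)

# accumulate examples and compute BLEU; each prediction gets a list of references
metric = load_metric('sacrebleu')
for r in results:
    metric.add(prediction=r['prediction'], reference=[r['reference']])
print(metric.compute())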
The score corresponds to the BLEU score. The rest of the items correspond to the components required to compute the score. That is, the counts, totals, and precisions correspond to the counts, totals, and precisions for 1-, 2-, 3-, and 4-grams. The bp is the brevity penalty. The sys_len and ref_len correspond to the prediction and reference lengths.

The above BLEU score of 25.2% is slightly lower than the state of the art, but we are being penalized by the peculiarities of diacritic usage in Romanian characters. For example, the letters ș and ț (corresponding to the sounds sh and ts in English) are usually spelled with a comma below the characters s and t, which is the standard imposed by the Romanian Academy. However, in "the wild" these characters are often written using a cedilla instead of a comma, e.g., ţ instead of ț (or, using the names of these Unicode characters, LATIN SMALL LETTER T WITH CEDILLA instead of LATIN SMALL LETTER T WITH COMMA BELOW). Further, some of these characters with diacritics are often omitted altogether in the T5 output. The T5 output above contains an example for each of these two situations (e.g., soluţi(e) instead of soluți(i), and eful instead of Șeful). To avoid being penalized at scoring time for these arbitrary discrepancies, post-processing scripts are sometimes used to normalize diacritic usage (see https://github.com/huggingface/transformers/blob/main/examples/legacy/seq2seq/romanian_postprocessing.md). Usage of such post-processing scripts can improve the BLEU score substantially. However, this is beyond the scope of this chapter.

15.2 Implementation of Greedy Generation

To gain a better intuition of how the encoder-decoder model generates its output sequence, we show below an implementation of the greedy version of the generate() method used above. This function takes as an argument a single English text (i.e., no batching) and returns the corresponding Romanian text:
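A sketch of this greedy decoder, condensed from the greedy_translation() function in the chap15_translation_en_to_ro notebook included later in this document (it relies on the tokenizer, model, device, and length settings defined earlier):

import torch

def greedy_translation(text):
    # prepend the task prefix and tokenize the single input text
    encoded = tokenizer(
        task_prefix + text,
        max_length=max_source_length,
        truncation=True,
        return_tensors='pt',
    )
    encoder_input_ids = encoded.input_ids.to(device)
    # the decoder input starts with the beginning-of-sequence token
    start = model.config.decoder_start_token_id
    decoder_input_ids = torch.LongTensor([[start]]).to(device)
    # predict one token at a time, up to max_target_length tokens
    for _ in range(max_target_length):
        output = model(encoder_input_ids, decoder_input_ids=decoder_input_ids)
        # logits for the last position in the decoded sequence
        next_token_logits = output.logits[0, -1, :]
        # greedy choice: highest-scoring vocabulary item
        next_token_id = torch.argmax(next_token_logits)
        # append the new token id to the decoder input
        output_id = torch.LongTensor([[next_token_id]]).to(device)
        decoder_input_ids = torch.cat([decoder_input_ids, output_id], dim=-1)
        # stop when the end-of-sequence token is produced
        if next_token_id == tokenizer.eos_token_id:
            break
    # convert the predicted token ids back into text
    return tokenizer.decode(decoder_input_ids[0], skip_special_tokens=True)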
This function interacts directly with the encoder and decoder components of the T5 model, so we must construct the input for both. The encoder's input is constructed by prepending the task prefix to the English text and tokenizing it. On the other hand, the decoder's input is constructed incrementally by accumulating the tokens predicted so far in order to predict the next token in the sequence. At the beginning, before any tokens are predicted, the decoder's input is initialized with a single token that corresponds to the beginning of the sequence. We retrieve this token, called decoder_start_token_id, from the model's configuration object. The tokens are predicted one at a time, until the model produces eos_token_id, which indicates that the sequence is finished. However, in case the model does not produce this end-of-sequence token within a reasonable number of steps, we also enforce a maximum number of predicted tokens, determined by the max_target_length parameter we defined previously. The T5 model's forward() method, called indirectly through its __call__() method, takes the inputs for both the encoder and the decoder. The output returned by this method corresponds to all the tokens in the decoder's input plus an extra one: the newly predicted token. To select the best prediction, we retrieve the logits from the output and select the logits corresponding to the last token in the sequence (recall that the output shape is (batch size, sequence length, vocabulary size)). From these selected logits, we use argmax() to select the token id corresponding to the highest-scoring vocabulary item. We append this new token id to the decoder's input, and repeat the process until we encounter the end-of-sequence token or the decoded text reaches the maximum length. Once we are finished generating token ids, we retrieve the corresponding text by calling the tokenizer's decode() method. This method is identical to the batch_decode() method we used previously, except that it only decodes a single example. Below is a usage example for the greedy_translation() function:
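For example, calling the function on a short English input, as in the notebook:

# returns the Romanian text produced by greedy decoding
greedy_translation("this is a test")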
15.3 Fine-tuning Romanian to English Translation

In this section, we fine-tune a T5 model on the translation of Romanian to English, a language pair that was not included in the T5 pre-training. To confirm that this data was not included in pre-training, we evaluated the performance of the vanilla t5-small model on the translation from Romanian to English using code equivalent to the code discussed in the previous section (see the chap15_translation_ro_to_en notebook). The resulting BLEU score was only 3.2%, which is substantially lower than the score we obtained when translating English to Romanian (25.2%).

Note that the transformers library includes scripts to fine-tune a translation model directly from the command line (https://github.com/huggingface/transformers/tree/main/examples/pytorch/translation). For didactic purposes, we will not use these scripts in this section, but instead write the fine-tuning code explicitly. For this exercise, we continue using the WMT16 dataset, but this time we load the train and validation splits. We employ the same t5-small model that we used previously. The code from the last section to load the model, tokenizer, and dataset does not need to change for this use case, so we do not repeat it here. However, as before, the complete code is available in a Jupyter notebook (chap15_translation_ro_to_en_finetune).

We begin by tokenizing the source (Romanian) and target language (English) texts. As in the last section, we need to prepend the task prefix to the source texts prior to tokenizing. This time, since we are translating in the opposite direction, we use the prefix "translate Romanian to English: ", and we prepend it to the Romanian text. Each call to the tokenizer with a batch of texts produces input_ids and an attention_mask. This output is what we need for the Romanian text, which will serve as the input to the model. To generate the labels, i.e., the correct translated tokens, we use the input_ids corresponding to the English text. Recall that "labels" is the default key name expected by trainers in Hugging Face. We apply our tokenize() function to both the train and validation splits:
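A condensed sketch of tokenize() and its application, based on the chap15_translation_ro_to_en_finetune notebook reproduced later in this document (the tokenizer and the max_source_length/max_target_length settings are the same as before; the language keys 'ro' and 'en' are inlined here for brevity):

from datasets import load_dataset

task_prefix = 'translate Romanian to English: '
wmt16 = load_dataset('wmt16', 'ro-en')

def tokenize(batch):
    # prepend the task prefix to the Romanian source sentences and tokenize them
    sources = [task_prefix + x['ro'] for x in batch['translation']]
    output = tokenizer(sources, max_length=max_source_length, truncation=True)
    # tokenize the English targets; their input_ids become the labels
    targets = [x['en'] for x in batch['translation']]
    labels = tokenizer(targets, max_length=max_target_length, truncation=True)
    output['labels'] = labels['input_ids']
    return output

train_dataset = wmt16['train'].map(
    tokenize,
    batched=True,
    remove_columns=wmt16['train'].column_names,
)
eval_dataset = wmt16['validation'].map(
    tokenize,
    batched=True,
    remove_columns=wmt16['validation'].column_names,
)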
The tokenized train split is a table with 610,320 rows and three columns: input_ids (the token ids of the prefixed Romanian sentences, each beginning with the token ids of the task prefix, i.e., 13959, 3871, 29, 12, 1566, 10, ...), attention_mask (a list of 1s marking the real, non-padded tokens), and labels (the token ids of the corresponding English sentences).

Recall that in order to construct a trainer, we need a data collator for batching, a function to compute the metrics of interest, and a TrainingArguments object. In this section, we use a data collator called DataCollatorForSeq2Seq, which is included in the transformers library specifically for sequence-to-sequence models. The collator pads the batches using the label_pad_token_id, which we have set to −100, as we did in Chapter 13 (this is the default ignore_index value used by CrossEntropyLoss). The compute_metrics() function computes the BLEU score. It uses the tokenizer to decode the token ids into text, for both the predicted and gold labels, ignoring padding:
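Both pieces are sketched below, condensed from the same fine-tuning notebook:

import numpy as np
from datasets import load_metric
from transformers import DataCollatorForSeq2Seq

label_pad_token_id = -100

# collator that pads inputs and pads labels with -100 so they are ignored by the loss
data_collator = DataCollatorForSeq2Seq(
    tokenizer,
    model=model,
    label_pad_token_id=label_pad_token_id,
)

metric = load_metric('sacrebleu')

def compute_metrics(eval_preds):
    preds, labels = eval_preds
    # decode predicted token ids into text
    predictions = tokenizer.batch_decode(preds, skip_special_tokens=True)
    # replace -100 in the labels with the pad token id before decoding
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    references = tokenizer.batch_decode(labels, skip_special_tokens=True)
    # sacrebleu expects a list of references for each prediction
    references = [[ref] for ref in references]
    results = metric.compute(predictions=predictions, references=references)
    return {'bleu': results['score']}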
We use the Seq2SeqTrainingArguments class, which adds the predict_with_generate parameter to the regular TrainingArguments class. This is needed to indicate that the trainer should use the generate() method for inference in order to compute the metrics (BLEU in this case). Finally, we construct the trainer using the Seq2SeqTrainer class, which is a subclass of Trainer that adds the ability to compute scores such as BLEU during training by calling generate() during evaluation.

Fine-tuning a translation model takes considerably longer than training or fine-tuning the models we have developed so far in this book. To account for this, here we add support for resuming training from a checkpoint, i.e., a model that was saved after training on a number of examples. Similar to how one can resume a video game, this allows one to pick up from the last "save point," in case training was interrupted and needs to be resumed. When calling the trainer's train() method, we either provide a model checkpoint or None. In the former case, the trainer will continue training from the provided checkpoint. In the latter case, the trainer will begin training from scratch. Once the training has completed, we save the trained model and tokenizer using the trainer's save_model() method into the output directory:
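A condensed sketch of the training arguments, trainer construction, checkpoint handling, training, and saving, based on the same notebook (the settings values come from its initialization cell):

import os
from transformers import Seq2SeqTrainingArguments, Seq2SeqTrainer
from transformers.trainer_utils import get_last_checkpoint

# settings from the notebook's initialization cell
output_dir = '/media/data2/t5-translation-example'  # make sure this is a valid path on your machine!
batch_size = 4
save_steps = 25_000
learning_rate = 1e-3
num_train_epochs = 3

training_args = Seq2SeqTrainingArguments(
    output_dir=output_dir,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    save_steps=save_steps,
    predict_with_generate=True,
    evaluation_strategy='steps',
    eval_steps=save_steps,
    learning_rate=learning_rate,
    num_train_epochs=num_train_epochs,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    tokenizer=tokenizer,
    data_collator=data_collator,
    compute_metrics=compute_metrics,
)

# resume from the last checkpoint in output_dir, if one exists
last_checkpoint = None
if os.path.isdir(output_dir):
    last_checkpoint = get_last_checkpoint(output_dir)

train_result = trainer.train(resume_from_checkpoint=last_checkpoint)

# save the fine-tuned model and tokenizer
trainer.save_model()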
We then compute and save the metrics corresponding to the training partition. This is not required, but it is helpful to keep a record of the model's performance on the training data. Note that the metrics do not automatically include the number of examples in the training partition, so we add them explicitly. Next, we evaluate our final model on the validation data and save the corresponding metrics. These metrics indicate that our BLEU score on the validation data is 35.2%, which is evidence that fine-tuning has helped dramatically.

Lastly, we save a model card into our output directory. A model card is akin to an automatically-generated README file that includes information about the model used, the data, settings used, and performance throughout the training process. This file is helpful for reproducibility as it contains all of this key information in one place. These cards are often uploaded to the Hugging Face Hub together with the model itself. (We do not discuss the model uploading process here. Please see the documentation on model sharing at https://huggingface.co/docs/transformers/v4.14.1/model_sharing.)

15.4 Using a Previously Saved Model

Models that have been saved locally can be loaded using the same from_pretrained() methods we have used before. In particular, instead of providing a model name, we provide the path to the local directory where the model is stored, using the local_files_only parameter to indicate that we want to load the model from the local file system instead of downloading it from the Hugging Face Hub (make sure you use an output directory that is valid on your machine!). Once our fine-tuned model is loaded, we use it the same way as before. That is, we use the translate() function to generate translations for our test partition. Then we use the BLEU metric to score this output. From this metric, we obtain the final BLEU score of 33.4%, which is markedly better than our initial score (i.e., without fine-tuning) of 3.2%! The code corresponding to this section is available in the notebook chap15_translation_ro_to_en_finetuned.

15.5 Summary

In this chapter we used a complete encoder-decoder transformer network to implement a machine translation application. Importantly, transformers with a decoder component have a generate() method that simplifies the generation process and provides multiple options for decoding. We encourage you to explore these options! For example, try comparing the quality of the output with the resources required to produce it (e.g., runtime overhead) when the size of the search beam increases. Additionally, we saw how to fine-tune an encoder-decoder model on a new language pair that it has not seen during its pre-training. This exercise included using checkpoints to support resuming training in case of unexpected interruptions, saving our fine-tuned model, and loading it for later use.
start_text: 17,463, stop_text: 17,555
#!/usr/bin/env python
# coding: utf-8

# # Machine Translation from Ro to En
# # Using the T5 Transformer with Fine-tuning

# Some initialization:

# In[1]:

import torch
import numpy as np
from transformers import set_seed

# random seed
seed = 42

# set random seed
if seed is not None:
    print(f'random seed: {seed}')
    set_seed(seed)

# In[2]:

transformer_name = 't5-small'
dataset_name = 'wmt16'
dataset_config_name = 'ro-en'
source_lang = 'ro'
target_lang = 'en'
max_source_length = 1024
max_target_length = 128
task_prefix = 'translate Romanian to English: '
batch_size = 4
label_pad_token_id = -100
save_steps = 25_000
num_beams = 1
learning_rate = 1e-3
num_train_epochs = 3
output_dir = '/media/data2/t5-translation-example'  # make sure this is a valid path on your machine!

# Load dataset from HuggingFace:

# In[3]:

from datasets import load_dataset

wmt16 = load_dataset(dataset_name, dataset_config_name)

# Load tokenizer and pre-trained model:

# In[4]:

from transformers import AutoConfig, AutoTokenizer, AutoModelForSeq2SeqLM

config = AutoConfig.from_pretrained(transformer_name)
tokenizer = AutoTokenizer.from_pretrained(transformer_name)
model = AutoModelForSeq2SeqLM.from_pretrained(transformer_name, config=config)

# Tokenize the texts in the dataset:

# In[5]:

def tokenize(batch):
    # get source sentences and prepend task prefix
    sources = [x[source_lang] for x in batch["translation"]]
    sources = [task_prefix + x for x in sources]
    # tokenize source sentences
    output = tokenizer(
        sources,
        max_length=max_source_length,
        truncation=True,
    )
    # get target sentences
    targets = [x[target_lang] for x in batch["translation"]]
    # tokenize target sentences
    labels = tokenizer(
        targets,
        max_length=max_target_length,
        truncation=True,
    )
    # add targets to output
    output["labels"] = labels["input_ids"]
    return output

# In[6]:

train_dataset = wmt16['train']
eval_dataset = wmt16['validation']
column_names = train_dataset.column_names
train_dataset = train_dataset.map(
    tokenize,
    batched=True,
    remove_columns=column_names,
)
eval_dataset = eval_dataset.map(
    tokenize,
    batched=True,
    remove_columns=column_names,
)

# In[7]:

train_dataset.to_pandas()

# Create `Trainer` object and train:

# In[8]:

from transformers import DataCollatorForSeq2Seq

data_collator = DataCollatorForSeq2Seq(
    tokenizer,
    model=model,
    label_pad_token_id=label_pad_token_id,
)

# In[9]:

from datasets import load_metric

metric = load_metric('sacrebleu')

def compute_metrics(eval_preds):
    preds, labels = eval_preds
    # get text for predictions
    predictions = tokenizer.batch_decode(
        preds,
        skip_special_tokens=True,
    )
    # replace -100 in labels with pad token
    labels = np.where(
        labels != -100,
        labels,
        tokenizer.pad_token_id,
    )
    # get text for gold labels
    references = tokenizer.batch_decode(
        labels,
        skip_special_tokens=True,
    )
    # metric expects list of references for each prediction
    references = [[ref] for ref in references]
    # compute bleu score
    results = metric.compute(
        predictions=predictions,
        references=references,
    )
    results = {'bleu': results['score']}
    return results

# In[10]:

from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir=output_dir,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    save_steps=save_steps,
    predict_with_generate=True,
    evaluation_strategy='steps',
    eval_steps=save_steps,
    learning_rate=learning_rate,
    num_train_epochs=num_train_epochs,
)

# In[11]:

from transformers import Seq2SeqTrainer

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    tokenizer=tokenizer,
    data_collator=data_collator,
    compute_metrics=compute_metrics,
)

# In[12]:

import os
from transformers.trainer_utils import get_last_checkpoint

last_checkpoint = None
if os.path.isdir(output_dir):
    last_checkpoint = get_last_checkpoint(output_dir)
if last_checkpoint is not None:
    print(f'Checkpoint detected, resuming training at {last_checkpoint}.')

# In[13]:

train_result = trainer.train(resume_from_checkpoint=last_checkpoint)
trainer.save_model()

# In[14]:

metrics = train_result.metrics
metrics['train_samples'] = len(train_dataset)
trainer.log_metrics('train', metrics)
trainer.save_metrics('train', metrics)
trainer.save_state()

# Now evaluate:

# In[15]:

# https://discuss.huggingface.co/t/evaluation-results-metric-during-training-is-different-from-the-evaluation-results-at-the-end/15401
metrics = trainer.evaluate(
    max_length=max_target_length,
    num_beams=num_beams,
    metric_key_prefix='eval',
)
metrics['eval_samples'] = len(eval_dataset)
trainer.log_metrics('eval', metrics)
trainer.save_metrics('eval', metrics)

# Create a model card with meta data about this model:

# In[16]:

kwargs = {
    'finetuned_from': transformer_name,
    'tasks': 'translation',
    'dataset_tags': dataset_name,
    'dataset_args': dataset_config_name,
    'dataset': f'{dataset_name} {dataset_config_name}',
    'language': [source_lang, target_lang],
}
trainer.create_model_card(**kwargs)
start_code: 4,835, stop_code: 4,954, __index_level_0__: 12
id: chap15-13, chunk_id: chap15-13
start_text: 3,699, stop_text: 3,905
#!/usr/bin/env python
# coding: utf-8

# # Machine Translation from English (En) to Romanian (Ro)
# # Using the T5 Transformer without Fine-tuning

# Some initialization:

# In[1]:

import torch
import numpy as np
from transformers import set_seed

# set to True to use the gpu (if there is one available)
use_gpu = True

# select device
device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu')
print(f'device: {device.type}')

# random seed
seed = 42

# set random seed
if seed is not None:
    print(f'random seed: {seed}')
    set_seed(seed)

# In[2]:

transformer_name = 't5-small'
source_lang = 'en'
target_lang = 'ro'
max_source_length = 1024
max_target_length = 128
task_prefix = 'translate English to Romanian: '
num_beams = 1
batch_size = 100

# Load tokenizer and pre-trained model:

# In[3]:

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained(transformer_name)
model = AutoModelForSeq2SeqLM.from_pretrained(transformer_name)
model = model.to(device)

# Load dataset from HuggingFace:

# In[4]:

from datasets import load_dataset

test_ds = load_dataset('wmt16', 'ro-en', split='test')
test_ds

# In[5]:

test_ds['translation'][0]

# Implement the `translate` method and apply on this dataset:

# In[6]:

def translate(batch):
    # get source language examples and prepend task prefix
    inputs = [x[source_lang] for x in batch["translation"]]
    inputs = [task_prefix + x for x in inputs]
    # tokenize inputs
    encoded = tokenizer(
        inputs,
        max_length=max_source_length,
        truncation=True,
        padding=True,
        return_tensors='pt',
    )
    # move data to gpu if needed
    input_ids = encoded.input_ids.to(device)
    attention_mask = encoded.attention_mask.to(device)
    # generate translated sentences
    output = model.generate(
        input_ids=input_ids,
        attention_mask=attention_mask,
        num_beams=num_beams,
        max_length=max_target_length,
    )
    # generate predicted sentences from predicted token ids
    decoded = tokenizer.batch_decode(
        output,
        skip_special_tokens=True,
    )
    # get gold sentences in target language
    targets = [x[target_lang] for x in batch["translation"]]
    # return gold and predicted sentences
    return {
        'reference': targets,
        'prediction': decoded,
    }

# In[7]:

results = test_ds.map(
    translate,
    batched=True,
    batch_size=batch_size,
    remove_columns=test_ds.column_names,
)
results.to_pandas()

# Now evaluate the quality of translations using the BLEU metric:

# In[8]:

from datasets import load_metric

metric = load_metric('sacrebleu')
for r in results:
    prediction = r['prediction']
    reference = [r['reference']]
    metric.add(prediction=prediction, reference=reference)
metric.compute()

# An example of greedy decoding for individual texts:

# In[9]:

def greedy_translation(text):
    # prepend task prefix
    text = task_prefix + text
    # tokenize input
    encoded = tokenizer(
        text,
        max_length=max_source_length,
        truncation=True,
        return_tensors='pt',
    )
    # encoder input ids
    encoder_input_ids = encoded.input_ids.to(device)
    # decoder input ids, initialized with start token id
    start = model.config.decoder_start_token_id
    decoder_input_ids = torch.LongTensor([[start]]).to(device)
    # generate tokens, one at a time
    for _ in range(max_target_length):
        # get model predictions
        output = model(
            encoder_input_ids,
            decoder_input_ids=decoder_input_ids,
        )
        # get logits for last token
        next_token_logits = output.logits[0, -1, :]
        # select most probable token
        next_token_id = torch.argmax(next_token_logits)
        # append new token to decoder_input_ids
        output_id = torch.LongTensor([[next_token_id]]).to(device)
        decoder_input_ids = torch.cat([decoder_input_ids, output_id], dim=-1)
        # if predicted token is the end of sequence, stop iterating
        if next_token_id == tokenizer.eos_token_id:
            break
    # return text corresponding to predicted token ids
    return tokenizer.decode(decoder_input_ids[0], skip_special_tokens=True)

# In[10]:

greedy_translation("this is a test")
start_code: 1,522, stop_code: 1,683, __index_level_0__: 13
id: chap15-14, chunk_id: chap15-14
15 Implementing Encoder-decoder Methods In this chapter we implement a machine translation application as an example of an encoder-decoder task. In particular, we build on pre-trained encoder-decoder transformer models, which exist in the Hugging Face library for a wide variety of language pairs. We first show how to use one of these models out-of-the-box to perform translation for one of the language pairs it has been exposed to during pre-training: English to Romanian. Afterwards, we fine-tune the model to a new language combination that is has not seen before: Romanian to English. In both use cases, we use the T5 encoder-decoder model, which has been pre-trained for several tasks, including machine translation (Raffel et al., 2020). Please see Chapter 16 for a description of T5’s pre-training process. The data for this task comes from the WMT 2016 dataset (Bojar et al., 2016), which consists of English sentences aligned pairwise to German, Czech, Russian, Finnish, Romanian, and Turkish. In this chapter we only use the English-Romanian texts (in both directions). 15.1 Translating English to Romanian As a first example, we use T5 to translate from English to Romanian, which is one of the language pairs it has been exposed to during pretraining. The code discussed in this section is available in the notebook chap15_translation_en_to_ro. Even though in this exercise we are not fine-tuning the model, we still need to define a few hyper parameters to frame the task and help the model understand how to work with the data: The above settings indicate that we use the t5-small model, a smaller T5 variant, to minimize the amount of memory required. The source_lang 212 15.1 Translating English to Romanian 213 and target_lang variables define the direction of translation, i.e., from English to Romanian. To keep our computing requirements small, we limit the length of our input and output. That is, English text longer than max_source_length tokens will be truncated. Further, we limit our generated Romanian text to max_target_length. We chose a maximum target length of 128 tokens to limit the computational cost incurred during text generation (recall that the text is generated one token at a time). The T5 models are trained to support multiple tasks such as translation and summarization (please see Chapter 16 for details). Thus, during training and inference, the user must specify which task the model should perform using a text prefix. Here we use the prefix "translate English to Romanian: " to indicate that the input text is in English and should be translated to Romanian. Next, we load the model and the corresponding tokenizer, and move them to the GPU if one is available: We use the datasets library to load our translation dataset. Note that the first time one calls load_dataset() the dataset will be downloaded automatically from the Hugging Face repository.1 The load_dataset() function takes a dataset name and configuration, which in our case are wmt16 and ro-en, respectively. Since in this example we are only evaluating the model, we only load the test partition (or split) of the dataset: The dataset consists of a single column called translation. Each element in this column is a dictionary that contains the aligned pair. The dictionary keys are the abbreviated language names and the values are the corresponding sentences. An example of one of these dictionaries is shown below: We encapsulate the logic for translating the English text into Romanian in a function called translate(). 
Inside this function, for a batch of aligned pairs, we select the English sentence as our input, and prepend the task prefix. Then we tokenize these inputs, including the prefix, specifying that sentences longer than max_source_length should be truncated, the batch should be padded, and the tokenizer should return PyTorch tensors. Once the tokenizer output has been moved to the GPU, we pass it to the model’s generate() method. This is the first time we have seen this method, because only decoder and encoder-decoder models support it. This method generates an output sequence by predicting one token 1 https://huggingface.co/datasets/wmt16 214 Implementing Encoder-decoder Methods at a time, stopping when either the end-of-sequence token is produced or when the sequence reaches a maximum length. Several generation techniques are supported, such as beam search, in which several alternate translations are maintained by the model so that it is able to select an overall best translation from several options. For efficiency purposes, we use a greedy approach, which chooses the best token at each step of the generation. This is equivalent to using a beam search with a beam of size one. Since the model generates its predictions as a sequence of token ids, we need to convert them back into the corresponding tokens to be able to read the translated text. We do this using the tokenizer’s batch_decode() method. Finally, we return the gold and predicted Romanian sentences in a dictionary: Next, we apply our translate() function to our Dataset to translate all the sentences: reference Șeful ONU declară că nu există soluții militar... Șeful ONU a solicitat din nou tuturor părților... Ban și-a exprimat regretul că divizările în co... Nu sunt bani puțini. La sfârșitul mandatului voi face un raport cu ... "Să spună un parlamentar că nu-i ajung banii e... 1999 rows × 2 columns prediction eful ONU declară că nu există o soluţie milita... eful U.N. a cerut din nou tuturor partidelor, ... El şi-a exprimat regretul că diviziunile din c... Banii sunt suficienţi. La sfârşitul biroului voi raporta tot ceea ce ... "A spune că un parlamentar nu are suficienţi b... 1994 1995 1996 1997 1998 0 1 2 3 4 ... Secretarul General Ban Ki-moon afirmă că răspu... Secretarul General Ban Ki-moon declară că răsp... Ban a declarat miercuri în cadrul unei conferi... Ban a declarat la o conferinţă de presă susţin... ... ... Uneori mi-e rușine să ridic banii de la casierie. Uneori mi-e ruşine să iau banii de la biroul c... S-a întâmplat să ridic într-o lună și 30.000 d... Într-o lună am adunat 30 000 de lei cu ramburs... We evaluate the quality of these translations using the BLEU metric, which we introduced in Chapter 14. To this end, we load an existing implementation of BLEU from the datasets library as a Metric object.2 Metric objects have a method called add(), which is used to accumulate the predictions and gold labels, one example at a time. After accumulating all examples, the compute() method returns the results of the evaluation. Note that for each predicted sentence, BLEU expects a list of reference sentences (as there are often many correct ways of translat- 2 https://huggingface.co/docs/datasets/v2.4.0/en/package_reference/main_ classes#datasets. Metric 15.2 Implementation of Greedy Generation 215 ing a given text). Since we only have one reference, we wrap it in a list before passing it to the metric: The score corresponds to the BLEU score. The rest of the items correspond to the components required to compute the score. 
That is, the counts, totals, and precisions correspond to the counts, totals, and precisions for 1-, 2-, 3-, and 4-grams. The bp is the brevity penalty. The sys_len and ref_len correspond to the predictions and reference lengths. The above BLEU score of 25.2% is slightly lower than the state of the art, but we are being penalized by the peculiarities of diacritic usage in Romanian characters. For example, the letters ș and ț (corresponding to the sounds sh and ts in English) are usually spelled with a comma below the characters s and t, which is the standard imposed by the Romanian Academy. However, in “the wild” these characters are often written using a cedilla instead of a comma, e.g., ţ instead of ț (or, using the names of these Unicode characters, LATIN SMALL LETTER T WITH CEDILLA instead of LATIN SMALL LETTER T WITH COMMA BELOW). Further, some of these characters with diacritics are often omitted altogether in the T5 output. The T5 output below contains an example for each of these two situations (e.g., soluţi(e) instead of soluți(i), and eful instead of Șeful): To avoid being penalized at scoring time for these arbitrary discrepancies, post-processing scripts are sometimes used to normalize diacritic usage.3 Usage of such post-processing scripts can improve the BLEU score substantially. However, this is beyond the scope of this chapter. 15.2 Implementation of Greedy Generation To gain a better intuition of how the encoder-decoder model generates its output sequence, we show below an implementation of the greedy version of the generate() method used above. This function takes as an argument a single English text (i.e., no batching) and returns the corresponding Romanian text: This function interacts directly with the encoder and decoder components of the T5 model, so we must construct the input for both. The encoder’s input is constructed by prepending the task prefix to the English text and tokenizing it. On the other hand, the decoder’s input is constructed incrementally by accumulating the tokens predicted so far 3 https://github.com/huggingface/transformers/blob/main/examples/legacy/ seq2seq/romanian_postprocessing.md 216 Implementing Encoder-decoder Methods in order to predict the next token in the sequence. At the beginning, before any tokens are predicted, the decoder’s input is initialized with a single token that corresponds to the beginning of the sequence. We retrieve this token, called decoder_start_token_id, from the model’s configuration object. The tokens are predicted one at a time, until the model produces eos_token_id, which indicates that the sequence is finished. However, in case the model does not produce this end-of-sequence token within a reasonable number of steps, we also enforce a maximum number of predicted tokens, determined by the max_target_length parameter we defined previously. The T5 model’s forward() method, called indirectly through its __call__()) method, takes the inputs for both the encoder and the decoder. The output returned by this method corresponds to all the tokens in the decoder’s input plus an extra one: the newly predicted token. To select the best prediction, we retrieve the logits from the output and select the logits corresponding to the last token in the sequence (recall that the output shape is (batch size, sequence length, vocabulary size)). From these selected logits, we use the argmax() to select the token id corresponding to the highest-scoring vocabulary item. 
15.3 Fine-tuning Romanian to English Translation

In this section, we fine-tune a T5 model on the translation of Romanian to English, a language pair that was not included in the T5 pre-training. To confirm that this data was not included in pre-training, we evaluated the performance of the vanilla t5-small model on the translation from Romanian to English using code equivalent to the code discussed in the previous section (see the chap15_translation_ro_to_en notebook). The resulting BLEU score was only 3.2%, which is substantially lower than the score we obtained when translating English to Romanian (25.2%). Note that the transformers library includes scripts to fine-tune a translation model directly from the command line (https://github.com/huggingface/transformers/tree/main/examples/pytorch/translation). For didactic purposes, we will not use these scripts in this section, but instead write the fine-tuning code explicitly. For this exercise, we continue using the WMT16 dataset, but this time we load the train and validation splits.
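A minimal sketch of this loading step, mirroring the load_dataset() call used earlier for the test split (the variable names train_ds and valid_ds are our own):

from datasets import load_dataset

# training and validation partitions of the English-Romanian WMT16 data
train_ds = load_dataset('wmt16', 'ro-en', split='train')
valid_ds = load_dataset('wmt16', 'ro-en', split='validation')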
We employ the same t5-small model that we used previously. The code from the last section to load the model, tokenizer, and dataset does not need to change for this use case, so we do not repeat it here.
However, as before, the complete code is available in a Jupyter notebook (chap15_translation_ro_to_en_finetune).

We begin by tokenizing the source (Romanian) and target language (English) texts. As in the last section, we need to prepend the task prefix to the source texts prior to tokenizing. This time, since we are translating in the opposite direction, we use the prefix "translate Romanian to English: ", and we prepend it to the Romanian text. Each call to the tokenizer with a batch of texts produces input_ids and an attention_mask. This output is what we need for the Romanian text, which will serve as the input to the model. To generate the labels, i.e., the correct translated tokens, we use the input_ids corresponding to the English text. Recall that "labels" is the default key name expected by trainers in Hugging Face. We apply our tokenize() function to both the train and validation splits:

input_ids                                            attention_mask               labels
[13959, 3871, 29, 12, 1566, 10, 4961, 106, 204...    [1, 1, 1, 1, 1, 1, 1, ...]   [19428, 13, 12876, 10, 217, 13687, 7, 1]
...                                                  ...                          ...
[610320 rows × 3 columns]

Recall that in order to construct a trainer, we need a data collator for batching, a function to compute the metrics of interest, and a TrainingArguments object. In this section, we use a data collator called DataCollatorForSeq2Seq, which is included in the transformers library specifically for sequence-to-sequence models. The collator pads the batches using the label_pad_token_id, which we have set to −100, as we did in Chapter 13 (this is the default ignore_index value used by CrossEntropyLoss). The compute_metrics() function computes the BLEU score. It uses the tokenizer to decode the token ids into text, for both the predicted and gold labels, ignoring padding. We use the Seq2SeqTrainingArguments class, which adds the predict_with_generate parameter to the regular TrainingArguments class. This parameter is needed to indicate that the trainer should use the generate() method for inference in order to compute the metrics (BLEU in this case):
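To make these pieces concrete, the following is a minimal sketch of the preprocessing and training setup described above, ending with the Seq2SeqTrainingArguments just mentioned. It assumes the model, tokenizer, length limits, and the train_ds and valid_ds splits loaded earlier; the function names, the output directory, and the batch-size and scheduling values are illustrative choices rather than the book's exact settings.

import numpy as np
from datasets import load_metric
from transformers import DataCollatorForSeq2Seq, Seq2SeqTrainingArguments

task_prefix = 'translate Romanian to English: '
metric = load_metric('sacrebleu')

def tokenize(batch):
    # source: Romanian sentences with the task prefix prepended
    sources = [task_prefix + x['ro'] for x in batch['translation']]
    model_inputs = tokenizer(sources, max_length=max_source_length, truncation=True)
    # labels: token ids of the corresponding English sentences
    targets = [x['en'] for x in batch['translation']]
    labels = tokenizer(targets, max_length=max_target_length, truncation=True)
    model_inputs['labels'] = labels['input_ids']
    return model_inputs

train_ds = train_ds.map(tokenize, batched=True, remove_columns=['translation'])
valid_ds = valid_ds.map(tokenize, batched=True, remove_columns=['translation'])

# pad inputs and labels per batch; labels are padded with -100 so that
# padded positions are ignored by the cross-entropy loss
data_collator = DataCollatorForSeq2Seq(
    tokenizer,
    model=model,
    label_pad_token_id=-100,
)

def compute_metrics(eval_preds):
    preds, labels = eval_preds
    if isinstance(preds, tuple):
        preds = preds[0]
    # replace the -100 label padding before decoding
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
    # sacrebleu expects a list of references per prediction
    references = [[label] for label in decoded_labels]
    bleu = metric.compute(predictions=decoded_preds, references=references)
    return {'bleu': bleu['score']}

training_args = Seq2SeqTrainingArguments(
    output_dir='t5-ro-to-en',        # illustrative output directory
    per_device_train_batch_size=16,  # illustrative batch sizes
    per_device_eval_batch_size=16,
    evaluation_strategy='epoch',
    save_strategy='epoch',
    predict_with_generate=True,      # use generate() during evaluation
)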
Finally, we construct the trainer using the Seq2SeqTrainer class, which is a subclass of Trainer that adds the ability to compute scores such as BLEU during training by calling generate() during evaluation.

Fine-tuning a translation model takes considerably longer than training or fine-tuning the models we have developed so far in this book. To account for this, here we add support for resuming training from a checkpoint, i.e., a model that was saved after training on a number of examples. Similar to how one can resume a video game, this allows one to pick up from the last “save point,” in case training was interrupted and needs to be resumed. When calling the trainer’s train() method, we either provide a model checkpoint or None. In the former case, the trainer will continue training from the provided checkpoint. In the latter case, the trainer will begin training from scratch. Once the training has completed, we save the trained model and tokenizer into the output directory using the trainer’s save_model() method. We then compute and save the metrics corresponding to the training partition. This is not required, but it is helpful to keep a record of the model’s performance on the training data. Note that the metrics do not automatically include the number of examples in the training partition, so we add them explicitly. Next, we evaluate our final model on the validation data and save the corresponding metrics. These metrics indicate that our BLEU score on the validation data is 35.2%, which is evidence that fine-tuning has helped dramatically. Lastly, we save a model card into our output directory. A model card is akin to an automatically-generated README file that includes information about the model used, the data, the settings used, and the performance throughout the training process. This file is helpful for reproducibility, as it contains all of this key information in one place. These cards are often uploaded to the Hugging Face Hub together with the model itself (we do not discuss the model uploading process here; please see the documentation on model sharing at https://huggingface.co/docs/transformers/v4.14.1/model_sharing).

15.4 Using a Previously Saved Model

Models that have been saved locally can be loaded using the same from_pretrained() methods we have used before. In particular, instead of providing a model name, we provide the path to the local directory where the model is stored, using the local_files_only parameter to indicate that we want to load the model from the local file system instead of downloading it from the Hugging Face Hub (make sure you use an output directory that is valid on your machine!). Once our fine-tuned model is loaded, we use it the same way as before. That is, we use the translate() function to generate translations for our test partition. Then we use the BLEU metric to score this output. From this metric, we obtain a final BLEU score of 33.4%, which is markedly better than our initial score (i.e., without fine-tuning) of 3.2%! The code corresponding to this section is available in the notebook chap15_translation_ro_to_en_finetuned.
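Pulling together the trainer construction, checkpoint handling, saving, evaluation, and reloading steps described in the last two sections, one possible arrangement is sketched below. It builds on the training_args, data_collator, and compute_metrics objects from the previous sketch; the checkpoint value and the 't5-ro-to-en' directory are placeholders rather than the book's exact settings.

from transformers import Seq2SeqTrainer, AutoTokenizer, AutoModelForSeq2SeqLM

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=train_ds,
    eval_dataset=valid_ds,
    tokenizer=tokenizer,
    data_collator=data_collator,
    compute_metrics=compute_metrics,
)

# resume from a saved checkpoint if one exists, otherwise train from scratch
last_checkpoint = None  # e.g., 't5-ro-to-en/checkpoint-5000' (placeholder)
train_result = trainer.train(resume_from_checkpoint=last_checkpoint)

# save the fine-tuned model and tokenizer into the output directory
trainer.save_model()

# record metrics on the training partition, adding the number of examples
metrics = train_result.metrics
metrics['train_samples'] = len(train_ds)
trainer.log_metrics('train', metrics)
trainer.save_metrics('train', metrics)

# evaluate on the validation partition and record those metrics as well
eval_metrics = trainer.evaluate()
trainer.log_metrics('eval', eval_metrics)
trainer.save_metrics('eval', eval_metrics)

# write a model card summarizing the run into the output directory
trainer.create_model_card()

# later, the saved model and tokenizer can be reloaded from the local directory
tokenizer = AutoTokenizer.from_pretrained('t5-ro-to-en', local_files_only=True)
model = AutoModelForSeq2SeqLM.from_pretrained('t5-ro-to-en', local_files_only=True)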
15.5 Summary

In this chapter we used a complete encoder-decoder transformer network to implement a machine translation application. Importantly, transformers with a decoder component have a generate() method that simplifies the generation process and provides multiple options for decoding. We encourage you to explore these options! For example, try comparing the quality of the output with the resources required to produce it (e.g., runtime overhead) as the size of the search beam increases. Additionally, we saw how to fine-tune an encoder-decoder model on a new language pair that it had not seen during pre-training. This exercise included using checkpoints to support resuming training in case of unexpected interruptions, saving our fine-tuned model, and loading it for later use.
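As a starting point for that exercise, the sketch below times generate() with a few beam sizes; it reuses the model, tokenizer, device, task_prefix, test_ds, and max_target_length variables from the English-to-Romanian notebook, and the beam sizes and timing approach are illustrative only.

import time

# pick one English sentence from the test set and tokenize it with the task prefix
sample = test_ds['translation'][0]['en']
encoded = tokenizer(task_prefix + sample, return_tensors='pt').to(device)

for beams in [1, 4, 8]:
    start = time.time()
    output = model.generate(**encoded, num_beams=beams, max_length=max_target_length)
    elapsed = time.time() - start
    text = tokenizer.batch_decode(output, skip_special_tokens=True)[0]
    print(f'num_beams={beams} time={elapsed:.2f}s output={text}')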
#!/usr/bin/env python
# coding: utf-8

# # Machine Translation from English (En) to Romanian (Ro)
# # Using the T5 Transformer without Fine-tuning

# Some initialization:

# In[1]:

import torch
import numpy as np
from transformers import set_seed

# set to True to use the gpu (if there is one available)
use_gpu = True

# select device
device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu')
print(f'device: {device.type}')

# random seed
seed = 42

# set random seed
if seed is not None:
    print(f'random seed: {seed}')
    set_seed(seed)

# In[2]:

transformer_name = 't5-small'
source_lang = 'en'
target_lang = 'ro'
max_source_length = 1024
max_target_length = 128
task_prefix = 'translate English to Romanian: '
num_beams = 1
batch_size = 100

# Load tokenizer and pre-trained model:

# In[3]:

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained(transformer_name)
model = AutoModelForSeq2SeqLM.from_pretrained(transformer_name)
model = model.to(device)

# Load dataset from HuggingFace:

# In[4]:

from datasets import load_dataset

test_ds = load_dataset('wmt16', 'ro-en', split='test')
test_ds

# In[5]:

test_ds['translation'][0]

# Implement the `translate` method and apply on this dataset:

# In[6]:

def translate(batch):
    # get source language examples and prepend task prefix
    inputs = [x[source_lang] for x in batch["translation"]]
    inputs = [task_prefix + x for x in inputs]
    # tokenize inputs
    encoded = tokenizer(
        inputs,
        max_length=max_source_length,
        truncation=True,
        padding=True,
        return_tensors='pt',
    )
    # move data to gpu if needed
    input_ids = encoded.input_ids.to(device)
    attention_mask = encoded.attention_mask.to(device)
    # generate translated sentences
    output = model.generate(
        input_ids=input_ids,
        attention_mask=attention_mask,
        num_beams=num_beams,
        max_length=max_target_length,
    )
    # generate predicted sentences from predicted token ids
    decoded = tokenizer.batch_decode(
        output,
        skip_special_tokens=True,
    )
    # get gold sentences in target language
    targets = [x[target_lang] for x in batch["translation"]]
    # return gold and predicted sentences
    return {
        'reference': targets,
        'prediction': decoded,
    }

# In[7]:

results = test_ds.map(
    translate,
    batched=True,
    batch_size=batch_size,
    remove_columns=test_ds.column_names,
)
results.to_pandas()

# Now evaluate the quality of translations using the BLEU metric:

# In[8]:

from datasets import load_metric

metric = load_metric('sacrebleu')

for r in results:
    prediction = r['prediction']
    reference = [r['reference']]
    metric.add(prediction=prediction, reference=reference)

metric.compute()

# An example of greedy decoding for individual texts:

# In[9]:

def greedy_translation(text):
    # prepend task prefix
    text = task_prefix + text
    # tokenize input
    encoded = tokenizer(
        text,
        max_length=max_source_length,
        truncation=True,
        return_tensors='pt',
    )
    # encoder input ids
    encoder_input_ids = encoded.input_ids.to(device)
    # decoder input ids, initialized with start token id
    start = model.config.decoder_start_token_id
    decoder_input_ids = torch.LongTensor([[start]]).to(device)
    # generate tokens, one at a time
    for _ in range(max_target_length):
        # get model predictions
        output = model(
            encoder_input_ids,
            decoder_input_ids=decoder_input_ids,
        )
        # get logits for last token
        next_token_logits = output.logits[0, -1, :]
        # select most probable token
        next_token_id = torch.argmax(next_token_logits)
        # append new token to decoder_input_ids
        output_id = torch.LongTensor([[next_token_id]]).to(device)
        decoder_input_ids = torch.cat([decoder_input_ids, output_id], dim=-1)
        # if predicted token is the end of sequence, stop iterating
        if next_token_id == tokenizer.eos_token_id:
            break
    # return text corresponding to predicted token ids
    return tokenizer.decode(decoder_input_ids[0], skip_special_tokens=True)

# In[10]:

greedy_translation("this is a test")
chap15-15
chap15-15
15 Implementing Encoder-decoder Methods In this chapter we implement a machine translation application as an example of an encoder-decoder task. In particular, we build on pre-trained encoder-decoder transformer models, which exist in the Hugging Face library for a wide variety of language pairs. We first show how to use one of these models out-of-the-box to perform translation for one of the language pairs it has been exposed to during pre-training: English to Romanian. Afterwards, we fine-tune the model to a new language combination that is has not seen before: Romanian to English. In both use cases, we use the T5 encoder-decoder model, which has been pre-trained for several tasks, including machine translation (Raffel et al., 2020). Please see Chapter 16 for a description of T5’s pre-training process. The data for this task comes from the WMT 2016 dataset (Bojar et al., 2016), which consists of English sentences aligned pairwise to German, Czech, Russian, Finnish, Romanian, and Turkish. In this chapter we only use the English-Romanian texts (in both directions). 15.1 Translating English to Romanian As a first example, we use T5 to translate from English to Romanian, which is one of the language pairs it has been exposed to during pretraining. The code discussed in this section is available in the notebook chap15_translation_en_to_ro. Even though in this exercise we are not fine-tuning the model, we still need to define a few hyper parameters to frame the task and help the model understand how to work with the data: The above settings indicate that we use the t5-small model, a smaller T5 variant, to minimize the amount of memory required. The source_lang 212 15.1 Translating English to Romanian 213 and target_lang variables define the direction of translation, i.e., from English to Romanian. To keep our computing requirements small, we limit the length of our input and output. That is, English text longer than max_source_length tokens will be truncated. Further, we limit our generated Romanian text to max_target_length. We chose a maximum target length of 128 tokens to limit the computational cost incurred during text generation (recall that the text is generated one token at a time). The T5 models are trained to support multiple tasks such as translation and summarization (please see Chapter 16 for details). Thus, during training and inference, the user must specify which task the model should perform using a text prefix. Here we use the prefix "translate English to Romanian: " to indicate that the input text is in English and should be translated to Romanian. Next, we load the model and the corresponding tokenizer, and move them to the GPU if one is available: We use the datasets library to load our translation dataset. Note that the first time one calls load_dataset() the dataset will be downloaded automatically from the Hugging Face repository.1 The load_dataset() function takes a dataset name and configuration, which in our case are wmt16 and ro-en, respectively. Since in this example we are only evaluating the model, we only load the test partition (or split) of the dataset: The dataset consists of a single column called translation. Each element in this column is a dictionary that contains the aligned pair. The dictionary keys are the abbreviated language names and the values are the corresponding sentences. An example of one of these dictionaries is shown below: We encapsulate the logic for translating the English text into Romanian in a function called translate(). 
Inside this function, for a batch of aligned pairs, we select the English sentence as our input, and prepend the task prefix. Then we tokenize these inputs, including the prefix, specifying that sentences longer than max_source_length should be truncated, the batch should be padded, and the tokenizer should return PyTorch tensors. Once the tokenizer output has been moved to the GPU, we pass it to the model’s generate() method. This is the first time we have seen this method, because only decoder and encoder-decoder models support it. This method generates an output sequence by predicting one token 1 https://huggingface.co/datasets/wmt16 214 Implementing Encoder-decoder Methods at a time, stopping when either the end-of-sequence token is produced or when the sequence reaches a maximum length. Several generation techniques are supported, such as beam search, in which several alternate translations are maintained by the model so that it is able to select an overall best translation from several options. For efficiency purposes, we use a greedy approach, which chooses the best token at each step of the generation. This is equivalent to using a beam search with a beam of size one. Since the model generates its predictions as a sequence of token ids, we need to convert them back into the corresponding tokens to be able to read the translated text. We do this using the tokenizer’s batch_decode() method. Finally, we return the gold and predicted Romanian sentences in a dictionary: Next, we apply our translate() function to our Dataset to translate all the sentences: reference Șeful ONU declară că nu există soluții militar... Șeful ONU a solicitat din nou tuturor părților... Ban și-a exprimat regretul că divizările în co... Nu sunt bani puțini. La sfârșitul mandatului voi face un raport cu ... "Să spună un parlamentar că nu-i ajung banii e... 1999 rows × 2 columns prediction eful ONU declară că nu există o soluţie milita... eful U.N. a cerut din nou tuturor partidelor, ... El şi-a exprimat regretul că diviziunile din c... Banii sunt suficienţi. La sfârşitul biroului voi raporta tot ceea ce ... "A spune că un parlamentar nu are suficienţi b... 1994 1995 1996 1997 1998 0 1 2 3 4 ... Secretarul General Ban Ki-moon afirmă că răspu... Secretarul General Ban Ki-moon declară că răsp... Ban a declarat miercuri în cadrul unei conferi... Ban a declarat la o conferinţă de presă susţin... ... ... Uneori mi-e rușine să ridic banii de la casierie. Uneori mi-e ruşine să iau banii de la biroul c... S-a întâmplat să ridic într-o lună și 30.000 d... Într-o lună am adunat 30 000 de lei cu ramburs... We evaluate the quality of these translations using the BLEU metric, which we introduced in Chapter 14. To this end, we load an existing implementation of BLEU from the datasets library as a Metric object.2 Metric objects have a method called add(), which is used to accumulate the predictions and gold labels, one example at a time. After accumulating all examples, the compute() method returns the results of the evaluation. Note that for each predicted sentence, BLEU expects a list of reference sentences (as there are often many correct ways of translat- 2 https://huggingface.co/docs/datasets/v2.4.0/en/package_reference/main_ classes#datasets. Metric 15.2 Implementation of Greedy Generation 215 ing a given text). Since we only have one reference, we wrap it in a list before passing it to the metric: The score corresponds to the BLEU score. The rest of the items correspond to the components required to compute the score. 
That is, the counts, totals, and precisions correspond to the counts, totals, and precisions for 1-, 2-, 3-, and 4-grams. The bp is the brevity penalty. The sys_len and ref_len correspond to the predictions and reference lengths. The above BLEU score of 25.2% is slightly lower than the state of the art, but we are being penalized by the peculiarities of diacritic usage in Romanian characters. For example, the letters ș and ț (corresponding to the sounds sh and ts in English) are usually spelled with a comma below the characters s and t, which is the standard imposed by the Romanian Academy. However, in “the wild” these characters are often written using a cedilla instead of a comma, e.g., ţ instead of ț (or, using the names of these Unicode characters, LATIN SMALL LETTER T WITH CEDILLA instead of LATIN SMALL LETTER T WITH COMMA BELOW). Further, some of these characters with diacritics are often omitted altogether in the T5 output. The T5 output below contains an example for each of these two situations (e.g., soluţi(e) instead of soluți(i), and eful instead of Șeful): To avoid being penalized at scoring time for these arbitrary discrepancies, post-processing scripts are sometimes used to normalize diacritic usage.3 Usage of such post-processing scripts can improve the BLEU score substantially. However, this is beyond the scope of this chapter. 15.2 Implementation of Greedy Generation To gain a better intuition of how the encoder-decoder model generates its output sequence, we show below an implementation of the greedy version of the generate() method used above. This function takes as an argument a single English text (i.e., no batching) and returns the corresponding Romanian text: This function interacts directly with the encoder and decoder components of the T5 model, so we must construct the input for both. The encoder’s input is constructed by prepending the task prefix to the English text and tokenizing it. On the other hand, the decoder’s input is constructed incrementally by accumulating the tokens predicted so far 3 https://github.com/huggingface/transformers/blob/main/examples/legacy/ seq2seq/romanian_postprocessing.md 216 Implementing Encoder-decoder Methods in order to predict the next token in the sequence. At the beginning, before any tokens are predicted, the decoder’s input is initialized with a single token that corresponds to the beginning of the sequence. We retrieve this token, called decoder_start_token_id, from the model’s configuration object. The tokens are predicted one at a time, until the model produces eos_token_id, which indicates that the sequence is finished. However, in case the model does not produce this end-of-sequence token within a reasonable number of steps, we also enforce a maximum number of predicted tokens, determined by the max_target_length parameter we defined previously. The T5 model’s forward() method, called indirectly through its __call__()) method, takes the inputs for both the encoder and the decoder. The output returned by this method corresponds to all the tokens in the decoder’s input plus an extra one: the newly predicted token. To select the best prediction, we retrieve the logits from the output and select the logits corresponding to the last token in the sequence (recall that the output shape is (batch size, sequence length, vocabulary size)). From these selected logits, we use the argmax() to select the token id corresponding to the highest-scoring vocabulary item. 
We append this new token id to the decoder’s input, and repeat the process until we encounter the end-of-sequence token or the decoded text reaches the maximum length. Once we are finished generating token ids, we retrieve the corresponding text by calling the tokenizer’s decode() method. This method is identical to the batch_decode() method we used previously, except that it only decodes a single example. Below is an usage example for the greedy_translation() function: 15.3 Fine-tuning Romanian to English Translation In this section, we fine-tune a T5 model on the translation of Romanian to English, a language pair that was not included in the T5 pre-training. To confirm that this data was not included in pre-training, we evaluated the performance of the vanilla t5-small model on the translation from Romanian to English using code equivalent to the code discussed in the previous section (see the chap15_translation_ro_to_en notebook). The resulting BLEU score was only 3.2%, which is substantially lower than the score we obtained when translating English to Romanian (25.2%). 15.3 Fine-tuning Romanian to English Translation 217 Note that the transformers library includes scripts to fine-tune a translation model directly from the command line.4 For didactic purposes, we will not use these scripts in this section, but instead write the fine-tuning code explicitly. For this exercise, we continue using the WMT16 dataset, but this time
we load the train and validation splits. We employ the same t5-small model that we used previously. The code from the last section to load
the model, tokenizer, and dataset does not need to change for this use-
case, so we do not repeat it here. However, as before, the complete code is available in a Jupyter notebook (chap15_translation_ro_to_en_finetune). We begin by tokenizing the source (Romanian) and target language (English) texts. As in the last section, we need to prepend the task prefix to the source texts prior to tokenizing. This time, since we are translating in the opposite direction, we use the prefix "translate Romanian to English: ", and we prepend it to the Romanian text. Each call to the tokenizer with a batch of texts produces input_ids and an attention_mask. This output is what we need for the Romanian text, which will serve as the input to the model. To generate the labels, i.e., the correct translated tokens, we use the input_ids corresponding to the English text. Recall that "labels" is the default key name expected by trainers in Hugging Face. We apply our tokenize() function to both the train and validation splits: 4 https://github.com/huggingface/transformers/tree/main/examples/pytorch/ translation 218 Implementing Encoder-decoder Methods input_ids [13959, 3871, 29, 12, 1566, 10, 4961, 106, 204... [13959, 3871, 29, 12, 1566, 10, 4961, 106, 204... [13959, 3871, 29, 12, 1566, 10, 374, 6225, 49,... [13959, 3871, 29, 12, 1566, 10, 4540, 4031, 9,... [13959, 3871, 29, 12, 1566, 10, 2262, 900, 17,... [13959, 3871, 29, 12, 1566, 10, 18420, 83, 362... attention_mask labels [19428, 13, 12876, 10, 217, 13687, 7, 1] [19428, 13, 12876, 10, 217, 13687, 7, 1] [11167, 7, 1204, 10, 217, 13687, 7, 1] [4540, 4031, 9, 7, 1672, 7, 2262, 900, 17, 38,... [2262, 900, 17, 641, 65, 46, 3761, 6, 1069, 31... [3625, 32, 5788, 35, 15, 3844, 31, 7, 3, 16143... 0 1 2 3 4 ... 610315 610316 610317 610318 610319 [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... [13959, 3871, 29, 12, 1566, 10, 5085, [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 5840, 49... 1, 1, 1, ... [13959, 3871, 29, 12, 1566, 10, 781, [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 8750, 9, ... 1, 1, 1, ... ... ... [13959, 3871, 29, 12, 1566, 10, 2364, [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 4540, 40... 1, 1, 1, ... 610320 rows × 3 columns [13959, 3871, 29, 12, 1566, 10, 3, [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 25882, 759,... 1, 1, 1, ... [2276, 8843, 138, 13, 13687, 7, 13, 1767, 3823... [781, 2420, 13, 17500, 10, 217, 13687, 7, 1] [242, 4540, 4031, 9, 7, 6, 8, 516, 65, 66, 8, ... [9810, 157, 31, 7, 516, 92, 3088, 21, 46, 3839... ... Recall that in order to construct a trainer, we need a data collator for batching, a function to compute the metrics of interest, and a TrainingArguments object. In this section, we use a data collator called DataCollatorForSeq2Seq, which is included in the transformers library specifically for sequence-to-sequence models. The collator pads the batches using the label_pad_token_id, which we have set to −100, as we did in Chapter 13 (this is the default ignore_index value used by CrossEntropyLoss): The compute_metrics() function computes the BLEU score. It uses the tokenizer to decode the token ids into text, for both the predicted and gold labels, ignoring padding: We use the Seq2SeqTrainingArguments class, which adds the predict_with_generate parameter to the regular TrainingArguments class. This is needed to in-
dicate that the trainer should use the generate() method for inference
in order to compute the metrics (BLUE in this case): Finally, we construct the trainer using the Seq2SeqTrainer class, which is a subclass of Trainer that adds the ability to compute scores such as BLEU during training by calling generate() during evaluation: Fine-tuning a translation model takes considerably longer than training or fine-tuning the models we have developed so far in this book. To account for this, here we add support for resuming training from a checkpoint, i.e., a model that was saved after training on a number of 15.4 Using a Previously Saved Model 219 examples. Similar to how one can resume a video game, this allows one to pick up from the last “save point,” in case training was interrupted and needs to be resumed: When calling the trainer’s train() method, we either provide a model checkpoint or None. In the former case, the trainer will continue training from the provided checkpoint. In the latter case, the trainer will begin training from scratch. Once the training has completed, we save the trained model and tokenizer using the trainer’s save_model() method into the output directory: We then compute and save the metrics corresponding to the training partition. This is not required, but it is helpful to keep a record of the model’s performance on the training data. Note that the metrics do not automatically include the number of examples in the training partition, so we add them explicitly: Next, we evaluate our final model on the validation data and save the corresponding metrics. These metrics indicate that our BLEU score on the validation data is 35.2%, which is evidence that fine-tuning has helped dramatically: Lastly, we save a model card into our output directory. A model card is akin to an automatically-generated README file that includes information about the model used, the data, settings used, and performance throughout the training process. This file is helpful for reproducibility as it contains all of this key information in one place. These cards are often uploaded to the Hugging Face Hub together with the model itself.5 15.4 Using a Previously Saved Model Models that have been saved locally can be loaded using the same from_pretrained() methods we have used before. In particular, instead of providing a model name, we provide the path to the local directory where the model is stored, using the local_files_only parameter to indicate that we want to load the model from the local file system instead of downloading it from the Hugging Face Hub (Make sure you use an output directory that is valid on your machine!): Once our fine-tuned model is loaded, we use it the same way as before. That is, we use the translate() function to generate translations 5 We do not discuss the model uploading process here. Please see the documentation on model sharing at: https://huggingface.co/docs/transformers/v4.14.1/model_sharing. 220 Implementing Encoder-decoder Methods for our test partition. Then we use the BLEU metric to score this output. From this metric, we obtain the final BLEU score of 33.4%, which is markedly better than our initial score (i.e., without fine-tuning) of 3.2%! The code corresponding to this section is available in the notebook chap15_translation_ro_to_en_finetuned. 15.5 Summary In this chapter we used a complete encoder-decoder transformer network to implement a machine translation application. Importantly, transformers with a decoder component have a generate() method that simplifies the generation process and provides multiple options for decoding. 
We encourage you to explore these options! For example, try comparing the quality of the output with the resources required to produce it (e.g., runtime overhead) when the size of the search beam increases. Additionally, we saw how to fine-tune an encoder-decoder model on a new language pair that it has not seen during its pre-training. This exercise included using checkpoints to support resuming training in case of unexpected interruptions, saving our fine-tuned model, and loading it for later use.
3,961
4,005
#!/usr/bin/env python # coding: utf-8 # # Machine Translation from English (En) to Romanian (Ro) # # Using the T5 Transformer without Fine-tuning # Some initialization: # In[1]: import torch import numpy as np from transformers import set_seed # set to True to use the gpu (if there is one available) use_gpu = True # select device device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu') print(f'device: {device.type}') # random seed seed = 42 # set random seed if seed is not None: print(f'random seed: {seed}') set_seed(seed) # In[2]: transformer_name = 't5-small' source_lang = 'en' target_lang = 'ro' max_source_length = 1024 max_target_length = 128 task_prefix = 'translate English to Romanian: ' num_beams = 1 batch_size = 100 # Load tokenizer and pre-trained model: # In[3]: from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained(transformer_name) model = AutoModelForSeq2SeqLM.from_pretrained(transformer_name) model = model.to(device) # Load dataset from HuggingFace: # In[4]: from datasets import load_dataset test_ds = load_dataset('wmt16', 'ro-en', split='test') test_ds # In[5]: test_ds['translation'][0] # Implement the `translate` method and apply on this dataset: # In[6]: def translate(batch): # get source language examples and prepend task prefix inputs = [x[source_lang] for x in batch["translation"]] inputs = [task_prefix + x for x in inputs] # tokenize inputs encoded = tokenizer( inputs, max_length=max_source_length, truncation=True, padding=True, return_tensors='pt', ) # move data to gpu if needed input_ids = encoded.input_ids.to(device) attention_mask = encoded.attention_mask.to(device) # generate translated sentences output = model.generate( input_ids=input_ids, attention_mask=attention_mask, num_beams=num_beams, max_length=max_target_length, ) # generate predicted sentences from predicted token ids decoded = tokenizer.batch_decode( output, skip_special_tokens=True, ) # get gold sentences in target language targets = [x[target_lang] for x in batch["translation"]] # return gold and predicted sentences return { 'reference': targets, 'prediction': decoded, } # In[7]: results = test_ds.map( translate, batched=True, batch_size=batch_size, remove_columns=test_ds.column_names, ) results.to_pandas() # Now evaluate the quality of translations using the BLEU metric: # In[8]: from datasets import load_metric metric = load_metric('sacrebleu') for r in results: prediction = r['prediction'] reference = [r['reference']] metric.add(prediction=prediction, reference=reference) metric.compute() # An example of greedy decoding for individual texts: # In[9]: def greedy_translation(text): # prepend task prefix text = task_prefix + text # tokenize input encoded = tokenizer( text, max_length=max_source_length, truncation=True, return_tensors='pt', ) # encoder input ids encoder_input_ids = encoded.input_ids.to(device) # decoder input ids, initialized with start token id start = model.config.decoder_start_token_id decoder_input_ids = torch.LongTensor([[start]]).to(device) # generate tokens, one at a time for _ in range(max_target_length): # get model predictions output = model( encoder_input_ids, decoder_input_ids=decoder_input_ids, ) # get logits for last token next_token_logits = output.logits[0, -1, :] # select most probable token next_token_id = torch.argmax(next_token_logits) # append new token to decoder_input_ids output_id = torch.LongTensor([[next_token_id]]).to(device) decoder_input_ids = torch.cat([decoder_input_ids, 
output_id], dim=-1) # if predicted token is the end of sequence, stop iterating if next_token_id == tokenizer.eos_token_id: break # return text corresponding to predicted token ids return tokenizer.decode(decoder_input_ids[0], skip_special_tokens=True) # In[10]: greedy_translation("this is a test")
1,862
2,032
15
chap15-16
chap15-16
15 Implementing Encoder-decoder Methods In this chapter we implement a machine translation application as an example of an encoder-decoder task. In particular, we build on pre-trained encoder-decoder transformer models, which exist in the Hugging Face library for a wide variety of language pairs. We first show how to use one of these models out-of-the-box to perform translation for one of the language pairs it has been exposed to during pre-training: English to Romanian. Afterwards, we fine-tune the model to a new language combination that is has not seen before: Romanian to English. In both use cases, we use the T5 encoder-decoder model, which has been pre-trained for several tasks, including machine translation (Raffel et al., 2020). Please see Chapter 16 for a description of T5’s pre-training process. The data for this task comes from the WMT 2016 dataset (Bojar et al., 2016), which consists of English sentences aligned pairwise to German, Czech, Russian, Finnish, Romanian, and Turkish. In this chapter we only use the English-Romanian texts (in both directions). 15.1 Translating English to Romanian As a first example, we use T5 to translate from English to Romanian, which is one of the language pairs it has been exposed to during pretraining. The code discussed in this section is available in the notebook chap15_translation_en_to_ro. Even though in this exercise we are not fine-tuning the model, we still need to define a few hyper parameters to frame the task and help the model understand how to work with the data: The above settings indicate that we use the t5-small model, a smaller T5 variant, to minimize the amount of memory required. The source_lang 212 15.1 Translating English to Romanian 213 and target_lang variables define the direction of translation, i.e., from English to Romanian. To keep our computing requirements small, we limit the length of our input and output. That is, English text longer than max_source_length tokens will be truncated. Further, we limit our generated Romanian text to max_target_length. We chose a maximum target length of 128 tokens to limit the computational cost incurred during text generation (recall that the text is generated one token at a time). The T5 models are trained to support multiple tasks such as translation and summarization (please see Chapter 16 for details). Thus, during training and inference, the user must specify which task the model should perform using a text prefix. Here we use the prefix "translate English to Romanian: " to indicate that the input text is in English and should be translated to Romanian. Next, we load the model and the corresponding tokenizer, and move them to the GPU if one is available: We use the datasets library to load our translation dataset. Note that the first time one calls load_dataset() the dataset will be downloaded automatically from the Hugging Face repository.1 The load_dataset() function takes a dataset name and configuration, which in our case are wmt16 and ro-en, respectively. Since in this example we are only evaluating the model, we only load the test partition (or split) of the dataset: The dataset consists of a single column called translation. Each element in this column is a dictionary that contains the aligned pair. The dictionary keys are the abbreviated language names and the values are the corresponding sentences. An example of one of these dictionaries is shown below: We encapsulate the logic for translating the English text into Romanian in a function called translate(). 
Inside this function, for a batch of aligned pairs, we select the English sentence as our input, and prepend the task prefix. Then we tokenize these inputs, including the prefix, specifying that sentences longer than max_source_length should be truncated, the batch should be padded, and the tokenizer should return PyTorch tensors. Once the tokenizer output has been moved to the GPU, we pass it to the model’s generate() method. This is the first time we have seen this method, because only decoder and encoder-decoder models support it. This method generates an output sequence by predicting one token 1 https://huggingface.co/datasets/wmt16 214 Implementing Encoder-decoder Methods at a time, stopping when either the end-of-sequence token is produced or when the sequence reaches a maximum length. Several generation techniques are supported, such as beam search, in which several alternate translations are maintained by the model so that it is able to select an overall best translation from several options. For efficiency purposes, we use a greedy approach, which chooses the best token at each step of the generation. This is equivalent to using a beam search with a beam of size one. Since the model generates its predictions as a sequence of token ids, we need to convert them back into the corresponding tokens to be able to read the translated text. We do this using the tokenizer’s batch_decode() method. Finally, we return the gold and predicted Romanian sentences in a dictionary: Next, we apply our translate() function to our Dataset to translate all the sentences: reference Șeful ONU declară că nu există soluții militar... Șeful ONU a solicitat din nou tuturor părților... Ban și-a exprimat regretul că divizările în co... Nu sunt bani puțini. La sfârșitul mandatului voi face un raport cu ... "Să spună un parlamentar că nu-i ajung banii e... 1999 rows × 2 columns prediction eful ONU declară că nu există o soluţie milita... eful U.N. a cerut din nou tuturor partidelor, ... El şi-a exprimat regretul că diviziunile din c... Banii sunt suficienţi. La sfârşitul biroului voi raporta tot ceea ce ... "A spune că un parlamentar nu are suficienţi b... 1994 1995 1996 1997 1998 0 1 2 3 4 ... Secretarul General Ban Ki-moon afirmă că răspu... Secretarul General Ban Ki-moon declară că răsp... Ban a declarat miercuri în cadrul unei conferi... Ban a declarat la o conferinţă de presă susţin... ... ... Uneori mi-e rușine să ridic banii de la casierie. Uneori mi-e ruşine să iau banii de la biroul c... S-a întâmplat să ridic într-o lună și 30.000 d... Într-o lună am adunat 30 000 de lei cu ramburs... We evaluate the quality of these translations using the BLEU metric, which we introduced in Chapter 14. To this end, we load an existing implementation of BLEU from the datasets library as a Metric object.2 Metric objects have a method called add(), which is used to accumulate the predictions and gold labels, one example at a time. After accumulating all examples, the compute() method returns the results of the evaluation. Note that for each predicted sentence, BLEU expects a list of reference sentences (as there are often many correct ways of translat- 2 https://huggingface.co/docs/datasets/v2.4.0/en/package_reference/main_ classes#datasets. Metric 15.2 Implementation of Greedy Generation 215 ing a given text). Since we only have one reference, we wrap it in a list before passing it to the metric: The score corresponds to the BLEU score. The rest of the items correspond to the components required to compute the score. 
That is, the counts, totals, and precisions correspond to the counts, totals, and precisions for 1-, 2-, 3-, and 4-grams. The bp is the brevity penalty. The sys_len and ref_len correspond to the predictions and reference lengths. The above BLEU score of 25.2% is slightly lower than the state of the art, but we are being penalized by the peculiarities of diacritic usage in Romanian characters. For example, the letters ș and ț (corresponding to the sounds sh and ts in English) are usually spelled with a comma below the characters s and t, which is the standard imposed by the Romanian Academy. However, in “the wild” these characters are often written using a cedilla instead of a comma, e.g., ţ instead of ț (or, using the names of these Unicode characters, LATIN SMALL LETTER T WITH CEDILLA instead of LATIN SMALL LETTER T WITH COMMA BELOW). Further, some of these characters with diacritics are often omitted altogether in the T5 output. The T5 output below contains an example for each of these two situations (e.g., soluţi(e) instead of soluți(i), and eful instead of Șeful): To avoid being penalized at scoring time for these arbitrary discrepancies, post-processing scripts are sometimes used to normalize diacritic usage.3 Usage of such post-processing scripts can improve the BLEU score substantially. However, this is beyond the scope of this chapter. 15.2 Implementation of Greedy Generation To gain a better intuition of how the encoder-decoder model generates its output sequence, we show below an implementation of the greedy version of the generate() method used above. This function takes as an argument a single English text (i.e., no batching) and returns the corresponding Romanian text: This function interacts directly with the encoder and decoder components of the T5 model, so we must construct the input for both. The encoder’s input is constructed by prepending the task prefix to the English text and tokenizing it. On the other hand, the decoder’s input is constructed incrementally by accumulating the tokens predicted so far 3 https://github.com/huggingface/transformers/blob/main/examples/legacy/ seq2seq/romanian_postprocessing.md 216 Implementing Encoder-decoder Methods in order to predict the next token in the sequence. At the beginning, before any tokens are predicted, the decoder’s input is initialized with a single token that corresponds to the beginning of the sequence. We retrieve this token, called decoder_start_token_id, from the model’s configuration object. The tokens are predicted one at a time, until the model produces eos_token_id, which indicates that the sequence is finished. However, in case the model does not produce this end-of-sequence token within a reasonable number of steps, we also enforce a maximum number of predicted tokens, determined by the max_target_length parameter we defined previously. The T5 model’s forward() method, called indirectly through its __call__()) method, takes the inputs for both the encoder and the decoder. The output returned by this method corresponds to all the tokens in the decoder’s input plus an extra one: the newly predicted token. To select the best prediction, we retrieve the logits from the output and select the logits corresponding to the last token in the sequence (recall that the output shape is (batch size, sequence length, vocabulary size)). From these selected logits, we use the argmax() to select the token id corresponding to the highest-scoring vocabulary item. 
We append this new token id to the decoder’s input, and repeat the process until we encounter the end-of-sequence token or the decoded text reaches the maximum length. Once we are finished generating token ids, we retrieve the corresponding text by calling the tokenizer’s decode() method. This method is identical to the batch_decode() method we used previously, except that it only decodes a single example. Below is an usage example for the greedy_translation() function: 15.3 Fine-tuning Romanian to English Translation In this section, we fine-tune a T5 model on the translation of Romanian to English, a language pair that was not included in the T5 pre-training. To confirm that this data was not included in pre-training, we evaluated the performance of the vanilla t5-small model on the translation from Romanian to English using code equivalent to the code discussed in the previous section (see the chap15_translation_ro_to_en notebook). The resulting BLEU score was only 3.2%, which is substantially lower than the score we obtained when translating English to Romanian (25.2%). 15.3 Fine-tuning Romanian to English Translation 217 Note that the transformers library includes scripts to fine-tune a translation model directly from the command line.4 For didactic purposes, we will not use these scripts in this section, but instead write the fine-tuning code explicitly. For this exercise, we continue using the WMT16 dataset, but this time
we load the train and validation splits. We employ the same t5-small model that we used previously. The code from the last section to load
the model, tokenizer, and dataset does not need to change for this use-
case, so we do not repeat it here. However, as before, the complete code is available in a Jupyter notebook (chap15_translation_ro_to_en_finetune). We begin by tokenizing the source (Romanian) and target language (English) texts. As in the last section, we need to prepend the task prefix to the source texts prior to tokenizing. This time, since we are translating in the opposite direction, we use the prefix "translate Romanian to English: ", and we prepend it to the Romanian text. Each call to the tokenizer with a batch of texts produces input_ids and an attention_mask. This output is what we need for the Romanian text, which will serve as the input to the model. To generate the labels, i.e., the correct translated tokens, we use the input_ids corresponding to the English text. Recall that "labels" is the default key name expected by trainers in Hugging Face. We apply our tokenize() function to both the train and validation splits: 4 https://github.com/huggingface/transformers/tree/main/examples/pytorch/ translation 218 Implementing Encoder-decoder Methods input_ids [13959, 3871, 29, 12, 1566, 10, 4961, 106, 204... [13959, 3871, 29, 12, 1566, 10, 4961, 106, 204... [13959, 3871, 29, 12, 1566, 10, 374, 6225, 49,... [13959, 3871, 29, 12, 1566, 10, 4540, 4031, 9,... [13959, 3871, 29, 12, 1566, 10, 2262, 900, 17,... [13959, 3871, 29, 12, 1566, 10, 18420, 83, 362... attention_mask labels [19428, 13, 12876, 10, 217, 13687, 7, 1] [19428, 13, 12876, 10, 217, 13687, 7, 1] [11167, 7, 1204, 10, 217, 13687, 7, 1] [4540, 4031, 9, 7, 1672, 7, 2262, 900, 17, 38,... [2262, 900, 17, 641, 65, 46, 3761, 6, 1069, 31... [3625, 32, 5788, 35, 15, 3844, 31, 7, 3, 16143... 0 1 2 3 4 ... 610315 610316 610317 610318 610319 [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... [13959, 3871, 29, 12, 1566, 10, 5085, [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 5840, 49... 1, 1, 1, ... [13959, 3871, 29, 12, 1566, 10, 781, [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 8750, 9, ... 1, 1, 1, ... ... ... [13959, 3871, 29, 12, 1566, 10, 2364, [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 4540, 40... 1, 1, 1, ... 610320 rows × 3 columns [13959, 3871, 29, 12, 1566, 10, 3, [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 25882, 759,... 1, 1, 1, ... [2276, 8843, 138, 13, 13687, 7, 13, 1767, 3823... [781, 2420, 13, 17500, 10, 217, 13687, 7, 1] [242, 4540, 4031, 9, 7, 6, 8, 516, 65, 66, 8, ... [9810, 157, 31, 7, 516, 92, 3088, 21, 46, 3839... ... Recall that in order to construct a trainer, we need a data collator for batching, a function to compute the metrics of interest, and a TrainingArguments object. In this section, we use a data collator called DataCollatorForSeq2Seq, which is included in the transformers library specifically for sequence-to-sequence models. The collator pads the batches using the label_pad_token_id, which we have set to −100, as we did in Chapter 13 (this is the default ignore_index value used by CrossEntropyLoss): The compute_metrics() function computes the BLEU score. It uses the tokenizer to decode the token ids into text, for both the predicted and gold labels, ignoring padding: We use the Seq2SeqTrainingArguments class, which adds the predict_with_generate parameter to the regular TrainingArguments class. This is needed to in-
dicate that the trainer should use the generate() method for inference
in order to compute the metrics (BLUE in this case): Finally, we construct the trainer using the Seq2SeqTrainer class, which is a subclass of Trainer that adds the ability to compute scores such as BLEU during training by calling generate() during evaluation: Fine-tuning a translation model takes considerably longer than training or fine-tuning the models we have developed so far in this book. To account for this, here we add support for resuming training from a checkpoint, i.e., a model that was saved after training on a number of 15.4 Using a Previously Saved Model 219 examples. Similar to how one can resume a video game, this allows one to pick up from the last “save point,” in case training was interrupted and needs to be resumed: When calling the trainer’s train() method, we either provide a model checkpoint or None. In the former case, the trainer will continue training from the provided checkpoint. In the latter case, the trainer will begin training from scratch. Once the training has completed, we save the trained model and tokenizer using the trainer’s save_model() method into the output directory: We then compute and save the metrics corresponding to the training partition. This is not required, but it is helpful to keep a record of the model’s performance on the training data. Note that the metrics do not automatically include the number of examples in the training partition, so we add them explicitly: Next, we evaluate our final model on the validation data and save the corresponding metrics. These metrics indicate that our BLEU score on the validation data is 35.2%, which is evidence that fine-tuning has helped dramatically: Lastly, we save a model card into our output directory. A model card is akin to an automatically-generated README file that includes information about the model used, the data, settings used, and performance throughout the training process. This file is helpful for reproducibility as it contains all of this key information in one place. These cards are often uploaded to the Hugging Face Hub together with the model itself.5 15.4 Using a Previously Saved Model Models that have been saved locally can be loaded using the same from_pretrained() methods we have used before. In particular, instead of providing a model name, we provide the path to the local directory where the model is stored, using the local_files_only parameter to indicate that we want to load the model from the local file system instead of downloading it from the Hugging Face Hub (Make sure you use an output directory that is valid on your machine!): Once our fine-tuned model is loaded, we use it the same way as before. That is, we use the translate() function to generate translations 5 We do not discuss the model uploading process here. Please see the documentation on model sharing at: https://huggingface.co/docs/transformers/v4.14.1/model_sharing. 220 Implementing Encoder-decoder Methods for our test partition. Then we use the BLEU metric to score this output. From this metric, we obtain the final BLEU score of 33.4%, which is markedly better than our initial score (i.e., without fine-tuning) of 3.2%! The code corresponding to this section is available in the notebook chap15_translation_ro_to_en_finetuned. 15.5 Summary In this chapter we used a complete encoder-decoder transformer network to implement a machine translation application. Importantly, transformers with a decoder component have a generate() method that simplifies the generation process and provides multiple options for decoding. 
We encourage you to explore these options! For example, try comparing the quality of the output with the resources required to produce it (e.g., runtime overhead) when the size of the search beam increases. Additionally, we saw how to fine-tune an encoder-decoder model on a new language pair that it has not seen during its pre-training. This exercise included using checkpoints to support resuming training in case of unexpected interruptions, saving our fine-tuned model, and loading it for later use.
9,894
10,124
#!/usr/bin/env python
# coding: utf-8

# # Machine Translation from English (En) to Romanian (Ro)
# # Using the T5 Transformer without Fine-tuning

# Some initialization:

# In[1]:

import torch
import numpy as np
from transformers import set_seed

# set to True to use the gpu (if there is one available)
use_gpu = True

# select device
device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu')
print(f'device: {device.type}')

# random seed
seed = 42

# set random seed
if seed is not None:
    print(f'random seed: {seed}')
    set_seed(seed)

# In[2]:

transformer_name = 't5-small'
source_lang = 'en'
target_lang = 'ro'
max_source_length = 1024
max_target_length = 128
task_prefix = 'translate English to Romanian: '
num_beams = 1
batch_size = 100

# Load tokenizer and pre-trained model:

# In[3]:

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained(transformer_name)
model = AutoModelForSeq2SeqLM.from_pretrained(transformer_name)
model = model.to(device)

# Load dataset from HuggingFace:

# In[4]:

from datasets import load_dataset

test_ds = load_dataset('wmt16', 'ro-en', split='test')
test_ds

# In[5]:

test_ds['translation'][0]

# Implement the `translate` method and apply on this dataset:

# In[6]:

def translate(batch):
    # get source language examples and prepend task prefix
    inputs = [x[source_lang] for x in batch["translation"]]
    inputs = [task_prefix + x for x in inputs]
    # tokenize inputs
    encoded = tokenizer(
        inputs,
        max_length=max_source_length,
        truncation=True,
        padding=True,
        return_tensors='pt',
    )
    # move data to gpu if needed
    input_ids = encoded.input_ids.to(device)
    attention_mask = encoded.attention_mask.to(device)
    # generate translated sentences
    output = model.generate(
        input_ids=input_ids,
        attention_mask=attention_mask,
        num_beams=num_beams,
        max_length=max_target_length,
    )
    # generate predicted sentences from predicted token ids
    decoded = tokenizer.batch_decode(
        output,
        skip_special_tokens=True,
    )
    # get gold sentences in target language
    targets = [x[target_lang] for x in batch["translation"]]
    # return gold and predicted sentences
    return {
        'reference': targets,
        'prediction': decoded,
    }

# In[7]:

results = test_ds.map(
    translate,
    batched=True,
    batch_size=batch_size,
    remove_columns=test_ds.column_names,
)
results.to_pandas()

# Now evaluate the quality of translations using the BLEU metric:

# In[8]:

from datasets import load_metric

metric = load_metric('sacrebleu')

for r in results:
    prediction = r['prediction']
    reference = [r['reference']]
    metric.add(prediction=prediction, reference=reference)

metric.compute()

# An example of greedy decoding for individual texts:

# In[9]:

def greedy_translation(text):
    # prepend task prefix
    text = task_prefix + text
    # tokenize input
    encoded = tokenizer(
        text,
        max_length=max_source_length,
        truncation=True,
        return_tensors='pt',
    )
    # encoder input ids
    encoder_input_ids = encoded.input_ids.to(device)
    # decoder input ids, initialized with start token id
    start = model.config.decoder_start_token_id
    decoder_input_ids = torch.LongTensor([[start]]).to(device)
    # generate tokens, one at a time
    for _ in range(max_target_length):
        # get model predictions
        output = model(
            encoder_input_ids,
            decoder_input_ids=decoder_input_ids,
        )
        # get logits for last token
        next_token_logits = output.logits[0, -1, :]
        # select most probable token
        next_token_id = torch.argmax(next_token_logits)
        # append new token to decoder_input_ids
        output_id = torch.LongTensor([[next_token_id]]).to(device)
        decoder_input_ids = torch.cat([decoder_input_ids, output_id], dim=-1)
        # if predicted token is the end of sequence, stop iterating
        if next_token_id == tokenizer.eos_token_id:
            break
    # return text corresponding to predicted token ids
    return tokenizer.decode(decoder_input_ids[0], skip_special_tokens=True)

# In[10]:

greedy_translation("this is a test")
Inside the translate() function, for a batch of aligned pairs, we select the English sentences as our input and prepend the task prefix. We then tokenize these inputs, including the prefix, specifying that sentences longer than max_source_length should be truncated, that the batch should be padded, and that the tokenizer should return PyTorch tensors. Once the tokenizer output has been moved to the GPU, we pass it to the model’s generate() method. This is the first time we have seen this method, because only decoder and encoder-decoder models support it. The method generates an output sequence by predicting one token at a time, stopping when either the end-of-sequence token is produced or the sequence reaches a maximum length. Several generation techniques are supported, such as beam search, in which several alternative translations are maintained so that the model can select an overall best translation from among them. For efficiency, we use a greedy approach, which chooses the best token at each step of the generation. This is equivalent to a beam search with a beam of size one.
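To make the contrast concrete, the sketch below shows how the same generate() call could use beam search instead of greedy decoding. It is only an illustration, not a cell from the chapter’s notebook: it reuses the model, tokenizer, device, task_prefix, and max_target_length defined in the notebook above, and the example sentence, the value num_beams=4, and the early_stopping flag are arbitrary illustrative choices.

# a minimal beam-search sketch; reuses model, tokenizer, device, task_prefix,
# and max_target_length from the notebook above
text = task_prefix + "The vote will take place on Tuesday."   # illustrative sentence
encoded = tokenizer(text, return_tensors='pt').to(device)
output = model.generate(
    **encoded,                 # input_ids and attention_mask
    num_beams=4,               # keep four candidate translations alive at each step
    early_stopping=True,       # stop once enough complete candidates are found
    max_length=max_target_length,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))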
Since the model generates its predictions as a sequence of token ids, we need to convert them back into the corresponding tokens to be able to read the translated text. We do this using the tokenizer’s batch_decode() method. Finally, we return the gold and predicted Romanian sentences in a dictionary.

Next, we apply our translate() function to our Dataset to translate all the sentences. The result is a table with two columns, reference and prediction, and 1,999 rows. Its first rows look as follows (sentences truncated for display):

reference:  Șeful ONU declară că nu există soluții militar...
prediction: eful ONU declară că nu există o soluţie milita...

reference:  Șeful ONU a solicitat din nou tuturor părților...
prediction: eful U.N. a cerut din nou tuturor partidelor, ...

reference:  Ban și-a exprimat regretul că divizările în co...
prediction: El şi-a exprimat regretul că diviziunile din c...

We evaluate the quality of these translations using the BLEU metric, which we introduced in Chapter 14. To this end, we load an existing implementation of BLEU from the datasets library as a Metric object.2 Metric objects have a method called add(), which is used to accumulate the predictions and gold labels, one example at a time. After accumulating all examples, the compute() method returns the results of the evaluation. Note that for each predicted sentence, BLEU expects a list of reference sentences (as there are often many correct ways of translating a given text). Since we only have one reference, we wrap it in a list before passing it to the metric. In the output of compute(), the score entry is the BLEU score; the remaining entries are the components required to compute it: counts, totals, and precisions hold the n-gram match counts, totals, and precisions for 1-, 2-, 3-, and 4-grams, bp is the brevity penalty, and sys_len and ref_len are the total lengths of the predictions and of the references.

The resulting BLEU score of 25.2% is slightly lower than the state of the art, but we are being penalized by the peculiarities of diacritic usage in Romanian. For example, the letters ș and ț (corresponding to the sounds sh and ts in English) are usually spelled with a comma below the characters s and t, which is the standard imposed by the Romanian Academy. However, in “the wild” these characters are often written with a cedilla instead of a comma, e.g., ţ instead of ț (or, using the names of these Unicode characters, LATIN SMALL LETTER T WITH CEDILLA instead of LATIN SMALL LETTER T WITH COMMA BELOW). Further, some of these characters with diacritics are sometimes omitted altogether in the T5 output. The predictions shown above contain an example of each of these two situations (e.g., soluţi(e) instead of soluți(i), and eful instead of Șeful). To avoid being penalized at scoring time for these arbitrary discrepancies, post-processing scripts are sometimes used to normalize diacritic usage.3 Such post-processing can improve the BLEU score substantially; however, it is beyond the scope of this chapter.

1 https://huggingface.co/datasets/wmt16
2 https://huggingface.co/docs/datasets/v2.4.0/en/package_reference/main_classes#datasets.Metric
3 https://github.com/huggingface/transformers/blob/main/examples/legacy/seq2seq/romanian_postprocessing.md
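As a small illustration of what such normalization could look like, the sketch below maps the cedilla variants of these Romanian characters to their comma-below counterparts before scoring. This is only a minimal example under that assumption, not the post-processing script referenced in the footnote above.

# map cedilla-based characters to the comma-below forms preferred by the Romanian Academy
cedilla_to_comma = str.maketrans({
    '\u015e': '\u0218',  # S with cedilla -> S with comma below
    '\u015f': '\u0219',  # s with cedilla -> s with comma below
    '\u0162': '\u021a',  # T with cedilla -> T with comma below
    '\u0163': '\u021b',  # t with cedilla -> t with comma below
})

def normalize_diacritics(text):
    # apply the mapping to a prediction or reference before scoring
    return text.translate(cedilla_to_comma)

print(normalize_diacritics('nu există o soluţie militară'))  # 'nu există o soluție militară'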
15.2 Implementation of Greedy Generation

To gain a better intuition of how the encoder-decoder model generates its output sequence, the greedy_translation() function in the notebook above implements the greedy version of the generate() method we just used. This function takes a single English text as an argument (i.e., no batching) and returns the corresponding Romanian text. It interacts directly with the encoder and decoder components of the T5 model, so we must construct the input for both. The encoder’s input is constructed by prepending the task prefix to the English text and tokenizing it. The decoder’s input, on the other hand, is constructed incrementally by accumulating the tokens predicted so far, which are used to predict the next token in the sequence. At the beginning, before any tokens are predicted, the decoder’s input is initialized with a single token that corresponds to the beginning of the sequence. We retrieve this token, called decoder_start_token_id, from the model’s configuration object. The tokens are predicted one at a time, until the model produces eos_token_id, which indicates that the sequence is finished. However, in case the model does not produce this end-of-sequence token within a reasonable number of steps, we also enforce a maximum number of predicted tokens, determined by the max_target_length parameter we defined previously.

The T5 model’s forward() method, called indirectly through its __call__() method, takes the inputs for both the encoder and the decoder. The output returned by this method corresponds to all the tokens in the decoder’s input plus an extra one: the newly predicted token. To select the best prediction, we retrieve the logits from the output and keep only the logits corresponding to the last token in the sequence (recall that the output shape is (batch size, sequence length, vocabulary size)). From these selected logits, we use argmax() to select the token id corresponding to the highest-scoring vocabulary item. We append this new token id to the decoder’s input, and repeat the process until we encounter the end-of-sequence token or the decoded text reaches the maximum length. Once we are finished generating token ids, we retrieve the corresponding text by calling the tokenizer’s decode() method. This method is identical to the batch_decode() method we used previously, except that it only decodes a single example. A usage example for greedy_translation() appears in the last cell of the notebook above.

15.3 Fine-tuning Romanian to English Translation

In this section, we fine-tune a T5 model on the translation of Romanian to English, a language pair that was not included in the T5 pre-training. To confirm that this data was not included in pre-training, we evaluated the performance of the vanilla t5-small model on the translation from Romanian to English using code equivalent to the code discussed in the previous section (see the chap15_translation_ro_to_en notebook). The resulting BLEU score was only 3.2%, which is substantially lower than the score we obtained when translating English to Romanian (25.2%).

Note that the transformers library includes scripts to fine-tune a translation model directly from the command line.4 For didactic purposes, we will not use these scripts in this section, but instead write the fine-tuning code explicitly. For this exercise, we continue using the WMT16 dataset, but this time we load the train and validation splits. We employ the same t5-small model that we used previously. The code from the last section to load the model, tokenizer, and dataset does not need to change for this use case, so we do not repeat it here. As before, the complete code is available in a Jupyter notebook (chap15_translation_ro_to_en_finetune).

We begin by tokenizing the source (Romanian) and target (English) texts. As in the last section, we need to prepend the task prefix to the source texts prior to tokenizing. This time, since we are translating in the opposite direction, we use the prefix "translate Romanian to English: ", and we prepend it to the Romanian text. Each call to the tokenizer with a batch of texts produces input_ids and an attention_mask. This output is what we need for the Romanian text, which serves as the input to the model. To generate the labels, i.e., the correct translated tokens, we use the input_ids corresponding to the English text. Recall that "labels" is the default key name expected by trainers in Hugging Face. We apply our tokenize() function to both the train and validation splits. The resulting dataset has 610,320 rows and three columns: input_ids, attention_mask, and labels. The input_ids of every row begin with the token ids of the task prefix (13959, 3871, 29, 12, 1566, 10, ...), and the labels contain the token ids of the corresponding English sentence.

Recall that in order to construct a trainer, we need a data collator for batching, a function to compute the metrics of interest, and a TrainingArguments object. In this section, we use a data collator called DataCollatorForSeq2Seq, which is included in the transformers library specifically for sequence-to-sequence models. The collator pads the batches using the label_pad_token_id, which we have set to -100, as we did in Chapter 13 (this is the default ignore_index value used by CrossEntropyLoss). The compute_metrics() function computes the BLEU score. It uses the tokenizer to decode the token ids into text, for both the predicted and the gold labels, ignoring padding.

4 https://github.com/huggingface/transformers/tree/main/examples/pytorch/translation
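To make the role of the -100 label concrete, here is a small, self-contained illustration (not part of the chapter’s notebooks) showing that CrossEntropyLoss skips positions labeled -100, so padded label positions do not contribute to the loss; the toy logits and labels are invented for the example.

import torch
import torch.nn as nn

# toy logits for three decoder positions over a vocabulary of five tokens
logits = torch.randn(3, 5)
# the last position is padding, so it is labeled with -100
labels = torch.tensor([2, 4, -100])

loss_fn = nn.CrossEntropyLoss()              # ignore_index defaults to -100
loss_all = loss_fn(logits, labels)           # the padded position is ignored
loss_real = loss_fn(logits[:2], labels[:2])  # loss over the two real positions only
print(torch.isclose(loss_all, loss_real))    # tensor(True)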
We use the Seq2SeqTrainingArguments class, which adds the predict_with_generate parameter to the regular TrainingArguments class. This parameter is needed to indicate that the trainer should use the generate() method for inference in order to compute the metrics (BLEU in this case). Finally, we construct the trainer using the Seq2SeqTrainer class, which is a subclass of Trainer that adds the ability to compute scores such as BLEU during training by calling generate() during evaluation.

Fine-tuning a translation model takes considerably longer than training or fine-tuning the models we have developed so far in this book. To account for this, here we add support for resuming training from a checkpoint, i.e., a model that was saved after training on a number of examples. Similar to how one can resume a video game, this allows one to pick up from the last “save point” in case training was interrupted and needs to be resumed. When calling the trainer’s train() method, we either provide a model checkpoint or None. In the former case, the trainer continues training from the provided checkpoint; in the latter case, it begins training from scratch. Once training has completed, we save the trained model and tokenizer into the output directory using the trainer’s save_model() method.

We then compute and save the metrics corresponding to the training partition. This is not required, but it is helpful to keep a record of the model’s performance on the training data. Note that the metrics do not automatically include the number of examples in the training partition, so we add them explicitly. Next, we evaluate our final model on the validation data and save the corresponding metrics. These metrics indicate that our BLEU score on the validation data is 35.2%, which is evidence that fine-tuning has helped dramatically. Lastly, we save a model card into our output directory. A model card is akin to an automatically generated README file that includes information about the model, the data, the settings used, and the performance throughout the training process. This file is helpful for reproducibility, as it gathers all of this key information in one place. These cards are often uploaded to the Hugging Face Hub together with the model itself.5

15.4 Using a Previously Saved Model

Models that have been saved locally can be loaded using the same from_pretrained() methods we have used before. In particular, instead of providing a model name, we provide the path to the local directory where the model is stored, and we use the local_files_only parameter to indicate that we want to load the model from the local file system rather than download it from the Hugging Face Hub. (Make sure you use an output directory that is valid on your machine!) Once our fine-tuned model is loaded, we use it the same way as before. That is, we use the translate() function to generate translations for our test partition, and then we use the BLEU metric to score this output. From this metric, we obtain a final BLEU score of 33.4%, which is markedly better than our initial score (i.e., without fine-tuning) of 3.2%! The code corresponding to this section is available in the notebook chap15_translation_ro_to_en_finetuned.

5 We do not discuss the model uploading process here. Please see the documentation on model sharing at https://huggingface.co/docs/transformers/v4.14.1/model_sharing.
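The notebook for this section is not reproduced in this chapter, but the loading step it describes could look like the following minimal sketch. It reuses the output_dir path from the fine-tuning notebook below and assumes a device object like the one defined in the first notebook; adjust both to your own setup.

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# path where the fine-tuned model was saved by trainer.save_model();
# make sure this is a valid path on your machine!
output_dir = '/media/data2/t5-translation-example'

# load the fine-tuned tokenizer and model from the local file system only
tokenizer = AutoTokenizer.from_pretrained(output_dir, local_files_only=True)
model = AutoModelForSeq2SeqLM.from_pretrained(output_dir, local_files_only=True)
model = model.to(device)  # assumes a device object as in the first notebook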
15.5 Summary

In this chapter we used a complete encoder-decoder transformer network to implement a machine translation application. Importantly, transformers with a decoder component have a generate() method that simplifies the generation process and provides multiple options for decoding.
We encourage you to explore these options! For example, try comparing the quality of the output with the resources required to produce it (e.g., runtime overhead) when the size of the search beam increases. Additionally, we saw how to fine-tune an encoder-decoder model on a new language pair that it has not seen during its pre-training. This exercise included using checkpoints to support resuming training in case of unexpected interruptions, saving our fine-tuned model, and loading it for later use.
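As a starting point for that exploration, the sketch below (not part of the chapter’s notebooks) times the generate() call of the English-to-Romanian setup for a few beam sizes. It reuses the model, tokenizer, device, task_prefix, and max_target_length from the first notebook; the example sentence and beam values are arbitrary.

import time

# compare decoding time and output quality for increasing beam sizes
text = task_prefix + "The committee will publish its report next week."  # illustrative sentence
encoded = tokenizer(text, return_tensors='pt').to(device)

for beams in (1, 2, 4, 8):
    start = time.perf_counter()
    output = model.generate(**encoded, num_beams=beams, max_length=max_target_length)
    elapsed = time.perf_counter() - start
    translation = tokenizer.decode(output[0], skip_special_tokens=True)
    print(f'num_beams={beams} time={elapsed:.2f}s -> {translation}')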
#!/usr/bin/env python
# coding: utf-8

# # Machine Translation from Ro to En
# # Using the T5 Transformer with Fine-tuning

# Some initialization:

# In[1]:

import torch
import numpy as np
from transformers import set_seed

# random seed
seed = 42

# set random seed
if seed is not None:
    print(f'random seed: {seed}')
    set_seed(seed)

# In[2]:

transformer_name = 't5-small'
dataset_name = 'wmt16'
dataset_config_name = 'ro-en'
source_lang = 'ro'
target_lang = 'en'
max_source_length = 1024
max_target_length = 128
task_prefix = 'translate Romanian to English: '
batch_size = 4
label_pad_token_id = -100
save_steps = 25_000
num_beams = 1
learning_rate = 1e-3
num_train_epochs = 3
output_dir = '/media/data2/t5-translation-example'  # make sure this is a valid path on your machine!

# Load dataset from HuggingFace:

# In[3]:

from datasets import load_dataset

wmt16 = load_dataset(dataset_name, dataset_config_name)

# Load tokenizer and pre-trained model:

# In[4]:

from transformers import AutoConfig, AutoTokenizer, AutoModelForSeq2SeqLM

config = AutoConfig.from_pretrained(transformer_name)
tokenizer = AutoTokenizer.from_pretrained(transformer_name)
model = AutoModelForSeq2SeqLM.from_pretrained(transformer_name, config=config)

# Tokenize the texts in the dataset:

# In[5]:

def tokenize(batch):
    # get source sentences and prepend task prefix
    sources = [x[source_lang] for x in batch["translation"]]
    sources = [task_prefix + x for x in sources]
    # tokenize source sentences
    output = tokenizer(
        sources,
        max_length=max_source_length,
        truncation=True,
    )
    # get target sentences
    targets = [x[target_lang] for x in batch["translation"]]
    # tokenize target sentences
    labels = tokenizer(
        targets,
        max_length=max_target_length,
        truncation=True,
    )
    # add targets to output
    output["labels"] = labels["input_ids"]
    return output

# In[6]:

train_dataset = wmt16['train']
eval_dataset = wmt16['validation']

column_names = train_dataset.column_names

train_dataset = train_dataset.map(
    tokenize,
    batched=True,
    remove_columns=column_names,
)
eval_dataset = eval_dataset.map(
    tokenize,
    batched=True,
    remove_columns=column_names,
)

# In[7]:

train_dataset.to_pandas()

# Create `Trainer` object and train:

# In[8]:

from transformers import DataCollatorForSeq2Seq

data_collator = DataCollatorForSeq2Seq(
    tokenizer,
    model=model,
    label_pad_token_id=label_pad_token_id,
)

# In[9]:

from datasets import load_metric

metric = load_metric('sacrebleu')

def compute_metrics(eval_preds):
    preds, labels = eval_preds
    # get text for predictions
    predictions = tokenizer.batch_decode(
        preds,
        skip_special_tokens=True,
    )
    # replace -100 in labels with pad token
    labels = np.where(
        labels != -100,
        labels,
        tokenizer.pad_token_id,
    )
    # get text for gold labels
    references = tokenizer.batch_decode(
        labels,
        skip_special_tokens=True,
    )
    # metric expects list of references for each prediction
    references = [[ref] for ref in references]
    # compute bleu score
    results = metric.compute(
        predictions=predictions,
        references=references,
    )
    results = {'bleu': results['score']}
    return results

# In[10]:

from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir=output_dir,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    save_steps=save_steps,
    predict_with_generate=True,
    evaluation_strategy='steps',
    eval_steps=save_steps,
    learning_rate=learning_rate,
    num_train_epochs=num_train_epochs,
)

# In[11]:

from transformers import Seq2SeqTrainer

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    tokenizer=tokenizer,
    data_collator=data_collator,
    compute_metrics=compute_metrics,
)

# In[12]:

import os
from transformers.trainer_utils import get_last_checkpoint

last_checkpoint = None
if os.path.isdir(output_dir):
    last_checkpoint = get_last_checkpoint(output_dir)
if last_checkpoint is not None:
    print(f'Checkpoint detected, resuming training at {last_checkpoint}.')

# In[13]:

train_result = trainer.train(resume_from_checkpoint=last_checkpoint)
trainer.save_model()

# In[14]:

metrics = train_result.metrics
metrics['train_samples'] = len(train_dataset)
trainer.log_metrics('train', metrics)
trainer.save_metrics('train', metrics)
trainer.save_state()

# Now evaluate:

# In[15]:

# https://discuss.huggingface.co/t/evaluation-results-metric-during-training-is-different-from-the-evaluation-results-at-the-end/15401
metrics = trainer.evaluate(
    max_length=max_target_length,
    num_beams=num_beams,
    metric_key_prefix='eval',
)
metrics['eval_samples'] = len(eval_dataset)
trainer.log_metrics('eval', metrics)
trainer.save_metrics('eval', metrics)

# Create a model card with meta data about this model:

# In[16]:

kwargs = {
    'finetuned_from': transformer_name,
    'tasks': 'translation',
    'dataset_tags': dataset_name,
    'dataset_args': dataset_config_name,
    'dataset': f'{dataset_name} {dataset_config_name}',
    'language': [source_lang, target_lang],
}
trainer.create_model_card(**kwargs)
2,611
2,644
17
chap15-18
chap15-18
15 Implementing Encoder-decoder Methods In this chapter we implement a machine translation application as an example of an encoder-decoder task. In particular, we build on pre-trained encoder-decoder transformer models, which exist in the Hugging Face library for a wide variety of language pairs. We first show how to use one of these models out-of-the-box to perform translation for one of the language pairs it has been exposed to during pre-training: English to Romanian. Afterwards, we fine-tune the model to a new language combination that is has not seen before: Romanian to English. In both use cases, we use the T5 encoder-decoder model, which has been pre-trained for several tasks, including machine translation (Raffel et al., 2020). Please see Chapter 16 for a description of T5’s pre-training process. The data for this task comes from the WMT 2016 dataset (Bojar et al., 2016), which consists of English sentences aligned pairwise to German, Czech, Russian, Finnish, Romanian, and Turkish. In this chapter we only use the English-Romanian texts (in both directions). 15.1 Translating English to Romanian As a first example, we use T5 to translate from English to Romanian, which is one of the language pairs it has been exposed to during pretraining. The code discussed in this section is available in the notebook chap15_translation_en_to_ro. Even though in this exercise we are not fine-tuning the model, we still need to define a few hyper parameters to frame the task and help the model understand how to work with the data: The above settings indicate that we use the t5-small model, a smaller T5 variant, to minimize the amount of memory required. The source_lang 212 15.1 Translating English to Romanian 213 and target_lang variables define the direction of translation, i.e., from English to Romanian. To keep our computing requirements small, we limit the length of our input and output. That is, English text longer than max_source_length tokens will be truncated. Further, we limit our generated Romanian text to max_target_length. We chose a maximum target length of 128 tokens to limit the computational cost incurred during text generation (recall that the text is generated one token at a time). The T5 models are trained to support multiple tasks such as translation and summarization (please see Chapter 16 for details). Thus, during training and inference, the user must specify which task the model should perform using a text prefix. Here we use the prefix "translate English to Romanian: " to indicate that the input text is in English and should be translated to Romanian. Next, we load the model and the corresponding tokenizer, and move them to the GPU if one is available: We use the datasets library to load our translation dataset. Note that the first time one calls load_dataset() the dataset will be downloaded automatically from the Hugging Face repository.1 The load_dataset() function takes a dataset name and configuration, which in our case are wmt16 and ro-en, respectively. Since in this example we are only evaluating the model, we only load the test partition (or split) of the dataset: The dataset consists of a single column called translation. Each element in this column is a dictionary that contains the aligned pair. The dictionary keys are the abbreviated language names and the values are the corresponding sentences. An example of one of these dictionaries is shown below: We encapsulate the logic for translating the English text into Romanian in a function called translate(). 
Inside this function, for a batch of aligned pairs, we select the English sentence as our input, and prepend the task prefix. Then we tokenize these inputs, including the prefix, specifying that sentences longer than max_source_length should be truncated, the batch should be padded, and the tokenizer should return PyTorch tensors. Once the tokenizer output has been moved to the GPU, we pass it to the model’s generate() method. This is the first time we have seen this method, because only decoder and encoder-decoder models support it. This method generates an output sequence by predicting one token 1 https://huggingface.co/datasets/wmt16 214 Implementing Encoder-decoder Methods at a time, stopping when either the end-of-sequence token is produced or when the sequence reaches a maximum length. Several generation techniques are supported, such as beam search, in which several alternate translations are maintained by the model so that it is able to select an overall best translation from several options. For efficiency purposes, we use a greedy approach, which chooses the best token at each step of the generation. This is equivalent to using a beam search with a beam of size one. Since the model generates its predictions as a sequence of token ids, we need to convert them back into the corresponding tokens to be able to read the translated text. We do this using the tokenizer’s batch_decode() method. Finally, we return the gold and predicted Romanian sentences in a dictionary: Next, we apply our translate() function to our Dataset to translate all the sentences: reference Șeful ONU declară că nu există soluții militar... Șeful ONU a solicitat din nou tuturor părților... Ban și-a exprimat regretul că divizările în co... Nu sunt bani puțini. La sfârșitul mandatului voi face un raport cu ... "Să spună un parlamentar că nu-i ajung banii e... 1999 rows × 2 columns prediction eful ONU declară că nu există o soluţie milita... eful U.N. a cerut din nou tuturor partidelor, ... El şi-a exprimat regretul că diviziunile din c... Banii sunt suficienţi. La sfârşitul biroului voi raporta tot ceea ce ... "A spune că un parlamentar nu are suficienţi b... 1994 1995 1996 1997 1998 0 1 2 3 4 ... Secretarul General Ban Ki-moon afirmă că răspu... Secretarul General Ban Ki-moon declară că răsp... Ban a declarat miercuri în cadrul unei conferi... Ban a declarat la o conferinţă de presă susţin... ... ... Uneori mi-e rușine să ridic banii de la casierie. Uneori mi-e ruşine să iau banii de la biroul c... S-a întâmplat să ridic într-o lună și 30.000 d... Într-o lună am adunat 30 000 de lei cu ramburs... We evaluate the quality of these translations using the BLEU metric, which we introduced in Chapter 14. To this end, we load an existing implementation of BLEU from the datasets library as a Metric object.2 Metric objects have a method called add(), which is used to accumulate the predictions and gold labels, one example at a time. After accumulating all examples, the compute() method returns the results of the evaluation. Note that for each predicted sentence, BLEU expects a list of reference sentences (as there are often many correct ways of translat- 2 https://huggingface.co/docs/datasets/v2.4.0/en/package_reference/main_ classes#datasets. Metric 15.2 Implementation of Greedy Generation 215 ing a given text). Since we only have one reference, we wrap it in a list before passing it to the metric: The score corresponds to the BLEU score. The rest of the items correspond to the components required to compute the score. 
That is, the counts, totals, and precisions correspond to the counts, totals, and precisions for 1-, 2-, 3-, and 4-grams. The bp is the brevity penalty. The sys_len and ref_len correspond to the predictions and reference lengths. The above BLEU score of 25.2% is slightly lower than the state of the art, but we are being penalized by the peculiarities of diacritic usage in Romanian characters. For example, the letters ș and ț (corresponding to the sounds sh and ts in English) are usually spelled with a comma below the characters s and t, which is the standard imposed by the Romanian Academy. However, in “the wild” these characters are often written using a cedilla instead of a comma, e.g., ţ instead of ț (or, using the names of these Unicode characters, LATIN SMALL LETTER T WITH CEDILLA instead of LATIN SMALL LETTER T WITH COMMA BELOW). Further, some of these characters with diacritics are often omitted altogether in the T5 output. The T5 output below contains an example for each of these two situations (e.g., soluţi(e) instead of soluți(i), and eful instead of Șeful): To avoid being penalized at scoring time for these arbitrary discrepancies, post-processing scripts are sometimes used to normalize diacritic usage.3 Usage of such post-processing scripts can improve the BLEU score substantially. However, this is beyond the scope of this chapter. 15.2 Implementation of Greedy Generation To gain a better intuition of how the encoder-decoder model generates its output sequence, we show below an implementation of the greedy version of the generate() method used above. This function takes as an argument a single English text (i.e., no batching) and returns the corresponding Romanian text: This function interacts directly with the encoder and decoder components of the T5 model, so we must construct the input for both. The encoder’s input is constructed by prepending the task prefix to the English text and tokenizing it. On the other hand, the decoder’s input is constructed incrementally by accumulating the tokens predicted so far 3 https://github.com/huggingface/transformers/blob/main/examples/legacy/ seq2seq/romanian_postprocessing.md 216 Implementing Encoder-decoder Methods in order to predict the next token in the sequence. At the beginning, before any tokens are predicted, the decoder’s input is initialized with a single token that corresponds to the beginning of the sequence. We retrieve this token, called decoder_start_token_id, from the model’s configuration object. The tokens are predicted one at a time, until the model produces eos_token_id, which indicates that the sequence is finished. However, in case the model does not produce this end-of-sequence token within a reasonable number of steps, we also enforce a maximum number of predicted tokens, determined by the max_target_length parameter we defined previously. The T5 model’s forward() method, called indirectly through its __call__()) method, takes the inputs for both the encoder and the decoder. The output returned by this method corresponds to all the tokens in the decoder’s input plus an extra one: the newly predicted token. To select the best prediction, we retrieve the logits from the output and select the logits corresponding to the last token in the sequence (recall that the output shape is (batch size, sequence length, vocabulary size)). From these selected logits, we use the argmax() to select the token id corresponding to the highest-scoring vocabulary item. 
We append this new token id to the decoder’s input, and repeat the process until we encounter the end-of-sequence token or the decoded text reaches the maximum length. Once we are finished generating token ids, we retrieve the corresponding text by calling the tokenizer’s decode() method. This method is identical to the batch_decode() method we used previously, except that it only decodes a single example. Below is an usage example for the greedy_translation() function: 15.3 Fine-tuning Romanian to English Translation In this section, we fine-tune a T5 model on the translation of Romanian to English, a language pair that was not included in the T5 pre-training. To confirm that this data was not included in pre-training, we evaluated the performance of the vanilla t5-small model on the translation from Romanian to English using code equivalent to the code discussed in the previous section (see the chap15_translation_ro_to_en notebook). The resulting BLEU score was only 3.2%, which is substantially lower than the score we obtained when translating English to Romanian (25.2%). 15.3 Fine-tuning Romanian to English Translation 217 Note that the transformers library includes scripts to fine-tune a translation model directly from the command line.4 For didactic purposes, we will not use these scripts in this section, but instead write the fine-tuning code explicitly. For this exercise, we continue using the WMT16 dataset, but this time
we load the train and validation splits. We employ the same t5-small model that we used previously. The code from the last section to load
the model, tokenizer, and dataset does not need to change for this use-
case, so we do not repeat it here. However, as before, the complete code is available in a Jupyter notebook (chap15_translation_ro_to_en_finetune). We begin by tokenizing the source (Romanian) and target language (English) texts. As in the last section, we need to prepend the task prefix to the source texts prior to tokenizing. This time, since we are translating in the opposite direction, we use the prefix "translate Romanian to English: ", and we prepend it to the Romanian text. Each call to the tokenizer with a batch of texts produces input_ids and an attention_mask. This output is what we need for the Romanian text, which will serve as the input to the model. To generate the labels, i.e., the correct translated tokens, we use the input_ids corresponding to the English text. Recall that "labels" is the default key name expected by trainers in Hugging Face. We apply our tokenize() function to both the train and validation splits: 4 https://github.com/huggingface/transformers/tree/main/examples/pytorch/ translation 218 Implementing Encoder-decoder Methods input_ids [13959, 3871, 29, 12, 1566, 10, 4961, 106, 204... [13959, 3871, 29, 12, 1566, 10, 4961, 106, 204... [13959, 3871, 29, 12, 1566, 10, 374, 6225, 49,... [13959, 3871, 29, 12, 1566, 10, 4540, 4031, 9,... [13959, 3871, 29, 12, 1566, 10, 2262, 900, 17,... [13959, 3871, 29, 12, 1566, 10, 18420, 83, 362... attention_mask labels [19428, 13, 12876, 10, 217, 13687, 7, 1] [19428, 13, 12876, 10, 217, 13687, 7, 1] [11167, 7, 1204, 10, 217, 13687, 7, 1] [4540, 4031, 9, 7, 1672, 7, 2262, 900, 17, 38,... [2262, 900, 17, 641, 65, 46, 3761, 6, 1069, 31... [3625, 32, 5788, 35, 15, 3844, 31, 7, 3, 16143... 0 1 2 3 4 ... 610315 610316 610317 610318 610319 [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... [13959, 3871, 29, 12, 1566, 10, 5085, [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 5840, 49... 1, 1, 1, ... [13959, 3871, 29, 12, 1566, 10, 781, [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 8750, 9, ... 1, 1, 1, ... ... ... [13959, 3871, 29, 12, 1566, 10, 2364, [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 4540, 40... 1, 1, 1, ... 610320 rows × 3 columns [13959, 3871, 29, 12, 1566, 10, 3, [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 25882, 759,... 1, 1, 1, ... [2276, 8843, 138, 13, 13687, 7, 13, 1767, 3823... [781, 2420, 13, 17500, 10, 217, 13687, 7, 1] [242, 4540, 4031, 9, 7, 6, 8, 516, 65, 66, 8, ... [9810, 157, 31, 7, 516, 92, 3088, 21, 46, 3839... ... Recall that in order to construct a trainer, we need a data collator for batching, a function to compute the metrics of interest, and a TrainingArguments object. In this section, we use a data collator called DataCollatorForSeq2Seq, which is included in the transformers library specifically for sequence-to-sequence models. The collator pads the batches using the label_pad_token_id, which we have set to −100, as we did in Chapter 13 (this is the default ignore_index value used by CrossEntropyLoss): The compute_metrics() function computes the BLEU score. It uses the tokenizer to decode the token ids into text, for both the predicted and gold labels, ignoring padding: We use the Seq2SeqTrainingArguments class, which adds the predict_with_generate parameter to the regular TrainingArguments class. This is needed to in-
dicate that the trainer should use the generate() method for inference
in order to compute the metrics (BLUE in this case): Finally, we construct the trainer using the Seq2SeqTrainer class, which is a subclass of Trainer that adds the ability to compute scores such as BLEU during training by calling generate() during evaluation: Fine-tuning a translation model takes considerably longer than training or fine-tuning the models we have developed so far in this book. To account for this, here we add support for resuming training from a checkpoint, i.e., a model that was saved after training on a number of 15.4 Using a Previously Saved Model 219 examples. Similar to how one can resume a video game, this allows one to pick up from the last “save point,” in case training was interrupted and needs to be resumed: When calling the trainer’s train() method, we either provide a model checkpoint or None. In the former case, the trainer will continue training from the provided checkpoint. In the latter case, the trainer will begin training from scratch. Once the training has completed, we save the trained model and tokenizer using the trainer’s save_model() method into the output directory: We then compute and save the metrics corresponding to the training partition. This is not required, but it is helpful to keep a record of the model’s performance on the training data. Note that the metrics do not automatically include the number of examples in the training partition, so we add them explicitly: Next, we evaluate our final model on the validation data and save the corresponding metrics. These metrics indicate that our BLEU score on the validation data is 35.2%, which is evidence that fine-tuning has helped dramatically: Lastly, we save a model card into our output directory. A model card is akin to an automatically-generated README file that includes information about the model used, the data, settings used, and performance throughout the training process. This file is helpful for reproducibility as it contains all of this key information in one place. These cards are often uploaded to the Hugging Face Hub together with the model itself.5 15.4 Using a Previously Saved Model Models that have been saved locally can be loaded using the same from_pretrained() methods we have used before. In particular, instead of providing a model name, we provide the path to the local directory where the model is stored, using the local_files_only parameter to indicate that we want to load the model from the local file system instead of downloading it from the Hugging Face Hub (Make sure you use an output directory that is valid on your machine!): Once our fine-tuned model is loaded, we use it the same way as before. That is, we use the translate() function to generate translations 5 We do not discuss the model uploading process here. Please see the documentation on model sharing at: https://huggingface.co/docs/transformers/v4.14.1/model_sharing. 220 Implementing Encoder-decoder Methods for our test partition. Then we use the BLEU metric to score this output. From this metric, we obtain the final BLEU score of 33.4%, which is markedly better than our initial score (i.e., without fine-tuning) of 3.2%! The code corresponding to this section is available in the notebook chap15_translation_ro_to_en_finetuned. 15.5 Summary In this chapter we used a complete encoder-decoder transformer network to implement a machine translation application. Importantly, transformers with a decoder component have a generate() method that simplifies the generation process and provides multiple options for decoding. 
We encourage you to explore these options! For example, try comparing the quality of the output with the resources required to produce it (e.g., runtime overhead) when the size of the search beam increases. Additionally, we saw how to fine-tune an encoder-decoder model on a new language pair that it has not seen during its pre-training. This exercise included using checkpoints to support resuming training in case of unexpected interruptions, saving our fine-tuned model, and loading it for later use.
3,467
3,572
#!/usr/bin/env python # coding: utf-8 # # Machine Translation from English (En) to Romanian (Ro) # # Using the T5 Transformer without Fine-tuning # Some initialization: # In[1]: import torch import numpy as np from transformers import set_seed # set to True to use the gpu (if there is one available) use_gpu = True # select device device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu') print(f'device: {device.type}') # random seed seed = 42 # set random seed if seed is not None: print(f'random seed: {seed}') set_seed(seed) # In[2]: transformer_name = 't5-small' source_lang = 'en' target_lang = 'ro' max_source_length = 1024 max_target_length = 128 task_prefix = 'translate English to Romanian: ' num_beams = 1 batch_size = 100 # Load tokenizer and pre-trained model: # In[3]: from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained(transformer_name) model = AutoModelForSeq2SeqLM.from_pretrained(transformer_name) model = model.to(device) # Load dataset from HuggingFace: # In[4]: from datasets import load_dataset test_ds = load_dataset('wmt16', 'ro-en', split='test') test_ds # In[5]: test_ds['translation'][0] # Implement the `translate` method and apply on this dataset: # In[6]: def translate(batch): # get source language examples and prepend task prefix inputs = [x[source_lang] for x in batch["translation"]] inputs = [task_prefix + x for x in inputs] # tokenize inputs encoded = tokenizer( inputs, max_length=max_source_length, truncation=True, padding=True, return_tensors='pt', ) # move data to gpu if needed input_ids = encoded.input_ids.to(device) attention_mask = encoded.attention_mask.to(device) # generate translated sentences output = model.generate( input_ids=input_ids, attention_mask=attention_mask, num_beams=num_beams, max_length=max_target_length, ) # generate predicted sentences from predicted token ids decoded = tokenizer.batch_decode( output, skip_special_tokens=True, ) # get gold sentences in target language targets = [x[target_lang] for x in batch["translation"]] # return gold and predicted sentences return { 'reference': targets, 'prediction': decoded, } # In[7]: results = test_ds.map( translate, batched=True, batch_size=batch_size, remove_columns=test_ds.column_names, ) results.to_pandas() # Now evaluate the quality of translations using the BLEU metric: # In[8]: from datasets import load_metric metric = load_metric('sacrebleu') for r in results: prediction = r['prediction'] reference = [r['reference']] metric.add(prediction=prediction, reference=reference) metric.compute() # An example of greedy decoding for individual texts: # In[9]: def greedy_translation(text): # prepend task prefix text = task_prefix + text # tokenize input encoded = tokenizer( text, max_length=max_source_length, truncation=True, return_tensors='pt', ) # encoder input ids encoder_input_ids = encoded.input_ids.to(device) # decoder input ids, initialized with start token id start = model.config.decoder_start_token_id decoder_input_ids = torch.LongTensor([[start]]).to(device) # generate tokens, one at a time for _ in range(max_target_length): # get model predictions output = model( encoder_input_ids, decoder_input_ids=decoder_input_ids, ) # get logits for last token next_token_logits = output.logits[0, -1, :] # select most probable token next_token_id = torch.argmax(next_token_logits) # append new token to decoder_input_ids output_id = torch.LongTensor([[next_token_id]]).to(device) decoder_input_ids = torch.cat([decoder_input_ids, 
output_id], dim=-1) # if predicted token is the end of sequence, stop iterating if next_token_id == tokenizer.eos_token_id: break # return text corresponding to predicted token ids return tokenizer.decode(decoder_input_ids[0], skip_special_tokens=True) # In[10]: greedy_translation("this is a test")
1,307
1,329
18
chap15-19
chap15-19
15 Implementing Encoder-decoder Methods In this chapter we implement a machine translation application as an example of an encoder-decoder task. In particular, we build on pre-trained encoder-decoder transformer models, which exist in the Hugging Face library for a wide variety of language pairs. We first show how to use one of these models out-of-the-box to perform translation for one of the language pairs it has been exposed to during pre-training: English to Romanian. Afterwards, we fine-tune the model to a new language combination that is has not seen before: Romanian to English. In both use cases, we use the T5 encoder-decoder model, which has been pre-trained for several tasks, including machine translation (Raffel et al., 2020). Please see Chapter 16 for a description of T5’s pre-training process. The data for this task comes from the WMT 2016 dataset (Bojar et al., 2016), which consists of English sentences aligned pairwise to German, Czech, Russian, Finnish, Romanian, and Turkish. In this chapter we only use the English-Romanian texts (in both directions). 15.1 Translating English to Romanian As a first example, we use T5 to translate from English to Romanian, which is one of the language pairs it has been exposed to during pretraining. The code discussed in this section is available in the notebook chap15_translation_en_to_ro. Even though in this exercise we are not fine-tuning the model, we still need to define a few hyper parameters to frame the task and help the model understand how to work with the data: The above settings indicate that we use the t5-small model, a smaller T5 variant, to minimize the amount of memory required. The source_lang 212 15.1 Translating English to Romanian 213 and target_lang variables define the direction of translation, i.e., from English to Romanian. To keep our computing requirements small, we limit the length of our input and output. That is, English text longer than max_source_length tokens will be truncated. Further, we limit our generated Romanian text to max_target_length. We chose a maximum target length of 128 tokens to limit the computational cost incurred during text generation (recall that the text is generated one token at a time). The T5 models are trained to support multiple tasks such as translation and summarization (please see Chapter 16 for details). Thus, during training and inference, the user must specify which task the model should perform using a text prefix. Here we use the prefix "translate English to Romanian: " to indicate that the input text is in English and should be translated to Romanian. Next, we load the model and the corresponding tokenizer, and move them to the GPU if one is available: We use the datasets library to load our translation dataset. Note that the first time one calls load_dataset() the dataset will be downloaded automatically from the Hugging Face repository.1 The load_dataset() function takes a dataset name and configuration, which in our case are wmt16 and ro-en, respectively. Since in this example we are only evaluating the model, we only load the test partition (or split) of the dataset: The dataset consists of a single column called translation. Each element in this column is a dictionary that contains the aligned pair. The dictionary keys are the abbreviated language names and the values are the corresponding sentences. An example of one of these dictionaries is shown below: We encapsulate the logic for translating the English text into Romanian in a function called translate(). 
Inside this function, for a batch of aligned pairs, we select the English sentence as our input, and prepend the task prefix. Then we tokenize these inputs, including the prefix, specifying that sentences longer than max_source_length should be truncated, the batch should be padded, and the tokenizer should return PyTorch tensors. Once the tokenizer output has been moved to the GPU, we pass it to the model’s generate() method. This is the first time we have seen this method, because only decoder and encoder-decoder models support it. This method generates an output sequence by predicting one token 1 https://huggingface.co/datasets/wmt16 214 Implementing Encoder-decoder Methods at a time, stopping when either the end-of-sequence token is produced or when the sequence reaches a maximum length. Several generation techniques are supported, such as beam search, in which several alternate translations are maintained by the model so that it is able to select an overall best translation from several options. For efficiency purposes, we use a greedy approach, which chooses the best token at each step of the generation. This is equivalent to using a beam search with a beam of size one. Since the model generates its predictions as a sequence of token ids, we need to convert them back into the corresponding tokens to be able to read the translated text. We do this using the tokenizer’s batch_decode() method. Finally, we return the gold and predicted Romanian sentences in a dictionary: Next, we apply our translate() function to our Dataset to translate all the sentences: reference Șeful ONU declară că nu există soluții militar... Șeful ONU a solicitat din nou tuturor părților... Ban și-a exprimat regretul că divizările în co... Nu sunt bani puțini. La sfârșitul mandatului voi face un raport cu ... "Să spună un parlamentar că nu-i ajung banii e... 1999 rows × 2 columns prediction eful ONU declară că nu există o soluţie milita... eful U.N. a cerut din nou tuturor partidelor, ... El şi-a exprimat regretul că diviziunile din c... Banii sunt suficienţi. La sfârşitul biroului voi raporta tot ceea ce ... "A spune că un parlamentar nu are suficienţi b... 1994 1995 1996 1997 1998 0 1 2 3 4 ... Secretarul General Ban Ki-moon afirmă că răspu... Secretarul General Ban Ki-moon declară că răsp... Ban a declarat miercuri în cadrul unei conferi... Ban a declarat la o conferinţă de presă susţin... ... ... Uneori mi-e rușine să ridic banii de la casierie. Uneori mi-e ruşine să iau banii de la biroul c... S-a întâmplat să ridic într-o lună și 30.000 d... Într-o lună am adunat 30 000 de lei cu ramburs... We evaluate the quality of these translations using the BLEU metric, which we introduced in Chapter 14. To this end, we load an existing implementation of BLEU from the datasets library as a Metric object.2 Metric objects have a method called add(), which is used to accumulate the predictions and gold labels, one example at a time. After accumulating all examples, the compute() method returns the results of the evaluation. Note that for each predicted sentence, BLEU expects a list of reference sentences (as there are often many correct ways of translat- 2 https://huggingface.co/docs/datasets/v2.4.0/en/package_reference/main_ classes#datasets. Metric 15.2 Implementation of Greedy Generation 215 ing a given text). Since we only have one reference, we wrap it in a list before passing it to the metric: The score corresponds to the BLEU score. The rest of the items correspond to the components required to compute the score. 
That is, the counts, totals, and precisions correspond to the counts, totals, and precisions for 1-, 2-, 3-, and 4-grams. The bp is the brevity penalty. The sys_len and ref_len correspond to the predictions and reference lengths. The above BLEU score of 25.2% is slightly lower than the state of the art, but we are being penalized by the peculiarities of diacritic usage in Romanian characters. For example, the letters ș and ț (corresponding to the sounds sh and ts in English) are usually spelled with a comma below the characters s and t, which is the standard imposed by the Romanian Academy. However, in “the wild” these characters are often written using a cedilla instead of a comma, e.g., ţ instead of ț (or, using the names of these Unicode characters, LATIN SMALL LETTER T WITH CEDILLA instead of LATIN SMALL LETTER T WITH COMMA BELOW). Further, some of these characters with diacritics are often omitted altogether in the T5 output. The T5 output below contains an example for each of these two situations (e.g., soluţi(e) instead of soluți(i), and eful instead of Șeful): To avoid being penalized at scoring time for these arbitrary discrepancies, post-processing scripts are sometimes used to normalize diacritic usage.3 Usage of such post-processing scripts can improve the BLEU score substantially. However, this is beyond the scope of this chapter. 15.2 Implementation of Greedy Generation To gain a better intuition of how the encoder-decoder model generates its output sequence, we show below an implementation of the greedy version of the generate() method used above. This function takes as an argument a single English text (i.e., no batching) and returns the corresponding Romanian text: This function interacts directly with the encoder and decoder components of the T5 model, so we must construct the input for both. The encoder’s input is constructed by prepending the task prefix to the English text and tokenizing it. On the other hand, the decoder’s input is constructed incrementally by accumulating the tokens predicted so far 3 https://github.com/huggingface/transformers/blob/main/examples/legacy/ seq2seq/romanian_postprocessing.md 216 Implementing Encoder-decoder Methods in order to predict the next token in the sequence. At the beginning, before any tokens are predicted, the decoder’s input is initialized with a single token that corresponds to the beginning of the sequence. We retrieve this token, called decoder_start_token_id, from the model’s configuration object. The tokens are predicted one at a time, until the model produces eos_token_id, which indicates that the sequence is finished. However, in case the model does not produce this end-of-sequence token within a reasonable number of steps, we also enforce a maximum number of predicted tokens, determined by the max_target_length parameter we defined previously. The T5 model’s forward() method, called indirectly through its __call__()) method, takes the inputs for both the encoder and the decoder. The output returned by this method corresponds to all the tokens in the decoder’s input plus an extra one: the newly predicted token. To select the best prediction, we retrieve the logits from the output and select the logits corresponding to the last token in the sequence (recall that the output shape is (batch size, sequence length, vocabulary size)). From these selected logits, we use the argmax() to select the token id corresponding to the highest-scoring vocabulary item. 
Below is a usage example for the greedy_translation() function:
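For example, the notebook calls it on a toy sentence (the translated output is omitted here):

greedy_translation("this is a test")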
15.3 Fine-tuning Romanian to English Translation

In this section, we fine-tune a T5 model on the translation of Romanian to English, a language pair that was not included in the T5 pre-training. To confirm that this data was not included in pre-training, we evaluated the performance of the vanilla t5-small model on the translation from Romanian to English using code equivalent to the code discussed in the previous section (see the chap15_translation_ro_to_en notebook). The resulting BLEU score was only 3.2%, which is substantially lower than the score we obtained when translating English to Romanian (25.2%).

Note that the transformers library includes scripts to fine-tune a translation model directly from the command line (see https://github.com/huggingface/transformers/tree/main/examples/pytorch/translation). For didactic purposes, we will not use these scripts in this section, but instead write the fine-tuning code explicitly.

For this exercise, we continue using the WMT16 dataset, but this time we load the train and validation splits. We employ the same t5-small model that we used previously. The code from the last section to load the model, tokenizer, and dataset does not need to change for this use case, so we do not repeat it here. However, as before, the complete code is available in a Jupyter notebook (chap15_translation_ro_to_en_finetune).
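For reference, the loading step amounts to the following condensed version of the code in the chap15_translation_ro_to_en_finetune notebook:

from datasets import load_dataset
from transformers import AutoConfig, AutoTokenizer, AutoModelForSeq2SeqLM

transformer_name = 't5-small'
dataset_name = 'wmt16'
dataset_config_name = 'ro-en'

# this time we keep the train and validation splits
wmt16 = load_dataset(dataset_name, dataset_config_name)
train_dataset = wmt16['train']
eval_dataset = wmt16['validation']

# same pre-trained model and tokenizer as before
config = AutoConfig.from_pretrained(transformer_name)
tokenizer = AutoTokenizer.from_pretrained(transformer_name)
model = AutoModelForSeq2SeqLM.from_pretrained(transformer_name, config=config)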
We begin by tokenizing the source (Romanian) and target (English) texts. As in the last section, we need to prepend the task prefix to the source texts prior to tokenizing. This time, since we are translating in the opposite direction, we use the prefix "translate Romanian to English: ", and we prepend it to the Romanian text. Each call to the tokenizer with a batch of texts produces input_ids and an attention_mask. This output is what we need for the Romanian text, which will serve as the input to the model. To generate the labels, i.e., the correct translated tokens, we use the input_ids corresponding to the tokenized English text. Recall that "labels" is the default key name expected by trainers in Hugging Face. We apply our tokenize() function to both the train and validation splits:
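A condensed version of tokenize() and its application with Dataset.map(), following the chap15_translation_ro_to_en_finetune notebook (here source_lang is 'ro', target_lang is 'en', and task_prefix, max_source_length, and max_target_length are the fine-tuning hyperparameters):

def tokenize(batch):
    # prepend the task prefix to the Romanian source sentences and tokenize them
    sources = [task_prefix + x[source_lang] for x in batch["translation"]]
    output = tokenizer(sources, max_length=max_source_length, truncation=True)
    # tokenize the English target sentences and use their input_ids as the labels
    targets = [x[target_lang] for x in batch["translation"]]
    labels = tokenizer(targets, max_length=max_target_length, truncation=True)
    output["labels"] = labels["input_ids"]
    return output

# tokenize both splits, dropping the original columns
column_names = train_dataset.column_names
train_dataset = train_dataset.map(tokenize, batched=True, remove_columns=column_names)
eval_dataset = eval_dataset.map(tokenize, batched=True, remove_columns=column_names)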
(The result is a table with 610,320 rows and three columns: input_ids, attention_mask, and labels. Every input_ids entry begins with the token ids of the task prefix, [13959, 3871, 29, 12, 1566, 10, ...], followed by the ids of the tokenized Romanian sentence; labels holds the ids of the corresponding tokenized English sentence.)

Recall that in order to construct a trainer, we need a data collator for batching, a function to compute the metrics of interest, and a TrainingArguments object. In this section, we use a data collator called DataCollatorForSeq2Seq, which is included in the transformers library specifically for sequence-to-sequence models. The collator pads the batches using the label_pad_token_id, which we have set to −100, as we did in Chapter 13 (this is the default ignore_index value used by CrossEntropyLoss):
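Following the notebook, the collator is constructed as follows:

from transformers import DataCollatorForSeq2Seq

label_pad_token_id = -100
data_collator = DataCollatorForSeq2Seq(
    tokenizer,
    model=model,
    label_pad_token_id=label_pad_token_id,
)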
The compute_metrics() function computes the BLEU score. It uses the tokenizer to decode the token ids into text, for both the predicted and gold labels, ignoring padding. We use the Seq2SeqTrainingArguments class, which adds the predict_with_generate parameter to the regular TrainingArguments class; this parameter is needed to indicate that the trainer should use the generate() method for inference in order to compute the metrics (BLEU in this case):
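Both pieces, condensed from the chap15_translation_ro_to_en_finetune notebook (output_dir, batch_size, save_steps, learning_rate, and num_train_epochs are the hyperparameters defined there):

import numpy as np
from datasets import load_metric
from transformers import Seq2SeqTrainingArguments

metric = load_metric('sacrebleu')

def compute_metrics(eval_preds):
    preds, labels = eval_preds
    # decode the predicted token ids into text
    predictions = tokenizer.batch_decode(preds, skip_special_tokens=True)
    # replace -100 in the labels with the pad token id before decoding
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    # the metric expects a list of references for each prediction
    references = [[ref] for ref in tokenizer.batch_decode(labels, skip_special_tokens=True)]
    results = metric.compute(predictions=predictions, references=references)
    return {'bleu': results['score']}

training_args = Seq2SeqTrainingArguments(
    output_dir=output_dir,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    save_steps=save_steps,
    predict_with_generate=True,
    evaluation_strategy='steps',
    eval_steps=save_steps,
    learning_rate=learning_rate,
    num_train_epochs=num_train_epochs,
)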
Finally, we construct the trainer using the Seq2SeqTrainer class, which is a subclass of Trainer that adds the ability to compute scores such as BLEU during training by calling generate() during evaluation:

Fine-tuning a translation model takes considerably longer than training or fine-tuning the models we have developed so far in this book. To account for this, here we add support for resuming training from a checkpoint, i.e., a model that was saved after training on a number of examples. Similar to how one can resume a video game, this allows one to pick up from the last "save point" in case training was interrupted and needs to be resumed:

When calling the trainer's train() method, we either provide a model checkpoint or None. In the former case, the trainer will continue training from the provided checkpoint. In the latter case, the trainer will begin training from scratch. Once the training has completed, we save the trained model and tokenizer using the trainer's save_model() method into the output directory:

We then compute and save the metrics corresponding to the training partition. This is not required, but it is helpful to keep a record of the model's performance on the training data. Note that the metrics do not automatically include the number of examples in the training partition, so we add them explicitly:

Next, we evaluate our final model on the validation data and save the corresponding metrics. These metrics indicate that our BLEU score on the validation data is 35.2%, which is evidence that fine-tuning has helped dramatically:

Lastly, we save a model card into our output directory. A model card is akin to an automatically generated README file that includes information about the model used, the data, the settings used, and the performance throughout the training process. This file is helpful for reproducibility, as it contains all of this key information in one place. These cards are often uploaded to the Hugging Face Hub together with the model itself. (We do not discuss the model uploading process here; please see the documentation on model sharing at https://huggingface.co/docs/transformers/v4.14.1/model_sharing.)

15.4 Using a Previously Saved Model

Models that have been saved locally can be loaded using the same from_pretrained() methods we have used before. In particular, instead of providing a model name, we provide the path to the local directory where the model is stored, using the local_files_only parameter to indicate that we want to load the model from the local file system instead of downloading it from the Hugging Face Hub (make sure you use an output directory that is valid on your machine!):
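A minimal sketch of this loading step, assuming the fine-tuned model and tokenizer were saved to output_dir with save_model() as above (this particular snippet is not part of the notebooks included here):

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# load the fine-tuned model and tokenizer from the local output directory
tokenizer = AutoTokenizer.from_pretrained(output_dir, local_files_only=True)
model = AutoModelForSeq2SeqLM.from_pretrained(output_dir, local_files_only=True)
model = model.to(device)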
Once our fine-tuned model is loaded, we use it the same way as before. That is, we use the translate() function to generate translations for our test partition, and then we use the BLEU metric to score this output. From this metric, we obtain a final BLEU score of 33.4%, which is markedly better than our initial score (i.e., without fine-tuning) of 3.2%! The code corresponding to this section is available in the notebook chap15_translation_ro_to_en_finetuned.

15.5 Summary

In this chapter we used a complete encoder-decoder transformer network to implement a machine translation application. Importantly, transformers with a decoder component have a generate() method that simplifies the generation process and provides multiple options for decoding. We encourage you to explore these options! For example, try comparing the quality of the output with the resources required to produce it (e.g., runtime overhead) as the size of the search beam increases. Additionally, we saw how to fine-tune an encoder-decoder model on a new language pair that it had not seen during its pre-training. This exercise included using checkpoints to support resuming training in case of unexpected interruptions, saving our fine-tuned model, and loading it for later use.
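As a starting point for that exploration, the following sketch (not taken from the notebooks; the input sentence and beam size are arbitrary) contrasts greedy decoding with a beam of size five, reusing the English-to-Romanian model, tokenizer, device, task_prefix, and max_target_length from Section 15.1:

import time

def timed_generate(num_beams):
    # translate one sentence and report the wall-clock time taken
    start = time.perf_counter()
    output = model.generate(**encoded, max_length=max_target_length, num_beams=num_beams)
    elapsed = time.perf_counter() - start
    return tokenizer.batch_decode(output, skip_special_tokens=True)[0], elapsed

# an arbitrary English example sentence
encoded = tokenizer(task_prefix + "The committee approved the proposal.", return_tensors='pt').to(device)
for k in [1, 5]:
    translation, elapsed = timed_generate(k)
    print(f'num_beams={k}: {elapsed:.2f}s {translation}')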
#!/usr/bin/env python
# coding: utf-8

# # Machine Translation from Ro to En
# # Using the T5 Transformer with Fine-tuning

# Some initialization:

# In[1]:

import torch
import numpy as np
from transformers import set_seed

# random seed
seed = 42

# set random seed
if seed is not None:
    print(f'random seed: {seed}')
    set_seed(seed)

# In[2]:

transformer_name = 't5-small'
dataset_name = 'wmt16'
dataset_config_name = 'ro-en'
source_lang = 'ro'
target_lang = 'en'
max_source_length = 1024
max_target_length = 128
task_prefix = 'translate Romanian to English: '
batch_size = 4
label_pad_token_id = -100
save_steps = 25_000
num_beams = 1
learning_rate = 1e-3
num_train_epochs = 3
output_dir = '/media/data2/t5-translation-example'  # make sure this is a valid path on your machine!

# Load dataset from HuggingFace:

# In[3]:

from datasets import load_dataset

wmt16 = load_dataset(dataset_name, dataset_config_name)

# Load tokenizer and pre-trained model:

# In[4]:

from transformers import AutoConfig, AutoTokenizer, AutoModelForSeq2SeqLM

config = AutoConfig.from_pretrained(transformer_name)
tokenizer = AutoTokenizer.from_pretrained(transformer_name)
model = AutoModelForSeq2SeqLM.from_pretrained(transformer_name, config=config)

# Tokenize the texts in the dataset:

# In[5]:

def tokenize(batch):
    # get source sentences and prepend task prefix
    sources = [x[source_lang] for x in batch["translation"]]
    sources = [task_prefix + x for x in sources]
    # tokenize source sentences
    output = tokenizer(
        sources,
        max_length=max_source_length,
        truncation=True,
    )
    # get target sentences
    targets = [x[target_lang] for x in batch["translation"]]
    # tokenize target sentences
    labels = tokenizer(
        targets,
        max_length=max_target_length,
        truncation=True,
    )
    # add targets to output
    output["labels"] = labels["input_ids"]
    return output

# In[6]:

train_dataset = wmt16['train']
eval_dataset = wmt16['validation']

column_names = train_dataset.column_names

train_dataset = train_dataset.map(
    tokenize,
    batched=True,
    remove_columns=column_names,
)
eval_dataset = eval_dataset.map(
    tokenize,
    batched=True,
    remove_columns=column_names,
)

# In[7]:

train_dataset.to_pandas()

# Create `Trainer` object and train:

# In[8]:

from transformers import DataCollatorForSeq2Seq

data_collator = DataCollatorForSeq2Seq(
    tokenizer,
    model=model,
    label_pad_token_id=label_pad_token_id,
)

# In[9]:

from datasets import load_metric

metric = load_metric('sacrebleu')

def compute_metrics(eval_preds):
    preds, labels = eval_preds
    # get text for predictions
    predictions = tokenizer.batch_decode(
        preds,
        skip_special_tokens=True,
    )
    # replace -100 in labels with pad token
    labels = np.where(
        labels != -100,
        labels,
        tokenizer.pad_token_id,
    )
    # get text for gold labels
    references = tokenizer.batch_decode(
        labels,
        skip_special_tokens=True,
    )
    # metric expects list of references for each prediction
    references = [[ref] for ref in references]
    # compute bleu score
    results = metric.compute(
        predictions=predictions,
        references=references,
    )
    results = {'bleu': results['score']}
    return results

# In[10]:

from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir=output_dir,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    save_steps=save_steps,
    predict_with_generate=True,
    evaluation_strategy='steps',
    eval_steps=save_steps,
    learning_rate=learning_rate,
    num_train_epochs=num_train_epochs,
)

# In[11]:

from transformers import Seq2SeqTrainer

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    tokenizer=tokenizer,
    data_collator=data_collator,
    compute_metrics=compute_metrics,
)

# In[12]:

import os
from transformers.trainer_utils import get_last_checkpoint

last_checkpoint = None
if os.path.isdir(output_dir):
    last_checkpoint = get_last_checkpoint(output_dir)
if last_checkpoint is not None:
    print(f'Checkpoint detected, resuming training at {last_checkpoint}.')

# In[13]:

train_result = trainer.train(resume_from_checkpoint=last_checkpoint)
trainer.save_model()

# In[14]:

metrics = train_result.metrics
metrics['train_samples'] = len(train_dataset)
trainer.log_metrics('train', metrics)
trainer.save_metrics('train', metrics)
trainer.save_state()

# Now evaluate:

# In[15]:

# https://discuss.huggingface.co/t/evaluation-results-metric-during-training-is-different-from-the-evaluation-results-at-the-end/15401
metrics = trainer.evaluate(
    max_length=max_target_length,
    num_beams=num_beams,
    metric_key_prefix='eval',
)
metrics['eval_samples'] = len(eval_dataset)
trainer.log_metrics('eval', metrics)
trainer.save_metrics('eval', metrics)

# Create a model card with meta data about this model:

# In[16]:

kwargs = {
    'finetuned_from': transformer_name,
    'tasks': 'translation',
    'dataset_tags': dataset_name,
    'dataset_args': dataset_config_name,
    'dataset': f'{dataset_name} {dataset_config_name}',
    'language': [source_lang, target_lang],
}
trainer.create_model_card(**kwargs)

# In[ ]:
#!/usr/bin/env python
# coding: utf-8

# # Machine Translation from English (En) to Romanian (Ro)
# # Using the T5 Transformer without Fine-tuning

# Some initialization:

# In[1]:

import torch
import numpy as np
from transformers import set_seed

# set to True to use the gpu (if there is one available)
use_gpu = True

# select device
device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu')
print(f'device: {device.type}')

# random seed
seed = 42

# set random seed
if seed is not None:
    print(f'random seed: {seed}')
    set_seed(seed)

# In[2]:

transformer_name = 't5-small'
source_lang = 'en'
target_lang = 'ro'
max_source_length = 1024
max_target_length = 128
task_prefix = 'translate English to Romanian: '
num_beams = 1
batch_size = 100

# Load tokenizer and pre-trained model:

# In[3]:

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained(transformer_name)
model = AutoModelForSeq2SeqLM.from_pretrained(transformer_name)
model = model.to(device)

# Load dataset from HuggingFace:

# In[4]:

from datasets import load_dataset

test_ds = load_dataset('wmt16', 'ro-en', split='test')
test_ds

# In[5]:

test_ds['translation'][0]

# Implement the `translate` method and apply on this dataset:

# In[6]:

def translate(batch):
    # get source language examples and prepend task prefix
    inputs = [x[source_lang] for x in batch["translation"]]
    inputs = [task_prefix + x for x in inputs]
    # tokenize inputs
    encoded = tokenizer(
        inputs,
        max_length=max_source_length,
        truncation=True,
        padding=True,
        return_tensors='pt',
    )
    # move data to gpu if needed
    input_ids = encoded.input_ids.to(device)
    attention_mask = encoded.attention_mask.to(device)
    # generate translated sentences
    output = model.generate(
        input_ids=input_ids,
        attention_mask=attention_mask,
        num_beams=num_beams,
        max_length=max_target_length,
    )
    # generate predicted sentences from predicted token ids
    decoded = tokenizer.batch_decode(
        output,
        skip_special_tokens=True,
    )
    # get gold sentences in target language
    targets = [x[target_lang] for x in batch["translation"]]
    # return gold and predicted sentences
    return {
        'reference': targets,
        'prediction': decoded,
    }

# In[7]:

results = test_ds.map(
    translate,
    batched=True,
    batch_size=batch_size,
    remove_columns=test_ds.column_names,
)
results.to_pandas()

# Now evaluate the quality of translations using the BLEU metric:

# In[8]:

from datasets import load_metric

metric = load_metric('sacrebleu')

for r in results:
    prediction = r['prediction']
    reference = [r['reference']]
    metric.add(prediction=prediction, reference=reference)

metric.compute()

# An example of greedy decoding for individual texts:

# In[9]:

def greedy_translation(text):
    # prepend task prefix
    text = task_prefix + text
    # tokenize input
    encoded = tokenizer(
        text,
        max_length=max_source_length,
        truncation=True,
        return_tensors='pt',
    )
    # encoder input ids
    encoder_input_ids = encoded.input_ids.to(device)
    # decoder input ids, initialized with start token id
    start = model.config.decoder_start_token_id
    decoder_input_ids = torch.LongTensor([[start]]).to(device)
    # generate tokens, one at a time
    for _ in range(max_target_length):
        # get model predictions
        output = model(
            encoder_input_ids,
            decoder_input_ids=decoder_input_ids,
        )
        # get logits for last token
        next_token_logits = output.logits[0, -1, :]
        # select most probable token
        next_token_id = torch.argmax(next_token_logits)
        # append new token to decoder_input_ids
        output_id = torch.LongTensor([[next_token_id]]).to(device)
        decoder_input_ids = torch.cat([decoder_input_ids, output_id], dim=-1)
        # if predicted token is the end of sequence, stop iterating
        if next_token_id == tokenizer.eos_token_id:
            break
    # return text corresponding to predicted token ids
    return tokenizer.decode(decoder_input_ids[0], skip_special_tokens=True)

# In[10]:

greedy_translation("this is a test")
678
702
20
chap15-21
chap15-21
15 Implementing Encoder-decoder Methods In this chapter we implement a machine translation application as an example of an encoder-decoder task. In particular, we build on pre-trained encoder-decoder transformer models, which exist in the Hugging Face library for a wide variety of language pairs. We first show how to use one of these models out-of-the-box to perform translation for one of the language pairs it has been exposed to during pre-training: English to Romanian. Afterwards, we fine-tune the model to a new language combination that is has not seen before: Romanian to English. In both use cases, we use the T5 encoder-decoder model, which has been pre-trained for several tasks, including machine translation (Raffel et al., 2020). Please see Chapter 16 for a description of T5’s pre-training process. The data for this task comes from the WMT 2016 dataset (Bojar et al., 2016), which consists of English sentences aligned pairwise to German, Czech, Russian, Finnish, Romanian, and Turkish. In this chapter we only use the English-Romanian texts (in both directions). 15.1 Translating English to Romanian As a first example, we use T5 to translate from English to Romanian, which is one of the language pairs it has been exposed to during pretraining. The code discussed in this section is available in the notebook chap15_translation_en_to_ro. Even though in this exercise we are not fine-tuning the model, we still need to define a few hyper parameters to frame the task and help the model understand how to work with the data: The above settings indicate that we use the t5-small model, a smaller T5 variant, to minimize the amount of memory required. The source_lang 212 15.1 Translating English to Romanian 213 and target_lang variables define the direction of translation, i.e., from English to Romanian. To keep our computing requirements small, we limit the length of our input and output. That is, English text longer than max_source_length tokens will be truncated. Further, we limit our generated Romanian text to max_target_length. We chose a maximum target length of 128 tokens to limit the computational cost incurred during text generation (recall that the text is generated one token at a time). The T5 models are trained to support multiple tasks such as translation and summarization (please see Chapter 16 for details). Thus, during training and inference, the user must specify which task the model should perform using a text prefix. Here we use the prefix "translate English to Romanian: " to indicate that the input text is in English and should be translated to Romanian. Next, we load the model and the corresponding tokenizer, and move them to the GPU if one is available: We use the datasets library to load our translation dataset. Note that the first time one calls load_dataset() the dataset will be downloaded automatically from the Hugging Face repository.1 The load_dataset() function takes a dataset name and configuration, which in our case are wmt16 and ro-en, respectively. Since in this example we are only evaluating the model, we only load the test partition (or split) of the dataset: The dataset consists of a single column called translation. Each element in this column is a dictionary that contains the aligned pair. The dictionary keys are the abbreviated language names and the values are the corresponding sentences. An example of one of these dictionaries is shown below: We encapsulate the logic for translating the English text into Romanian in a function called translate(). 
Inside this function, for a batch of aligned pairs, we select the English sentence as our input, and prepend the task prefix. Then we tokenize these inputs, including the prefix, specifying that sentences longer than max_source_length should be truncated, the batch should be padded, and the tokenizer should return PyTorch tensors. Once the tokenizer output has been moved to the GPU, we pass it to the model’s generate() method. This is the first time we have seen this method, because only decoder and encoder-decoder models support it. This method generates an output sequence by predicting one token 1 https://huggingface.co/datasets/wmt16 214 Implementing Encoder-decoder Methods at a time, stopping when either the end-of-sequence token is produced or when the sequence reaches a maximum length. Several generation techniques are supported, such as beam search, in which several alternate translations are maintained by the model so that it is able to select an overall best translation from several options. For efficiency purposes, we use a greedy approach, which chooses the best token at each step of the generation. This is equivalent to using a beam search with a beam of size one. Since the model generates its predictions as a sequence of token ids, we need to convert them back into the corresponding tokens to be able to read the translated text. We do this using the tokenizer’s batch_decode() method. Finally, we return the gold and predicted Romanian sentences in a dictionary: Next, we apply our translate() function to our Dataset to translate all the sentences: reference Șeful ONU declară că nu există soluții militar... Șeful ONU a solicitat din nou tuturor părților... Ban și-a exprimat regretul că divizările în co... Nu sunt bani puțini. La sfârșitul mandatului voi face un raport cu ... "Să spună un parlamentar că nu-i ajung banii e... 1999 rows × 2 columns prediction eful ONU declară că nu există o soluţie milita... eful U.N. a cerut din nou tuturor partidelor, ... El şi-a exprimat regretul că diviziunile din c... Banii sunt suficienţi. La sfârşitul biroului voi raporta tot ceea ce ... "A spune că un parlamentar nu are suficienţi b... 1994 1995 1996 1997 1998 0 1 2 3 4 ... Secretarul General Ban Ki-moon afirmă că răspu... Secretarul General Ban Ki-moon declară că răsp... Ban a declarat miercuri în cadrul unei conferi... Ban a declarat la o conferinţă de presă susţin... ... ... Uneori mi-e rușine să ridic banii de la casierie. Uneori mi-e ruşine să iau banii de la biroul c... S-a întâmplat să ridic într-o lună și 30.000 d... Într-o lună am adunat 30 000 de lei cu ramburs... We evaluate the quality of these translations using the BLEU metric, which we introduced in Chapter 14. To this end, we load an existing implementation of BLEU from the datasets library as a Metric object.2 Metric objects have a method called add(), which is used to accumulate the predictions and gold labels, one example at a time. After accumulating all examples, the compute() method returns the results of the evaluation. Note that for each predicted sentence, BLEU expects a list of reference sentences (as there are often many correct ways of translat- 2 https://huggingface.co/docs/datasets/v2.4.0/en/package_reference/main_ classes#datasets. Metric 15.2 Implementation of Greedy Generation 215 ing a given text). Since we only have one reference, we wrap it in a list before passing it to the metric: The score corresponds to the BLEU score. The rest of the items correspond to the components required to compute the score. 
That is, the counts, totals, and precisions correspond to the counts, totals, and precisions for 1-, 2-, 3-, and 4-grams. The bp is the brevity penalty. The sys_len and ref_len correspond to the predictions and reference lengths. The above BLEU score of 25.2% is slightly lower than the state of the art, but we are being penalized by the peculiarities of diacritic usage in Romanian characters. For example, the letters ș and ț (corresponding to the sounds sh and ts in English) are usually spelled with a comma below the characters s and t, which is the standard imposed by the Romanian Academy. However, in “the wild” these characters are often written using a cedilla instead of a comma, e.g., ţ instead of ț (or, using the names of these Unicode characters, LATIN SMALL LETTER T WITH CEDILLA instead of LATIN SMALL LETTER T WITH COMMA BELOW). Further, some of these characters with diacritics are often omitted altogether in the T5 output. The T5 output below contains an example for each of these two situations (e.g., soluţi(e) instead of soluți(i), and eful instead of Șeful): To avoid being penalized at scoring time for these arbitrary discrepancies, post-processing scripts are sometimes used to normalize diacritic usage.3 Usage of such post-processing scripts can improve the BLEU score substantially. However, this is beyond the scope of this chapter. 15.2 Implementation of Greedy Generation To gain a better intuition of how the encoder-decoder model generates its output sequence, we show below an implementation of the greedy version of the generate() method used above. This function takes as an argument a single English text (i.e., no batching) and returns the corresponding Romanian text: This function interacts directly with the encoder and decoder components of the T5 model, so we must construct the input for both. The encoder’s input is constructed by prepending the task prefix to the English text and tokenizing it. On the other hand, the decoder’s input is constructed incrementally by accumulating the tokens predicted so far 3 https://github.com/huggingface/transformers/blob/main/examples/legacy/ seq2seq/romanian_postprocessing.md 216 Implementing Encoder-decoder Methods in order to predict the next token in the sequence. At the beginning, before any tokens are predicted, the decoder’s input is initialized with a single token that corresponds to the beginning of the sequence. We retrieve this token, called decoder_start_token_id, from the model’s configuration object. The tokens are predicted one at a time, until the model produces eos_token_id, which indicates that the sequence is finished. However, in case the model does not produce this end-of-sequence token within a reasonable number of steps, we also enforce a maximum number of predicted tokens, determined by the max_target_length parameter we defined previously. The T5 model’s forward() method, called indirectly through its __call__()) method, takes the inputs for both the encoder and the decoder. The output returned by this method corresponds to all the tokens in the decoder’s input plus an extra one: the newly predicted token. To select the best prediction, we retrieve the logits from the output and select the logits corresponding to the last token in the sequence (recall that the output shape is (batch size, sequence length, vocabulary size)). From these selected logits, we use the argmax() to select the token id corresponding to the highest-scoring vocabulary item. 
We append this new token id to the decoder’s input, and repeat the process until we encounter the end-of-sequence token or the decoded text reaches the maximum length. Once we are finished generating token ids, we retrieve the corresponding text by calling the tokenizer’s decode() method. This method is identical to the batch_decode() method we used previously, except that it only decodes a single example. Below is an usage example for the greedy_translation() function: 15.3 Fine-tuning Romanian to English Translation In this section, we fine-tune a T5 model on the translation of Romanian to English, a language pair that was not included in the T5 pre-training. To confirm that this data was not included in pre-training, we evaluated the performance of the vanilla t5-small model on the translation from Romanian to English using code equivalent to the code discussed in the previous section (see the chap15_translation_ro_to_en notebook). The resulting BLEU score was only 3.2%, which is substantially lower than the score we obtained when translating English to Romanian (25.2%). 15.3 Fine-tuning Romanian to English Translation 217 Note that the transformers library includes scripts to fine-tune a translation model directly from the command line.4 For didactic purposes, we will not use these scripts in this section, but instead write the fine-tuning code explicitly. For this exercise, we continue using the WMT16 dataset, but this time
we load the train and validation splits. We employ the same t5-small model that we used previously. The code from the last section to load
the model, tokenizer, and dataset does not need to change for this use-
case, so we do not repeat it here. However, as before, the complete code is available in a Jupyter notebook (chap15_translation_ro_to_en_finetune). We begin by tokenizing the source (Romanian) and target language (English) texts. As in the last section, we need to prepend the task prefix to the source texts prior to tokenizing. This time, since we are translating in the opposite direction, we use the prefix "translate Romanian to English: ", and we prepend it to the Romanian text. Each call to the tokenizer with a batch of texts produces input_ids and an attention_mask. This output is what we need for the Romanian text, which will serve as the input to the model. To generate the labels, i.e., the correct translated tokens, we use the input_ids corresponding to the English text. Recall that "labels" is the default key name expected by trainers in Hugging Face. We apply our tokenize() function to both the train and validation splits: 4 https://github.com/huggingface/transformers/tree/main/examples/pytorch/ translation 218 Implementing Encoder-decoder Methods input_ids [13959, 3871, 29, 12, 1566, 10, 4961, 106, 204... [13959, 3871, 29, 12, 1566, 10, 4961, 106, 204... [13959, 3871, 29, 12, 1566, 10, 374, 6225, 49,... [13959, 3871, 29, 12, 1566, 10, 4540, 4031, 9,... [13959, 3871, 29, 12, 1566, 10, 2262, 900, 17,... [13959, 3871, 29, 12, 1566, 10, 18420, 83, 362... attention_mask labels [19428, 13, 12876, 10, 217, 13687, 7, 1] [19428, 13, 12876, 10, 217, 13687, 7, 1] [11167, 7, 1204, 10, 217, 13687, 7, 1] [4540, 4031, 9, 7, 1672, 7, 2262, 900, 17, 38,... [2262, 900, 17, 641, 65, 46, 3761, 6, 1069, 31... [3625, 32, 5788, 35, 15, 3844, 31, 7, 3, 16143... 0 1 2 3 4 ... 610315 610316 610317 610318 610319 [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... [13959, 3871, 29, 12, 1566, 10, 5085, [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 5840, 49... 1, 1, 1, ... [13959, 3871, 29, 12, 1566, 10, 781, [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 8750, 9, ... 1, 1, 1, ... ... ... [13959, 3871, 29, 12, 1566, 10, 2364, [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 4540, 40... 1, 1, 1, ... 610320 rows × 3 columns [13959, 3871, 29, 12, 1566, 10, 3, [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 25882, 759,... 1, 1, 1, ... [2276, 8843, 138, 13, 13687, 7, 13, 1767, 3823... [781, 2420, 13, 17500, 10, 217, 13687, 7, 1] [242, 4540, 4031, 9, 7, 6, 8, 516, 65, 66, 8, ... [9810, 157, 31, 7, 516, 92, 3088, 21, 46, 3839... ... Recall that in order to construct a trainer, we need a data collator for batching, a function to compute the metrics of interest, and a TrainingArguments object. In this section, we use a data collator called DataCollatorForSeq2Seq, which is included in the transformers library specifically for sequence-to-sequence models. The collator pads the batches using the label_pad_token_id, which we have set to −100, as we did in Chapter 13 (this is the default ignore_index value used by CrossEntropyLoss): The compute_metrics() function computes the BLEU score. It uses the tokenizer to decode the token ids into text, for both the predicted and gold labels, ignoring padding: We use the Seq2SeqTrainingArguments class, which adds the predict_with_generate parameter to the regular TrainingArguments class. This is needed to in-
dicate that the trainer should use the generate() method for inference
in order to compute the metrics (BLUE in this case): Finally, we construct the trainer using the Seq2SeqTrainer class, which is a subclass of Trainer that adds the ability to compute scores such as BLEU during training by calling generate() during evaluation: Fine-tuning a translation model takes considerably longer than training or fine-tuning the models we have developed so far in this book. To account for this, here we add support for resuming training from a checkpoint, i.e., a model that was saved after training on a number of 15.4 Using a Previously Saved Model 219 examples. Similar to how one can resume a video game, this allows one to pick up from the last “save point,” in case training was interrupted and needs to be resumed: When calling the trainer’s train() method, we either provide a model checkpoint or None. In the former case, the trainer will continue training from the provided checkpoint. In the latter case, the trainer will begin training from scratch. Once the training has completed, we save the trained model and tokenizer using the trainer’s save_model() method into the output directory: We then compute and save the metrics corresponding to the training partition. This is not required, but it is helpful to keep a record of the model’s performance on the training data. Note that the metrics do not automatically include the number of examples in the training partition, so we add them explicitly: Next, we evaluate our final model on the validation data and save the corresponding metrics. These metrics indicate that our BLEU score on the validation data is 35.2%, which is evidence that fine-tuning has helped dramatically: Lastly, we save a model card into our output directory. A model card is akin to an automatically-generated README file that includes information about the model used, the data, settings used, and performance throughout the training process. This file is helpful for reproducibility as it contains all of this key information in one place. These cards are often uploaded to the Hugging Face Hub together with the model itself.5 15.4 Using a Previously Saved Model Models that have been saved locally can be loaded using the same from_pretrained() methods we have used before. In particular, instead of providing a model name, we provide the path to the local directory where the model is stored, using the local_files_only parameter to indicate that we want to load the model from the local file system instead of downloading it from the Hugging Face Hub (Make sure you use an output directory that is valid on your machine!): Once our fine-tuned model is loaded, we use it the same way as before. That is, we use the translate() function to generate translations 5 We do not discuss the model uploading process here. Please see the documentation on model sharing at: https://huggingface.co/docs/transformers/v4.14.1/model_sharing. 220 Implementing Encoder-decoder Methods for our test partition. Then we use the BLEU metric to score this output. From this metric, we obtain the final BLEU score of 33.4%, which is markedly better than our initial score (i.e., without fine-tuning) of 3.2%! The code corresponding to this section is available in the notebook chap15_translation_ro_to_en_finetuned. 15.5 Summary In this chapter we used a complete encoder-decoder transformer network to implement a machine translation application. Importantly, transformers with a decoder component have a generate() method that simplifies the generation process and provides multiple options for decoding. 
We encourage you to explore these options! For example, try comparing the quality of the output with the resources required to produce it (e.g., runtime overhead) when the size of the search beam increases. Additionally, we saw how to fine-tune an encoder-decoder model on a new language pair that it has not seen during its pre-training. This exercise included using checkpoints to support resuming training in case of unexpected interruptions, saving our fine-tuned model, and loading it for later use.
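As a concrete starting point for that exploration, here is a minimal sketch that compares several beam sizes on a small sample of the WMT16 test split, timing generate() and scoring each configuration with sacrebleu. It is not part of the chapter's notebooks: the sample size and the beam sizes tried are arbitrary choices for illustration, and the exact timings and scores will depend on your hardware and library versions.

import time
import torch
from datasets import load_dataset, load_metric
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
tokenizer = AutoTokenizer.from_pretrained('t5-small')
model = AutoModelForSeq2SeqLM.from_pretrained('t5-small').to(device)

# a small sample of the test split keeps the comparison quick
sample = load_dataset('wmt16', 'ro-en', split='test')['translation'][:50]
sources = ['translate English to Romanian: ' + x['en'] for x in sample]
references = [[x['ro']] for x in sample]

# tokenize once, reuse for every beam size
encoded = tokenizer(
    sources,
    max_length=1024,
    truncation=True,
    padding=True,
    return_tensors='pt',
).to(device)

for beams in (1, 2, 4, 8):
    start = time.perf_counter()
    output = model.generate(
        input_ids=encoded.input_ids,
        attention_mask=encoded.attention_mask,
        num_beams=beams,
        max_length=128,
    )
    elapsed = time.perf_counter() - start
    predictions = tokenizer.batch_decode(output, skip_special_tokens=True)
    score = load_metric('sacrebleu').compute(
        predictions=predictions,
        references=references,
    )['score']
    print(f'num_beams={beams}: BLEU={score:.1f}, seconds={elapsed:.1f}')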
2,636
2,738
#!/usr/bin/env python
# coding: utf-8

# # Machine Translation from English (En) to Romanian (Ro)
# # Using the T5 Transformer without Fine-tuning

# Some initialization:

# In[1]:

import torch
import numpy as np
from transformers import set_seed

# set to True to use the gpu (if there is one available)
use_gpu = True

# select device
device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu')
print(f'device: {device.type}')

# random seed
seed = 42

# set random seed
if seed is not None:
    print(f'random seed: {seed}')
    set_seed(seed)

# In[2]:

transformer_name = 't5-small'
source_lang = 'en'
target_lang = 'ro'
max_source_length = 1024
max_target_length = 128
task_prefix = 'translate English to Romanian: '
num_beams = 1
batch_size = 100

# Load tokenizer and pre-trained model:

# In[3]:

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained(transformer_name)
model = AutoModelForSeq2SeqLM.from_pretrained(transformer_name)
model = model.to(device)

# Load dataset from HuggingFace:

# In[4]:

from datasets import load_dataset

test_ds = load_dataset('wmt16', 'ro-en', split='test')
test_ds

# In[5]:

test_ds['translation'][0]

# Implement the `translate` method and apply on this dataset:

# In[6]:

def translate(batch):
    # get source language examples and prepend task prefix
    inputs = [x[source_lang] for x in batch["translation"]]
    inputs = [task_prefix + x for x in inputs]
    # tokenize inputs
    encoded = tokenizer(
        inputs,
        max_length=max_source_length,
        truncation=True,
        padding=True,
        return_tensors='pt',
    )
    # move data to gpu if needed
    input_ids = encoded.input_ids.to(device)
    attention_mask = encoded.attention_mask.to(device)
    # generate translated sentences
    output = model.generate(
        input_ids=input_ids,
        attention_mask=attention_mask,
        num_beams=num_beams,
        max_length=max_target_length,
    )
    # generate predicted sentences from predicted token ids
    decoded = tokenizer.batch_decode(
        output,
        skip_special_tokens=True,
    )
    # get gold sentences in target language
    targets = [x[target_lang] for x in batch["translation"]]
    # return gold and predicted sentences
    return {
        'reference': targets,
        'prediction': decoded,
    }

# In[7]:

results = test_ds.map(
    translate,
    batched=True,
    batch_size=batch_size,
    remove_columns=test_ds.column_names,
)
results.to_pandas()

# Now evaluate the quality of translations using the BLEU metric:

# In[8]:

from datasets import load_metric

metric = load_metric('sacrebleu')
for r in results:
    prediction = r['prediction']
    reference = [r['reference']]
    metric.add(prediction=prediction, reference=reference)
metric.compute()

# An example of greedy decoding for individual texts:

# In[9]:

def greedy_translation(text):
    # prepend task prefix
    text = task_prefix + text
    # tokenize input
    encoded = tokenizer(
        text,
        max_length=max_source_length,
        truncation=True,
        return_tensors='pt',
    )
    # encoder input ids
    encoder_input_ids = encoded.input_ids.to(device)
    # decoder input ids, initialized with start token id
    start = model.config.decoder_start_token_id
    decoder_input_ids = torch.LongTensor([[start]]).to(device)
    # generate tokens, one at a time
    for _ in range(max_target_length):
        # get model predictions
        output = model(
            encoder_input_ids,
            decoder_input_ids=decoder_input_ids,
        )
        # get logits for last token
        next_token_logits = output.logits[0, -1, :]
        # select most probable token
        next_token_id = torch.argmax(next_token_logits)
        # append new token to decoder_input_ids
        output_id = torch.LongTensor([[next_token_id]]).to(device)
        decoder_input_ids = torch.cat([decoder_input_ids, output_id], dim=-1)
        # if predicted token is the end of sequence, stop iterating
        if next_token_id == tokenizer.eos_token_id:
            break
    # return text corresponding to predicted token ids
    return tokenizer.decode(decoder_input_ids[0], skip_special_tokens=True)

# In[10]:

greedy_translation("this is a test")
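As discussed in the BLEU analysis later in this chapter, scoring the Romanian output above is complicated by diacritic variation: the model often produces cedilla characters (ş, ţ) where the references use the comma-below characters (ș, ț). The official post-processing script referenced there is not reproduced here; the following is only a minimal sketch of the cedilla-to-comma normalization step, which could be applied to both predictions and references before computing BLEU.

# map cedilla variants to the comma-below characters used in the references
CEDILLA_TO_COMMA = str.maketrans({
    '\u015e': '\u0218',  # Ş -> Ș (S with cedilla -> S with comma below)
    '\u015f': '\u0219',  # ş -> ș
    '\u0162': '\u021a',  # Ţ -> Ț (T with cedilla -> T with comma below)
    '\u0163': '\u021b',  # ţ -> ț
})

def normalize_diacritics(text):
    # return the text with cedilla letters replaced by their comma-below forms
    return text.translate(CEDILLA_TO_COMMA)

# usage example
print(normalize_diacritics('o soluţie paşnică'))  # prints 'o soluție pașnică'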
898
1,047
21
chap15-22
chap15-22
Inside the translate() function, for a batch of aligned pairs, we select the English sentences as our input and prepend the task prefix. Then we tokenize these inputs, including the prefix, specifying that sentences longer than max_source_length should be truncated, that the batch should be padded, and that the tokenizer should return PyTorch tensors.

Once the tokenizer output has been moved to the GPU, we pass it to the model’s generate() method. This is the first time we have seen this method, because only decoder and encoder-decoder models support it. It generates an output sequence by predicting one token at a time, stopping when either the end-of-sequence token is produced or the sequence reaches a maximum length. Several decoding strategies are supported, such as beam search, in which the model maintains several candidate translations so that it can select the best overall translation from these options. For efficiency purposes, we use a greedy approach, which chooses the most probable token at each step of the generation; this is equivalent to a beam search with a beam of size one.

Since the model generates its predictions as a sequence of token ids, we need to convert them back into the corresponding tokens to be able to read the translated text. We do this using the tokenizer’s batch_decode() method. Finally, we return the gold and predicted Romanian sentences in a dictionary:

Next, we apply our translate() function to our Dataset to translate all the sentences. (The result has 1,999 rows and two columns, reference and prediction; for example, the reference “Șeful ONU declară că nu există soluții militar...” is paired with the prediction “eful ONU declară că nu există o soluţie milita...”.)

We evaluate the quality of these translations using the BLEU metric, which we introduced in Chapter 14. To this end, we load an existing implementation of BLEU from the datasets library as a Metric object.2 Metric objects have a method called add(), which is used to accumulate the predictions and gold labels, one example at a time. After accumulating all examples, the compute() method returns the results of the evaluation. Note that for each predicted sentence, BLEU expects a list of reference sentences (as there are often many correct ways of translating a given text). Since we only have one reference, we wrap it in a list before passing it to the metric:

1 https://huggingface.co/datasets/wmt16
2 https://huggingface.co/docs/datasets/v2.4.0/en/package_reference/main_classes#datasets.Metric

The score corresponds to the BLEU score. The rest of the items correspond to the components required to compute the score.
That is, counts, totals, and precisions hold the n-gram counts, totals, and precisions for 1-, 2-, 3-, and 4-grams; bp is the brevity penalty; and sys_len and ref_len are the total lengths of the predictions and of the references, respectively.

The above BLEU score of 25.2% is slightly lower than the state of the art, but we are being penalized by the peculiarities of diacritic usage in Romanian text. For example, the letters ș and ț (corresponding to the sounds sh and ts in English) are usually spelled with a comma below the characters s and t, which is the standard imposed by the Romanian Academy. However, in “the wild” these characters are often written using a cedilla instead of a comma, e.g., ţ instead of ț (or, using the names of these Unicode characters, LATIN SMALL LETTER T WITH CEDILLA instead of LATIN SMALL LETTER T WITH COMMA BELOW). Further, some of these characters with diacritics are often omitted altogether in the T5 output. The T5 output shown above contains an example for each of these two situations (e.g., soluţi(e) instead of soluți(i), and eful instead of Șeful). To avoid being penalized at scoring time for these arbitrary discrepancies, post-processing scripts are sometimes used to normalize diacritic usage.3 Using such post-processing scripts can improve the BLEU score substantially. However, this is beyond the scope of this chapter.

3 https://github.com/huggingface/transformers/blob/main/examples/legacy/seq2seq/romanian_postprocessing.md

15.2 Implementation of Greedy Generation

To gain a better intuition of how the encoder-decoder model generates its output sequence, we show below an implementation of the greedy version of the generate() method used above. This function takes as an argument a single English text (i.e., no batching) and returns the corresponding Romanian text:

This function interacts directly with the encoder and decoder components of the T5 model, so we must construct the input for both. The encoder’s input is constructed by prepending the task prefix to the English text and tokenizing it. The decoder’s input, on the other hand, is constructed incrementally by accumulating the tokens predicted so far in order to predict the next token in the sequence. At the beginning, before any tokens are predicted, the decoder’s input is initialized with a single token that corresponds to the beginning of the sequence. We retrieve this token, called decoder_start_token_id, from the model’s configuration object. The tokens are predicted one at a time, until the model produces eos_token_id, which indicates that the sequence is finished. However, in case the model does not produce this end-of-sequence token within a reasonable number of steps, we also enforce a maximum number of predicted tokens, determined by the max_target_length parameter we defined previously.

The T5 model’s forward() method, called indirectly through its __call__() method, takes the inputs for both the encoder and the decoder. The output returned by this method corresponds to all the tokens in the decoder’s input plus an extra one: the newly predicted token. To select the best prediction, we retrieve the logits from the output and select the logits corresponding to the last token in the sequence (recall that the output shape is (batch size, sequence length, vocabulary size)). From these selected logits, we use argmax() to select the token id corresponding to the highest-scoring vocabulary item.
We append this new token id to the decoder’s input, and repeat the process until we encounter the end-of-sequence token or the decoded text reaches the maximum length. Once we are finished generating token ids, we retrieve the corresponding text by calling the tokenizer’s decode() method. This method is identical to the batch_decode() method we used previously, except that it decodes a single example. Below is a usage example for the greedy_translation() function:

15.3 Fine-tuning Romanian to English Translation

In this section, we fine-tune a T5 model on the translation of Romanian to English, a language pair that was not included in the T5 pre-training. To confirm that this direction was not included in pre-training, we evaluated the performance of the vanilla t5-small model on the translation from Romanian to English, using code equivalent to the code discussed in the previous section (see the chap15_translation_ro_to_en notebook). The resulting BLEU score was only 3.2%, which is substantially lower than the score we obtained when translating English to Romanian (25.2%).

Note that the transformers library includes scripts to fine-tune a translation model directly from the command line.4 For didactic purposes, we will not use these scripts in this section, but instead write the fine-tuning code explicitly. For this exercise, we continue using the WMT16 dataset, but this time
we load the train and validation splits. We employ the same t5-small model that we used previously. The code from the last section to load
the model, tokenizer, and dataset does not need to change for this use case, so we do not repeat it here. However, as before, the complete code is available in a Jupyter notebook (chap15_translation_ro_to_en_finetune).

We begin by tokenizing the source (Romanian) and target (English) texts. As in the last section, we need to prepend the task prefix to the source texts prior to tokenizing. This time, since we are translating in the opposite direction, we use the prefix "translate Romanian to English: ", and we prepend it to the Romanian text. Each call to the tokenizer with a batch of texts produces input_ids and an attention_mask. This output is what we need for the Romanian text, which serves as the input to the model. To generate the labels, i.e., the correct translated tokens, we use the input_ids corresponding to the English text. Recall that "labels" is the default key name expected by trainers in Hugging Face. We apply our tokenize() function to both the train and validation splits. (The tokenized training set contains 610,320 rows and three columns: input_ids, attention_mask, and labels.)

4 https://github.com/huggingface/transformers/tree/main/examples/pytorch/translation

Recall that in order to construct a trainer, we need a data collator for batching, a function to compute the metrics of interest, and a TrainingArguments object. In this section, we use a data collator called DataCollatorForSeq2Seq, which is included in the transformers library specifically for sequence-to-sequence models. The collator pads the batches using the label_pad_token_id, which we have set to −100, as we did in Chapter 13 (this is the default ignore_index value used by CrossEntropyLoss). The compute_metrics() function computes the BLEU score. It uses the tokenizer to decode the token ids into text, for both the predicted and the gold labels, ignoring padding. Finally, we use the Seq2SeqTrainingArguments class, which adds the predict_with_generate parameter to the regular TrainingArguments class. This parameter is needed to indicate that the trainer should use the generate() method for inference in order to compute the metrics (BLEU in this case). A small example illustrating the collator's label padding is sketched below.
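To make this padding behavior concrete, here is a small, self-contained sketch (separate from the fine-tuning notebook) that runs DataCollatorForSeq2Seq on two toy Romanian-English pairs of different lengths and prints the collated labels, where the shorter sequence is padded with -100 so that CrossEntropyLoss ignores those positions. The example sentences are invented for illustration.

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, DataCollatorForSeq2Seq

tokenizer = AutoTokenizer.from_pretrained('t5-small')
model = AutoModelForSeq2SeqLM.from_pretrained('t5-small')
collator = DataCollatorForSeq2Seq(tokenizer, model=model, label_pad_token_id=-100)

# two toy source/target pairs of different lengths
pairs = [
    ('translate Romanian to English: Mulțumesc!', 'Thank you!'),
    ('translate Romanian to English: Vorbesc doar puțin limba română.',
     'I only speak a little Romanian.'),
]

features = []
for source, target in pairs:
    encoded = tokenizer(source)
    features.append({
        'input_ids': encoded['input_ids'],
        'attention_mask': encoded['attention_mask'],
        'labels': tokenizer(target)['input_ids'],
    })

batch = collator(features)
# input_ids are padded with the tokenizer's pad token id;
# labels are padded with -100, which the loss ignores
print(batch['input_ids'])
print(batch['labels'])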
10,619
10,743
3,826
3,882
22
chap15-23
chap15-23
12,570
12,907
#!/usr/bin/env python
# coding: utf-8

# # Machine Translation from Ro to En
# # Using the T5 Transformer with Fine-tuning

# Some initialization:

# In[1]:

import torch
import numpy as np
from transformers import set_seed

# random seed
seed = 42

# set random seed
if seed is not None:
    print(f'random seed: {seed}')
    set_seed(seed)

# In[2]:

transformer_name = 't5-small'
dataset_name = 'wmt16'
dataset_config_name = 'ro-en'
source_lang = 'ro'
target_lang = 'en'
max_source_length = 1024
max_target_length = 128
task_prefix = 'translate Romanian to English: '
batch_size = 4
label_pad_token_id = -100
save_steps = 25_000
num_beams = 1
learning_rate = 1e-3
num_train_epochs = 3
output_dir = '/media/data2/t5-translation-example'  # make sure this is a valid path on your machine!

# Load dataset from HuggingFace:

# In[3]:

from datasets import load_dataset

wmt16 = load_dataset(dataset_name, dataset_config_name)

# Load tokenizer and pre-trained model:

# In[4]:

from transformers import AutoConfig, AutoTokenizer, AutoModelForSeq2SeqLM

config = AutoConfig.from_pretrained(transformer_name)
tokenizer = AutoTokenizer.from_pretrained(transformer_name)
model = AutoModelForSeq2SeqLM.from_pretrained(transformer_name, config=config)

# Tokenize the texts in the dataset:

# In[5]:

def tokenize(batch):
    # get source sentences and prepend task prefix
    sources = [x[source_lang] for x in batch["translation"]]
    sources = [task_prefix + x for x in sources]
    # tokenize source sentences
    output = tokenizer(
        sources,
        max_length=max_source_length,
        truncation=True,
    )
    # get target sentences
    targets = [x[target_lang] for x in batch["translation"]]
    # tokenize target sentences
    labels = tokenizer(
        targets,
        max_length=max_target_length,
        truncation=True,
    )
    # add targets to output
    output["labels"] = labels["input_ids"]
    return output

# In[6]:

train_dataset = wmt16['train']
eval_dataset = wmt16['validation']

column_names = train_dataset.column_names

train_dataset = train_dataset.map(
    tokenize,
    batched=True,
    remove_columns=column_names,
)
eval_dataset = eval_dataset.map(
    tokenize,
    batched=True,
    remove_columns=column_names,
)

# In[7]:

train_dataset.to_pandas()

# Create `Trainer` object and train:

# In[8]:

from transformers import DataCollatorForSeq2Seq

data_collator = DataCollatorForSeq2Seq(
    tokenizer,
    model=model,
    label_pad_token_id=label_pad_token_id,
)

# In[9]:

from datasets import load_metric

metric = load_metric('sacrebleu')

def compute_metrics(eval_preds):
    preds, labels = eval_preds
    # get text for predictions
    predictions = tokenizer.batch_decode(
        preds,
        skip_special_tokens=True,
    )
    # replace -100 in labels with pad token
    labels = np.where(
        labels != -100,
        labels,
        tokenizer.pad_token_id,
    )
    # get text for gold labels
    references = tokenizer.batch_decode(
        labels,
        skip_special_tokens=True,
    )
    # metric expects list of references for each prediction
    references = [[ref] for ref in references]
    # compute bleu score
    results = metric.compute(
        predictions=predictions,
        references=references,
    )
    results = {'bleu': results['score']}
    return results

# In[10]:

from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir=output_dir,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    save_steps=save_steps,
    predict_with_generate=True,
    evaluation_strategy='steps',
    eval_steps=save_steps,
    learning_rate=learning_rate,
    num_train_epochs=num_train_epochs,
)

# In[11]:

from transformers import Seq2SeqTrainer

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    tokenizer=tokenizer,
    data_collator=data_collator,
    compute_metrics=compute_metrics,
)

# In[12]:

import os
from transformers.trainer_utils import get_last_checkpoint

last_checkpoint = None
if os.path.isdir(output_dir):
    last_checkpoint = get_last_checkpoint(output_dir)
if last_checkpoint is not None:
    print(f'Checkpoint detected, resuming training at {last_checkpoint}.')

# In[13]:

train_result = trainer.train(resume_from_checkpoint=last_checkpoint)
trainer.save_model()

# In[14]:

metrics = train_result.metrics
metrics['train_samples'] = len(train_dataset)
trainer.log_metrics('train', metrics)
trainer.save_metrics('train', metrics)
trainer.save_state()

# Now evaluate:

# In[15]:

# https://discuss.huggingface.co/t/evaluation-results-metric-during-training-is-different-from-the-evaluation-results-at-the-end/15401
metrics = trainer.evaluate(
    max_length=max_target_length,
    num_beams=num_beams,
    metric_key_prefix='eval',
)
metrics['eval_samples'] = len(eval_dataset)
trainer.log_metrics('eval', metrics)
trainer.save_metrics('eval', metrics)

# Create a model card with meta data about this model:

# In[16]:

kwargs = {
    'finetuned_from': transformer_name,
    'tasks': 'translation',
    'dataset_tags': dataset_name,
    'dataset_args': dataset_config_name,
    'dataset': f'{dataset_name} {dataset_config_name}',
    'language': [source_lang, target_lang],
}
trainer.create_model_card(**kwargs)

# In[ ]:
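The notebook that evaluates this fine-tuned model on the test partition (chap15_translation_ro_to_en_finetuned) is not included here. As a minimal sketch of just its loading step, the code below reads the model and tokenizer back from the output directory saved above, using local_files_only=True, and translates a single made-up Romanian sentence; adjust the directory path to whatever output_dir you used during fine-tuning.

import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# the output_dir used during fine-tuning; adjust for your machine
model_dir = '/media/data2/t5-translation-example'

tokenizer = AutoTokenizer.from_pretrained(model_dir, local_files_only=True)
model = AutoModelForSeq2SeqLM.from_pretrained(model_dir, local_files_only=True)

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = model.to(device)

def translate_ro_to_en(text):
    # prepend the task prefix used during fine-tuning and tokenize
    encoded = tokenizer(
        'translate Romanian to English: ' + text,
        max_length=1024,
        truncation=True,
        return_tensors='pt',
    ).to(device)
    # greedy decoding, capped at the target length used in this chapter
    output = model.generate(**encoded, num_beams=1, max_length=128)
    return tokenizer.decode(output[0], skip_special_tokens=True)

print(translate_ro_to_en('Care este cea mai apropiată stație de metrou?'))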
1,303
1,324
23
chap15-24
chap15-24
15 Implementing Encoder-decoder Methods In this chapter we implement a machine translation application as an example of an encoder-decoder task. In particular, we build on pre-trained encoder-decoder transformer models, which exist in the Hugging Face library for a wide variety of language pairs. We first show how to use one of these models out-of-the-box to perform translation for one of the language pairs it has been exposed to during pre-training: English to Romanian. Afterwards, we fine-tune the model to a new language combination that is has not seen before: Romanian to English. In both use cases, we use the T5 encoder-decoder model, which has been pre-trained for several tasks, including machine translation (Raffel et al., 2020). Please see Chapter 16 for a description of T5’s pre-training process. The data for this task comes from the WMT 2016 dataset (Bojar et al., 2016), which consists of English sentences aligned pairwise to German, Czech, Russian, Finnish, Romanian, and Turkish. In this chapter we only use the English-Romanian texts (in both directions). 15.1 Translating English to Romanian As a first example, we use T5 to translate from English to Romanian, which is one of the language pairs it has been exposed to during pretraining. The code discussed in this section is available in the notebook chap15_translation_en_to_ro. Even though in this exercise we are not fine-tuning the model, we still need to define a few hyper parameters to frame the task and help the model understand how to work with the data: The above settings indicate that we use the t5-small model, a smaller T5 variant, to minimize the amount of memory required. The source_lang 212 15.1 Translating English to Romanian 213 and target_lang variables define the direction of translation, i.e., from English to Romanian. To keep our computing requirements small, we limit the length of our input and output. That is, English text longer than max_source_length tokens will be truncated. Further, we limit our generated Romanian text to max_target_length. We chose a maximum target length of 128 tokens to limit the computational cost incurred during text generation (recall that the text is generated one token at a time). The T5 models are trained to support multiple tasks such as translation and summarization (please see Chapter 16 for details). Thus, during training and inference, the user must specify which task the model should perform using a text prefix. Here we use the prefix "translate English to Romanian: " to indicate that the input text is in English and should be translated to Romanian. Next, we load the model and the corresponding tokenizer, and move them to the GPU if one is available: We use the datasets library to load our translation dataset. Note that the first time one calls load_dataset() the dataset will be downloaded automatically from the Hugging Face repository.1 The load_dataset() function takes a dataset name and configuration, which in our case are wmt16 and ro-en, respectively. Since in this example we are only evaluating the model, we only load the test partition (or split) of the dataset: The dataset consists of a single column called translation. Each element in this column is a dictionary that contains the aligned pair. The dictionary keys are the abbreviated language names and the values are the corresponding sentences. An example of one of these dictionaries is shown below: We encapsulate the logic for translating the English text into Romanian in a function called translate(). 
Inside this function, for a batch of aligned pairs, we select the English sentence as our input, and prepend the task prefix. Then we tokenize these inputs, including the prefix, specifying that sentences longer than max_source_length should be truncated, the batch should be padded, and the tokenizer should return PyTorch tensors. Once the tokenizer output has been moved to the GPU, we pass it to the model’s generate() method. This is the first time we have seen this method, because only decoder and encoder-decoder models support it. This method generates an output sequence by predicting one token 1 https://huggingface.co/datasets/wmt16 214 Implementing Encoder-decoder Methods at a time, stopping when either the end-of-sequence token is produced or when the sequence reaches a maximum length. Several generation techniques are supported, such as beam search, in which several alternate translations are maintained by the model so that it is able to select an overall best translation from several options. For efficiency purposes, we use a greedy approach, which chooses the best token at each step of the generation. This is equivalent to using a beam search with a beam of size one. Since the model generates its predictions as a sequence of token ids, we need to convert them back into the corresponding tokens to be able to read the translated text. We do this using the tokenizer’s batch_decode() method. Finally, we return the gold and predicted Romanian sentences in a dictionary: Next, we apply our translate() function to our Dataset to translate all the sentences: reference Șeful ONU declară că nu există soluții militar... Șeful ONU a solicitat din nou tuturor părților... Ban și-a exprimat regretul că divizările în co... Nu sunt bani puțini. La sfârșitul mandatului voi face un raport cu ... "Să spună un parlamentar că nu-i ajung banii e... 1999 rows × 2 columns prediction eful ONU declară că nu există o soluţie milita... eful U.N. a cerut din nou tuturor partidelor, ... El şi-a exprimat regretul că diviziunile din c... Banii sunt suficienţi. La sfârşitul biroului voi raporta tot ceea ce ... "A spune că un parlamentar nu are suficienţi b... 1994 1995 1996 1997 1998 0 1 2 3 4 ... Secretarul General Ban Ki-moon afirmă că răspu... Secretarul General Ban Ki-moon declară că răsp... Ban a declarat miercuri în cadrul unei conferi... Ban a declarat la o conferinţă de presă susţin... ... ... Uneori mi-e rușine să ridic banii de la casierie. Uneori mi-e ruşine să iau banii de la biroul c... S-a întâmplat să ridic într-o lună și 30.000 d... Într-o lună am adunat 30 000 de lei cu ramburs... We evaluate the quality of these translations using the BLEU metric, which we introduced in Chapter 14. To this end, we load an existing implementation of BLEU from the datasets library as a Metric object.2 Metric objects have a method called add(), which is used to accumulate the predictions and gold labels, one example at a time. After accumulating all examples, the compute() method returns the results of the evaluation. Note that for each predicted sentence, BLEU expects a list of reference sentences (as there are often many correct ways of translat- 2 https://huggingface.co/docs/datasets/v2.4.0/en/package_reference/main_ classes#datasets. Metric 15.2 Implementation of Greedy Generation 215 ing a given text). Since we only have one reference, we wrap it in a list before passing it to the metric: The score corresponds to the BLEU score. The rest of the items correspond to the components required to compute the score. 
That is, the counts, totals, and precisions correspond to the counts, totals, and precisions for 1-, 2-, 3-, and 4-grams. The bp is the brevity penalty. The sys_len and ref_len correspond to the prediction and reference lengths. The above BLEU score of 25.2% is slightly lower than the state of the art, but we are being penalized by the peculiarities of diacritic usage in Romanian. For example, the letters ș and ț (corresponding to the sounds sh and ts in English) are usually spelled with a comma below the characters s and t, which is the standard imposed by the Romanian Academy. However, in “the wild” these characters are often written using a cedilla instead of a comma, e.g., ţ instead of ț (or, using the names of these Unicode characters, LATIN SMALL LETTER T WITH CEDILLA instead of LATIN SMALL LETTER T WITH COMMA BELOW). Further, some of these characters with diacritics are often omitted altogether in the T5 output. The T5 output below contains an example of each of these two situations (e.g., soluţi(e) instead of soluți(i), and eful instead of Șeful): To avoid being penalized at scoring time for these arbitrary discrepancies, post-processing scripts are sometimes used to normalize diacritic usage.3 Such post-processing can improve the BLEU score substantially, but it is beyond the scope of this chapter.

3 https://github.com/huggingface/transformers/blob/main/examples/legacy/seq2seq/romanian_postprocessing.md

15.2 Implementation of Greedy Generation

To gain a better intuition of how the encoder-decoder model generates its output sequence, we show below an implementation of the greedy version of the generate() method used above. This function takes as an argument a single English text (i.e., no batching) and returns the corresponding Romanian text: This function interacts directly with the encoder and decoder components of the T5 model, so we must construct the input for both. The encoder’s input is constructed by prepending the task prefix to the English text and tokenizing it. The decoder’s input, on the other hand, is constructed incrementally by accumulating the tokens predicted so far in order to predict the next token in the sequence. At the beginning, before any tokens are predicted, the decoder’s input is initialized with a single token that corresponds to the beginning of the sequence. We retrieve this token, called decoder_start_token_id, from the model’s configuration object. The tokens are predicted one at a time, until the model produces eos_token_id, which indicates that the sequence is finished. However, in case the model does not produce this end-of-sequence token within a reasonable number of steps, we also enforce a maximum number of predicted tokens, determined by the max_target_length parameter we defined previously. The T5 model’s forward() method, called indirectly through its __call__() method, takes the inputs for both the encoder and the decoder. The output returned by this method corresponds to all the tokens in the decoder’s input plus an extra one: the newly predicted token. To select the best prediction, we retrieve the logits from the output and select the logits corresponding to the last token in the sequence (recall that the output shape is (batch size, sequence length, vocabulary size)). From these selected logits, we use argmax() to select the token id corresponding to the highest-scoring vocabulary item.
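As a small illustration of this selection step (the full greedy_translation() implementation appears in the notebook code at the end of this chapter's materials), assuming torch is imported and output is the object returned by calling the model:

# output.logits has shape (batch size, sequence length, vocabulary size);
# take the logits of the last position in the decoder's sequence
next_token_logits = output.logits[0, -1, :]

# greedy choice: the highest-scoring vocabulary item becomes the next token
next_token_id = torch.argmax(next_token_logits)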
We append this new token id to the decoder’s input, and repeat the process until we encounter the end-of-sequence token or the decoded text reaches the maximum length. Once we are finished generating token ids, we retrieve the corresponding text by calling the tokenizer’s decode() method. This method is identical to the batch_decode() method we used previously, except that it only decodes a single example. Below is a usage example for the greedy_translation() function:

15.3 Fine-tuning Romanian to English Translation

In this section, we fine-tune a T5 model on the translation of Romanian to English, a language pair that was not included in the T5 pre-training. To confirm that this data was not included in pre-training, we evaluated the performance of the vanilla t5-small model on the translation from Romanian to English using code equivalent to the code discussed in the previous section (see the chap15_translation_ro_to_en notebook). The resulting BLEU score was only 3.2%, which is substantially lower than the score we obtained when translating English to Romanian (25.2%). Note that the transformers library includes scripts to fine-tune a translation model directly from the command line.4 For didactic purposes, we will not use these scripts in this section, but instead write the fine-tuning code explicitly.

4 https://github.com/huggingface/transformers/tree/main/examples/pytorch/translation

For this exercise, we continue using the WMT16 dataset, but this time we load the train and validation splits. We employ the same t5-small model that we used previously. The code from the last section to load the model, tokenizer, and dataset does not need to change for this use case, so we do not repeat it here. However, as before, the complete code is available in a Jupyter notebook (chap15_translation_ro_to_en_finetune).
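A minimal sketch of this data loading step is shown below; the variable names train_ds and valid_ds are illustrative, not necessarily the ones used in the notebook:

from datasets import load_dataset

# load the Romanian-English portion of WMT16; this time we keep the
# train and validation partitions
wmt16 = load_dataset('wmt16', 'ro-en')
train_ds = wmt16['train']
valid_ds = wmt16['validation']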
We begin by tokenizing the source (Romanian) and target language (English) texts. As in the last section, we need to prepend the task prefix to the source texts prior to tokenizing. This time, since we are translating in the opposite direction, we use the prefix "translate Romanian to English: ", and we prepend it to the Romanian text. Each call to the tokenizer with a batch of texts produces input_ids and an attention_mask. This output is what we need for the Romanian text, which will serve as the input to the model. To generate the labels, i.e., the correct translated tokens, we use the input_ids corresponding to the English text. Recall that "labels" is the default key name expected by trainers in Hugging Face. We apply our tokenize() function to both the train and validation splits:

[The output is a dataset with 610,320 rows and three columns: input_ids (the tokenized Romanian text, each row starting with the tokenized task prefix, e.g., [13959, 3871, 29, 12, 1566, 10, ...]), attention_mask (sequences of 1s), and labels (the tokenized English text, e.g., [19428, 13, 12876, 10, 217, 13687, 7, 1]).]
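A sketch of the tokenize() function just described is shown below. It approximates the logic rather than reproducing the notebook's exact code, and it assumes the tokenizer, max_source_length, and max_target_length defined earlier are in scope:

task_prefix = 'translate Romanian to English: '

def tokenize(batch):
    # Romanian source sentences with the task prefix prepended
    sources = [task_prefix + x['ro'] for x in batch['translation']]
    # English target sentences
    targets = [x['en'] for x in batch['translation']]
    # tokenizing the sources yields input_ids and attention_mask
    model_inputs = tokenizer(sources, max_length=max_source_length, truncation=True)
    # the input_ids of the tokenized targets become the labels
    targets_tokenized = tokenizer(targets, max_length=max_target_length, truncation=True)
    model_inputs['labels'] = targets_tokenized['input_ids']
    return model_inputs

# apply to both splits, dropping the original translation column
train_ds = train_ds.map(tokenize, batched=True, remove_columns=train_ds.column_names)
valid_ds = valid_ds.map(tokenize, batched=True, remove_columns=valid_ds.column_names)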
Recall that in order to construct a trainer, we need a data collator for batching, a function to compute the metrics of interest, and a TrainingArguments object. In this section, we use a data collator called DataCollatorForSeq2Seq, which is included in the transformers library specifically for sequence-to-sequence models. The collator pads the batches using the label_pad_token_id, which we have set to −100, as we did in Chapter 13 (this is the default ignore_index value used by CrossEntropyLoss): The compute_metrics() function computes the BLEU score. It uses the tokenizer to decode the token ids into text, for both the predicted and gold labels, ignoring padding:
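Sketches of both pieces are shown below, assuming metric is the sacrebleu Metric object loaded as in Section 15.1 and model and tokenizer are the ones being fine-tuned; treat this as an approximation of the logic rather than the notebook's exact code:

import numpy as np
from transformers import DataCollatorForSeq2Seq

# pad inputs dynamically; labels are padded with -100 so the loss ignores them
label_pad_token_id = -100
data_collator = DataCollatorForSeq2Seq(
    tokenizer,
    model=model,
    label_pad_token_id=label_pad_token_id,
)

def compute_metrics(eval_preds):
    preds, labels = eval_preds
    # decode the predicted token ids into text
    decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
    # replace the -100 padding in the labels before decoding
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
    # sacrebleu expects a list of references for each prediction
    references = [[label] for label in decoded_labels]
    result = metric.compute(predictions=decoded_preds, references=references)
    return {'bleu': result['score']}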
We use the Seq2SeqTrainingArguments class, which adds the predict_with_generate parameter to the regular TrainingArguments class. This is needed to indicate that the trainer should use the generate() method for inference in order to compute the metrics (BLEU in this case): Finally, we construct the trainer using the Seq2SeqTrainer class, which is a subclass of Trainer that adds the ability to compute scores such as BLEU during training by calling generate() during evaluation: Fine-tuning a translation model takes considerably longer than training or fine-tuning the models we have developed so far in this book. To account for this, here we add support for resuming training from a checkpoint, i.e., a model that was saved after training on a number of examples. Similar to how one can resume a video game, this allows one to pick up from the last “save point,” in case training was interrupted and needs to be resumed: When calling the trainer’s train() method, we either provide a model checkpoint or None. In the former case, the trainer will continue training from the provided checkpoint. In the latter case, the trainer will begin training from scratch. Once the training has completed, we save the trained model and tokenizer using the trainer’s save_model() method into the output directory: We then compute and save the metrics corresponding to the training partition. This is not required, but it is helpful to keep a record of the model’s performance on the training data. Note that the metrics do not automatically include the number of examples in the training partition, so we add them explicitly: Next, we evaluate our final model on the validation data and save the corresponding metrics. These metrics indicate that our BLEU score on the validation data is 35.2%, which is evidence that fine-tuning has helped dramatically: Lastly, we save a model card into our output directory. A model card is akin to an automatically generated README file that includes information about the model used, the data, the settings used, and performance throughout the training process. This file is helpful for reproducibility, as it contains all of this key information in one place. These cards are often uploaded to the Hugging Face Hub together with the model itself.5

5 We do not discuss the model uploading process here. Please see the documentation on model sharing at: https://huggingface.co/docs/transformers/v4.14.1/model_sharing.

15.4 Using a Previously Saved Model

Models that have been saved locally can be loaded using the same from_pretrained() methods we have used before. In particular, instead of providing a model name, we provide the path to the local directory where the model is stored, using the local_files_only parameter to indicate that we want to load the model from the local file system instead of downloading it from the Hugging Face Hub (make sure you use an output directory that is valid on your machine!): Once our fine-tuned model is loaded, we use it the same way as before. That is, we use the translate() function to generate translations for our test partition. Then we use the BLEU metric to score this output. From this metric, we obtain the final BLEU score of 33.4%, which is markedly better than our initial score (i.e., without fine-tuning) of 3.2%! The code corresponding to this section is available in the notebook chap15_translation_ro_to_en_finetuned.
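Putting the pieces of the last two sections together, the fragment below is a condensed sketch of the training driver and of reloading the saved model, not the notebook's exact code: the output directory name, batch sizes, and checkpoint-detection logic are illustrative, and model, tokenizer, train_ds, valid_ds, data_collator, and compute_metrics are assumed from the sketches above.

import os
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)
from transformers.trainer_utils import get_last_checkpoint

output_dir = 't5-small-finetuned-ro-to-en'  # illustrative path

training_args = Seq2SeqTrainingArguments(
    output_dir=output_dir,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    predict_with_generate=True,  # call generate() during evaluation
)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=train_ds,
    eval_dataset=valid_ds,
    tokenizer=tokenizer,
    data_collator=data_collator,
    compute_metrics=compute_metrics,
)

# resume from the last checkpoint in output_dir if one exists, else start fresh
last_checkpoint = get_last_checkpoint(output_dir) if os.path.isdir(output_dir) else None
train_result = trainer.train(resume_from_checkpoint=last_checkpoint)

# save the fine-tuned model and tokenizer into the output directory
trainer.save_model()

# record metrics for the training partition, adding the number of examples
metrics = train_result.metrics
metrics['train_samples'] = len(train_ds)
trainer.save_metrics('train', metrics)

# evaluate on the validation partition and save those metrics as well
eval_metrics = trainer.evaluate()
trainer.save_metrics('eval', eval_metrics)

# save a model card describing the run
trainer.create_model_card()

# later, the saved model can be reloaded from the local directory
model = AutoModelForSeq2SeqLM.from_pretrained(output_dir, local_files_only=True)
tokenizer = AutoTokenizer.from_pretrained(output_dir, local_files_only=True)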
15.5 Summary

In this chapter we used a complete encoder-decoder transformer network to implement a machine translation application. Importantly, transformers with a decoder component have a generate() method that simplifies the generation process and provides multiple options for decoding. We encourage you to explore these options! For example, try comparing the quality of the output with the resources required to produce it (e.g., runtime overhead) as the size of the search beam increases. Additionally, we saw how to fine-tune an encoder-decoder model on a new language pair that it had not seen during pre-training. This exercise included using checkpoints to support resuming training in case of unexpected interruptions, saving our fine-tuned model, and loading it for later use.
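As a starting point for that exploration, one could time generate() under different beam sizes. The snippet below is only an illustrative sketch, assuming the English-to-Romanian model, tokenizer, device, and max_target_length from Section 15.1 are in scope; the input sentence is made up:

import time

text = 'translate English to Romanian: This is not a drill.'
encoded = tokenizer(text, return_tensors='pt').to(device)

for beams in (1, 5, 10):
    start = time.time()
    output = model.generate(**encoded, num_beams=beams, max_length=max_target_length)
    elapsed = time.time() - start
    print(beams, round(elapsed, 2), tokenizer.decode(output[0], skip_special_tokens=True))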
#!/usr/bin/env python
# coding: utf-8

# # Machine Translation from English (En) to Romanian (Ro)
# # Using the T5 Transformer without Fine-tuning

# Some initialization:

# In[1]:
import torch
import numpy as np
from transformers import set_seed

# set to True to use the gpu (if there is one available)
use_gpu = True

# select device
device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu')
print(f'device: {device.type}')

# random seed
seed = 42

# set random seed
if seed is not None:
    print(f'random seed: {seed}')
    set_seed(seed)

# In[2]:
transformer_name = 't5-small'
source_lang = 'en'
target_lang = 'ro'
max_source_length = 1024
max_target_length = 128
task_prefix = 'translate English to Romanian: '
num_beams = 1
batch_size = 100

# Load tokenizer and pre-trained model:

# In[3]:
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained(transformer_name)
model = AutoModelForSeq2SeqLM.from_pretrained(transformer_name)
model = model.to(device)

# Load dataset from HuggingFace:

# In[4]:
from datasets import load_dataset
test_ds = load_dataset('wmt16', 'ro-en', split='test')
test_ds

# In[5]:
test_ds['translation'][0]

# Implement the `translate` method and apply on this dataset:

# In[6]:
def translate(batch):
    # get source language examples and prepend task prefix
    inputs = [x[source_lang] for x in batch["translation"]]
    inputs = [task_prefix + x for x in inputs]
    # tokenize inputs
    encoded = tokenizer(
        inputs,
        max_length=max_source_length,
        truncation=True,
        padding=True,
        return_tensors='pt',
    )
    # move data to gpu if needed
    input_ids = encoded.input_ids.to(device)
    attention_mask = encoded.attention_mask.to(device)
    # generate translated sentences
    output = model.generate(
        input_ids=input_ids,
        attention_mask=attention_mask,
        num_beams=num_beams,
        max_length=max_target_length,
    )
    # generate predicted sentences from predicted token ids
    decoded = tokenizer.batch_decode(
        output,
        skip_special_tokens=True,
    )
    # get gold sentences in target language
    targets = [x[target_lang] for x in batch["translation"]]
    # return gold and predicted sentences
    return {
        'reference': targets,
        'prediction': decoded,
    }

# In[7]:
results = test_ds.map(
    translate,
    batched=True,
    batch_size=batch_size,
    remove_columns=test_ds.column_names,
)
results.to_pandas()

# Now evaluate the quality of translations using the BLEU metric:

# In[8]:
from datasets import load_metric
metric = load_metric('sacrebleu')
for r in results:
    prediction = r['prediction']
    reference = [r['reference']]
    metric.add(prediction=prediction, reference=reference)
metric.compute()

# An example of greedy decoding for individual texts:

# In[9]:
def greedy_translation(text):
    # prepend task prefix
    text = task_prefix + text
    # tokenize input
    encoded = tokenizer(
        text,
        max_length=max_source_length,
        truncation=True,
        return_tensors='pt',
    )
    # encoder input ids
    encoder_input_ids = encoded.input_ids.to(device)
    # decoder input ids, initialized with start token id
    start = model.config.decoder_start_token_id
    decoder_input_ids = torch.LongTensor([[start]]).to(device)
    # generate tokens, one at a time
    for _ in range(max_target_length):
        # get model predictions
        output = model(
            encoder_input_ids,
            decoder_input_ids=decoder_input_ids,
        )
        # get logits for last token
        next_token_logits = output.logits[0, -1, :]
        # select most probable token
        next_token_id = torch.argmax(next_token_logits)
        # append new token to decoder_input_ids
        output_id = torch.LongTensor([[next_token_id]]).to(device)
        decoder_input_ids = torch.cat([decoder_input_ids, output_id], dim=-1)
        # if predicted token is the end of sequence, stop iterating
        if next_token_id == tokenizer.eos_token_id:
            break
    # return text corresponding to predicted token ids
    return tokenizer.decode(decoder_input_ids[0], skip_special_tokens=True)

# In[10]:
greedy_translation("this is a test")
15 Implementing Encoder-decoder Methods In this chapter we implement a machine translation application as an example of an encoder-decoder task. In particular, we build on pre-trained encoder-decoder transformer models, which exist in the Hugging Face library for a wide variety of language pairs. We first show how to use one of these models out-of-the-box to perform translation for one of the language pairs it has been exposed to during pre-training: English to Romanian. Afterwards, we fine-tune the model to a new language combination that is has not seen before: Romanian to English. In both use cases, we use the T5 encoder-decoder model, which has been pre-trained for several tasks, including machine translation (Raffel et al., 2020). Please see Chapter 16 for a description of T5’s pre-training process. The data for this task comes from the WMT 2016 dataset (Bojar et al., 2016), which consists of English sentences aligned pairwise to German, Czech, Russian, Finnish, Romanian, and Turkish. In this chapter we only use the English-Romanian texts (in both directions). 15.1 Translating English to Romanian As a first example, we use T5 to translate from English to Romanian, which is one of the language pairs it has been exposed to during pretraining. The code discussed in this section is available in the notebook chap15_translation_en_to_ro. Even though in this exercise we are not fine-tuning the model, we still need to define a few hyper parameters to frame the task and help the model understand how to work with the data: The above settings indicate that we use the t5-small model, a smaller T5 variant, to minimize the amount of memory required. The source_lang 212 15.1 Translating English to Romanian 213 and target_lang variables define the direction of translation, i.e., from English to Romanian. To keep our computing requirements small, we limit the length of our input and output. That is, English text longer than max_source_length tokens will be truncated. Further, we limit our generated Romanian text to max_target_length. We chose a maximum target length of 128 tokens to limit the computational cost incurred during text generation (recall that the text is generated one token at a time). The T5 models are trained to support multiple tasks such as translation and summarization (please see Chapter 16 for details). Thus, during training and inference, the user must specify which task the model should perform using a text prefix. Here we use the prefix "translate English to Romanian: " to indicate that the input text is in English and should be translated to Romanian. Next, we load the model and the corresponding tokenizer, and move them to the GPU if one is available: We use the datasets library to load our translation dataset. Note that the first time one calls load_dataset() the dataset will be downloaded automatically from the Hugging Face repository.1 The load_dataset() function takes a dataset name and configuration, which in our case are wmt16 and ro-en, respectively. Since in this example we are only evaluating the model, we only load the test partition (or split) of the dataset: The dataset consists of a single column called translation. Each element in this column is a dictionary that contains the aligned pair. The dictionary keys are the abbreviated language names and the values are the corresponding sentences. An example of one of these dictionaries is shown below: We encapsulate the logic for translating the English text into Romanian in a function called translate(). 
Inside this function, for a batch of aligned pairs, we select the English sentence as our input, and prepend the task prefix. Then we tokenize these inputs, including the prefix, specifying that sentences longer than max_source_length should be truncated, the batch should be padded, and the tokenizer should return PyTorch tensors. Once the tokenizer output has been moved to the GPU, we pass it to the model’s generate() method. This is the first time we have seen this method, because only decoder and encoder-decoder models support it. This method generates an output sequence by predicting one token 1 https://huggingface.co/datasets/wmt16 214 Implementing Encoder-decoder Methods at a time, stopping when either the end-of-sequence token is produced or when the sequence reaches a maximum length. Several generation techniques are supported, such as beam search, in which several alternate translations are maintained by the model so that it is able to select an overall best translation from several options. For efficiency purposes, we use a greedy approach, which chooses the best token at each step of the generation. This is equivalent to using a beam search with a beam of size one. Since the model generates its predictions as a sequence of token ids, we need to convert them back into the corresponding tokens to be able to read the translated text. We do this using the tokenizer’s batch_decode() method. Finally, we return the gold and predicted Romanian sentences in a dictionary: Next, we apply our translate() function to our Dataset to translate all the sentences: reference Șeful ONU declară că nu există soluții militar... Șeful ONU a solicitat din nou tuturor părților... Ban și-a exprimat regretul că divizările în co... Nu sunt bani puțini. La sfârșitul mandatului voi face un raport cu ... "Să spună un parlamentar că nu-i ajung banii e... 1999 rows × 2 columns prediction eful ONU declară că nu există o soluţie milita... eful U.N. a cerut din nou tuturor partidelor, ... El şi-a exprimat regretul că diviziunile din c... Banii sunt suficienţi. La sfârşitul biroului voi raporta tot ceea ce ... "A spune că un parlamentar nu are suficienţi b... 1994 1995 1996 1997 1998 0 1 2 3 4 ... Secretarul General Ban Ki-moon afirmă că răspu... Secretarul General Ban Ki-moon declară că răsp... Ban a declarat miercuri în cadrul unei conferi... Ban a declarat la o conferinţă de presă susţin... ... ... Uneori mi-e rușine să ridic banii de la casierie. Uneori mi-e ruşine să iau banii de la biroul c... S-a întâmplat să ridic într-o lună și 30.000 d... Într-o lună am adunat 30 000 de lei cu ramburs... We evaluate the quality of these translations using the BLEU metric, which we introduced in Chapter 14. To this end, we load an existing implementation of BLEU from the datasets library as a Metric object.2 Metric objects have a method called add(), which is used to accumulate the predictions and gold labels, one example at a time. After accumulating all examples, the compute() method returns the results of the evaluation. Note that for each predicted sentence, BLEU expects a list of reference sentences (as there are often many correct ways of translat- 2 https://huggingface.co/docs/datasets/v2.4.0/en/package_reference/main_ classes#datasets. Metric 15.2 Implementation of Greedy Generation 215 ing a given text). Since we only have one reference, we wrap it in a list before passing it to the metric: The score corresponds to the BLEU score. The rest of the items correspond to the components required to compute the score. 
That is, the counts, totals, and precisions correspond to the counts, totals, and precisions for 1-, 2-, 3-, and 4-grams. The bp is the brevity penalty. The sys_len and ref_len correspond to the predictions and reference lengths. The above BLEU score of 25.2% is slightly lower than the state of the art, but we are being penalized by the peculiarities of diacritic usage in Romanian characters. For example, the letters ș and ț (corresponding to the sounds sh and ts in English) are usually spelled with a comma below the characters s and t, which is the standard imposed by the Romanian Academy. However, in “the wild” these characters are often written using a cedilla instead of a comma, e.g., ţ instead of ț (or, using the names of these Unicode characters, LATIN SMALL LETTER T WITH CEDILLA instead of LATIN SMALL LETTER T WITH COMMA BELOW). Further, some of these characters with diacritics are often omitted altogether in the T5 output. The T5 output below contains an example for each of these two situations (e.g., soluţi(e) instead of soluți(i), and eful instead of Șeful): To avoid being penalized at scoring time for these arbitrary discrepancies, post-processing scripts are sometimes used to normalize diacritic usage.3 Usage of such post-processing scripts can improve the BLEU score substantially. However, this is beyond the scope of this chapter. 15.2 Implementation of Greedy Generation To gain a better intuition of how the encoder-decoder model generates its output sequence, we show below an implementation of the greedy version of the generate() method used above. This function takes as an argument a single English text (i.e., no batching) and returns the corresponding Romanian text: This function interacts directly with the encoder and decoder components of the T5 model, so we must construct the input for both. The encoder’s input is constructed by prepending the task prefix to the English text and tokenizing it. On the other hand, the decoder’s input is constructed incrementally by accumulating the tokens predicted so far 3 https://github.com/huggingface/transformers/blob/main/examples/legacy/ seq2seq/romanian_postprocessing.md 216 Implementing Encoder-decoder Methods in order to predict the next token in the sequence. At the beginning, before any tokens are predicted, the decoder’s input is initialized with a single token that corresponds to the beginning of the sequence. We retrieve this token, called decoder_start_token_id, from the model’s configuration object. The tokens are predicted one at a time, until the model produces eos_token_id, which indicates that the sequence is finished. However, in case the model does not produce this end-of-sequence token within a reasonable number of steps, we also enforce a maximum number of predicted tokens, determined by the max_target_length parameter we defined previously. The T5 model’s forward() method, called indirectly through its __call__()) method, takes the inputs for both the encoder and the decoder. The output returned by this method corresponds to all the tokens in the decoder’s input plus an extra one: the newly predicted token. To select the best prediction, we retrieve the logits from the output and select the logits corresponding to the last token in the sequence (recall that the output shape is (batch size, sequence length, vocabulary size)). From these selected logits, we use the argmax() to select the token id corresponding to the highest-scoring vocabulary item. 
We append this new token id to the decoder’s input, and repeat the process until we encounter the end-of-sequence token or the decoded text reaches the maximum length. Once we are finished generating token ids, we retrieve the corresponding text by calling the tokenizer’s decode() method. This method is identical to the batch_decode() method we used previously, except that it only decodes a single example. Below is an usage example for the greedy_translation() function: 15.3 Fine-tuning Romanian to English Translation In this section, we fine-tune a T5 model on the translation of Romanian to English, a language pair that was not included in the T5 pre-training. To confirm that this data was not included in pre-training, we evaluated the performance of the vanilla t5-small model on the translation from Romanian to English using code equivalent to the code discussed in the previous section (see the chap15_translation_ro_to_en notebook). The resulting BLEU score was only 3.2%, which is substantially lower than the score we obtained when translating English to Romanian (25.2%). 15.3 Fine-tuning Romanian to English Translation 217 Note that the transformers library includes scripts to fine-tune a translation model directly from the command line.4 For didactic purposes, we will not use these scripts in this section, but instead write the fine-tuning code explicitly. For this exercise, we continue using the WMT16 dataset, but this time
we load the train and validation splits. We employ the same t5-small model that we used previously. The code from the last section to load
the model, tokenizer, and dataset does not need to change for this use-
case, so we do not repeat it here. However, as before, the complete code is available in a Jupyter notebook (chap15_translation_ro_to_en_finetune). We begin by tokenizing the source (Romanian) and target language (English) texts. As in the last section, we need to prepend the task prefix to the source texts prior to tokenizing. This time, since we are translating in the opposite direction, we use the prefix "translate Romanian to English: ", and we prepend it to the Romanian text. Each call to the tokenizer with a batch of texts produces input_ids and an attention_mask. This output is what we need for the Romanian text, which will serve as the input to the model. To generate the labels, i.e., the correct translated tokens, we use the input_ids corresponding to the English text. Recall that "labels" is the default key name expected by trainers in Hugging Face. We apply our tokenize() function to both the train and validation splits: 4 https://github.com/huggingface/transformers/tree/main/examples/pytorch/ translation 218 Implementing Encoder-decoder Methods input_ids [13959, 3871, 29, 12, 1566, 10, 4961, 106, 204... [13959, 3871, 29, 12, 1566, 10, 4961, 106, 204... [13959, 3871, 29, 12, 1566, 10, 374, 6225, 49,... [13959, 3871, 29, 12, 1566, 10, 4540, 4031, 9,... [13959, 3871, 29, 12, 1566, 10, 2262, 900, 17,... [13959, 3871, 29, 12, 1566, 10, 18420, 83, 362... attention_mask labels [19428, 13, 12876, 10, 217, 13687, 7, 1] [19428, 13, 12876, 10, 217, 13687, 7, 1] [11167, 7, 1204, 10, 217, 13687, 7, 1] [4540, 4031, 9, 7, 1672, 7, 2262, 900, 17, 38,... [2262, 900, 17, 641, 65, 46, 3761, 6, 1069, 31... [3625, 32, 5788, 35, 15, 3844, 31, 7, 3, 16143... 0 1 2 3 4 ... 610315 610316 610317 610318 610319 [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, [1,1,1,1,1,1,1,1, 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... 1,1,1,1, 1,1,1,... [13959, 3871, 29, 12, 1566, 10, 5085, [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 5840, 49... 1, 1, 1, ... [13959, 3871, 29, 12, 1566, 10, 781, [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 8750, 9, ... 1, 1, 1, ... ... ... [13959, 3871, 29, 12, 1566, 10, 2364, [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 4540, 40... 1, 1, 1, ... 610320 rows × 3 columns [13959, 3871, 29, 12, 1566, 10, 3, [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 25882, 759,... 1, 1, 1, ... [2276, 8843, 138, 13, 13687, 7, 13, 1767, 3823... [781, 2420, 13, 17500, 10, 217, 13687, 7, 1] [242, 4540, 4031, 9, 7, 6, 8, 516, 65, 66, 8, ... [9810, 157, 31, 7, 516, 92, 3088, 21, 46, 3839... ... Recall that in order to construct a trainer, we need a data collator for batching, a function to compute the metrics of interest, and a TrainingArguments object. In this section, we use a data collator called DataCollatorForSeq2Seq, which is included in the transformers library specifically for sequence-to-sequence models. The collator pads the batches using the label_pad_token_id, which we have set to −100, as we did in Chapter 13 (this is the default ignore_index value used by CrossEntropyLoss): The compute_metrics() function computes the BLEU score. It uses the tokenizer to decode the token ids into text, for both the predicted and gold labels, ignoring padding: We use the Seq2SeqTrainingArguments class, which adds the predict_with_generate parameter to the regular TrainingArguments class. This is needed to in-
dicate that the trainer should use the generate() method for inference
in order to compute the metrics (BLUE in this case): Finally, we construct the trainer using the Seq2SeqTrainer class, which is a subclass of Trainer that adds the ability to compute scores such as BLEU during training by calling generate() during evaluation: Fine-tuning a translation model takes considerably longer than training or fine-tuning the models we have developed so far in this book. To account for this, here we add support for resuming training from a checkpoint, i.e., a model that was saved after training on a number of 15.4 Using a Previously Saved Model 219 examples. Similar to how one can resume a video game, this allows one to pick up from the last “save point,” in case training was interrupted and needs to be resumed: When calling the trainer’s train() method, we either provide a model checkpoint or None. In the former case, the trainer will continue training from the provided checkpoint. In the latter case, the trainer will begin training from scratch. Once the training has completed, we save the trained model and tokenizer using the trainer’s save_model() method into the output directory: We then compute and save the metrics corresponding to the training partition. This is not required, but it is helpful to keep a record of the model’s performance on the training data. Note that the metrics do not automatically include the number of examples in the training partition, so we add them explicitly: Next, we evaluate our final model on the validation data and save the corresponding metrics. These metrics indicate that our BLEU score on the validation data is 35.2%, which is evidence that fine-tuning has helped dramatically: Lastly, we save a model card into our output directory. A model card is akin to an automatically-generated README file that includes information about the model used, the data, settings used, and performance throughout the training process. This file is helpful for reproducibility as it contains all of this key information in one place. These cards are often uploaded to the Hugging Face Hub together with the model itself.5 15.4 Using a Previously Saved Model Models that have been saved locally can be loaded using the same from_pretrained() methods we have used before. In particular, instead of providing a model name, we provide the path to the local directory where the model is stored, using the local_files_only parameter to indicate that we want to load the model from the local file system instead of downloading it from the Hugging Face Hub (Make sure you use an output directory that is valid on your machine!): Once our fine-tuned model is loaded, we use it the same way as before. That is, we use the translate() function to generate translations 5 We do not discuss the model uploading process here. Please see the documentation on model sharing at: https://huggingface.co/docs/transformers/v4.14.1/model_sharing. 220 Implementing Encoder-decoder Methods for our test partition. Then we use the BLEU metric to score this output. From this metric, we obtain the final BLEU score of 33.4%, which is markedly better than our initial score (i.e., without fine-tuning) of 3.2%! The code corresponding to this section is available in the notebook chap15_translation_ro_to_en_finetuned. 15.5 Summary In this chapter we used a complete encoder-decoder transformer network to implement a machine translation application. Importantly, transformers with a decoder component have a generate() method that simplifies the generation process and provides multiple options for decoding. 
We encourage you to explore these options! For example, try comparing the quality of the output with the resources required to produce it (e.g., runtime overhead) when the size of the search beam increases. Additionally, we saw how to fine-tune an encoder-decoder model on a new language pair that it has not seen during its pre-training. This exercise included using checkpoints to support resuming training in case of unexpected interruptions, saving our fine-tuned model, and loading it for later use.
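To make these last points concrete, here is a condensed sketch of the resume-save-reload pattern. It assumes a Seq2SeqTrainer configured as in the chapter notebook (referred to below as trainer), and the checkpoint and output paths are placeholders to adapt to your own setup:

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

output_dir = 't5-translation-example'  # placeholder output directory
checkpoint = None                      # or a path such as 't5-translation-example/checkpoint-5000'

# resume from the checkpoint if one is given, otherwise train from scratch
trainer.train(resume_from_checkpoint=checkpoint)

# save the fine-tuned model and tokenizer into the output directory
trainer.save_model(output_dir)

# later, reload both from the local directory instead of the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained(output_dir, local_files_only=True)
model = AutoModelForSeq2SeqLM.from_pretrained(output_dir, local_files_only=True)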
#!/usr/bin/env python
# coding: utf-8

# # Machine Translation from English (En) to Romanian (Ro)
# # Using the T5 Transformer without Fine-tuning

# Some initialization:

# In[1]:
import torch
import numpy as np
from transformers import set_seed

# set to True to use the gpu (if there is one available)
use_gpu = True

# select device
device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu')
print(f'device: {device.type}')

# random seed
seed = 42

# set random seed
if seed is not None:
    print(f'random seed: {seed}')
    set_seed(seed)

# In[2]:
transformer_name = 't5-small'
source_lang = 'en'
target_lang = 'ro'
max_source_length = 1024
max_target_length = 128
task_prefix = 'translate English to Romanian: '
num_beams = 1
batch_size = 100

# Load tokenizer and pre-trained model:

# In[3]:
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained(transformer_name)
model = AutoModelForSeq2SeqLM.from_pretrained(transformer_name)
model = model.to(device)

# Load dataset from HuggingFace:

# In[4]:
from datasets import load_dataset

test_ds = load_dataset('wmt16', 'ro-en', split='test')
test_ds

# In[5]:
test_ds['translation'][0]

# Implement the `translate` method and apply on this dataset:

# In[6]:
def translate(batch):
    # get source language examples and prepend task prefix
    inputs = [x[source_lang] for x in batch["translation"]]
    inputs = [task_prefix + x for x in inputs]
    # tokenize inputs
    encoded = tokenizer(
        inputs,
        max_length=max_source_length,
        truncation=True,
        padding=True,
        return_tensors='pt',
    )
    # move data to gpu if needed
    input_ids = encoded.input_ids.to(device)
    attention_mask = encoded.attention_mask.to(device)
    # generate translated sentences
    output = model.generate(
        input_ids=input_ids,
        attention_mask=attention_mask,
        num_beams=num_beams,
        max_length=max_target_length,
    )
    # generate predicted sentences from predicted token ids
    decoded = tokenizer.batch_decode(
        output,
        skip_special_tokens=True,
    )
    # get gold sentences in target language
    targets = [x[target_lang] for x in batch["translation"]]
    # return gold and predicted sentences
    return {
        'reference': targets,
        'prediction': decoded,
    }

# In[7]:
results = test_ds.map(
    translate,
    batched=True,
    batch_size=batch_size,
    remove_columns=test_ds.column_names,
)
results.to_pandas()

# Now evaluate the quality of translations using the BLEU metric:

# In[8]:
from datasets import load_metric

metric = load_metric('sacrebleu')

for r in results:
    prediction = r['prediction']
    reference = [r['reference']]
    metric.add(prediction=prediction, reference=reference)

metric.compute()

# An example of greedy decoding for individual texts:

# In[9]:
def greedy_translation(text):
    # prepend task prefix
    text = task_prefix + text
    # tokenize input
    encoded = tokenizer(
        text,
        max_length=max_source_length,
        truncation=True,
        return_tensors='pt',
    )
    # encoder input ids
    encoder_input_ids = encoded.input_ids.to(device)
    # decoder input ids, initialized with start token id
    start = model.config.decoder_start_token_id
    decoder_input_ids = torch.LongTensor([[start]]).to(device)
    # generate tokens, one at a time
    for _ in range(max_target_length):
        # get model predictions
        output = model(
            encoder_input_ids,
            decoder_input_ids=decoder_input_ids,
        )
        # get logits for last token
        next_token_logits = output.logits[0, -1, :]
        # select most probable token
        next_token_id = torch.argmax(next_token_logits)
        # append new token to decoder_input_ids
        output_id = torch.LongTensor([[next_token_id]]).to(device)
        decoder_input_ids = torch.cat([decoder_input_ids, output_id], dim=-1)
        # if predicted token is the end of sequence, stop iterating
        if next_token_id == tokenizer.eos_token_id:
            break
    # return text corresponding to predicted token ids
    return tokenizer.decode(decoder_input_ids[0], skip_special_tokens=True)

# In[10]:
greedy_translation("this is a test")
7 Implementing Text Classification with Feed Forward Networks

In this chapter we provide an implementation of the multilayer neural network described in Chapter 5, along with several of the best practices discussed in Chapter 6. While remaining fairly simple, our network will consist of three neuron layers that are fully connected: an input layer that stores the input features, a hidden intermediate layer, and an output layer that produces the scores for each class to be learned. In between these layers we will include dropout and a nonlinearity (ReLU).

Sidebar 7.1 The PyTorch Linear layer implements the connections between layers of neurons. Before discussing the implementation of more complex neural architectures in PyTorch, it is important to address one potential source of confusion. In PyTorch, the Linear layer implements the connections between two layers of neurons rather than an actual neuron layer. That is, a Linear object contains the weights W_{l+1} that connect the neurons in layer l with the neurons in layer l + 1 in Figure 5.2. This is why the Linear constructor includes two dimensions: one for the input neuron layer (in_features) and one for the output neuron layer (out_features). Optionally, if the parameter bias is set to True, the corresponding Linear object also contains the bias weights for the output neurons, i.e., b_{l+1} in Figure 5.2. Thus, in our Model with three neuron layers, we will have two Linear objects. To stay close to the code, from this point forward when we mention the term layer in the implementation chapters, we refer to a PyTorch Linear layer, unless stated otherwise.

Further, we make use of two PyTorch classes: a Dataset and a DataLoader. The advantage of using these classes is that they make several things easy, including data shuffling and batching. Lastly, since the classifier’s architecture has become more complex, for optimization we transition from stochastic gradient descent to the Adam optimizer to take advantage of its additional features, such as momentum and L2 regularization. As before, the code from this chapter is available in a Jupyter notebook: chap7_ffnn.

7.1 Data

In this chapter we continue to use the AG News Dataset (Section 4.2.1), including the same loading and preprocessing steps. Also, we continue using the same train and test sets to be able to compare results to the ones obtained in Section 4.2. However, in this chapter we will make use of a development set to tune the model’s hyper parameters. For this purpose, we split the training set in two: 80% of the examples become a new training set, while the other 20% become the development set:

In the code above we used scikit-learn’s train_test_split function to split the training set into a development partition and a new training partition. Note that this function can split Python lists, NumPy arrays, and even Pandas dataframes. The returned dataframes preserve the index of the original training dataframe, which can be useful to keep the connection to the original data, but is not what we currently need, as we are trying to create two independent datasets. Therefore, we reset the index of the two new dataframes.

A second difference from what was done in Section 4.2 is the introduction of mini-batches. PyTorch provides the DataLoader class (https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader), which can be used for shuffling the data and splitting it into mini-batches.
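Before turning to the DataLoader, here is the splitting step described above as it appears in the chapter’s notebook, lightly commented; it assumes the train_df dataframe built earlier in the notebook:

from sklearn.model_selection import train_test_split

# keep 80% of the examples for training and use the rest for development
train_df, dev_df = train_test_split(train_df, train_size=0.8)

# reset the indices inherited from the original dataframe
train_df.reset_index(inplace=True)
dev_df.reset_index(inplace=True)

print(f'train rows: {len(train_df.index):,}')
print(f'dev rows: {len(dev_df.index):,}')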
In order to create a DataLoader, we need the data to be in the form of a PyTorch Dataset (https://pytorch.org/docs/stable/data.html#torch.utils.data.Dataset). There are two main types of PyTorch datasets: map-style and iterable-style. We will use the former, as it is simpler and meets our needs, but it is good to know that the other option is available for situations when, for example, you need to stream data from a remote source or random access is expensive. To create a map-style dataset we need to subclass torch.utils.data.Dataset and override its __getitem__() method (to return an example given a key), as well as its __len__() method (to return the number of examples in the dataset).

Our dataset implementation stores two sequences: one for holding the features, and another for storing the corresponding labels. In our implementation we store two Pandas Series, but Python lists or NumPy arrays would also work. The implementation of __len__() is trivial: we simply return the length of the feature sequence, that is, the number of feature vectors. The implementation of __getitem__() is slightly more involved. Recall that each of our feature vectors is represented as a dictionary with word ids as keys and word counts as values, and that any word id not in the dictionary has a count of zero. Our __getitem__() method transforms this representation into one that PyTorch can use. We first create two PyTorch tensors, one for the label and one for the features; the latter is initially populated with zeros. Then, we retrieve the feature dictionary corresponding to the provided index and, for each key-value pair in the feature dictionary, we update the corresponding element of the tensor. Once this is complete, we return the feature and label tensors for the datum:

7.2 Fully-Connected Neural Network

Having completed the Dataset implementation, we next implement the model, i.e., a fully-connected neural network with two layers. (Recall that layer here refers to the PyTorch Linear layer that contains the connections between two neuron layers; see Sidebar 7.1 for more details.) In Section 4.2 we used a Linear module directly to implement the simpler models discussed there. This time, we will demonstrate how to implement a model as a new module, by subclassing torch.nn.Module. Although this is not strictly necessary for this model, which could be represented by a Sequential module alone, encapsulating a model’s behavior in its own module becomes increasingly helpful as models grow more complex.

To implement a Module, we need to implement the constructor and override the forward() method. Note that, in our constructor below, before initializing the object fields, we invoke the constructor of the parent class (i.e., Module) with the line super().__init__(). This allows PyTorch to set up the mechanisms through which any layers defined as attributes in the constructor are properly registered as model parameters. In our example, a Sequential instance is assigned to self.layers; this is enough for our model instance to know about it during back-propagation and parameter updating. Here, our model consists of two linear layers, each one preceded by a dropout layer (which drops out input neurons from the corresponding linear layer).
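The corresponding model definition from the chapter’s notebook is reproduced below, with comments added here:

from torch import nn

class Model(nn.Module):
    def __init__(self, input_dim, hidden_dim, output_dim, dropout):
        super().__init__()
        # two Linear layers, each preceded by dropout,
        # with a ReLU nonlinearity in between
        self.layers = nn.Sequential(
            nn.Dropout(dropout),
            nn.Linear(input_dim, hidden_dim),
            nn.ReLU(),
            nn.Dropout(dropout),
            nn.Linear(hidden_dim, output_dim),
        )

    def forward(self, x):
        # apply the pipeline of layers to the input batch
        return self.layers(x)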
The input of the first linear layer has the same size as our vocabulary, and its output has the dimension of the hidden neuron layer (please see Section 5.1 for a refresher on the architecture of the feed-forward neural network). Consequently, the input size of the second linear layer is equal to the size of the hidden layer, and its output size is the number of classes. Additionally, between the two linear layers we add a ReLU nonlinearity. (Nonlinearities such as ReLU are necessary to guarantee that the neural network can learn non-linear decision boundaries; see Chapter 5 for an extended discussion on this topic. Further, nonlinearities can be added after each network layer, but the output layer is typically omitted because a softmax or sigmoid function usually follows it. In PyTorch, the nn.CrossEntropyLoss criterion, which we also use in this chapter, includes such a softmax function.) All of the model layers are wrapped in a Sequential module, which simply connects the output of one layer to the input of the next.

The second method we need to implement is the forward() method, which defines how the model applies its layers to a given input during the forward pass. Our forward() method simply calls the sequential layer and returns its output. Note that while this method implements the model’s forward pass, in general it should not be called directly by the user. Instead, the user should use the model as though it were a function (technically, invoking the __call__() method), and let PyTorch call the forward() method internally. This allows PyTorch to activate necessary features such as module hooks correctly.

7.3 Training

In order to train our model, we will first initialize the hyper parameters and the different components we need: model, loss function, optimizer, dataset, and data-loader. Notable differences with respect to Section 4.2 are the use of the Adam optimizer with a weight decay (this is just what PyTorch calls L2 regularization – see Chapter 6), and the use of a data-loader with shuffling and batches of 500 examples. We encourage you to take the time to examine the values we use for the hyper parameters, and to experiment with modifying them in the Jupyter notebook.

The basic steps of the learning loop are the same as those in Section 4.2, except that we are now using a development set to keep track of the performance of the current model after each training epoch. One important difference between using our model during training and evaluation is that, prior to each training epoch, we need to set the model to training mode using the train() method, and before evaluating on the development set, we need to set the model to evaluation mode using the eval() method. This is important because some layers behave differently depending on whether the model is in training or evaluation mode. In our model, this is the case for the Dropout layer, which randomly zeroes some of its input elements during training and scales its outputs accordingly (see Section 6.6), but does nothing during evaluation.

In order to plot some relevant statistics acquired from the training data, we collect the current loss and accuracy for each mini-batch. Note that we call detach() on the tensors corresponding to the loss and the predicted/gold labels so they are no longer considered when computing gradients. Calling cpu() copies the tensors from the GPU to the CPU if we are using the GPU; otherwise it does nothing.
Calling numpy() converts the PyTorch tensor into a NumPy array. Unlike the predictions, which are represented as a vector of label scores, the loss is a scalar; for this reason, we retrieve it as a Python number using the item() method. When evaluating on the development set, since we do not need to compute the gradients, we save computation by wrapping the steps in a torch.no_grad() context manager. Since we are not learning, we do not perform back-propagation or invoke the optimizer.

After completing training we have gathered the loss and accuracy values for each epoch, for both the training and development partitions. Next, we plot these values in order to visualize the classifier’s progress over time. Plots such as these are important for determining how well our model is learning, which informs decisions about adjusting hyper parameters or modifying the model’s architecture. Below we only show the plot for the loss. Plotting the accuracy is very similar; the corresponding code, as well as the plot itself, is available in the Jupyter notebook.

The plot indicates that both the training and development losses decrease over time. This is good! It indicates that our classifier is neither overfitting nor underfitting. Recall from Chapter 2 that overfitting happens when a classifier performs well in training but poorly on unseen data. In the plot above this would be indicated by a training loss that continues to decrease while the development loss does not. Underfitting happens when a classifier is unable to learn meaningful associations between the input features and the output labels. In this plot it would be shown as loss curves that do not decrease over time.

This analysis means we are ready to evaluate our trained model on the test set, which must be a truly unseen dataset that was not used for training or to tune hyper parameters. In other words, this experiment will indicate how well our model performs “in the wild.” Because we would like these results to be as close as possible to real-world results, the test set should be used sparingly, only after the entire architecture, its trained parameters, and its hyper parameters have been frozen. With our feed-forward neural architecture we have achieved an accuracy of 92%, which is a substantial improvement over the 88% accuracy we obtained in Section 4.2. We strongly suggest that you experiment not only with the different hyper parameters, but also with different model architectures in the Jupyter notebook. Such exercises will help you develop an intuition about the different effects each design choice has, as well as how these decisions interact with each other.

7.4 Summary

In this chapter we have shown how to implement a feed-forward neural network in PyTorch. We have also introduced several PyTorch features that encourage and simplify deep learning best practices. In particular, the built-in Dataset and DataLoader classes make mini-batching straightforward while still allowing for customization such as sampling. The ability to create a custom Dataset object allows us to handle complex data and still have access to the features of a DataLoader. By convention, all the components provided by PyTorch are batch-aware and assume that the first dimension refers to the batch size, simplifying model implementation and improving readability.
In building the model itself, we also saw that PyTorch uses layer modularization, i.e., both the network layers themselves and operations on them (such as dropout and activation functions) are modeled as layers in a pipeline. This makes it easy to interweave network layers, add various operations between them, and swap activation functions as desired. The weight initialization is also handled automatically when the layers are created, but can be customized as needed. Further, one can tailor the training process in PyTorch by adding momentum, adaptive learning rates, and regularization through optimizer selection and configuration. In this chapter, we used the Adam optimizer, which, in the authors’ experience, is a good default choice, but there are many other optimizers to choose from. We recommend that the reader read the PyTorch documentation on optimizers for more details: https://pytorch.org/docs/stable/optim.html.
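To make this last point concrete, the sketch below condenses the training-loop conventions discussed in this chapter: the optimizer configuration, the switch between training and evaluation modes, and the use of torch.no_grad() during evaluation. It assumes the model, loss_func, device, train_dl, dev_dl, and n_epochs objects defined in the chapter’s notebook, and it omits the bookkeeping used for plotting:

import torch
from torch import optim

# Adam with L2 regularization (weight decay), as configured in the notebook
optimizer = optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-5)

for epoch in range(n_epochs):
    model.train()                      # enable dropout
    for X, y_true in train_dl:
        model.zero_grad()              # clear gradients from the previous step
        X, y_true = X.to(device), y_true.to(device)
        loss = loss_func(model(X), y_true)
        loss.backward()                # back-propagate
        optimizer.step()               # update the parameters

    model.eval()                       # disable dropout
    with torch.no_grad():              # no gradients needed for evaluation
        for X, y_true in dev_dl:
            X, y_true = X.to(device), y_true.to(device)
            loss = loss_func(model(X), y_true)  # track this for plotting if desired

To swap in a different optimizer, only the optimizer line needs to change; for example, optim.SGD(model.parameters(), lr=1e-2, momentum=0.9) would use stochastic gradient descent with momentum (the values shown here are arbitrary starting points, not tuned settings).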
#!/usr/bin/env python
# coding: utf-8

# # Text Classification with a Feed-forward Neural Network and BOW features

# First, we will do some initialization.

# In[1]:
import random
import torch
import numpy as np
import pandas as pd
from tqdm.notebook import tqdm

# enable tqdm in pandas
tqdm.pandas()

# set to True to use the gpu (if there is one available)
use_gpu = True

# select device
device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu')
print(f'device: {device.type}')

# random seed
seed = 1234

# set random seed
if seed is not None:
    print(f'random seed: {seed}')
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)

# We will be using the AG's News Topic Classification Dataset.
# It is stored in two CSV files: `train.csv` and `test.csv`, as well as a `classes.txt` that stores the labels of the classes to predict.
#
# First, we will load the training dataset using [pandas](https://pandas.pydata.org/) and take a quick look at the data.

# In[2]:
train_df = pd.read_csv('data/ag_news_csv/train.csv', header=None)
train_df.columns = ['class index', 'title', 'description']
train_df

# The dataset consists of 120,000 examples, each consisting of a class index, a title, and a description.
# The class labels are distributed in a separate file. We will add the labels to the dataset so that we can interpret the data more easily. Note that the label indexes are one-based, so we need to subtract one to retrieve them from the list.

# In[3]:
labels = open('data/ag_news_csv/classes.txt').read().splitlines()
classes = train_df['class index'].map(lambda i: labels[i-1])
train_df.insert(1, 'class', classes)
train_df

# Let's inspect how balanced our examples are by using a bar plot.

# In[4]:
pd.value_counts(train_df['class']).plot.bar()

# The classes are evenly distributed. That's great!
#
# However, the text contains some spurious backslashes in some parts of the text.
# They are meant to represent newlines in the original text.
# An example can be seen below, between the words "dwindling" and "band".

# In[5]:
print(train_df.loc[0, 'description'])

# We will replace the backslashes with spaces on the whole column using pandas replace method.

# In[6]:
train_df['text'] = train_df['title'].str.lower() + " " + train_df['description'].str.lower()
train_df['text'] = train_df['text'].str.replace('\\', ' ', regex=False)
train_df

# Now we will proceed to tokenize the title and description columns using NLTK's word_tokenize().
# We will add a new column to our dataframe with the list of tokens.

# In[7]:
from nltk.tokenize import word_tokenize

train_df['tokens'] = train_df['text'].progress_map(word_tokenize)
train_df

# In[8]:
from sklearn.model_selection import train_test_split

train_df, dev_df = train_test_split(train_df, train_size=0.8)
train_df.reset_index(inplace=True)
dev_df.reset_index(inplace=True)
print(f'train rows: {len(train_df.index):,}')
print(f'dev rows: {len(dev_df.index):,}')

# Now we will create a vocabulary from the training data. We will only keep the terms that repeat beyond some threshold established below.

# In[9]:
threshold = 10
tokens = train_df['tokens'].explode().value_counts()
tokens = tokens[tokens > threshold]
id_to_token = ['[UNK]'] + tokens.index.tolist()
token_to_id = {w:i for i,w in enumerate(id_to_token)}
vocabulary_size = len(id_to_token)
print(f'vocabulary size: {vocabulary_size:,}')

# In[10]:
from collections import defaultdict

def make_feature_vector(tokens, unk_id=0):
    vector = defaultdict(int)
    for t in tokens:
        i = token_to_id.get(t, unk_id)
        vector[i] += 1
    return vector

train_df['features'] = train_df['tokens'].progress_map(make_feature_vector)
dev_df['features'] = dev_df['tokens'].progress_map(make_feature_vector)
train_df

# In[11]:
from torch.utils.data import Dataset

class MyDataset(Dataset):
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __len__(self):
        return len(self.x)

    def __getitem__(self, index):
        x = torch.zeros(vocabulary_size, dtype=torch.float32)
        y = torch.tensor(self.y[index])
        for k,v in self.x[index].items():
            x[k] = v
        return x, y

# In[12]:
from torch import nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self, input_dim, hidden_dim, output_dim, dropout):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Dropout(dropout),
            nn.Linear(input_dim, hidden_dim),
            nn.ReLU(),
            nn.Dropout(dropout),
            nn.Linear(hidden_dim, output_dim),
        )

    def forward(self, x):
        return self.layers(x)

# In[13]:
from torch import optim
from torch.utils.data import DataLoader
from sklearn.metrics import accuracy_score

# hyperparameters
lr = 1e-3
weight_decay = 1e-5
batch_size = 500
shuffle = True
n_epochs = 5
input_dim = vocabulary_size
hidden_dim = 50
output_dim = len(labels)
dropout = 0.3

# initialize the model, loss function, optimizer, and data-loader
model = Model(input_dim, hidden_dim, output_dim, dropout).to(device)
loss_func = nn.CrossEntropyLoss()
optimizer = optim.Adam(
    model.parameters(), lr=lr, weight_decay=weight_decay)
train_ds = MyDataset(
    train_df['features'], train_df['class index'] - 1)
train_dl = DataLoader(
    train_ds, batch_size=batch_size, shuffle=shuffle)
dev_ds = MyDataset(
    dev_df['features'], dev_df['class index'] - 1)
dev_dl = DataLoader(
    dev_ds, batch_size=batch_size, shuffle=shuffle)

# lists used to store plotting data
train_loss, train_acc = [], []
dev_loss, dev_acc = [], []

# In[14]:
# train the model
for epoch in range(n_epochs):
    losses, acc = [], []
    # set model to training mode
    model.train()
    for X, y_true in tqdm(train_dl, desc=f'epoch {epoch+1} (train)'):
        # clear gradients
        model.zero_grad()
        # send batch to right device
        X = X.to(device)
        y_true = y_true.to(device)
        # predict label scores
        y_pred = model(X)
        # compute loss
        loss = loss_func(y_pred, y_true)
        # compute accuracy
        gold = y_true.detach().cpu().numpy()
        pred = np.argmax(y_pred.detach().cpu().numpy(), axis=1)
        # accumulate for plotting
        losses.append(loss.detach().cpu().item())
        acc.append(accuracy_score(gold, pred))
        # backpropagate
        loss.backward()
        # optimize model parameters
        optimizer.step()
    # save epoch stats
    train_loss.append(np.mean(losses))
    train_acc.append(np.mean(acc))
    # set model to evaluation mode
    model.eval()
    # disable gradient calculation
    with torch.no_grad():
        losses, acc = [], []
        for X, y_true in tqdm(dev_dl, desc=f'epoch {epoch+1} (dev)'):
            # send batch to right device
            X = X.to(device)
            y_true = y_true.to(device)
            # predict label scores
            y_pred = model(X)
            # compute loss
            loss = loss_func(y_pred, y_true)
            # compute accuracy
            gold = y_true.cpu().numpy()
            pred = np.argmax(y_pred.cpu().numpy(), axis=1)
            # accumulate for plotting
            losses.append(loss.cpu().item())
            acc.append(accuracy_score(gold, pred))
    # save epoch stats
    dev_loss.append(np.mean(losses))
    dev_acc.append(np.mean(acc))

# In[15]:
import matplotlib.pyplot as plt

x = np.arange(n_epochs) + 1
plt.plot(x, train_loss)
plt.plot(x, dev_loss)
plt.legend(['train', 'dev'])
plt.xlabel('epoch')
plt.ylabel('loss')
plt.grid(True)

# In[16]:
plt.plot(x, train_acc)
plt.plot(x, dev_acc)
plt.legend(['train', 'dev'])
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.grid(True)

# ## Evaluate on test dataset

# In[17]:
# repeat all preprocessing done above, this time on the test set
test_df = pd.read_csv('data/ag_news_csv/test.csv', header=None)
test_df.columns = ['class index', 'title', 'description']
test_df['text'] = test_df['title'].str.lower() + " " + test_df['description'].str.lower()
test_df['text'] = test_df['text'].str.replace('\\', ' ', regex=False)
test_df['tokens'] = test_df['text'].progress_map(word_tokenize)
test_df['features'] = test_df['tokens'].progress_map(make_feature_vector)
test_df

# In[18]:
from sklearn.metrics import classification_report

# set model to evaluation mode
model.eval()
dataset = MyDataset(test_df['features'], test_df['class index'] - 1)
data_loader = DataLoader(dataset, batch_size=batch_size)
y_pred = []
# disable gradient calculation
with torch.no_grad():
    for X, _ in tqdm(data_loader):
        X = X.to(device)
        # predict one class per example
        y = torch.argmax(model(X), dim=1)
        # convert tensor to numpy array
        y_pred.append(y.cpu().numpy())
# print results
y_true = dataset.y
y_pred = np.concatenate(y_pred)
print(classification_report(y_true, y_pred, target_names=labels))

# In[19]:
from sklearn.metrics import ConfusionMatrixDisplay, confusion_matrix

cm = confusion_matrix(y_true, y_pred, normalize='true')
disp = ConfusionMatrixDisplay(confusion_matrix=cm, display_labels=labels)
fig, ax = plt.subplots(figsize=(4,4))
disp.plot(cmap='Blues', values_format='.2f', colorbar=False, ax=ax,
          xticks_rotation=45)

# In[ ]:
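As a brief follow-up to the notebook above, the sketch below shows one way to classify a single new headline with the trained model; it reuses the word_tokenize, make_feature_vector, vocabulary_size, model, device, and labels objects defined earlier, and the example headline is arbitrary:

def predict_class(text):
    # tokenize and build a bag-of-words feature vector with the notebook's vocabulary
    tokens = word_tokenize(text.lower())
    features = make_feature_vector(tokens)
    # convert the sparse dictionary into a dense tensor of shape (1, vocabulary_size)
    x = torch.zeros(vocabulary_size, dtype=torch.float32)
    for k, v in features.items():
        x[k] = v
    x = x.unsqueeze(0).to(device)
    # score the example and return the predicted label
    model.eval()
    with torch.no_grad():
        y = torch.argmax(model(x), dim=1)
    return labels[y.item()]

predict_class('Stocks rally as quarterly earnings beat expectations')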
4,336
4,360
0
chap07-1
chap07-1
7 Implementing Text Classification with Feed Forward Networks In this chapter we provide an implementation of the multilayer neural network described in Chapter 5, along with several of the best practices discussed in Chapter 6. Remaining fairly simple, our network will consist of three neuron layers that are fully connected: an input layer that stores the input features, a hidden intermediate layer, and an output layer that produces the scores for each class to be learned. In between these layers we will include dropout and a nonlinearity (ReLU). Sidebar 7.1 The PyTorch Linear layer implements the connections between layers of neurons. Before discussing the implementation of more complex neural architectures in PyTorch, it is important to address one potential source of confusion. In PyTorch, the Linear layer implements the connections between two layers of neurons rather than an actual neuron layer. That is, a Linear object contains the weights Wl+1 that connect the neurons in layer l with the neurons in layer l + 1 in Figure 5.2. This is why the Linear constructor includes two dimensions: one for the input neuron layer (in_features) and one for the output neuron layer (out_features). Optionally, if the parameter bias is set to True, the corresponding Linear object also contains the bias weights for the output neurons, i.e., bl+1 in Figure 5.2. Thus, in our Model with three neuron layers, we will have two Linear objects. To stay close to the code, from this point forward when we mention the term layer in the implementation chapters, we refer to a PyTorch Linear layer, unless stated otherwise. 109 110 Implementing Text Classification with Feed Forward Networks Further, we make use of two PyTorch classes: a Dataset and a DataLoader. The advantage of using these classes is that they make several things easy, including data shuffling and batching. Lastly, since the classifier’s architecture has become more complex, for optimization we transition from stochastic gradient descent to the Adam optimizer to take advantage of its additional features such as momentum, and L2 regularization. As before, the code from this chapter is available in a Jupyter notebook: chap7_ffnn. 7.1 Data In this chapter we continue to use the AG News Dataset (Section 4.2.1), including the same loading and preprocessing steps. Also, we continue using the same train and test sets to be able to compare results to the ones obtained in Section 4.2. However, in this chapter we will make use of a development set to tune the model’s hyper parameters. For this purpose, we split the training set in two: 80% of the examples become a new training set, while the other 20% are the development set: In the code above we used scikit-learn’s train_test_split function to split the training set into a development partition and a new training partition. Note that this function can split Python lists, NumPy arrays, and even Pandas dataframes. The returned dataframes preserve the index of the original training dataframe, which can be useful to keep the connection to the original data, but is not what we currently need, as we are trying to create two independent datasets. Therefore, we reset the index of the two new dataframes. A second difference to what was done in Section 4.2 is the introduction of mini-batches. PyTorch provides the DataLoader1 class which can be used for shuffling the data and splitting it into mini-batches. 
In order to create a DataLoader, we need the data to be in the form of a PyTorch Dataset.2 There are two main types of PyTorch datasets: map-style and iterable-style. We will use the former, as it is simpler and meets our needs, but it is good to know that the other option is available for situations when, for example, you need to stream data from a remote source or random access is expensive. To create a map-style dataset we need to subclass torch.utils.data. Dataset and override its __getitem__() method (to return an example given a 1 https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader 2 https://pytorch.org/docs/stable/data.html#torch.utils.data.Dataset 7.2 Fully-Connected Neural Network 111 key), as well as its __len__() method (to return the number of examples in the dataset). Our dataset implementation stores two sequences: one for holding the features, and another for storing the corresponding labels. In our implementation we store two Pandas Series, but Python lists or NumPy arrays would also work. The implementation __len__() is trivial: we simply return the length of the feature sequence, that is, the number of feature vectors. The implementation of __getitem__() is slightly more involved. Recall that each of our feature vectors is represented as a dictionary with word ids as keys, and word counts as values, and any word id not in the dictionary has a count of zero. Our __getitem__() method transforms this representation into one that PyTorch can use. We first create two PyTorch tensors, one for the label and one for the features, which is initially populated with zeros. Then, we retrieve the feature dictionary corresponding to the provided index, and, for each key-value pair in the feature dictionary, we update the corresponding element of the tensor. Once this is complete, we return the feature and label tensors for the datum: 7.2 Fully-Connected Neural Network Having completed the Dataset implementation, we next implement the model, i.e., a fully-connected neural network with two layers.3 In Section 4.2 we used a Linear module directly to implement the simpler models discussed there. This time, we will demonstrate how to implement a model as a new module, by subclassing torch.nn. Module. Although this is not necessary for this model, as it can be represented by a Sequential module, as models get more complex, it becomes helpful to encapsulate their behavior. To implement a Module, we need to implement the constructor and override the forward() method. Note that, in our constructor below, before initializing the object fields, we invoke the constructor of the parent class (i.e., Module) with the line super().__init__(). This allows PyTorch to set up the mechanisms through which any layers defined as attributes in the constructor are properly registered as model parameters. In our example, a Sequential instance is assigned to self.layers; this is enough for our model instance to know about it during back-propagation and parameter updating. 3 Recall that layer here refers to the PyTorch Linear layer that contains the connections between two neuron layers. See Sidebar 7.1 for more details. 112 Implementing Text Classification with Feed Forward Networks Here, our model consists of two linear layers, each one preceded by a dropout layer (which drops out input neurons from the corresponding linear layer). 
The input of the first linear layer has the same size as our vocabulary, and its output has the dimension of the hidden neuron layer (please see Section 5.1 for a refresher on the architecture of the feed-forward neural network). Consequently, the input size of the second linear layer is equal to the size of the hidden layer, and its output size is the number of classes. Additionally, between the two linear layers we add a ReLU nonlinearity.4 All of the model layers are wrapped in a Sequential module, which simply connects the output of one layer to the input of the next. The second method we need to implement is the forward() method, which defines how the model applies its layers to a given input during the forward pass. Our forward() method simply calls the sequential layer and returns its output. Note that while this method implements the model’s forward pass, in general, this method should not be called directly by the user. Instead, the user should use the model as though it were a function (technically, invoking the __call__() method), and let PyTorch call the forward() method internally. This allows PyTorch to activate necessary features such as module hooks correctly. 7.3 Training In order to train our model, we will first initialize the hyperparameters and the different components we need: model, loss function, optimizer, dataset, and data-loader. Notable differences with respect to Section 4.2 are the use of the Adam optimizer with a weight decay (this is just what PyTorch calls L2 regularization – see Chapter 6), and the use of a dataloader with shuffling and batches of 500 examples. We encourage you to take the time to examine the values we use for the hyper parameters, and to experiment with modifying them in the Jupyter notebook. The basic steps of the learning loop are the same as those in Sec- 4 Note that nonlinearities such as the ReLU function here are necessary to guarantee that the neural network can learn non-linear decision boundaries. See Chapter 5 for an extended discussion on this topic. Further, nonlinearities can be added after each network layer, but, typically, the output layer is omitted. This is because a softmax or sigmoid function usually follows it. In PyTorch, the nn.CrossEntropyFunction, which we also use in this chapter, includes such a softmax function. 7.3 Training 113 tion 4.2, except that we are now using a development set to keep track of the performance of the current model after each training epoch. One important difference between using our model during training and evaluation is that, prior to each training session, we need to set the model to training mode using the train() method, and before evaluating on the development set, we need to set the model to evaluation mode using the eval() method. This is important, because some layers have different behavior depending on whether the model is in training or evaluation mode. In our model, this is the case for the Dropout layer, which randomly zeroes some of its input elements during training and scales its outputs accordingly (see Section 6.6), but during evaluation does nothing. In order to plot some relevant statistics acquired from the training data, we collect the current loss and accuracy for each mini-batch. Note that we call detach() on the tensors corresponding to the loss and the predicted/gold labels so they are no longer considered when computing gradients. Calling cpu() copies the tensors from the GPU to the CPU if we are using the GPU; otherwise it does nothing. 
Calling numpy() converts the PyTorch tensor into a NumPy array. Unlike the prediction sequence, which is represented as a vector of label scores, the loss is a scalar. For this reason, we retrieve it as a Python number using the item() method. When evaluating on the development set, since we do not need to compute the gradients, we save computation by wrapping the steps in a torch.no_grad() context-manager. Since we are not learning, we do not perform back-propagation or invoke the optimizer. After completing training we have gathered the loss and accuracy values after each epoch for both the training and development partitions. Next, we plot these values in order to visualize the classifier’s progress over time. Plots such as these are important to determine how well our model is learning, which informs decisions regarding adjusting hyper parameters or modifying the model’s architecture. Below we only show the plot for the loss. Plotting the accuracy is very similar; the corresponding code as well as the plot itself is available in the Jupyter notebook. 114 Implementing Text Classification with Feed Forward Networks The plot indicates that both the training and development losses decrease over time. This is good! It indicates that our classifier is neither overfitting nor underfitting. Recall from Chapter 2 that overfitting happens when a classifier performs well in training, but poorly on unseen data. In the plot above this would be indicated by a training loss that continues to decrease, but is associated with a development loss that does not. Underfitting happens when a classifier is unable to learn meaningful associations between the input features and the output labels. In this plot this would be shown as loss curves that do not decrease over time. This analysis means we are ready to evaluate our trained model on the test set, which must be a truly unseen dataset that was not used for training or to tune hyper parameters. In other words, this experiment will indicate how well our model performs “in the wild.” Because we would like these results to be as close as possible to real-world results, the test set should be used sparingly, only after the entire architecture, its trained parameters, and its hyper parameters have been frozen. With our feed-forward neural architecture we have achieved an accuracy of 92%, which is a substantial improvement over the 88% accuracy we obtained in Section 4.2. We strongly suggest that you experiment not only with the different hyper parameters, but also with different model architectures in the Jupyter notebook. Such exercises will help you de- 7.4 Summary 115 velop an intuition about the different effects each design choice has, as well as how these decisions interact with each other. 7.4 Summary In this chapter we have shown how to implement a feed-forward neural network in PyTorch. We have also introduced several PyTorch features that encourage and simplify deep learning best practices. In particular, the built-in Dataset and DataLoader classes make mini-batching straightforward while still allowing for customization such as sampling. The ability to create a custom Dataset object allows us to handle complex data and still have access to the features of a DataLoader. By convention, all the components provided by PyTorch are batch-aware and assume that the first dimension refers to the batch size, simplifying model implementation and improving readability. 
In building the model itself, we also saw that PyTorch uses layer modularization, i.e., both the network layers themselves and operations on them (such as dropout and activation functions) are modeled as layers in a pipeline. This makes it easy to interweave network layers, add various operations between them, and swap activation functions as desired. The weight initialization is also handled automatically when the layers are created, but can be customized as needed. Further, one can tailor the training process in PyTorch by adding momentum, adaptive learning rates, and regularization through optimizer selection and configuration. In this chapter, we used the Adam optimizer, which, in the authors’ experience, is a good default choice, but there are many other optimizers to choose from. We recommend that the reader read the PyTorch documentation on optimizers for more details: https://pytorch.org/docs/stable/optim.html.
2,587
2,730
#!/usr/bin/env python # coding: utf-8 # # Text Classification with a Feed-forward Neural Network and BOW features # First, we will do some initialization. # In[1]: import random import torch import numpy as np import pandas as pd from tqdm.notebook import tqdm # enable tqdm in pandas tqdm.pandas() # set to True to use the gpu (if there is one available) use_gpu = True # select device device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu') print(f'device: {device.type}') # random seed seed = 1234 # set random seed if seed is not None: print(f'random seed: {seed}') random.seed(seed) np.random.seed(seed) torch.manual_seed(seed) # We will be using the AG's News Topic Classification Dataset. # It is stored in two CSV files: `train.csv` and `test.csv`, as well as a `classes.txt` that stores the labels of the classes to predict. # # First, we will load the training dataset using [pandas](https://pandas.pydata.org/) and take a quick look at how the data. # In[2]: train_df = pd.read_csv('data/ag_news_csv/train.csv', header=None) train_df.columns = ['class index', 'title', 'description'] train_df # The dataset consists of 120,000 examples, each consisting of a class index, a title, and a description. # The class labels are distributed in a separated file. We will add the labels to the dataset so that we can interpret the data more easily. Note that the label indexes are one-based, so we need to subtract one to retrieve them from the list. # In[3]: labels = open('data/ag_news_csv/classes.txt').read().splitlines() classes = train_df['class index'].map(lambda i: labels[i-1]) train_df.insert(1, 'class', classes) train_df # Let's inspect how balanced our examples are by using a bar plot. # In[4]: pd.value_counts(train_df['class']).plot.bar() # The classes are evenly distributed. That's great! # # However, the text contains some spurious backslashes in some parts of the text. # They are meant to represent newlines in the original text. # An example can be seen below, between the words "dwindling" and "band". # In[5]: print(train_df.loc[0, 'description']) # We will replace the backslashes with spaces on the whole column using pandas replace method. # In[6]: train_df['text'] = train_df['title'].str.lower() + " " + train_df['description'].str.lower() train_df['text'] = train_df['text'].str.replace('\\', ' ', regex=False) train_df # Now we will proceed to tokenize the title and description columns using NLTK's word_tokenize(). # We will add a new column to our dataframe with the list of tokens. # In[7]: from nltk.tokenize import word_tokenize train_df['tokens'] = train_df['text'].progress_map(word_tokenize) train_df # In[8]: from sklearn.model_selection import train_test_split train_df, dev_df = train_test_split(train_df, train_size=0.8) train_df.reset_index(inplace=True) dev_df.reset_index(inplace=True) print(f'train rows: {len(train_df.index):,}') print(f'dev rows: {len(dev_df.index):,}') # Now we will create a vocabulary from the training data. We will only keep the terms that repeat beyond some threshold established below. 
# In[9]: threshold = 10 tokens = train_df['tokens'].explode().value_counts() tokens = tokens[tokens > threshold] id_to_token = ['[UNK]'] + tokens.index.tolist() token_to_id = {w:i for i,w in enumerate(id_to_token)} vocabulary_size = len(id_to_token) print(f'vocabulary size: {vocabulary_size:,}') # In[10]: from collections import defaultdict def make_feature_vector(tokens, unk_id=0): vector = defaultdict(int) for t in tokens: i = token_to_id.get(t, unk_id) vector[i] += 1 return vector train_df['features'] = train_df['tokens'].progress_map(make_feature_vector) dev_df['features'] = dev_df['tokens'].progress_map(make_feature_vector) train_df # In[11]: from torch.utils.data import Dataset class MyDataset(Dataset): def __init__(self, x, y): self.x = x self.y = y def __len__(self): return len(self.x) def __getitem__(self, index): x = torch.zeros(vocabulary_size, dtype=torch.float32) y = torch.tensor(self.y[index]) for k,v in self.x[index].items(): x[k] = v return x, y # In[12]: from torch import nn import torch.nn.functional as F class Model(nn.Module): def __init__(self, input_dim, hidden_dim, output_dim, dropout): super().__init__() self.layers = nn.Sequential( nn.Dropout(dropout), nn.Linear(input_dim, hidden_dim), nn.ReLU(), nn.Dropout(dropout), nn.Linear(hidden_dim, output_dim), ) def forward(self, x): return self.layers(x) # In[13]: from torch import optim from torch.utils.data import DataLoader from sklearn.metrics import accuracy_score # hyperparameters lr = 1e-3 weight_decay = 1e-5 batch_size = 500 shuffle = True n_epochs = 5 input_dim = vocabulary_size hidden_dim = 50 output_dim = len(labels) dropout = 0.3 # initialize the model, loss function, optimizer, and data-loader model = Model(input_dim, hidden_dim, output_dim, dropout).to(device) loss_func = nn.CrossEntropyLoss() optimizer = optim.Adam( model.parameters(), lr=lr, weight_decay=weight_decay) train_ds = MyDataset( train_df['features'], train_df['class index'] - 1) train_dl = DataLoader( train_ds, batch_size=batch_size, shuffle=shuffle) dev_ds = MyDataset( dev_df['features'], dev_df['class index'] - 1) dev_dl = DataLoader( dev_ds, batch_size=batch_size, shuffle=shuffle) # lists used to store plotting data train_loss, train_acc = [], [] dev_loss, dev_acc = [], [] # In[14]: # train the model for epoch in range(n_epochs): losses, acc = [], [] # set model to training mode model.train() for X, y_true in tqdm(train_dl, desc=f'epoch {epoch+1} (train)'): # clear gradients model.zero_grad() # send batch to right device X = X.to(device) y_true = y_true.to(device) # predict label scores y_pred = model(X) # compute loss loss = loss_func(y_pred, y_true) # compute accuracy gold = y_true.detach().cpu().numpy() pred = np.argmax(y_pred.detach().cpu().numpy(), axis=1) # accumulate for plotting losses.append(loss.detach().cpu().item()) acc.append(accuracy_score(gold, pred)) # backpropagate loss.backward() # optimize model parameters optimizer.step() # save epoch stats train_loss.append(np.mean(losses)) train_acc.append(np.mean(acc)) # set model to evaluation mode model.eval() # disable gradient calculation with torch.no_grad(): losses, acc = [], [] for X, y_true in tqdm(dev_dl, desc=f'epoch {epoch+1} (dev)'): # send batch to right device X = X.to(device) y_true = y_true.to(device) # predict label scores y_pred = model(X) # compute loss loss = loss_func(y_pred, y_true) # compute accuracy gold = y_true.cpu().numpy() pred = np.argmax(y_pred.cpu().numpy(), axis=1) # accumulate for plotting losses.append(loss.cpu().item()) acc.append(accuracy_score(gold, 
pred)) # save epoch stats dev_loss.append(np.mean(losses)) dev_acc.append(np.mean(acc)) # In[15]: import matplotlib.pyplot as plt x = np.arange(n_epochs) + 1 plt.plot(x, train_loss) plt.plot(x, dev_loss) plt.legend(['train', 'dev']) plt.xlabel('epoch') plt.ylabel('loss') plt.grid(True) # In[16]: plt.plot(x, train_acc) plt.plot(x, dev_acc) plt.legend(['train', 'dev']) plt.xlabel('epoch') plt.ylabel('accuracy') plt.grid(True) # ## Evaluate on test dataset # In[17]: # repeat all preprocessing done above, this time on the test set test_df = pd.read_csv('data/ag_news_csv/test.csv', header=None) test_df.columns = ['class index', 'title', 'description'] test_df['text'] = test_df['title'].str.lower() + " " + test_df['description'].str.lower() test_df['text'] = test_df['text'].str.replace('\\', ' ', regex=False) test_df['tokens'] = test_df['text'].progress_map(word_tokenize) test_df['features'] = test_df['tokens'].progress_map(make_feature_vector) test_df # In[18]: from sklearn.metrics import classification_report # set model to evaluation mode model.eval() dataset = MyDataset(test_df['features'], test_df['class index'] - 1) data_loader = DataLoader(dataset, batch_size=batch_size) y_pred = [] # disable gradient calculation with torch.no_grad(): for X, _ in tqdm(data_loader): X = X.to(device) # predict one class per example y = torch.argmax(model(X), dim=1) # convert tensor to numpy array y_pred.append(y.cpu().numpy()) # print results y_true = dataset.y y_pred = np.concatenate(y_pred) print(classification_report(y_true, y_pred, target_names=labels)) # In[19]: from sklearn.metrics import ConfusionMatrixDisplay, confusion_matrix cm = confusion_matrix(y_true, y_pred, normalize='true') disp = ConfusionMatrixDisplay(confusion_matrix=cm, display_labels=labels) fig, ax = plt.subplots(figsize=(4,4)) disp.plot(cmap='Blues', values_format='.2f', colorbar=False, ax=ax, xticks_rotation=45) # In[ ]:
2,796
2,858
1
chap07-2
chap07-2
7 Implementing Text Classification with Feed Forward Networks In this chapter we provide an implementation of the multilayer neural network described in Chapter 5, along with several of the best practices discussed in Chapter 6. Remaining fairly simple, our network will consist of three neuron layers that are fully connected: an input layer that stores the input features, a hidden intermediate layer, and an output layer that produces the scores for each class to be learned. In between these layers we will include dropout and a nonlinearity (ReLU). Sidebar 7.1 The PyTorch Linear layer implements the connections between layers of neurons. Before discussing the implementation of more complex neural architectures in PyTorch, it is important to address one potential source of confusion. In PyTorch, the Linear layer implements the connections between two layers of neurons rather than an actual neuron layer. That is, a Linear object contains the weights Wl+1 that connect the neurons in layer l with the neurons in layer l + 1 in Figure 5.2. This is why the Linear constructor includes two dimensions: one for the input neuron layer (in_features) and one for the output neuron layer (out_features). Optionally, if the parameter bias is set to True, the corresponding Linear object also contains the bias weights for the output neurons, i.e., bl+1 in Figure 5.2. Thus, in our Model with three neuron layers, we will have two Linear objects. To stay close to the code, from this point forward when we mention the term layer in the implementation chapters, we refer to a PyTorch Linear layer, unless stated otherwise. 109 110 Implementing Text Classification with Feed Forward Networks Further, we make use of two PyTorch classes: a Dataset and a DataLoader. The advantage of using these classes is that they make several things easy, including data shuffling and batching. Lastly, since the classifier’s architecture has become more complex, for optimization we transition from stochastic gradient descent to the Adam optimizer to take advantage of its additional features such as momentum, and L2 regularization. As before, the code from this chapter is available in a Jupyter notebook: chap7_ffnn. 7.1 Data In this chapter we continue to use the AG News Dataset (Section 4.2.1), including the same loading and preprocessing steps. Also, we continue using the same train and test sets to be able to compare results to the ones obtained in Section 4.2. However, in this chapter we will make use of a development set to tune the model’s hyper parameters. For this purpose, we split the training set in two: 80% of the examples become a new training set, while the other 20% are the development set: In the code above we used scikit-learn’s train_test_split function to split the training set into a development partition and a new training partition. Note that this function can split Python lists, NumPy arrays, and even Pandas dataframes. The returned dataframes preserve the index of the original training dataframe, which can be useful to keep the connection to the original data, but is not what we currently need, as we are trying to create two independent datasets. Therefore, we reset the index of the two new dataframes. A second difference to what was done in Section 4.2 is the introduction of mini-batches. PyTorch provides the DataLoader1 class which can be used for shuffling the data and splitting it into mini-batches. 
In order to create a DataLoader, we need the data to be in the form of a PyTorch Dataset.2 There are two main types of PyTorch datasets: map-style and iterable-style. We will use the former, as it is simpler and meets our needs, but it is good to know that the other option is available for situations when, for example, you need to stream data from a remote source or random access is expensive. To create a map-style dataset we need to subclass torch.utils.data. Dataset and override its __getitem__() method (to return an example given a 1 https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader 2 https://pytorch.org/docs/stable/data.html#torch.utils.data.Dataset 7.2 Fully-Connected Neural Network 111 key), as well as its __len__() method (to return the number of examples in the dataset). Our dataset implementation stores two sequences: one for holding the features, and another for storing the corresponding labels. In our implementation we store two Pandas Series, but Python lists or NumPy arrays would also work. The implementation __len__() is trivial: we simply return the length of the feature sequence, that is, the number of feature vectors. The implementation of __getitem__() is slightly more involved. Recall that each of our feature vectors is represented as a dictionary with word ids as keys, and word counts as values, and any word id not in the dictionary has a count of zero. Our __getitem__() method transforms this representation into one that PyTorch can use. We first create two PyTorch tensors, one for the label and one for the features, which is initially populated with zeros. Then, we retrieve the feature dictionary corresponding to the provided index, and, for each key-value pair in the feature dictionary, we update the corresponding element of the tensor. Once this is complete, we return the feature and label tensors for the datum: 7.2 Fully-Connected Neural Network Having completed the Dataset implementation, we next implement the model, i.e., a fully-connected neural network with two layers.3 In Section 4.2 we used a Linear module directly to implement the simpler models discussed there. This time, we will demonstrate how to implement a model as a new module, by subclassing torch.nn. Module. Although this is not necessary for this model, as it can be represented by a Sequential module, as models get more complex, it becomes helpful to encapsulate their behavior. To implement a Module, we need to implement the constructor and override the forward() method. Note that, in our constructor below, before initializing the object fields, we invoke the constructor of the parent class (i.e., Module) with the line super().__init__(). This allows PyTorch to set up the mechanisms through which any layers defined as attributes in the constructor are properly registered as model parameters. In our example, a Sequential instance is assigned to self.layers; this is enough for our model instance to know about it during back-propagation and parameter updating. 3 Recall that layer here refers to the PyTorch Linear layer that contains the connections between two neuron layers. See Sidebar 7.1 for more details. 112 Implementing Text Classification with Feed Forward Networks Here, our model consists of two linear layers, each one preceded by a dropout layer (which drops out input neurons from the corresponding linear layer). 
The input of the first linear layer has the same size as our vocabulary, and its output has the dimension of the hidden neuron layer (please see Section 5.1 for a refresher on the architecture of the feed-forward neural network). Consequently, the input size of the second linear layer is equal to the size of the hidden layer, and its output size is the number of classes. Additionally, between the two linear layers we add a ReLU nonlinearity. Nonlinearities such as ReLU are necessary to guarantee that the neural network can learn non-linear decision boundaries (see Chapter 5 for an extended discussion on this topic). Nonlinearities can be added after each network layer, but the one after the output layer is typically omitted, because a softmax or sigmoid function usually follows it; in PyTorch, the nn.CrossEntropyLoss criterion, which we also use in this chapter, includes such a softmax. All of the model layers are wrapped in a Sequential module, which simply connects the output of one layer to the input of the next.

The second method we need to implement is the forward() method, which defines how the model applies its layers to a given input during the forward pass. Our forward() method simply calls the sequential layer and returns its output. Note that while this method implements the model's forward pass, in general it should not be called directly by the user. Instead, the user should use the model as though it were a function (technically, invoking the __call__() method), and let PyTorch call forward() internally. This allows PyTorch to activate necessary features such as module hooks correctly.

7.3 Training

In order to train our model, we first initialize the hyper parameters and the different components we need: model, loss function, optimizer, dataset, and data loader. Notable differences with respect to Section 4.2 are the use of the Adam optimizer with weight decay (weight decay is simply what PyTorch calls L2 regularization; see Chapter 6), and the use of a DataLoader with shuffling and mini-batches of 500 examples. We encourage you to take the time to examine the values we use for the hyper parameters, and to experiment with modifying them in the Jupyter notebook.

The basic steps of the learning loop are the same as those in Section 4.2, except that we now use a development set to keep track of the performance of the current model after each training epoch. One important difference between using our model during training and evaluation is that, before each training epoch, we need to set the model to training mode using the train() method, and before evaluating on the development set, we need to set it to evaluation mode using the eval() method. This is important because some layers behave differently depending on whether the model is in training or evaluation mode. In our model, this is the case for the Dropout layer, which randomly zeroes some of its input elements during training and scales its outputs accordingly (see Section 6.6), but does nothing during evaluation.

In order to plot some relevant statistics acquired from the training data, we collect the current loss and accuracy for each mini-batch. Note that we call detach() on the tensors corresponding to the loss and the predicted/gold labels so that they are no longer considered when computing gradients. Calling cpu() copies the tensors from the GPU to the CPU if we are using the GPU; otherwise it does nothing.
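The initialization code and the training loop from the notebook are shown below; the detach(), cpu(), numpy(), and item() calls discussed here and in the following paragraph appear inside the loop. The variables device and labels, as well as np (NumPy) and tqdm, are set up in the notebook's earlier cells.

    from torch import optim
    from torch.utils.data import DataLoader
    from sklearn.metrics import accuracy_score

    # hyper parameters
    lr = 1e-3
    weight_decay = 1e-5
    batch_size = 500
    shuffle = True
    n_epochs = 5
    input_dim = vocabulary_size
    hidden_dim = 50
    output_dim = len(labels)
    dropout = 0.3

    # initialize the model, loss function, optimizer, and data loaders
    model = Model(input_dim, hidden_dim, output_dim, dropout).to(device)
    loss_func = nn.CrossEntropyLoss()
    optimizer = optim.Adam(model.parameters(), lr=lr, weight_decay=weight_decay)

    train_ds = MyDataset(train_df['features'], train_df['class index'] - 1)
    train_dl = DataLoader(train_ds, batch_size=batch_size, shuffle=shuffle)
    dev_ds = MyDataset(dev_df['features'], dev_df['class index'] - 1)
    dev_dl = DataLoader(dev_ds, batch_size=batch_size, shuffle=shuffle)

    # lists used to store plotting data
    train_loss, train_acc = [], []
    dev_loss, dev_acc = [], []

And the training loop itself, including the per-epoch evaluation on the development set:

    for epoch in range(n_epochs):
        losses, acc = [], []
        # set model to training mode
        model.train()
        for X, y_true in tqdm(train_dl, desc=f'epoch {epoch+1} (train)'):
            # clear gradients
            model.zero_grad()
            # send batch to the right device
            X = X.to(device)
            y_true = y_true.to(device)
            # predict label scores
            y_pred = model(X)
            # compute loss
            loss = loss_func(y_pred, y_true)
            # compute accuracy
            gold = y_true.detach().cpu().numpy()
            pred = np.argmax(y_pred.detach().cpu().numpy(), axis=1)
            # accumulate statistics for plotting
            losses.append(loss.detach().cpu().item())
            acc.append(accuracy_score(gold, pred))
            # backpropagate
            loss.backward()
            # optimize model parameters
            optimizer.step()
        # save epoch stats
        train_loss.append(np.mean(losses))
        train_acc.append(np.mean(acc))

        # set model to evaluation mode
        model.eval()
        # disable gradient calculation
        with torch.no_grad():
            losses, acc = [], []
            for X, y_true in tqdm(dev_dl, desc=f'epoch {epoch+1} (dev)'):
                X = X.to(device)
                y_true = y_true.to(device)
                y_pred = model(X)
                loss = loss_func(y_pred, y_true)
                gold = y_true.cpu().numpy()
                pred = np.argmax(y_pred.cpu().numpy(), axis=1)
                losses.append(loss.cpu().item())
                acc.append(accuracy_score(gold, pred))
            # save epoch stats
            dev_loss.append(np.mean(losses))
            dev_acc.append(np.mean(acc))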
Calling numpy() converts the PyTorch tensor into a NumPy array. Unlike the predictions, which are represented as a vector of label scores, the loss is a scalar; for this reason, we retrieve it as a Python number using the item() method. When evaluating on the development set, since we do not need to compute gradients, we save computation by wrapping the steps in a torch.no_grad() context manager. Since we are not learning, we do not perform back-propagation or invoke the optimizer.

After completing training we have gathered the loss and accuracy values for each epoch, for both the training and development partitions. Next, we plot these values in order to visualize the classifier's progress over time. Plots such as these are important for determining how well our model is learning, which informs decisions about adjusting hyper parameters or modifying the model's architecture. Here we discuss only the plot for the loss; plotting the accuracy is very similar, and the corresponding code, as well as both plots, is available in the Jupyter notebook.

The loss plot indicates that both the training and development losses decrease over time. This is good! It indicates that our classifier is neither overfitting nor underfitting. Recall from Chapter 2 that overfitting happens when a classifier performs well in training but poorly on unseen data. In the loss plot, this would be indicated by a training loss that continues to decrease while the development loss does not. Underfitting happens when a classifier is unable to learn meaningful associations between the input features and the output labels. This would show up as loss curves that do not decrease over time.

This analysis means we are ready to evaluate our trained model on the test set, which must be a truly unseen dataset that was not used for training or to tune hyper parameters. In other words, this experiment will indicate how well our model performs "in the wild." Because we would like these results to be as close as possible to real-world results, the test set should be used sparingly, only after the entire architecture, its trained parameters, and its hyper parameters have been frozen.
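In the notebook, the test-set evaluation mirrors the development evaluation: the test file goes through the same preprocessing as the training data, and predictions are collected with gradient calculation disabled (pd, word_tokenize, and make_feature_vector are defined in the notebook's earlier preprocessing cells):

    # repeat the preprocessing done above, this time on the test set
    test_df = pd.read_csv('data/ag_news_csv/test.csv', header=None)
    test_df.columns = ['class index', 'title', 'description']
    test_df['text'] = test_df['title'].str.lower() + " " + test_df['description'].str.lower()
    test_df['text'] = test_df['text'].str.replace('\\', ' ', regex=False)
    test_df['tokens'] = test_df['text'].progress_map(word_tokenize)
    test_df['features'] = test_df['tokens'].progress_map(make_feature_vector)

    from sklearn.metrics import classification_report

    # set model to evaluation mode
    model.eval()

    dataset = MyDataset(test_df['features'], test_df['class index'] - 1)
    data_loader = DataLoader(dataset, batch_size=batch_size)

    y_pred = []
    # disable gradient calculation
    with torch.no_grad():
        for X, _ in tqdm(data_loader):
            X = X.to(device)
            # predict one class per example
            y = torch.argmax(model(X), dim=1)
            # convert tensor to numpy array
            y_pred.append(y.cpu().numpy())

    # print precision, recall, and F1 for each class
    y_true = dataset.y
    y_pred = np.concatenate(y_pred)
    print(classification_report(y_true, y_pred, target_names=labels))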
With our feed-forward neural architecture we achieve an accuracy of 92%, which is a substantial improvement over the 88% accuracy we obtained in Section 4.2. We strongly suggest that you experiment not only with the different hyper parameters, but also with different model architectures in the Jupyter notebook. Such exercises will help you develop an intuition about the effects of each design choice, as well as how these decisions interact with each other.

7.4 Summary

In this chapter we have shown how to implement a feed-forward neural network in PyTorch. We have also introduced several PyTorch features that encourage and simplify deep learning best practices. In particular, the built-in Dataset and DataLoader classes make mini-batching straightforward while still allowing for customization such as sampling. The ability to create a custom Dataset object allows us to handle complex data and still have access to the features of a DataLoader. By convention, all the components provided by PyTorch are batch-aware and assume that the first dimension refers to the batch size, which simplifies model implementation and improves readability.

In building the model itself, we also saw that PyTorch uses layer modularization: both the network layers themselves and the operations on them (such as dropout and activation functions) are modeled as layers in a pipeline. This makes it easy to interweave network layers, add various operations between them, and swap activation functions as desired. Weight initialization is handled automatically when the layers are created, but can be customized as needed. Further, one can tailor the training process in PyTorch by adding momentum, adaptive learning rates, and regularization through optimizer selection and configuration. In this chapter we used the Adam optimizer, which, in the authors' experience, is a good default choice, but there are many other optimizers to choose from. We recommend consulting the PyTorch documentation on optimizers for more details: https://pytorch.org/docs/stable/optim.html.
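As a minimal, illustrative sketch (the hyper parameter values here are arbitrary and not tuned for this task), switching to a different optimizer only requires changing the line that constructs it; for example, plain SGD with momentum could replace Adam while the rest of the training loop stays unchanged:

    # hypothetical alternative to the Adam optimizer used in this chapter
    optimizer = optim.SGD(
        model.parameters(),
        lr=0.1,             # SGD often needs a larger learning rate than Adam
        momentum=0.9,       # momentum term
        weight_decay=1e-5,  # L2 regularization, as before
    )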
3,891
3,917
4
chap07-5
chap07-5
7 Implementing Text Classification with Feed Forward Networks In this chapter we provide an implementation of the multilayer neural network described in Chapter 5, along with several of the best practices discussed in Chapter 6. While remaining fairly simple, our network will consist of three neuron layers that are fully connected: an input layer that stores the input features, a hidden intermediate layer, and an output layer that produces the scores for each class to be learned. In between these layers we will include dropout and a nonlinearity (ReLU). Sidebar 7.1 The PyTorch Linear layer implements the connections between layers of neurons. Before discussing the implementation of more complex neural architectures in PyTorch, it is important to address one potential source of confusion. In PyTorch, the Linear layer implements the connections between two layers of neurons rather than an actual neuron layer. That is, a Linear object contains the weights W_{l+1} that connect the neurons in layer l with the neurons in layer l + 1 in Figure 5.2. This is why the Linear constructor includes two dimensions: one for the input neuron layer (in_features) and one for the output neuron layer (out_features). Optionally, if the parameter bias is set to True, the corresponding Linear object also contains the bias weights for the output neurons, i.e., b_{l+1} in Figure 5.2. Thus, in our Model with three neuron layers, we will have two Linear objects. To stay close to the code, from this point forward when we mention the term layer in the implementation chapters, we refer to a PyTorch Linear layer, unless stated otherwise. Further, we make use of two PyTorch classes: a Dataset and a DataLoader. The advantage of using these classes is that they make several things easy, including data shuffling and batching. Lastly, since the classifier's architecture has become more complex, for optimization we transition from stochastic gradient descent to the Adam optimizer to take advantage of its additional features such as momentum and L2 regularization. As before, the code from this chapter is available in a Jupyter notebook: chap7_ffnn. 7.1 Data In this chapter we continue to use the AG News Dataset (Section 4.2.1), including the same loading and preprocessing steps. Also, we continue using the same train and test sets to be able to compare results to the ones obtained in Section 4.2. However, in this chapter we will make use of a development set to tune the model's hyperparameters. For this purpose, we split the training set in two: 80% of the examples become a new training set, while the other 20% are the development set: In the code above we used scikit-learn's train_test_split function to split the training set into a development partition and a new training partition. Note that this function can split Python lists, NumPy arrays, and even Pandas dataframes. The returned dataframes preserve the index of the original training dataframe, which can be useful to keep the connection to the original data, but is not what we currently need, as we are trying to create two independent datasets. Therefore, we reset the index of the two new dataframes. A second difference from what was done in Section 4.2 is the introduction of mini-batches. PyTorch provides the DataLoader class (https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader), which can be used for shuffling the data and splitting it into mini-batches.
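The following minimal sketch illustrates the point made in Sidebar 7.1: a Linear object stores the weights W_{l+1} and bias b_{l+1} that connect two neuron layers. The sizes used here (five input neurons, three output neurons) are arbitrary toy values chosen only for the example.

import torch
from torch import nn

# a Linear object holds the connections between two neuron layers:
# here, 5 input neurons and 3 output neurons
layer = nn.Linear(in_features=5, out_features=3, bias=True)

print(layer.weight.shape)  # torch.Size([3, 5]), i.e., W_{l+1}
print(layer.bias.shape)    # torch.Size([3]),    i.e., b_{l+1}

# applying the layer to a batch of two 5-dimensional inputs
x = torch.randn(2, 5)
print(layer(x).shape)      # torch.Size([2, 3])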
In order to create a DataLoader, we need the data to be in the form of a PyTorch Dataset (https://pytorch.org/docs/stable/data.html#torch.utils.data.Dataset). There are two main types of PyTorch datasets: map-style and iterable-style. We will use the former, as it is simpler and meets our needs, but it is good to know that the other option is available for situations when, for example, you need to stream data from a remote source or random access is expensive. To create a map-style dataset we need to subclass torch.utils.data.Dataset and override its __getitem__() method (to return an example given a key), as well as its __len__() method (to return the number of examples in the dataset). Our dataset implementation stores two sequences: one for holding the features, and another for storing the corresponding labels. In our implementation we store two Pandas Series, but Python lists or NumPy arrays would also work. The implementation of __len__() is trivial: we simply return the length of the feature sequence, that is, the number of feature vectors. The implementation of __getitem__() is slightly more involved. Recall that each of our feature vectors is represented as a dictionary with word ids as keys and word counts as values, and that any word id not in the dictionary has a count of zero. Our __getitem__() method transforms this representation into one that PyTorch can use. We first create two PyTorch tensors, one for the label and one for the features; the latter is initially populated with zeros. Then, we retrieve the feature dictionary corresponding to the provided index, and, for each key-value pair in the feature dictionary, we update the corresponding element of the feature tensor. Once this is complete, we return the feature and label tensors for the datum: 7.2 Fully-Connected Neural Network Having completed the Dataset implementation, we next implement the model, i.e., a fully-connected neural network with two layers. (Recall that layer here refers to the PyTorch Linear layer that contains the connections between two neuron layers; see Sidebar 7.1 for more details.) In Section 4.2 we used a Linear module directly to implement the simpler models discussed there. This time, we will demonstrate how to implement a model as a new module, by subclassing torch.nn.Module. Although this is not necessary for this model, which could be represented by a Sequential module alone, encapsulating a model's behavior in its own class becomes helpful as models get more complex. To implement a Module, we need to implement the constructor and override the forward() method. Note that, in our constructor below, before initializing the object fields, we invoke the constructor of the parent class (i.e., Module) with the line super().__init__(). This allows PyTorch to set up the mechanisms through which any layers defined as attributes in the constructor are properly registered as model parameters. In our example, a Sequential instance is assigned to self.layers; this is enough for our model instance to know about it during back-propagation and parameter updating. Here, our model consists of two linear layers, each one preceded by a dropout layer (which drops out input neurons from the corresponding linear layer).
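Since the notebook in this chapter only implements a map-style dataset, the sketch below shows what the other option mentioned above, an iterable-style dataset, might look like. The class name and the in-memory toy source are hypothetical; in practice the source would be, for example, a file read line by line or a remote stream.

import torch
from torch.utils.data import IterableDataset, DataLoader

class StreamingDataset(IterableDataset):
    """Iterable-style dataset: examples are yielded, not indexed."""
    def __init__(self, source):
        self.source = source

    def __iter__(self):
        for features, label in self.source:
            yield torch.tensor(features, dtype=torch.float32), torch.tensor(label)

# toy in-memory source standing in for a stream
toy_source = [([1.0, 0.0, 2.0], 0), ([0.0, 3.0, 1.0], 1)]
loader = DataLoader(StreamingDataset(toy_source), batch_size=2)
for x, y in loader:
    print(x.shape, y)  # torch.Size([2, 3]) tensor([0, 1])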
The input of the first linear layer has the same size as our vocabulary, and its output has the dimension of the hidden neuron layer (please see Section 5.1 for a refresher on the architecture of the feed-forward neural network). Consequently, the input size of the second linear layer is equal to the size of the hidden layer, and its output size is the number of classes. Additionally, between the two linear layers we add a ReLU nonlinearity. (Nonlinearities such as the ReLU function used here are necessary to guarantee that the neural network can learn non-linear decision boundaries; see Chapter 5 for an extended discussion on this topic. Further, nonlinearities can be added after each network layer, but the one after the output layer is typically omitted, because a softmax or sigmoid function usually follows it. In PyTorch, the nn.CrossEntropyLoss criterion, which we also use in this chapter, includes such a softmax function.) All of the model layers are wrapped in a Sequential module, which simply connects the output of one layer to the input of the next. The second method we need to implement is the forward() method, which defines how the model applies its layers to a given input during the forward pass. Our forward() method simply calls the sequential layer and returns its output. Note that while this method implements the model's forward pass, in general, it should not be called directly by the user. Instead, the user should use the model as though it were a function (technically, invoking the __call__() method), and let PyTorch call the forward() method internally. This allows PyTorch to activate necessary features such as module hooks correctly. 7.3 Training In order to train our model, we will first initialize the hyperparameters and the different components we need: model, loss function, optimizer, dataset, and data loader. Notable differences with respect to Section 4.2 are the use of the Adam optimizer with a weight decay (this is just what PyTorch calls L2 regularization; see Chapter 6), and the use of a data loader with shuffling and batches of 500 examples. We encourage you to take the time to examine the values we use for the hyperparameters, and to experiment with modifying them in the Jupyter notebook. The basic steps of the learning loop are the same as those in Section 4.2, except that we are now using a development set to keep track of the performance of the current model after each training epoch. One important difference between using our model during training and evaluation is that, before each training epoch, we need to set the model to training mode using the train() method, and before evaluating on the development set, we need to set the model to evaluation mode using the eval() method. This is important because some layers have different behavior depending on whether the model is in training or evaluation mode. In our model, this is the case for the Dropout layer, which randomly zeroes some of its input elements during training and scales its outputs accordingly (see Section 6.6), but during evaluation does nothing. In order to plot some relevant statistics acquired from the training data, we collect the current loss and accuracy for each mini-batch. Note that we call detach() on the tensors corresponding to the loss and the predicted/gold labels so that they are no longer considered when computing gradients. Calling cpu() copies the tensors from the GPU to the CPU if we are using the GPU; otherwise it does nothing.
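The parenthetical note above states that nn.CrossEntropyLoss applies the softmax (more precisely, log-softmax) internally, which is why our model outputs raw scores without a final softmax layer. The small check below, with made-up logits for a batch of one example and four classes, verifies that claim numerically.

import torch
from torch import nn
import torch.nn.functional as F

logits = torch.tensor([[2.0, 0.5, -1.0, 0.0]])  # raw scores: 1 example, 4 classes
target = torch.tensor([0])                      # gold label for that example

ce = nn.CrossEntropyLoss()(logits, target)
# equivalent computation: log-softmax followed by negative log-likelihood
manual = F.nll_loss(F.log_softmax(logits, dim=1), target)
print(torch.allclose(ce, manual))  # True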
Calling numpy() converts the PyTorch tensor into a NumPy array. Unlike the predictions, which are represented as a vector of label scores per example, the loss is a scalar. For this reason, we retrieve it as a Python number using the item() method. When evaluating on the development set, since we do not need to compute the gradients, we save computation by wrapping the steps in a torch.no_grad() context manager. Since we are not learning, we do not perform back-propagation or invoke the optimizer. After completing training, we have gathered the loss and accuracy values after each epoch for both the training and development partitions. Next, we plot these values in order to visualize the classifier's progress over time. Plots such as these are important to determine how well our model is learning, which informs decisions regarding adjusting hyperparameters or modifying the model's architecture. Below we only show the plot for the loss. Plotting the accuracy is very similar; the corresponding code, as well as the plot itself, is available in the Jupyter notebook. The plot indicates that both the training and development losses decrease over time. This is good! It indicates that our classifier is neither overfitting nor underfitting. Recall from Chapter 2 that overfitting happens when a classifier performs well in training, but poorly on unseen data. In the plot above, this would be indicated by a training loss that continues to decrease but is associated with a development loss that does not. Underfitting happens when a classifier is unable to learn meaningful associations between the input features and the output labels. In this plot, it would be shown as loss curves that do not decrease over time. This analysis means we are ready to evaluate our trained model on the test set, which must be a truly unseen dataset that was not used for training or to tune hyperparameters. In other words, this experiment will indicate how well our model performs “in the wild.” Because we would like these results to be as close as possible to real-world results, the test set should be used sparingly, only after the entire architecture, its trained parameters, and its hyperparameters have been frozen. With our feed-forward neural architecture we have achieved an accuracy of 92%, which is a substantial improvement over the 88% accuracy we obtained in Section 4.2. We strongly suggest that you experiment not only with the different hyperparameters, but also with different model architectures in the Jupyter notebook. Such exercises will help you develop an intuition about the different effects each design choice has, as well as how these decisions interact with each other. 7.4 Summary In this chapter we have shown how to implement a feed-forward neural network in PyTorch. We have also introduced several PyTorch features that encourage and simplify deep learning best practices. In particular, the built-in Dataset and DataLoader classes make mini-batching straightforward while still allowing for customization such as sampling. The ability to create a custom Dataset object allows us to handle complex data and still have access to the features of a DataLoader. By convention, all the components provided by PyTorch are batch-aware and assume that the first dimension refers to the batch size, simplifying model implementation and improving readability.
In building the model itself, we also saw that PyTorch uses layer modularization, i.e., both the network layers themselves and operations on them (such as dropout and activation functions) are modeled as layers in a pipeline. This makes it easy to interweave network layers, add various operations between them, and swap activation functions as desired. The weight initialization is also handled automatically when the layers are created, but can be customized as needed. Further, one can tailor the training process in PyTorch by adding momentum, adaptive learning rates, and regularization through optimizer selection and configuration. In this chapter, we used the Adam optimizer, which, in the authors’ experience, is a good default choice, but there are many other optimizers to choose from. We recommend that the reader read the PyTorch documentation on optimizers for more details: https://pytorch.org/docs/stable/optim.html.
6,732
6,884
#!/usr/bin/env python # coding: utf-8 # # Text Classification with a Feed-forward Neural Network and BOW features # First, we will do some initialization. # In[1]: import random import torch import numpy as np import pandas as pd from tqdm.notebook import tqdm # enable tqdm in pandas tqdm.pandas() # set to True to use the gpu (if there is one available) use_gpu = True # select device device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu') print(f'device: {device.type}') # random seed seed = 1234 # set random seed if seed is not None: print(f'random seed: {seed}') random.seed(seed) np.random.seed(seed) torch.manual_seed(seed) # We will be using the AG's News Topic Classification Dataset. # It is stored in two CSV files: `train.csv` and `test.csv`, as well as a `classes.txt` that stores the labels of the classes to predict. # # First, we will load the training dataset using [pandas](https://pandas.pydata.org/) and take a quick look at the data. # In[2]: train_df = pd.read_csv('data/ag_news_csv/train.csv', header=None) train_df.columns = ['class index', 'title', 'description'] train_df # The dataset consists of 120,000 examples, each with a class index, a title, and a description. # The class labels are distributed in a separate file. We will add the labels to the dataset so that we can interpret the data more easily. Note that the label indexes are one-based, so we need to subtract one to retrieve them from the list. # In[3]: labels = open('data/ag_news_csv/classes.txt').read().splitlines() classes = train_df['class index'].map(lambda i: labels[i-1]) train_df.insert(1, 'class', classes) train_df # Let's inspect how balanced our examples are by using a bar plot. # In[4]: pd.value_counts(train_df['class']).plot.bar() # The classes are evenly distributed. That's great! # # However, the text contains some spurious backslashes in some parts of the text. # They are meant to represent newlines in the original text. # An example can be seen below, between the words "dwindling" and "band". # In[5]: print(train_df.loc[0, 'description']) # We will combine the lowercased title and description into a single text column, and replace the backslashes with spaces using the pandas str.replace() method. # In[6]: train_df['text'] = train_df['title'].str.lower() + " " + train_df['description'].str.lower() train_df['text'] = train_df['text'].str.replace('\\', ' ', regex=False) train_df # Now we will proceed to tokenize the text column using NLTK's word_tokenize(). # We will add a new column to our dataframe with the list of tokens. # In[7]: from nltk.tokenize import word_tokenize train_df['tokens'] = train_df['text'].progress_map(word_tokenize) train_df # In[8]: from sklearn.model_selection import train_test_split train_df, dev_df = train_test_split(train_df, train_size=0.8) train_df.reset_index(inplace=True) dev_df.reset_index(inplace=True) print(f'train rows: {len(train_df.index):,}') print(f'dev rows: {len(dev_df.index):,}') # Now we will create a vocabulary from the training data. We will only keep the terms that repeat beyond some threshold established below.
# In[9]: threshold = 10 tokens = train_df['tokens'].explode().value_counts() tokens = tokens[tokens > threshold] id_to_token = ['[UNK]'] + tokens.index.tolist() token_to_id = {w:i for i,w in enumerate(id_to_token)} vocabulary_size = len(id_to_token) print(f'vocabulary size: {vocabulary_size:,}') # In[10]: from collections import defaultdict def make_feature_vector(tokens, unk_id=0): vector = defaultdict(int) for t in tokens: i = token_to_id.get(t, unk_id) vector[i] += 1 return vector train_df['features'] = train_df['tokens'].progress_map(make_feature_vector) dev_df['features'] = dev_df['tokens'].progress_map(make_feature_vector) train_df # In[11]: from torch.utils.data import Dataset class MyDataset(Dataset): def __init__(self, x, y): self.x = x self.y = y def __len__(self): return len(self.x) def __getitem__(self, index): x = torch.zeros(vocabulary_size, dtype=torch.float32) y = torch.tensor(self.y[index]) for k,v in self.x[index].items(): x[k] = v return x, y # In[12]: from torch import nn import torch.nn.functional as F class Model(nn.Module): def __init__(self, input_dim, hidden_dim, output_dim, dropout): super().__init__() self.layers = nn.Sequential( nn.Dropout(dropout), nn.Linear(input_dim, hidden_dim), nn.ReLU(), nn.Dropout(dropout), nn.Linear(hidden_dim, output_dim), ) def forward(self, x): return self.layers(x) # In[13]: from torch import optim from torch.utils.data import DataLoader from sklearn.metrics import accuracy_score # hyperparameters lr = 1e-3 weight_decay = 1e-5 batch_size = 500 shuffle = True n_epochs = 5 input_dim = vocabulary_size hidden_dim = 50 output_dim = len(labels) dropout = 0.3 # initialize the model, loss function, optimizer, and data-loader model = Model(input_dim, hidden_dim, output_dim, dropout).to(device) loss_func = nn.CrossEntropyLoss() optimizer = optim.Adam( model.parameters(), lr=lr, weight_decay=weight_decay) train_ds = MyDataset( train_df['features'], train_df['class index'] - 1) train_dl = DataLoader( train_ds, batch_size=batch_size, shuffle=shuffle) dev_ds = MyDataset( dev_df['features'], dev_df['class index'] - 1) dev_dl = DataLoader( dev_ds, batch_size=batch_size, shuffle=shuffle) # lists used to store plotting data train_loss, train_acc = [], [] dev_loss, dev_acc = [], [] # In[14]: # train the model for epoch in range(n_epochs): losses, acc = [], [] # set model to training mode model.train() for X, y_true in tqdm(train_dl, desc=f'epoch {epoch+1} (train)'): # clear gradients model.zero_grad() # send batch to right device X = X.to(device) y_true = y_true.to(device) # predict label scores y_pred = model(X) # compute loss loss = loss_func(y_pred, y_true) # compute accuracy gold = y_true.detach().cpu().numpy() pred = np.argmax(y_pred.detach().cpu().numpy(), axis=1) # accumulate for plotting losses.append(loss.detach().cpu().item()) acc.append(accuracy_score(gold, pred)) # backpropagate loss.backward() # optimize model parameters optimizer.step() # save epoch stats train_loss.append(np.mean(losses)) train_acc.append(np.mean(acc)) # set model to evaluation mode model.eval() # disable gradient calculation with torch.no_grad(): losses, acc = [], [] for X, y_true in tqdm(dev_dl, desc=f'epoch {epoch+1} (dev)'): # send batch to right device X = X.to(device) y_true = y_true.to(device) # predict label scores y_pred = model(X) # compute loss loss = loss_func(y_pred, y_true) # compute accuracy gold = y_true.cpu().numpy() pred = np.argmax(y_pred.cpu().numpy(), axis=1) # accumulate for plotting losses.append(loss.cpu().item()) acc.append(accuracy_score(gold, 
pred)) # save epoch stats dev_loss.append(np.mean(losses)) dev_acc.append(np.mean(acc)) # In[15]: import matplotlib.pyplot as plt x = np.arange(n_epochs) + 1 plt.plot(x, train_loss) plt.plot(x, dev_loss) plt.legend(['train', 'dev']) plt.xlabel('epoch') plt.ylabel('loss') plt.grid(True) # In[16]: plt.plot(x, train_acc) plt.plot(x, dev_acc) plt.legend(['train', 'dev']) plt.xlabel('epoch') plt.ylabel('accuracy') plt.grid(True) # ## Evaluate on test dataset # In[17]: # repeat all preprocessing done above, this time on the test set test_df = pd.read_csv('data/ag_news_csv/test.csv', header=None) test_df.columns = ['class index', 'title', 'description'] test_df['text'] = test_df['title'].str.lower() + " " + test_df['description'].str.lower() test_df['text'] = test_df['text'].str.replace('\\', ' ', regex=False) test_df['tokens'] = test_df['text'].progress_map(word_tokenize) test_df['features'] = test_df['tokens'].progress_map(make_feature_vector) test_df # In[18]: from sklearn.metrics import classification_report # set model to evaluation mode model.eval() dataset = MyDataset(test_df['features'], test_df['class index'] - 1) data_loader = DataLoader(dataset, batch_size=batch_size) y_pred = [] # disable gradient calculation with torch.no_grad(): for X, _ in tqdm(data_loader): X = X.to(device) # predict one class per example y = torch.argmax(model(X), dim=1) # convert tensor to numpy array y_pred.append(y.cpu().numpy()) # print results y_true = dataset.y y_pred = np.concatenate(y_pred) print(classification_report(y_true, y_pred, target_names=labels)) # In[19]: from sklearn.metrics import ConfusionMatrixDisplay, confusion_matrix cm = confusion_matrix(y_true, y_pred, normalize='true') disp = ConfusionMatrixDisplay(confusion_matrix=cm, display_labels=labels) fig, ax = plt.subplots(figsize=(4,4)) disp.plot(cmap='Blues', values_format='.2f', colorbar=False, ax=ax, xticks_rotation=45) # In[ ]:
4,455
4,684
5
chap07-7
chap07-7
7 Implementing Text Classification with Feed Forward Networks In this chapter we provide an implementation of the multilayer neural network described in Chapter 5, along with several of the best practices discussed in Chapter 6. Remaining fairly simple, our network will consist of three neuron layers that are fully connected: an input layer that stores the input features, a hidden intermediate layer, and an output layer that produces the scores for each class to be learned. In between these layers we will include dropout and a nonlinearity (ReLU). Sidebar 7.1 The PyTorch Linear layer implements the connections between layers of neurons. Before discussing the implementation of more complex neural architectures in PyTorch, it is important to address one potential source of confusion. In PyTorch, the Linear layer implements the connections between two layers of neurons rather than an actual neuron layer. That is, a Linear object contains the weights Wl+1 that connect the neurons in layer l with the neurons in layer l + 1 in Figure 5.2. This is why the Linear constructor includes two dimensions: one for the input neuron layer (in_features) and one for the output neuron layer (out_features). Optionally, if the parameter bias is set to True, the corresponding Linear object also contains the bias weights for the output neurons, i.e., bl+1 in Figure 5.2. Thus, in our Model with three neuron layers, we will have two Linear objects. To stay close to the code, from this point forward when we mention the term layer in the implementation chapters, we refer to a PyTorch Linear layer, unless stated otherwise. 109 110 Implementing Text Classification with Feed Forward Networks Further, we make use of two PyTorch classes: a Dataset and a DataLoader. The advantage of using these classes is that they make several things easy, including data shuffling and batching. Lastly, since the classifier’s architecture has become more complex, for optimization we transition from stochastic gradient descent to the Adam optimizer to take advantage of its additional features such as momentum, and L2 regularization. As before, the code from this chapter is available in a Jupyter notebook: chap7_ffnn. 7.1 Data In this chapter we continue to use the AG News Dataset (Section 4.2.1), including the same loading and preprocessing steps. Also, we continue using the same train and test sets to be able to compare results to the ones obtained in Section 4.2. However, in this chapter we will make use of a development set to tune the model’s hyper parameters. For this purpose, we split the training set in two: 80% of the examples become a new training set, while the other 20% are the development set: In the code above we used scikit-learn’s train_test_split function to split the training set into a development partition and a new training partition. Note that this function can split Python lists, NumPy arrays, and even Pandas dataframes. The returned dataframes preserve the index of the original training dataframe, which can be useful to keep the connection to the original data, but is not what we currently need, as we are trying to create two independent datasets. Therefore, we reset the index of the two new dataframes. A second difference to what was done in Section 4.2 is the introduction of mini-batches. PyTorch provides the DataLoader1 class which can be used for shuffling the data and splitting it into mini-batches. 
In order to create a DataLoader, we need the data to be in the form of a PyTorch Dataset.2 There are two main types of PyTorch datasets: map-style and iterable-style. We will use the former, as it is simpler and meets our needs, but it is good to know that the other option is available for situations when, for example, you need to stream data from a remote source or random access is expensive. To create a map-style dataset we need to subclass torch.utils.data. Dataset and override its __getitem__() method (to return an example given a 1 https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader 2 https://pytorch.org/docs/stable/data.html#torch.utils.data.Dataset 7.2 Fully-Connected Neural Network 111 key), as well as its __len__() method (to return the number of examples in the dataset). Our dataset implementation stores two sequences: one for holding the features, and another for storing the corresponding labels. In our implementation we store two Pandas Series, but Python lists or NumPy arrays would also work. The implementation __len__() is trivial: we simply return the length of the feature sequence, that is, the number of feature vectors. The implementation of __getitem__() is slightly more involved. Recall that each of our feature vectors is represented as a dictionary with word ids as keys, and word counts as values, and any word id not in the dictionary has a count of zero. Our __getitem__() method transforms this representation into one that PyTorch can use. We first create two PyTorch tensors, one for the label and one for the features, which is initially populated with zeros. Then, we retrieve the feature dictionary corresponding to the provided index, and, for each key-value pair in the feature dictionary, we update the corresponding element of the tensor. Once this is complete, we return the feature and label tensors for the datum: 7.2 Fully-Connected Neural Network Having completed the Dataset implementation, we next implement the model, i.e., a fully-connected neural network with two layers.3 In Section 4.2 we used a Linear module directly to implement the simpler models discussed there. This time, we will demonstrate how to implement a model as a new module, by subclassing torch.nn. Module. Although this is not necessary for this model, as it can be represented by a Sequential module, as models get more complex, it becomes helpful to encapsulate their behavior. To implement a Module, we need to implement the constructor and override the forward() method. Note that, in our constructor below, before initializing the object fields, we invoke the constructor of the parent class (i.e., Module) with the line super().__init__(). This allows PyTorch to set up the mechanisms through which any layers defined as attributes in the constructor are properly registered as model parameters. In our example, a Sequential instance is assigned to self.layers; this is enough for our model instance to know about it during back-propagation and parameter updating. 3 Recall that layer here refers to the PyTorch Linear layer that contains the connections between two neuron layers. See Sidebar 7.1 for more details. 112 Implementing Text Classification with Feed Forward Networks Here, our model consists of two linear layers, each one preceded by a dropout layer (which drops out input neurons from the corresponding linear layer). 
The input of the first linear layer has the same size as our vocabulary, and its output has the dimension of the hidden neuron layer (please see Section 5.1 for a refresher on the architecture of the feed-forward neural network). Consequently, the input size of the second linear layer is equal to the size of the hidden layer, and its output size is the number of classes. Additionally, between the two linear layers we add a ReLU nonlinearity. (Nonlinearities such as ReLU are necessary to guarantee that the neural network can learn non-linear decision boundaries; see Chapter 5 for an extended discussion. Nonlinearities can be added after each network layer, but the output layer is typically omitted because a softmax or sigmoid function usually follows it. In PyTorch, the nn.CrossEntropyLoss loss function, which we also use in this chapter, includes such a softmax.) All of the model layers are wrapped in a Sequential module, which simply connects the output of one layer to the input of the next.

The second method we need to implement is the forward() method, which defines how the model applies its layers to a given input during the forward pass. Our forward() method simply calls the sequential layer and returns its output. Note that while this method implements the model's forward pass, in general it should not be called directly by the user. Instead, the user should use the model as though it were a function (technically, invoking the __call__() method), and let PyTorch call the forward() method internally. This allows PyTorch to activate necessary features such as module hooks correctly.

7.3 Training

In order to train our model, we first initialize the hyper parameters and the different components we need: model, loss function, optimizer, dataset, and data loader. Notable differences with respect to Section 4.2 are the use of the Adam optimizer with a weight decay (this is just what PyTorch calls L2 regularization; see Chapter 6), and the use of a data loader with shuffling and batches of 500 examples. We encourage you to take the time to examine the values we use for the hyper parameters, and to experiment with modifying them in the Jupyter notebook.

The basic steps of the learning loop are the same as those in Section 4.2, except that we now use a development set to keep track of the performance of the current model after each training epoch. One important difference between using our model during training and evaluation is that, before each training epoch, we need to set the model to training mode using the train() method, and before evaluating on the development set, we need to set the model to evaluation mode using the eval() method. This is important because some layers behave differently depending on whether the model is in training or evaluation mode. In our model, this is the case for the Dropout layer, which randomly zeroes some of its input elements during training and scales its outputs accordingly (see Section 6.6), but during evaluation does nothing. The initialization of these components and the learning loop itself, taken from the chapter notebook, are shown below.
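Both listings below come from the chapter notebook, lightly reformatted (the tqdm progress bars used in the notebook are omitted). They assume the objects built earlier in the notebook: the Model and MyDataset classes, the train_df and dev_df dataframes with their feature columns, vocabulary_size, labels, device, and NumPy imported as np. First, the hyper parameters and training components:

```python
from torch import nn, optim
from torch.utils.data import DataLoader
from sklearn.metrics import accuracy_score

# hyper parameters
lr = 1e-3
weight_decay = 1e-5    # L2 regularization
batch_size = 500
shuffle = True
n_epochs = 5
input_dim = vocabulary_size
hidden_dim = 50
output_dim = len(labels)
dropout = 0.3

# initialize the model, loss function, optimizer, and data loaders
model = Model(input_dim, hidden_dim, output_dim, dropout).to(device)
loss_func = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=lr, weight_decay=weight_decay)

train_ds = MyDataset(train_df['features'], train_df['class index'] - 1)
train_dl = DataLoader(train_ds, batch_size=batch_size, shuffle=shuffle)
dev_ds = MyDataset(dev_df['features'], dev_df['class index'] - 1)
dev_dl = DataLoader(dev_ds, batch_size=batch_size, shuffle=shuffle)

# lists used to store plotting data
train_loss, train_acc = [], []
dev_loss, dev_acc = [], []
```

And the learning loop itself, with one training pass and one development evaluation per epoch:

```python
for epoch in range(n_epochs):
    losses, acc = [], []
    # set model to training mode
    model.train()
    for X, y_true in train_dl:
        # clear gradients and send the batch to the right device
        model.zero_grad()
        X, y_true = X.to(device), y_true.to(device)
        # forward pass and loss
        y_pred = model(X)
        loss = loss_func(y_pred, y_true)
        # accuracy for this mini-batch
        gold = y_true.detach().cpu().numpy()
        pred = np.argmax(y_pred.detach().cpu().numpy(), axis=1)
        # accumulate statistics for plotting
        losses.append(loss.detach().cpu().item())
        acc.append(accuracy_score(gold, pred))
        # backpropagate and update the parameters
        loss.backward()
        optimizer.step()
    train_loss.append(np.mean(losses))
    train_acc.append(np.mean(acc))

    # evaluate on the development set, with gradients disabled
    model.eval()
    with torch.no_grad():
        losses, acc = [], []
        for X, y_true in dev_dl:
            X, y_true = X.to(device), y_true.to(device)
            y_pred = model(X)
            loss = loss_func(y_pred, y_true)
            gold = y_true.cpu().numpy()
            pred = np.argmax(y_pred.cpu().numpy(), axis=1)
            losses.append(loss.cpu().item())
            acc.append(accuracy_score(gold, pred))
    dev_loss.append(np.mean(losses))
    dev_acc.append(np.mean(acc))
```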
In order to plot some relevant statistics acquired from the training data, we collect the current loss and accuracy for each mini-batch. Note that we call detach() on the tensors corresponding to the loss and the predicted/gold labels so that they are no longer considered when computing gradients. Calling cpu() copies a tensor from the GPU to the CPU if we are using a GPU; otherwise it does nothing. Calling numpy() converts the PyTorch tensor into a NumPy array. Unlike the predictions, which are represented as a vector of label scores per example, the loss is a scalar; for this reason we retrieve it as a Python number using the item() method. When evaluating on the development set, since we do not need to compute gradients, we save computation by wrapping the steps in a torch.no_grad() context manager. Since we are not learning, we do not perform back-propagation or invoke the optimizer.

After completing training we have gathered the loss and accuracy values for each epoch, for both the training and development partitions. Next, we plot these values in order to visualize the classifier's progress over time. Plots such as these are important for determining how well our model is learning, which informs decisions about adjusting hyper parameters or modifying the model's architecture. Here we discuss only the loss plot; the plotting code for both loss and accuracy, as well as the plots themselves, are available in the Jupyter notebook.

The loss plot indicates that both the training and development losses decrease over time. This is good! It indicates that our classifier is neither overfitting nor underfitting. Recall from Chapter 2 that overfitting happens when a classifier performs well in training but poorly on unseen data; in the loss plot, this would show up as a training loss that continues to decrease while the development loss does not. Underfitting happens when a classifier is unable to learn meaningful associations between the input features and the output labels; this would show up as loss curves that do not decrease over time.

This analysis means we are ready to evaluate our trained model on the test set, which must be a truly unseen dataset that was not used for training or to tune hyper parameters. In other words, this experiment indicates how well our model performs “in the wild.” Because we would like these results to be as close as possible to real-world results, the test set should be used sparingly, only after the entire architecture, its trained parameters, and its hyper parameters have been frozen. With our feed-forward neural architecture we achieve an accuracy of 92%, which is a substantial improvement over the 88% accuracy we obtained in Section 4.2. We strongly suggest that you experiment not only with the different hyper parameters, but also with different model architectures in the Jupyter notebook. Such exercises will help you develop an intuition about the different effects of each design choice, as well as how these decisions interact with each other.
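For reference, here is a sketch of the test-set evaluation, lightly adapted from the chapter notebook (progress bars omitted, dataset and loader renamed test_ds/test_dl, and plain map() used instead of progress_map()). It assumes the trained model, device, batch_size, labels, make_feature_vector, and MyDataset defined earlier:

```python
import numpy as np
import pandas as pd
import torch
from torch.utils.data import DataLoader
from nltk.tokenize import word_tokenize
from sklearn.metrics import classification_report

# repeat the preprocessing done for the training data, this time on the test set
test_df = pd.read_csv('data/ag_news_csv/test.csv', header=None)
test_df.columns = ['class index', 'title', 'description']
test_df['text'] = test_df['title'].str.lower() + " " + test_df['description'].str.lower()
test_df['text'] = test_df['text'].str.replace('\\', ' ', regex=False)
test_df['tokens'] = test_df['text'].map(word_tokenize)
test_df['features'] = test_df['tokens'].map(make_feature_vector)

# set the model to evaluation mode and build a loader over the test features
model.eval()
test_ds = MyDataset(test_df['features'], test_df['class index'] - 1)
test_dl = DataLoader(test_ds, batch_size=batch_size)

y_pred = []
# gradients are not needed for evaluation
with torch.no_grad():
    for X, _ in test_dl:
        X = X.to(device)
        # keep the highest-scoring class for each example
        y_pred.append(torch.argmax(model(X), dim=1).cpu().numpy())

y_true = test_ds.y
y_pred = np.concatenate(y_pred)
print(classification_report(y_true, y_pred, target_names=labels))
```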
7.4 Summary

In this chapter we have shown how to implement a feed-forward neural network in PyTorch. We have also introduced several PyTorch features that encourage and simplify deep learning best practices. In particular, the built-in Dataset and DataLoader classes make mini-batching straightforward while still allowing for customization such as sampling. The ability to create a custom Dataset object allows us to handle complex data and still have access to the features of a DataLoader. By convention, all the components provided by PyTorch are batch-aware and assume that the first dimension refers to the batch size, which simplifies model implementation and improves readability.

In building the model itself, we also saw that PyTorch uses layer modularization: both the network layers themselves and operations on them (such as dropout and activation functions) are modeled as layers in a pipeline. This makes it easy to interweave network layers, add various operations between them, and swap activation functions as desired. Weight initialization is handled automatically when the layers are created, but can be customized as needed.

Further, one can tailor the training process in PyTorch by adding momentum, adaptive learning rates, and regularization through optimizer selection and configuration, as illustrated in the sketch below. In this chapter we used the Adam optimizer, which, in the authors' experience, is a good default choice, but there are many other optimizers to choose from. We recommend reading the PyTorch documentation on optimizers for more details: https://pytorch.org/docs/stable/optim.html.
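As a small illustration of that last point (a sketch, not part of the chapter's code), swapping the optimizer is a one-line configuration change and leaves the rest of the training loop untouched; the SGD hyper parameter values below are arbitrary examples, not recommendations:

```python
from torch import optim

# Adam with L2 regularization (weight_decay), as used in this chapter
optimizer = optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-5)

# the same training loop works unchanged with plain SGD plus momentum
optimizer = optim.SGD(model.parameters(), lr=1e-2, momentum=0.9, weight_decay=1e-5)
```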
The complete Jupyter notebook for this chapter (chap7_ffnn), exported as a Python script, follows.

#!/usr/bin/env python
# coding: utf-8

# # Text Classification with a Feed-forward Neural Network and BOW features

# First, we will do some initialization.

# In[1]:

import random
import torch
import numpy as np
import pandas as pd
from tqdm.notebook import tqdm

# enable tqdm in pandas
tqdm.pandas()

# set to True to use the gpu (if there is one available)
use_gpu = True

# select device
device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu')
print(f'device: {device.type}')

# random seed
seed = 1234

# set random seed
if seed is not None:
    print(f'random seed: {seed}')
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)

# We will be using the AG's News Topic Classification Dataset.
# It is stored in two CSV files: `train.csv` and `test.csv`, as well as a `classes.txt` that stores the labels of the classes to predict.
#
# First, we will load the training dataset using [pandas](https://pandas.pydata.org/) and take a quick look at the data.

# In[2]:

train_df = pd.read_csv('data/ag_news_csv/train.csv', header=None)
train_df.columns = ['class index', 'title', 'description']
train_df

# The dataset consists of 120,000 examples, each consisting of a class index, a title, and a description.
# The class labels are stored in a separate file. We will add the labels to the dataset so that we can interpret the data more easily.
# Note that the label indexes are one-based, so we need to subtract one to retrieve them from the list.

# In[3]:

labels = open('data/ag_news_csv/classes.txt').read().splitlines()
classes = train_df['class index'].map(lambda i: labels[i-1])
train_df.insert(1, 'class', classes)
train_df

# Let's inspect how balanced our examples are by using a bar plot.

# In[4]:

pd.value_counts(train_df['class']).plot.bar()

# The classes are evenly distributed. That's great!
#
# However, the text contains some spurious backslashes in some parts of the text.
# They are meant to represent newlines in the original text.
# An example can be seen below, between the words "dwindling" and "band".

# In[5]:

print(train_df.loc[0, 'description'])

# We will replace the backslashes with spaces on the whole column using the pandas replace() method.

# In[6]:

train_df['text'] = train_df['title'].str.lower() + " " + train_df['description'].str.lower()
train_df['text'] = train_df['text'].str.replace('\\', ' ', regex=False)
train_df

# Now we will tokenize the combined text column (lowercased title and description) using NLTK's word_tokenize().
# We will add a new column to our dataframe with the list of tokens.

# In[7]:

from nltk.tokenize import word_tokenize

train_df['tokens'] = train_df['text'].progress_map(word_tokenize)
train_df

# Next, we split the training data into a training partition (80%) and a development partition (20%).

# In[8]:

from sklearn.model_selection import train_test_split

train_df, dev_df = train_test_split(train_df, train_size=0.8)
train_df.reset_index(inplace=True)
dev_df.reset_index(inplace=True)
print(f'train rows: {len(train_df.index):,}')
print(f'dev rows: {len(dev_df.index):,}')

# Now we will create a vocabulary from the training data. We will only keep the terms that repeat beyond some threshold established below.

# In[9]:

threshold = 10
tokens = train_df['tokens'].explode().value_counts()
tokens = tokens[tokens > threshold]
id_to_token = ['[UNK]'] + tokens.index.tolist()
token_to_id = {w: i for i, w in enumerate(id_to_token)}
vocabulary_size = len(id_to_token)
print(f'vocabulary size: {vocabulary_size:,}')

# In[10]:

from collections import defaultdict

def make_feature_vector(tokens, unk_id=0):
    vector = defaultdict(int)
    for t in tokens:
        i = token_to_id.get(t, unk_id)
        vector[i] += 1
    return vector

train_df['features'] = train_df['tokens'].progress_map(make_feature_vector)
dev_df['features'] = dev_df['tokens'].progress_map(make_feature_vector)
train_df

# In[11]:

from torch.utils.data import Dataset

class MyDataset(Dataset):
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __len__(self):
        return len(self.x)

    def __getitem__(self, index):
        x = torch.zeros(vocabulary_size, dtype=torch.float32)
        y = torch.tensor(self.y[index])
        for k, v in self.x[index].items():
            x[k] = v
        return x, y

# In[12]:

from torch import nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self, input_dim, hidden_dim, output_dim, dropout):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Dropout(dropout),
            nn.Linear(input_dim, hidden_dim),
            nn.ReLU(),
            nn.Dropout(dropout),
            nn.Linear(hidden_dim, output_dim),
        )

    def forward(self, x):
        return self.layers(x)

# In[13]:

from torch import optim
from torch.utils.data import DataLoader
from sklearn.metrics import accuracy_score

# hyperparameters
lr = 1e-3
weight_decay = 1e-5
batch_size = 500
shuffle = True
n_epochs = 5
input_dim = vocabulary_size
hidden_dim = 50
output_dim = len(labels)
dropout = 0.3

# initialize the model, loss function, optimizer, and data-loader
model = Model(input_dim, hidden_dim, output_dim, dropout).to(device)
loss_func = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=lr, weight_decay=weight_decay)

train_ds = MyDataset(train_df['features'], train_df['class index'] - 1)
train_dl = DataLoader(train_ds, batch_size=batch_size, shuffle=shuffle)
dev_ds = MyDataset(dev_df['features'], dev_df['class index'] - 1)
dev_dl = DataLoader(dev_ds, batch_size=batch_size, shuffle=shuffle)

# lists used to store plotting data
train_loss, train_acc = [], []
dev_loss, dev_acc = [], []

# In[14]:

# train the model
for epoch in range(n_epochs):
    losses, acc = [], []
    # set model to training mode
    model.train()
    for X, y_true in tqdm(train_dl, desc=f'epoch {epoch+1} (train)'):
        # clear gradients
        model.zero_grad()
        # send batch to right device
        X = X.to(device)
        y_true = y_true.to(device)
        # predict label scores
        y_pred = model(X)
        # compute loss
        loss = loss_func(y_pred, y_true)
        # compute accuracy
        gold = y_true.detach().cpu().numpy()
        pred = np.argmax(y_pred.detach().cpu().numpy(), axis=1)
        # accumulate for plotting
        losses.append(loss.detach().cpu().item())
        acc.append(accuracy_score(gold, pred))
        # backpropagate
        loss.backward()
        # optimize model parameters
        optimizer.step()
    # save epoch stats
    train_loss.append(np.mean(losses))
    train_acc.append(np.mean(acc))
    # set model to evaluation mode
    model.eval()
    # disable gradient calculation
    with torch.no_grad():
        losses, acc = [], []
        for X, y_true in tqdm(dev_dl, desc=f'epoch {epoch+1} (dev)'):
            # send batch to right device
            X = X.to(device)
            y_true = y_true.to(device)
            # predict label scores
            y_pred = model(X)
            # compute loss
            loss = loss_func(y_pred, y_true)
            # compute accuracy
            gold = y_true.cpu().numpy()
            pred = np.argmax(y_pred.cpu().numpy(), axis=1)
            # accumulate for plotting
            losses.append(loss.cpu().item())
            acc.append(accuracy_score(gold, pred))
    # save epoch stats
    dev_loss.append(np.mean(losses))
    dev_acc.append(np.mean(acc))

# In[15]:

import matplotlib.pyplot as plt

x = np.arange(n_epochs) + 1
plt.plot(x, train_loss)
plt.plot(x, dev_loss)
plt.legend(['train', 'dev'])
plt.xlabel('epoch')
plt.ylabel('loss')
plt.grid(True)

# In[16]:

plt.plot(x, train_acc)
plt.plot(x, dev_acc)
plt.legend(['train', 'dev'])
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.grid(True)

# ## Evaluate on test dataset

# In[17]:

# repeat all preprocessing done above, this time on the test set
test_df = pd.read_csv('data/ag_news_csv/test.csv', header=None)
test_df.columns = ['class index', 'title', 'description']
test_df['text'] = test_df['title'].str.lower() + " " + test_df['description'].str.lower()
test_df['text'] = test_df['text'].str.replace('\\', ' ', regex=False)
test_df['tokens'] = test_df['text'].progress_map(word_tokenize)
test_df['features'] = test_df['tokens'].progress_map(make_feature_vector)
test_df

# In[18]:

from sklearn.metrics import classification_report

# set model to evaluation mode
model.eval()
dataset = MyDataset(test_df['features'], test_df['class index'] - 1)
data_loader = DataLoader(dataset, batch_size=batch_size)
y_pred = []
# disable gradient calculation
with torch.no_grad():
    for X, _ in tqdm(data_loader):
        X = X.to(device)
        # predict one class per example
        y = torch.argmax(model(X), dim=1)
        # convert tensor to numpy array
        y_pred.append(y.cpu().numpy())
# print results
y_true = dataset.y
y_pred = np.concatenate(y_pred)
print(classification_report(y_true, y_pred, target_names=labels))

# In[19]:

from sklearn.metrics import ConfusionMatrixDisplay, confusion_matrix

cm = confusion_matrix(y_true, y_pred, normalize='true')
disp = ConfusionMatrixDisplay(confusion_matrix=cm, display_labels=labels)
fig, ax = plt.subplots(figsize=(4, 4))
disp.plot(cmap='Blues', values_format='.2f', colorbar=False, ax=ax, xticks_rotation=45)
4,049
4,083
9
chap11-0
chap11-0
11 Implementing Part-of-speech Tagging Using Recurrent Neural Networks

The previous chapter was our first exposure to recurrent neural networks, which included intuitions for why they are useful for natural language processing, various architectures, and training algorithms. In this chapter we will put them to use to implement a common sequence modeling task.

11.1 Part-of-speech Tagging

The task we will use as an example for this chapter is part-of-speech (POS) tagging, an NLP application that, as we discussed in the previous chapter, benefits from word order. Please see Chapter 16 for a more thorough discussion of POS tagging. The entire code presented in this chapter is available in the chap11_pos_tagging Jupyter notebook.

To take a break from NLP applications for English, in this chapter we use the AnCora corpus (Taulé et al., 2008), which primarily consists of newspaper texts in Spanish and Catalan with different linguistic annotations. In this chapter we work with the Spanish portion of the corpus, and the annotations for Universal POS tags (see Chapter 16 for a description of these tags). The Spanish portion of the corpus is divided into a training set with 14,305 sentences, a development set with 1,654 sentences, and a test set with 1,721 sentences.

The data is distributed in the CoNLL-U format. In this format, all sentences in a dataset are stored in the same file, separated by a blank line. Each individual token in a sentence is represented on its own line, which contains 10 annotation fields separated by tabs: ID, FORM, LEMMA, UPOS, XPOS, FEATS, HEAD, DEPREL, DEPS, and MISC. A comprehensive explanation of this format and the meaning of the different fields is beyond the scope of this chapter; however, the curious reader can find one at the CoNLL-U website.1 Here, we are only concerned with the fields FORM (the raw word) and UPOS (the Universal part-of-speech tag).

As in previous chapters, we use pandas to preprocess the data. For parsing the CoNLL-U files, we rely on the conllu Python module.2 We implement a function called read_tags that reads the CoNLL-U file corresponding to a dataset and returns a pandas dataframe that combines all tokens in a sentence into a single row with two columns, one for the words and one for the POS tags in the corresponding sentence. The resulting dataframe has 14,305 rows × 2 columns; its first rows look like this:

words: [El, presidente, de, el, órgano, regulador, de...]    tags: [DET, NOUN, ADP, DET, NOUN, ADJ, ADP, DET, PRO...]
words: [Afirmó, que, sigue, el, criterio, europeo, y,...]    tags: [VERB, SCONJ, VERB, DET, NOUN, ADJ, CCONJ, SCO...]
words: [Durante, la, presentación, de, el, libro, ", ...]    tags: [ADP, DET, NOUN, ADP, DET, NOUN, PUNCT, DET, P...]

In order to implement our POS tagging application, we need word embeddings that have been pretrained for Spanish.
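For reference, a compact version of read_tags, mirroring the implementation in the chap11_pos_tagging notebook, is shown below; it streams sentences with the conllu module's parse_incr() and collects the FORM and UPOS fields into a dataframe:

import pandas as pd
from conllu import parse_incr

def read_tags(filename):
    # one dataframe row per sentence: a list of words and a parallel list of POS tags
    data = {'words': [], 'tags': []}
    with open(filename) as f:
        for sent in parse_incr(f):
            data['words'].append([tok['form'] for tok in sent])
            data['tags'].append([tok['upos'] for tok in sent])
    return pd.DataFrame(data)

train_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-train.conllup')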
Here we use the publicly available GloVe embeddings trained on the Spanish Billion Word Corpus3 by the Departamento de Ciencias de la Computación of Universidad de Chile.4 In contrast to the GloVe embeddings used in Chapter 9, these do include a header that stores metadata about the embeddings (i.e., the size of the vocabulary and the dimension of the embedding vectors), so in this case we do not use the no_header=True argument. Another difference between these GloVe embeddings and the ones we used in Chapter 9 is that they already include an embedding for unknown words. Therefore, there is no need to introduce our own. However, we do need to include a new embedding for padding, which we will use later to guarantee that all sentences in the same mini-batch have the same length. We add a vector of zeros for the padding token in the same way as before.

1 https://universaldependencies.org/format.html
2 https://github.com/EmilStenstrom/conllu/
3 https://crscardellino.ar/SBWCE/
4 https://github.com/dccuchile/spanishwordembeddings#gloveembeddingsfromsbwc

Next, we need to preprocess our tokens to match the vocabulary of the embeddings. In particular, these embeddings were trained on words that were lowercased and on sequences of digits that were replaced with a single 0. We apply the same modifications to our tokens. (From now on we will omit the pandas tables for readability, but, as usual, the corresponding Jupyter notebook contains all necessary information.)

Next, we add a new column to the dataframe that stores the word ids corresponding to the embedding vocabulary. Note that at this point we are not padding the sequences of word ids; we will address padding later. We also need to generate the ids for the POS tags. To this end, we first need to construct a vocabulary of POS tags. Once again, we generate a list of tags using explode(), which linearizes our sequence of sequences of tags, and remove repeated tags using unique(). We also add a special tag for the padding token. We then use this POS tag vocabulary to construct a new dataframe column that stores the POS tag ids.

The implementation of the Dataset class that stores our POS dataset is trivial: we simply return the lists of word and tag ids, converted to PyTorch tensors.

Now it's time to handle padding. This time we will use some features of PyTorch that we have not seen before. The DataLoader object can receive an optional argument, collate_fn, which expects a function that can be used to form a mini-batch. We implement this function using PyTorch's torch.nn.utils.rnn.pad_sequence() function, which, unsurprisingly, pads a group of tensors. We take advantage of this function to pad the tensors while forming the mini-batch itself. The advantage of this strategy is that, rather than padding all the examples to the length of the longest sentence in the corpus, we pad them only to the length of the longest sentence in the mini-batch. The latter strategy reduces the amount of padding necessary, which should yield more efficient code.

The collate_fn() function takes a single argument, batch, which is a list of tuples. Each tuple has two elements: the list of word ids and the list of tag ids corresponding to a single example. We first unzip this list of tuples into two lists; the first list has all the word ids, and the second has the tag ids. An explanation of how zip(*batch) works is provided in Appendix A. Next, we compute the length of each example in the batch, which we will use later to inform the recurrent neural network where padding starts for each example. We then use the pad_sequence() function to add padding. This function finds the longest sequence in the batch and pads all examples accordingly using the provided padding value. This method is designed to work with PyTorch's recurrent neural networks, which by default assume the batch index is in the second dimension. However, we will be organizing our tensors such that the batch index is always in the first dimension, which we feel is more intuitive. For this reason, we also need to provide the batch_first=True argument to pad_sequence(). Finally, we return the padded data, as well as the original lengths of the examples, as shown in the function below.
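This is the collate_fn() from the accompanying chap11_pos_tagging notebook; pad_tok_id and pad_tag_id are the ids of the padding word and padding tag introduced earlier:

from torch.nn.utils.rnn import pad_sequence

def collate_fn(batch):
    # separate the word-id lists from the tag-id lists
    xs, ys = zip(*batch)
    # remember the original (unpadded) length of each example
    lengths = [len(x) for x in xs]
    # pad every example to the length of the longest one in this mini-batch
    x_padded = pad_sequence(xs, batch_first=True, padding_value=pad_tok_id)
    y_padded = pad_sequence(ys, batch_first=True, padding_value=pad_tag_id)
    # return the padded tensors together with the original lengths
    return x_padded, y_padded, lengths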
Next, we implement our POS tagging model class. The model consists of: (a) an embedding layer for our Spanish pretrained embeddings; (b) an LSTM that can be set to be uni- or bi-directional (see Figure 10.3; the RNN is configured to be bidirectional by setting the bidirectional argument to True in the LSTM constructor), with a configurable number of layers (see Figure 10.2; the number of layers is set through the num_layers argument of the constructor); and (c) a linear layer on top of each hidden state, which is used to predict the scores for each of the POS tags for the corresponding token.

The forward() method receives the padded mini-batch and the list of lengths for the (unpadded) examples in this mini-batch. The first step in the function is to retrieve the embeddings for all words referenced in this mini-batch. We then apply dropout over these embedding vectors. Next, before passing the data to the LSTM, we pack the padded data. Note that the PyTorch PackedSequence5 class, which is the output of the pack_padded_sequence() function, stores a batch of sequences that had different lengths before padding. One important advantage of using PackedSequence is that its internal data structure removes the padding tokens (which is why we had to keep track of the example lengths before padding in x_lengths), and, thus, the recurrent neural network will not back-propagate over the padded elements.6 Once we have a PackedSequence, we pass it to the LSTM. Since the output of the LSTM is also packed, we then unpack it using pad_packed_sequence(). Next we apply dropout to this unpacked LSTM output. Finally, we pass this output to the linear layer to predict the tag scores for the tokens.

5 https://pytorch.org/docs/stable/generated/torch.nn.utils.rnn.PackedSequence.html
6 The astute reader might ask at this point, “Why did we pad the mini-batch examples in the first place, if we are removing the padding later?” The padding is needed because it allows us to store the mini-batch as a single three-dimensional tensor.
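The model class, reproduced from the chap11_pos_tagging notebook, is shown below:

import torch
from torch import nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

class MyModel(nn.Module):
    def __init__(self, vectors, hidden_size, num_layers,
                 bidirectional, dropout, output_size):
        super().__init__()
        # ensure the pretrained vectors are a tensor
        if not torch.is_tensor(vectors):
            vectors = torch.tensor(vectors)
        # (a) embedding layer initialized with the pretrained Spanish GloVe vectors
        self.embedding = nn.Embedding.from_pretrained(embeddings=vectors)
        # (b) uni- or bi-directional LSTM with a configurable number of layers
        self.lstm = nn.LSTM(
            input_size=vectors.shape[1],
            hidden_size=hidden_size,
            num_layers=num_layers,
            bidirectional=bidirectional,
            dropout=dropout,
            batch_first=True)
        self.dropout = nn.Dropout(dropout)
        # (c) linear layer that maps each hidden state to POS tag scores
        self.classifier = nn.Linear(
            in_features=hidden_size * 2 if bidirectional else hidden_size,
            out_features=output_size)

    def forward(self, x_padded, x_lengths):
        # embeddings followed by dropout
        output = self.embedding(x_padded)
        output = self.dropout(output)
        # pack the padded data before the LSTM
        packed = pack_padded_sequence(output, x_lengths,
                                      batch_first=True, enforce_sorted=False)
        packed, _ = self.lstm(packed)
        # unpack the LSTM output before the rest of the model
        output, _ = pad_packed_sequence(packed, batch_first=True)
        output = self.dropout(output)
        output = self.classifier(output)
        return output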
Despite the small number of lines of code, the forward() method, which switches between embedding vectors, padded tensors, and packed sequences, is not trivial. To clarify it, let us walk through an example. Imagine that the input to the forward() method is a batch, x_padded, with shape (10, 20), corresponding to 10 examples, each with 20 word ids (some of which are padding). Then we retrieve the embeddings. Assuming our word embeddings, i.e., the input vectors xi in Chapter 10, are of dimension 300, the new tensor will have a shape of (10, 20, 300), corresponding to 10 examples, each with 20 embeddings, each with dimension 300. After dropout the shape hasn’t changed, but some of the elements have been zeroed out. After unpacking the output of the LSTM, we will have a tensor of shape (10, 20, hidden_size), where hidden_size is the size of the LSTM hidden state, i.e., the ht vector in Equation 10.6 (hidden_size is a hyper parameter we will set later on). After passing this tensor to the linear layer, we will obtain a tensor of shape (10, 20, tag_vocab_size), where tag_vocab_size is the number of POS tags in our vocabulary. Thus, for each token in each example, we will have a distribution of POS tag scores. For each token, the assigned POS tag will be the one corresponding to the highest score.

We next initialize all the hyper parameters and all the required components. The training procedure is very similar to the one implemented in Chapter 7. One notable difference is that the output of this model has three dimensions instead of two: number of examples, number of tokens, and number of POS tag scores. Thus, we have to reshape the output to pass it to the loss function. Additionally, we need to discard the padding before computing the loss. We reshape the gold tag ids using the torch.flatten() function, to transform the 2-dimensional tensor of shape (n_examples, n_tokens) into a 1-dimensional tensor with n_examples * n_tokens elements. The predictions are reshaped using the view(-1, output_size) method. By passing two arguments we are stipulating that we want two dimensions; the second dimension will be of size output_size, and the -1 indicates that the first dimension should be inferred from the size of the tensor. This means that for a tensor of shape (n_examples, n_tokens, output_size) we will get a tensor of shape (n_examples * n_tokens, output_size). Then, we use a Boolean mask to discard the elements corresponding to the padding, as shown in the excerpt below. This way, the loss function will consider each actual word individually, as if the whole batch was just one big sentence. Note that treating a mini-batch as a single virtual sentence does affect the evaluation results.
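Concretely, the reshaping and masking for a single mini-batch, excerpted from the training loop in the accompanying notebook (pad_tag_id is the id of the padding tag, and output_size is the number of POS tags), look like this:

# inside the training loop, for one mini-batch:
y_pred = model(x_padded, lengths)       # shape: (n_examples, n_tokens, output_size)
y_true = torch.flatten(y_padded)        # shape: (n_examples * n_tokens,)
y_pred = y_pred.view(-1, output_size)   # shape: (n_examples * n_tokens, output_size)
# discard the positions that correspond to padding
mask = y_true != pad_tag_id
y_true = y_true[mask]
y_pred = y_pred[mask]
loss = loss_func(y_pred, y_true)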
Lastly, we evaluate the performance of our POS tagger on the test set, similarly to how we have done it before. The results indicate that our POS tagger obtains an overall accuracy of 97%, which is in line with state-of-the-art approaches! This is encouraging considering that our approach does not include the CRF layer we discussed in Chapter 10. We challenge the reader to add this layer,7 and experiment with this architecture for other sequence tasks such as named entity recognition.

7 See, for example, the LSTM-CRF implementation from the PyTorch tutorial: https://pytorch.org/tutorials/beginner/nlp/advanced_tutorial.html

11.2 Summary

In this chapter we have implemented a Spanish part-of-speech tagger using a bidirectional LSTM and a set of pretrained, static word embeddings. Through this process, we have also introduced several new PyTorch features such as the pad_sequence, pack_padded_sequence, and pad_packed_sequence functions, which allow us to work more efficiently with variable-length sequences for recurrent neural networks.
8,135
8,185
#!/usr/bin/env python # coding: utf-8 # # Part-of-speech Tagging Using RNNs # Some initialization: # In[4]: import random import torch import numpy as np import pandas as pd from tqdm.notebook import tqdm # enable tqdm in pandas tqdm.pandas() # set to True to use the gpu (if there is one available) use_gpu = True # select device device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu') print(f'device: {device.type}') # random seed seed = 1234 # set random seed if seed is not None: print(f'random seed: {seed}') random.seed(seed) np.random.seed(seed) torch.manual_seed(seed) # Next, let's read the words and their POS tags from the CoNLLUP format: # In[5]: from conllu import parse_incr def read_tags(filename): data = {'words': [], 'tags': []} with open(filename) as f: for sent in parse_incr(f): words = [tok['form'] for tok in sent] tags = [tok['upos'] for tok in sent] data['words'].append(words) data['tags'].append(tags) return pd.DataFrame(data) # In[6]: train_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-train.conllup') train_df # We now load the GloVe embeddings for Spanish, which include a representation for the unknown token: # In[7]: from gensim.models import KeyedVectors glove = KeyedVectors.load_word2vec_format('glove-sbwc.i25.vec') glove.vectors.shape # In[8]: # these embeddings already include <unk> unk_tok = '<unk>' unk_id = glove.key_to_index[unk_tok] unk_tok, unk_id # In[9]: # add padding embedding pad_tok = '<pad>' pad_emb = np.zeros(300) glove.add_vector(pad_tok, pad_emb) pad_tok_id = glove.key_to_index[pad_tok] pad_tok, pad_tok_id # Preprocessing: lower case all words, and replace all numbers with '0': # In[10]: def preprocess(words): result = [] for w in words: w = w.lower() if w.isdecimal(): w = '0' result.append(w) return result train_df['words'] = train_df['words'].progress_map(preprocess) train_df # Next, construct actual PyTorch `Dataset` and `DataLoader` objects for the train/dev/test partitions: # In[11]: def get_ids(tokens, key_to_index, unk_id=None): return [key_to_index.get(tok, unk_id) for tok in tokens] def get_word_ids(tokens): return get_ids(tokens, glove.key_to_index, unk_id) # add new column to the dataframe train_df['word ids'] = train_df['words'].progress_map(get_word_ids) train_df # In[12]: pad_tag = '<pad>' index_to_tag = train_df['tags'].explode().unique().tolist() + [pad_tag] tag_to_index = {t:i for i,t in enumerate(index_to_tag)} pad_tag_id = tag_to_index[pad_tag] pad_tag, pad_tag_id # In[13]: index_to_tag # In[14]: def get_tag_ids(tags): return get_ids(tags, tag_to_index) train_df['tag ids'] = train_df['tags'].progress_map(get_tag_ids) train_df # In[15]: dev_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-dev.conllup') dev_df['words'] = dev_df['words'].progress_map(preprocess) dev_df['word ids'] = dev_df['words'].progress_map(lambda x: get_ids(x, glove.key_to_index, unk_id)) dev_df['tag ids'] = dev_df['tags'].progress_map(lambda x: get_ids(x, tag_to_index)) dev_df # In[16]: from torch.utils.data import Dataset class MyDataset(Dataset): def __init__(self, x, y): self.x = x self.y = y def __len__(self): return len(self.y) def __getitem__(self, index): x = torch.tensor(self.x[index]) y = torch.tensor(self.y[index]) return x, y # `collate_fn` will be used by `DataLoader` to pad all sentences in the same batch to the same length. 
# In[17]: from torch.nn.utils.rnn import pad_sequence def collate_fn(batch): # separate xs and ys xs, ys = zip(*batch) # get lengths lengths = [len(x) for x in xs] # pad sequences x_padded = pad_sequence(xs, batch_first=True, padding_value=pad_tok_id) y_padded = pad_sequence(ys, batch_first=True, padding_value=pad_tag_id) # return padded return x_padded, y_padded, lengths # Now construct our PyTorch model: # In[18]: from torch import nn from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence class MyModel(nn.Module): def __init__(self, vectors, hidden_size, num_layers, bidirectional, dropout, output_size): super().__init__() # ensure vectors is a tensor if not torch.is_tensor(vectors): vectors = torch.tensor(vectors) # init embedding layer self.embedding = nn.Embedding.from_pretrained(embeddings=vectors) # init lstm self.lstm = nn.LSTM( input_size=vectors.shape[1], hidden_size=hidden_size, num_layers=num_layers, bidirectional=bidirectional, dropout=dropout, batch_first=True) # init dropout self.dropout = nn.Dropout(dropout) # init classifier self.classifier = nn.Linear( in_features=hidden_size * 2 if bidirectional else hidden_size, out_features=output_size) def forward(self, x_padded, x_lengths): # get embeddings output = self.embedding(x_padded) output = self.dropout(output) # pack data before lstm packed = pack_padded_sequence(output, x_lengths, batch_first=True, enforce_sorted=False) packed, _ = self.lstm(packed) # unpack data before rest of model output, _ = pad_packed_sequence(packed, batch_first=True) output = self.dropout(output) output = self.classifier(output) return output # In[19]: from torch import optim from torch.utils.data import DataLoader from sklearn.metrics import accuracy_score # hyperparameters lr = 1e-3 weight_decay = 1e-5 batch_size = 100 shuffle = True n_epochs = 10 vectors = glove.vectors hidden_size = 100 num_layers = 2 bidirectional = True dropout = 0.1 output_size = len(index_to_tag) # initialize the model, loss function, optimizer, and data-loader model = MyModel(vectors, hidden_size, num_layers, bidirectional, dropout, output_size).to(device) loss_func = nn.CrossEntropyLoss() optimizer = optim.Adam(model.parameters(), lr=lr, weight_decay=weight_decay) train_ds = MyDataset(train_df['word ids'], train_df['tag ids']) train_dl = DataLoader(train_ds, batch_size=batch_size, shuffle=shuffle, collate_fn=collate_fn) dev_ds = MyDataset(dev_df['word ids'], dev_df['tag ids']) dev_dl = DataLoader(dev_ds, batch_size=batch_size, shuffle=shuffle, collate_fn=collate_fn) train_loss, train_acc = [], [] dev_loss, dev_acc = [], [] # We are now ready to train! 
# In[20]: # train the model for epoch in range(n_epochs): losses, acc = [], [] model.train() for x_padded, y_padded, lengths in tqdm(train_dl, desc=f'epoch {epoch+1} (train)'): # clear gradients model.zero_grad() # send batch to right device x_padded = x_padded.to(device) y_padded = y_padded.to(device) # predict label scores y_pred = model(x_padded, lengths) # reshape output y_true = torch.flatten(y_padded) y_pred = y_pred.view(-1, output_size) mask = y_true != pad_tag_id y_true = y_true[mask] y_pred = y_pred[mask] # compute loss loss = loss_func(y_pred, y_true) # accumulate for plotting gold = y_true.detach().cpu().numpy() pred = np.argmax(y_pred.detach().cpu().numpy(), axis=1) losses.append(loss.detach().cpu().item()) acc.append(accuracy_score(gold, pred)) # backpropagate loss.backward() # optimize model parameters optimizer.step() train_loss.append(np.mean(losses)) train_acc.append(np.mean(acc)) model.eval() with torch.no_grad(): losses, acc = [], [] for x_padded, y_padded, lengths in tqdm(dev_dl, desc=f'epoch {epoch+1} (dev)'): x_padded = x_padded.to(device) y_padded = y_padded.to(device) y_pred = model(x_padded, lengths) y_true = torch.flatten(y_padded) y_pred = y_pred.view(-1, output_size) mask = y_true != pad_tag_id y_true = y_true[mask] y_pred = y_pred[mask] loss = loss_func(y_pred, y_true) gold = y_true.cpu().numpy() pred = np.argmax(y_pred.cpu().numpy(), axis=1) losses.append(loss.cpu().item()) acc.append(accuracy_score(gold, pred)) dev_loss.append(np.mean(losses)) dev_acc.append(np.mean(acc)) # Plot loss and accuracy on dev after each epoch: # In[21]: import matplotlib.pyplot as plt x = np.arange(n_epochs) + 1 plt.plot(x, train_loss) plt.plot(x, dev_loss) plt.legend(['train', 'dev']) plt.xlabel('epoch') plt.ylabel('loss') plt.grid(True) # In[22]: plt.plot(x, train_acc) plt.plot(x, dev_acc) plt.legend(['train', 'dev']) plt.xlabel('epoch') plt.ylabel('accuracy') plt.grid(True) # In[23]: test_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-test.conllup') test_df['words'] = test_df['words'].progress_map(preprocess) test_df['word ids'] = test_df['words'].progress_map(lambda x: get_ids(x, glove.key_to_index, unk_id)) test_df['tag ids'] = test_df['tags'].progress_map(lambda x: get_ids(x, tag_to_index)) test_df # Now let's evaluate on the test partition: # In[24]: from sklearn.metrics import classification_report model.eval() test_ds = MyDataset(test_df['word ids'], test_df['tag ids']) test_dl = DataLoader(test_ds, batch_size=batch_size, shuffle=shuffle, collate_fn=collate_fn) all_y_true = [] all_y_pred = [] with torch.no_grad(): for x_padded, y_padded, lengths in tqdm(test_dl): x_padded = x_padded.to(device) y_pred = model(x_padded, lengths) y_true = torch.flatten(y_padded) y_pred = y_pred.view(-1, output_size) mask = y_true != pad_tag_id y_true = y_true[mask] y_pred = torch.argmax(y_pred[mask], dim=1) all_y_true.append(y_true.cpu().numpy()) all_y_pred.append(y_pred.cpu().numpy()) y_true = np.concatenate(all_y_true) y_pred = np.concatenate(all_y_pred) target_names = index_to_tag[:-2] print(classification_report(y_true, y_pred, target_names=target_names)) # Let's generate a confusion matrix for all POS tags in the data: # In[25]: from sklearn.metrics import ConfusionMatrixDisplay, confusion_matrix cm = confusion_matrix(y_true, y_pred, normalize='true') disp = ConfusionMatrixDisplay( confusion_matrix=cm, display_labels=target_names, ) fig, ax = plt.subplots(figsize=(10,10)) disp.plot( cmap='Blues', values_format='.2f', colorbar=False, ax=ax, xticks_rotation=45, ) # In[ ]:
4,599
4,841
0
chap11-2
chap11-2
11 Implementing Part-of-speech Tagging Using Recurrent Neural Networks The previous chapter was our first exposure to recurrent neural networks, which included intuitions for why they are useful for natural language processing, various architectures, and training algorithms. In this chapter we will put them to use, to implement a common sequence modeling task. 11.1 Part-of-speech Tagging The task we will use as an example for this chapter is part-of-speech (POS) tagging, an NLP application that, as we discussed in the previous chapter, benefits from word order. Please see Chapter 16 for a more thorough discussion of POS tagging. The entire code presented in this chapter is available in the chap11_pos_tagging Jupyter notebook. To take a break from NLP applications for English, in this chapter we use the AnCora corpus (Taulé et al., 2008), which primarily consists of newspaper texts in Spanish and Catalan with different linguistic annotations. In this chapter we work with the Spanish portion of the corpus, and the annotations for Universal POS tags (see Chapter 16 for a description of these tags). The Spanish portion of the corpus is divided into a training set with 14,305 sentences, a development set with 1,654 sentences, and a test set with 1,721 sentences. The data is distributed in the CoNLL-U format. In this format, all sentences in a dataset are stored in the same file, separated by a blank line. Each individual token in a sentence is represented in a line, which contains 10 annotation fields separated by tabs: ID, FORM, LEMMA, UPOS, XPOS, FEATS, HEAD, DEPREL, DEPS, and MISC. A comprehensive explanation of this format and the 162 11.1 Part-of-speech Tagging 163 meaning of the different fields is beyond the goal of this chapter; however, the curious reader can find one at the CoNLL-U website.1 Here, we are only concerned with the fields FORM (the raw word), and UPOS (the Universal part-of-speech tag). As in previous chapters, we use pandas to preprocess the data. For parsing the CoNLL-U files, we rely on the conllu Python module.2 We implement a function called read_tags that reads the CoNLL-U file corresponding to a dataset and returns a pandas dataframe that combines all tokens in a sentence into a single row with two columns, one for the words, and one for the POS tags in the corresponding sentence: 0 1 2 3 4 ... 14300 14301 14302 14303 14304 words [El, presidente, de, el, órgano, regulador, de... [Afirmó, que, sigue, el, criterio, europeo, y,... [Durante, la, presentación, de, el, libro, ", ... [Y, todas, las, miradas, convergen, en, la, lu... [Cambiar, las, formas, parece, de, rigor, ,, p... tags [DET, NOUN, ADP, DET, NOUN, ADJ, ADP, DET, PRO... [VERB, SCONJ, VERB, DET, NOUN, ADJ, CCONJ, SCO... [ADP, DET, NOUN, ADP, DET, NOUN, PUNCT, DET, P... [CCONJ, DET, DET, NOUN, VERB, ADP, DET, NOUN, ... [VERB, DET, NOUN, VERB, ADP, NOUN, PUNCT, CCON... [PRON, VERB, ADP, VERB, PRON, ADP, DET, NOUN, ... [Sobre, la, oferta, de, interconexión, con, Te... [ADP, DET, NOUN, ADP, NOUN, ADP, PROPN, ADP, D... [La, inversión, en, investigación, básica, es,... [DET, NOUN, ADP, NOUN, ADJ, AUX, DET, NOUN, AD... ... ... [Conviene, que, ahora, ,, en, plena, apoteosis... [VERB, SCONJ, ADV, PUNCT, ADP, ADJ, NOUN, ADP,... [Carlos, y, Fayna, se, enzarzan, en, una, bron... [PROPN, CCONJ, PROPN, PRON, VERB, ADP, DET, NO... [Él, llega, a, tirar, la, sobre, la, cama, y, ... 14305 rows × 2 columns In order to implement our POS tagging application, we need word embeddings that have been pretrained for Spanish. 
Here we use the publicly-available GloVe embeddings trained on the Spanish Billion Word Corpus3 by the Departamento de Ciencias de la Computación of Universidad de Chile.4 In contrast to the GloVe embeddings used in Chapter 9, these do include a header that stores meta data about the embeddings (i.e., size of the vocabulary and the dimension of the embedding vectors), so in this case we do not use the no_header=True argument: Another difference between these GloVe embeddings and the ones we used in Chapter 9 is that these already include an embedding for unknown words. Therefore, there is no need to introduce our own. However, we do need to include a new embedding for padding, which . 1  https://universaldependencies.org/format.html 
 . 2  https://github.com/EmilStenstrom/conllu/ 
 . 3  https://crscardellino.ar/SBWCE/ 
 . 4  https://github.com/dccuchile/spanishwordembeddings# 
gloveembeddingsfromsbwc 
 164 Implementing POS Tagging Using RNNs we will use later to guarantee that all sentences in the same mini-batch have the same length. We add a vector of zeros for the padding token in the same way as before: Next, we need to preprocess our tokens to match the vocabulary of the embeddings. In particular, these embeddings were trained on words that were lowercased and on sequences of digits that were replaced with a single 0. We will apply the same modifications to our tokens: (From now on we will omit the pandas tables for readability, but, as usual, the corresponding Jupyter notebook contains all necessary information.) Next, we add a new column to the dataframe that stores the word ids corresponding to the embedding vocabulary. Note that at this point we are not padding the sequences of word ids. We will address padding later. We also need to generate the ids for the POS tags. To this end, we first need to construct a vocabulary of POS tags. Once again, we generate a list of tags using explode(), which linearizes our sequence of sequences of tags, and remove repeated tags using unique(). We also add a special tag for the padding token: We now use this POS tag vocabulary to construct a new dataframe column that stores the POS tag ids: The implementation of the Dataset class that stores our POS dataset is trivial: we simply return the lists of word and tag ids, converted to PyTorch tensors. Now it’s time to handle padding. This time we will use some features of PyTorch that we have not seen before. The DataLoader object can receive an optional argument, collate_fn, which expects a function that can be used to form a mini-batch. We will implement this function using PyTorch’s torch.nn.utils.rnn.pad_sequence() function, which, unsurprisingly, pads a group of tensors. We will take advantage of this function to pad the tensors while forming the mini-batch itself. The advantage of this strategy is that, rather than needing to pad all the examples to be the same length as the largest sentence in the corpus, we will instead pad them to the same length as the largest sentence in the minibatch. The latter strategy reduces the amount of padding necessary, which should yield more efficient code. The collate_fn() function takes a single argument, batch, which is a list of tuples. Each tuple has two elements: the list of word ids and the list of tag ids corresponding to a single example. We first unzip this list of tuples into two lists; the first list has all the word ids, and the 11.1 Part-of-speech Tagging 165 second has the tag ids. An explanation of how zip(*batch) works is provided in Appendix A. Next, we compute the lengths of each of the examples in the batch, which we will use later to inform the recurrent neural network where padding starts for each example. We then use the pad_sequence() function to add padding. This function will find the longest sequence in the batch and pad all examples accordingly using the provided padding value. This method is designed to work with PyTorch’s recurrent neural networks, which by default assume the batch index is in the second dimension. However, we will be organizing our tensors such that the batch index is always in the first dimension, which we feel to be more intuitive. For this reason, we also need to provide the batch_first=True argument to pad_sequence. Finally, we return the padded data, as well as the original lengths of the examples. Next, we implement our POS tagging model class. 
Next, we implement our POS tagging model class. The model consists of: (a) an embedding layer for our Spanish pretrained embeddings; (b) an LSTM that can be set to be uni- or bi-directional (see Figure 10.3; the RNN is configured to be bidirectional by setting the bidirectional argument to True in the LSTM constructor), with a configurable number of layers (see Figure 10.2; the number of layers is set through the num_layers argument of the constructor); and (c) a linear layer on top of each hidden state, which is used to predict the scores for each of the POS tags for the corresponding token.

The forward() method receives the padded mini-batch and the list of lengths for the (unpadded) examples in this mini-batch. The first step in the function is to retrieve the embeddings for all words referenced in this mini-batch. We then apply dropout over these embedding vectors. Next, before passing the data to the LSTM, we pack the padded data. Note that the PyTorch PackedSequence5 class, which is the output of the pack_padded_sequence() function, stores a batch of sequences that had different lengths before padding. One important advantage of using PackedSequence is that its internal data structure removes the padding tokens (which is why we had to keep track of the example lengths before padding in x_lengths), and, thus, the recurrent neural network will not back-propagate over the padded elements.6 Once we have a PackedSequence, we pass it to the LSTM.

5. https://pytorch.org/docs/stable/generated/torch.nn.utils.rnn.PackedSequence.html
6. The astute reader might ask at this point, "Why did we pad the mini-batch examples in the first place, if we are removing the padding later?" The padding is needed because this allows us to store the mini-batch as a single three-dimensional tensor.
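A condensed sketch of this model class, closely following the MyModel implementation in the accompanying notebook, is shown below; vectors holds the pretrained embedding matrix, and the remaining constructor arguments are the hyper parameters discussed later in this section.

import torch
from torch import nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

class MyModel(nn.Module):
    def __init__(self, vectors, hidden_size, num_layers, bidirectional, dropout, output_size):
        super().__init__()
        # embedding layer initialized from the pretrained Spanish GloVe vectors
        self.embedding = nn.Embedding.from_pretrained(torch.as_tensor(vectors))
        # uni- or bi-directional LSTM with a configurable number of layers
        self.lstm = nn.LSTM(
            input_size=self.embedding.embedding_dim,
            hidden_size=hidden_size,
            num_layers=num_layers,
            bidirectional=bidirectional,
            dropout=dropout,
            batch_first=True)
        self.dropout = nn.Dropout(dropout)
        # one score per POS tag for every token
        self.classifier = nn.Linear(
            hidden_size * 2 if bidirectional else hidden_size,
            output_size)

    def forward(self, x_padded, x_lengths):
        # look up embeddings for the padded mini-batch and apply dropout
        output = self.dropout(self.embedding(x_padded))
        # pack, so the LSTM skips the padding positions
        packed = pack_padded_sequence(output, x_lengths,
                                      batch_first=True, enforce_sorted=False)
        packed, _ = self.lstm(packed)
        # unpack back into a padded tensor before the linear layer
        output, _ = pad_packed_sequence(packed, batch_first=True)
        output = self.dropout(output)
        # tag scores for every token in every example
        return self.classifier(output)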
Since the output of the LSTM is also packed, we then unpack it using pad_packed_sequence(). Next, we apply dropout to this unpacked LSTM output. Finally, we pass this to the linear layer to predict the tag scores for the tokens.

Despite the small number of lines of code, the code of the forward() method, which switches between embedding vectors, padded tensors, and packed sequences, is not trivial. To clarify it, let us walk through an example. Imagine that the input to the forward() method is a batch, x_padded, with shape (10, 20), corresponding to 10 examples, each with 20 word ids (some of which are padding). Then we retrieve the embeddings. Assuming our word embeddings, i.e., the input vectors x_i in Chapter 10, are of dimension 300, the new tensor will have a shape of (10, 20, 300), corresponding to 10 examples, each with 20 embeddings, each of dimension 300. After dropout the shape hasn't changed, but some of the elements have been zeroed out. After unpacking the output of the LSTM, we will have a tensor of shape (10, 20, hidden_size), where hidden_size is the size of the LSTM hidden state, i.e., the h_t vector in Equation 10.6 (hidden_size is a hyper parameter we will set later on). After passing this tensor to the linear layer, we will obtain a tensor of shape (10, 20, tag_vocab_size), where tag_vocab_size is the number of POS tags in our vocabulary. Thus, for each token in each example, we will have a distribution of POS tag scores. For each token, the assigned POS tag will be the one corresponding to the highest score.

We next initialize all the hyper parameters and all the required components:

The training procedure is very similar to the one implemented in Chapter 7. One notable difference is that the output of this model has three dimensions instead of two: number of examples, number of tokens, and number of POS tag scores. Thus, we have to reshape the output to pass it to the loss function. Additionally, we need to discard the padding before computing the loss. We reshape the gold tag ids using the torch.flatten() function, to transform the 2-dimensional tensor of shape (n_examples, n_tokens) into a 1-dimensional tensor with n_examples * n_tokens elements. The predictions are reshaped using the view(-1, output_size) method. By passing two arguments we are stipulating that we want two dimensions; the second dimension will be of size output_size, and the -1 indicates that the first dimension should be inferred from the size of the tensor. This means that for a tensor of shape (n_examples, n_tokens, output_size) we will get a tensor of shape (n_examples * n_tokens, output_size). Then, we use a Boolean mask to discard the elements corresponding to the padding. This way, the loss function will consider each actual word individually, as if the whole batch was just one big sentence. Note that treating a mini-batch as a single virtual sentence does affect the evaluation results.
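Concretely, the reshaping and masking described above amount to the following steps inside the training loop (a sketch using the notebook's variable names; x_padded, y_padded, and lengths are produced by collate_fn, and loss_func and optimizer are the cross-entropy loss and Adam optimizer instantiated with the other components):

# one training step for a single mini-batch (sketch)
model.zero_grad()
y_pred = model(x_padded, lengths)        # (n_examples, n_tokens, output_size)
y_true = torch.flatten(y_padded)         # (n_examples * n_tokens,)
y_pred = y_pred.view(-1, output_size)    # (n_examples * n_tokens, output_size)
mask = y_true != pad_tag_id              # True for real tokens, False for padding
loss = loss_func(y_pred[mask], y_true[mask])
loss.backward()
optimizer.step()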
Lastly, we evaluate the performance of our POS tagger on the test set, similarly to how we have done it before:

The results indicate that our POS tagger obtains an overall accuracy of 97%, which is in line with state-of-the-art approaches! This is encouraging considering that our approach does not include the CRF layer we discussed in Chapter 10. We challenge the reader to add this layer,7 and experiment with this architecture for other sequence tasks such as named entity recognition.

7. See, for example, the LSTM-CRF implementation from the PyTorch tutorial: https://pytorch.org/tutorials/beginner/nlp/advanced_tutorial.html

11.2 Summary

In this chapter we have implemented a Spanish part-of-speech tagger using a bidirectional LSTM and a set of pretrained, static word embeddings. Through this process, we have also introduced several new PyTorch features such as the pad_sequence, pack_padded_sequence, and pad_packed_sequence functions, which allow us to work more efficiently with variable-length sequences for recurrent neural networks.

The complete code of the chap11_pos_tagging notebook discussed in this chapter is reproduced below:
#!/usr/bin/env python
# coding: utf-8

# # Part-of-speech Tagging Using RNNs

# Some initialization:

# In[4]:

import random
import torch
import numpy as np
import pandas as pd
from tqdm.notebook import tqdm

# enable tqdm in pandas
tqdm.pandas()

# set to True to use the gpu (if there is one available)
use_gpu = True

# select device
device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu')
print(f'device: {device.type}')

# random seed
seed = 1234

# set random seed
if seed is not None:
    print(f'random seed: {seed}')
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)

# Next, let's read the words and their POS tags from the CoNLLUP format:

# In[5]:

from conllu import parse_incr

def read_tags(filename):
    data = {'words': [], 'tags': []}
    with open(filename) as f:
        for sent in parse_incr(f):
            words = [tok['form'] for tok in sent]
            tags = [tok['upos'] for tok in sent]
            data['words'].append(words)
            data['tags'].append(tags)
    return pd.DataFrame(data)

# In[6]:

train_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-train.conllup')
train_df

# We now load the GloVe embeddings for Spanish, which include a representation for the unknown token:

# In[7]:

from gensim.models import KeyedVectors
glove = KeyedVectors.load_word2vec_format('glove-sbwc.i25.vec')
glove.vectors.shape

# In[8]:

# these embeddings already include <unk>
unk_tok = '<unk>'
unk_id = glove.key_to_index[unk_tok]
unk_tok, unk_id

# In[9]:

# add padding embedding
pad_tok = '<pad>'
pad_emb = np.zeros(300)
glove.add_vector(pad_tok, pad_emb)
pad_tok_id = glove.key_to_index[pad_tok]
pad_tok, pad_tok_id

# Preprocessing: lower case all words, and replace all numbers with '0':

# In[10]:

def preprocess(words):
    result = []
    for w in words:
        w = w.lower()
        if w.isdecimal():
            w = '0'
        result.append(w)
    return result

train_df['words'] = train_df['words'].progress_map(preprocess)
train_df

# Next, construct actual PyTorch `Dataset` and `DataLoader` objects for the train/dev/test partitions:

# In[11]:

def get_ids(tokens, key_to_index, unk_id=None):
    return [key_to_index.get(tok, unk_id) for tok in tokens]

def get_word_ids(tokens):
    return get_ids(tokens, glove.key_to_index, unk_id)

# add new column to the dataframe
train_df['word ids'] = train_df['words'].progress_map(get_word_ids)
train_df

# In[12]:

pad_tag = '<pad>'
index_to_tag = train_df['tags'].explode().unique().tolist() + [pad_tag]
tag_to_index = {t:i for i,t in enumerate(index_to_tag)}
pad_tag_id = tag_to_index[pad_tag]
pad_tag, pad_tag_id

# In[13]:

index_to_tag

# In[14]:

def get_tag_ids(tags):
    return get_ids(tags, tag_to_index)

train_df['tag ids'] = train_df['tags'].progress_map(get_tag_ids)
train_df

# In[15]:

dev_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-dev.conllup')
dev_df['words'] = dev_df['words'].progress_map(preprocess)
dev_df['word ids'] = dev_df['words'].progress_map(lambda x: get_ids(x, glove.key_to_index, unk_id))
dev_df['tag ids'] = dev_df['tags'].progress_map(lambda x: get_ids(x, tag_to_index))
dev_df

# In[16]:

from torch.utils.data import Dataset

class MyDataset(Dataset):
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __len__(self):
        return len(self.y)

    def __getitem__(self, index):
        x = torch.tensor(self.x[index])
        y = torch.tensor(self.y[index])
        return x, y

# `collate_fn` will be used by `DataLoader` to pad all sentences in the same batch to the same length.

# In[17]:

from torch.nn.utils.rnn import pad_sequence

def collate_fn(batch):
    # separate xs and ys
    xs, ys = zip(*batch)
    # get lengths
    lengths = [len(x) for x in xs]
    # pad sequences
    x_padded = pad_sequence(xs, batch_first=True, padding_value=pad_tok_id)
    y_padded = pad_sequence(ys, batch_first=True, padding_value=pad_tag_id)
    # return padded
    return x_padded, y_padded, lengths

# Now construct our PyTorch model:

# In[18]:

from torch import nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

class MyModel(nn.Module):
    def __init__(self, vectors, hidden_size, num_layers, bidirectional, dropout, output_size):
        super().__init__()
        # ensure vectors is a tensor
        if not torch.is_tensor(vectors):
            vectors = torch.tensor(vectors)
        # init embedding layer
        self.embedding = nn.Embedding.from_pretrained(embeddings=vectors)
        # init lstm
        self.lstm = nn.LSTM(
            input_size=vectors.shape[1],
            hidden_size=hidden_size,
            num_layers=num_layers,
            bidirectional=bidirectional,
            dropout=dropout,
            batch_first=True)
        # init dropout
        self.dropout = nn.Dropout(dropout)
        # init classifier
        self.classifier = nn.Linear(
            in_features=hidden_size * 2 if bidirectional else hidden_size,
            out_features=output_size)

    def forward(self, x_padded, x_lengths):
        # get embeddings
        output = self.embedding(x_padded)
        output = self.dropout(output)
        # pack data before lstm
        packed = pack_padded_sequence(output, x_lengths, batch_first=True, enforce_sorted=False)
        packed, _ = self.lstm(packed)
        # unpack data before rest of model
        output, _ = pad_packed_sequence(packed, batch_first=True)
        output = self.dropout(output)
        output = self.classifier(output)
        return output

# In[19]:

from torch import optim
from torch.utils.data import DataLoader
from sklearn.metrics import accuracy_score

# hyperparameters
lr = 1e-3
weight_decay = 1e-5
batch_size = 100
shuffle = True
n_epochs = 10
vectors = glove.vectors
hidden_size = 100
num_layers = 2
bidirectional = True
dropout = 0.1
output_size = len(index_to_tag)

# initialize the model, loss function, optimizer, and data-loader
model = MyModel(vectors, hidden_size, num_layers, bidirectional, dropout, output_size).to(device)
loss_func = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=lr, weight_decay=weight_decay)

train_ds = MyDataset(train_df['word ids'], train_df['tag ids'])
train_dl = DataLoader(train_ds, batch_size=batch_size, shuffle=shuffle, collate_fn=collate_fn)

dev_ds = MyDataset(dev_df['word ids'], dev_df['tag ids'])
dev_dl = DataLoader(dev_ds, batch_size=batch_size, shuffle=shuffle, collate_fn=collate_fn)

train_loss, train_acc = [], []
dev_loss, dev_acc = [], []

# We are now ready to train!

# In[20]:

# train the model
for epoch in range(n_epochs):
    losses, acc = [], []
    model.train()
    for x_padded, y_padded, lengths in tqdm(train_dl, desc=f'epoch {epoch+1} (train)'):
        # clear gradients
        model.zero_grad()
        # send batch to right device
        x_padded = x_padded.to(device)
        y_padded = y_padded.to(device)
        # predict label scores
        y_pred = model(x_padded, lengths)
        # reshape output
        y_true = torch.flatten(y_padded)
        y_pred = y_pred.view(-1, output_size)
        mask = y_true != pad_tag_id
        y_true = y_true[mask]
        y_pred = y_pred[mask]
        # compute loss
        loss = loss_func(y_pred, y_true)
        # accumulate for plotting
        gold = y_true.detach().cpu().numpy()
        pred = np.argmax(y_pred.detach().cpu().numpy(), axis=1)
        losses.append(loss.detach().cpu().item())
        acc.append(accuracy_score(gold, pred))
        # backpropagate
        loss.backward()
        # optimize model parameters
        optimizer.step()
    train_loss.append(np.mean(losses))
    train_acc.append(np.mean(acc))

    model.eval()
    with torch.no_grad():
        losses, acc = [], []
        for x_padded, y_padded, lengths in tqdm(dev_dl, desc=f'epoch {epoch+1} (dev)'):
            x_padded = x_padded.to(device)
            y_padded = y_padded.to(device)
            y_pred = model(x_padded, lengths)
            y_true = torch.flatten(y_padded)
            y_pred = y_pred.view(-1, output_size)
            mask = y_true != pad_tag_id
            y_true = y_true[mask]
            y_pred = y_pred[mask]
            loss = loss_func(y_pred, y_true)
            gold = y_true.cpu().numpy()
            pred = np.argmax(y_pred.cpu().numpy(), axis=1)
            losses.append(loss.cpu().item())
            acc.append(accuracy_score(gold, pred))
        dev_loss.append(np.mean(losses))
        dev_acc.append(np.mean(acc))

# Plot loss and accuracy on dev after each epoch:

# In[21]:

import matplotlib.pyplot as plt

x = np.arange(n_epochs) + 1
plt.plot(x, train_loss)
plt.plot(x, dev_loss)
plt.legend(['train', 'dev'])
plt.xlabel('epoch')
plt.ylabel('loss')
plt.grid(True)

# In[22]:

plt.plot(x, train_acc)
plt.plot(x, dev_acc)
plt.legend(['train', 'dev'])
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.grid(True)

# In[23]:

test_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-test.conllup')
test_df['words'] = test_df['words'].progress_map(preprocess)
test_df['word ids'] = test_df['words'].progress_map(lambda x: get_ids(x, glove.key_to_index, unk_id))
test_df['tag ids'] = test_df['tags'].progress_map(lambda x: get_ids(x, tag_to_index))
test_df

# Now let's evaluate on the test partition:

# In[24]:

from sklearn.metrics import classification_report

model.eval()

test_ds = MyDataset(test_df['word ids'], test_df['tag ids'])
test_dl = DataLoader(test_ds, batch_size=batch_size, shuffle=shuffle, collate_fn=collate_fn)

all_y_true = []
all_y_pred = []
with torch.no_grad():
    for x_padded, y_padded, lengths in tqdm(test_dl):
        x_padded = x_padded.to(device)
        y_pred = model(x_padded, lengths)
        y_true = torch.flatten(y_padded)
        y_pred = y_pred.view(-1, output_size)
        mask = y_true != pad_tag_id
        y_true = y_true[mask]
        y_pred = torch.argmax(y_pred[mask], dim=1)
        all_y_true.append(y_true.cpu().numpy())
        all_y_pred.append(y_pred.cpu().numpy())

y_true = np.concatenate(all_y_true)
y_pred = np.concatenate(all_y_pred)
target_names = index_to_tag[:-2]
print(classification_report(y_true, y_pred, target_names=target_names))

# Let's generate a confusion matrix for all POS tags in the data:

# In[25]:

from sklearn.metrics import ConfusionMatrixDisplay, confusion_matrix

cm = confusion_matrix(y_true, y_pred, normalize='true')
disp = ConfusionMatrixDisplay(
    confusion_matrix=cm,
    display_labels=target_names,
)
fig, ax = plt.subplots(figsize=(10,10))
disp.plot(
    cmap='Blues',
    values_format='.2f',
    colorbar=False,
    ax=ax,
    xticks_rotation=45,
)

# In[ ]:
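As a usage example, the snippet below shows one minimal, hypothetical way to tag a new sentence after running the notebook above; the example sentence is made up, and the snippet simply reuses preprocess(), get_word_ids(), model, index_to_tag, and device as defined earlier.

# tag a single new sentence with the trained model (hypothetical example)
sentence = ['El', 'gato', 'come', 'pescado', '.']
word_ids = torch.tensor([get_word_ids(preprocess(sentence))]).to(device)
lengths = [word_ids.shape[1]]
model.eval()
with torch.no_grad():
    scores = model(word_ids, lengths)            # shape: (1, seq_len, output_size)
    pred_ids = scores.argmax(dim=-1).squeeze(0)  # best tag id for each token
pred_tags = [index_to_tag[i] for i in pred_ids.tolist()]
print(list(zip(sentence, pred_tags)))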
4
chap11-5
chap11-5
11 Implementing Part-of-speech Tagging Using Recurrent Neural Networks The previous chapter was our first exposure to recurrent neural networks, which included intuitions for why they are useful for natural language processing, various architectures, and training algorithms. In this chapter we will put them to use, to implement a common sequence modeling task. 11.1 Part-of-speech Tagging The task we will use as an example for this chapter is part-of-speech (POS) tagging, an NLP application that, as we discussed in the previous chapter, benefits from word order. Please see Chapter 16 for a more thorough discussion of POS tagging. The entire code presented in this chapter is available in the chap11_pos_tagging Jupyter notebook. To take a break from NLP applications for English, in this chapter we use the AnCora corpus (Taulé et al., 2008), which primarily consists of newspaper texts in Spanish and Catalan with different linguistic annotations. In this chapter we work with the Spanish portion of the corpus, and the annotations for Universal POS tags (see Chapter 16 for a description of these tags). The Spanish portion of the corpus is divided into a training set with 14,305 sentences, a development set with 1,654 sentences, and a test set with 1,721 sentences. The data is distributed in the CoNLL-U format. In this format, all sentences in a dataset are stored in the same file, separated by a blank line. Each individual token in a sentence is represented in a line, which contains 10 annotation fields separated by tabs: ID, FORM, LEMMA, UPOS, XPOS, FEATS, HEAD, DEPREL, DEPS, and MISC. A comprehensive explanation of this format and the meaning of the different fields is beyond the goal of this chapter; however, the curious reader can find one at the CoNLL-U website.1 Here, we are only concerned with the fields FORM (the raw word), and UPOS (the Universal part-of-speech tag). As in previous chapters, we use pandas to preprocess the data. For parsing the CoNLL-U files, we rely on the conllu Python module.2 We implement a function called read_tags that reads the CoNLL-U file corresponding to a dataset and returns a pandas dataframe that combines all tokens in a sentence into a single row with two columns, one for the words, and one for the POS tags in the corresponding sentence:

     words                                              tags
0    [El, presidente, de, el, órgano, regulador, de...  [DET, NOUN, ADP, DET, NOUN, ADJ, ADP, DET, PRO...
1    [Afirmó, que, sigue, el, criterio, europeo, y,...  [VERB, SCONJ, VERB, DET, NOUN, ADJ, CCONJ, SCO...
2    [Durante, la, presentación, de, el, libro, ", ...  [ADP, DET, NOUN, ADP, DET, NOUN, PUNCT, DET, P...
3    [Y, todas, las, miradas, convergen, en, la, lu...  [CCONJ, DET, DET, NOUN, VERB, ADP, DET, NOUN, ...
4    [Cambiar, las, formas, parece, de, rigor, ,, p...  [VERB, DET, NOUN, VERB, ADP, NOUN, PUNCT, CCON...
...  ...                                                ...

14305 rows × 2 columns

In order to implement our POS tagging application, we need word embeddings that have been pretrained for Spanish.
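As a reference for the description above, here is a sketch of the read_tags() helper; it mirrors the implementation in the accompanying chap11_pos_tagging notebook code included in this section, and only the inline comments and the commented-out usage line are new:

import pandas as pd
from conllu import parse_incr

def read_tags(filename):
    # one row per sentence: a list of words and a parallel list of UPOS tags
    data = {'words': [], 'tags': []}
    with open(filename) as f:
        for sent in parse_incr(f):
            data['words'].append([tok['form'] for tok in sent])
            data['tags'].append([tok['upos'] for tok in sent])
    return pd.DataFrame(data)

# usage, with the AnCora training split used throughout the chapter:
# train_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-train.conllup')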
Here we use the publicly-available GloVe embeddings trained on the Spanish Billion Word Corpus3 by the Departamento de Ciencias de la Computación of Universidad de Chile.4 In contrast to the GloVe embeddings used in Chapter 9, these do include a header that stores meta data about the embeddings (i.e., size of the vocabulary and the dimension of the embedding vectors), so in this case we do not use the no_header=True argument: Another difference between these GloVe embeddings and the ones we used in Chapter 9 is that these already include an embedding for unknown words. Therefore, there is no need to introduce our own. However, we do need to include a new embedding for padding, which . 1  https://universaldependencies.org/format.html 
 . 2  https://github.com/EmilStenstrom/conllu/ 
 . 3  https://crscardellino.ar/SBWCE/ 
 . 4  https://github.com/dccuchile/spanish-word-embeddings#glove-embeddings-from-sbwc 
 164 Implementing POS Tagging Using RNNs we will use later to guarantee that all sentences in the same mini-batch have the same length. We add a vector of zeros for the padding token in the same way as before: Next, we need to preprocess our tokens to match the vocabulary of the embeddings. In particular, these embeddings were trained on words that were lowercased and on sequences of digits that were replaced with a single 0. We will apply the same modifications to our tokens: (From now on we will omit the pandas tables for readability, but, as usual, the corresponding Jupyter notebook contains all necessary information.) Next, we add a new column to the dataframe that stores the word ids corresponding to the embedding vocabulary. Note that at this point we are not padding the sequences of word ids. We will address padding later. We also need to generate the ids for the POS tags. To this end, we first need to construct a vocabulary of POS tags. Once again, we generate a list of tags using explode(), which linearizes our sequence of sequences of tags, and remove repeated tags using unique(). We also add a special tag for the padding token: We now use this POS tag vocabulary to construct a new dataframe column that stores the POS tag ids: The implementation of the Dataset class that stores our POS dataset is trivial: we simply return the lists of word and tag ids, converted to PyTorch tensors. Now it’s time to handle padding. This time we will use some features of PyTorch that we have not seen before. The DataLoader object can receive an optional argument, collate_fn, which expects a function that can be used to form a mini-batch. We will implement this function using PyTorch’s torch.nn.utils.rnn.pad_sequence() function, which, unsurprisingly, pads a group of tensors. We will take advantage of this function to pad the tensors while forming the mini-batch itself. The advantage of this strategy is that, rather than needing to pad all the examples to be the same length as the largest sentence in the corpus, we will instead pad them to the same length as the largest sentence in the minibatch. The latter strategy reduces the amount of padding necessary, which should yield more efficient code. The collate_fn() function takes a single argument, batch, which is a list of tuples. Each tuple has two elements: the list of word ids and the list of tag ids corresponding to a single example. We first unzip this list of tuples into two lists; the first list has all the word ids, and the 11.1 Part-of-speech Tagging 165 second has the tag ids. An explanation of how zip(*batch) works is provided in Appendix A. Next, we compute the lengths of each of the examples in the batch, which we will use later to inform the recurrent neural network where padding starts for each example. We then use the pad_sequence() function to add padding. This function will find the longest sequence in the batch and pad all examples accordingly using the provided padding value. This method is designed to work with PyTorch’s recurrent neural networks, which by default assume the batch index is in the second dimension. However, we will be organizing our tensors such that the batch index is always in the first dimension, which we feel to be more intuitive. For this reason, we also need to provide the batch_first=True argument to pad_sequence. Finally, we return the padded data, as well as the original lengths of the examples. Next, we implement our POS tagging model class. 
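Before turning to the model class, here is a sketch of the collate function just described; it matches the collate_fn in the accompanying notebook code, and it assumes that pad_tok_id and pad_tag_id are the padding ids added earlier to the embedding vocabulary and the tag vocabulary, respectively:

from torch.nn.utils.rnn import pad_sequence

def collate_fn(batch):
    # batch is a list of (word ids, tag ids) tensor pairs produced by the Dataset
    xs, ys = zip(*batch)
    # keep the original (unpadded) lengths; the LSTM needs them for packing
    lengths = [len(x) for x in xs]
    # pad only up to the longest sequence in this mini-batch
    # (pad_tok_id and pad_tag_id are defined earlier in the notebook)
    x_padded = pad_sequence(xs, batch_first=True, padding_value=pad_tok_id)
    y_padded = pad_sequence(ys, batch_first=True, padding_value=pad_tag_id)
    return x_padded, y_padded, lengths

Because padding is decided per mini-batch, shorter batches carry less padding than they would if every sentence were padded to the length of the longest sentence in the corpus.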
The model consists of: (a) an embedding layer for our Spanish pretrained embeddings; (b) an LSTM that can be set to be uni- or bi-directional (see Figure 10.3; the RNN is configured to be bidirectional by setting the bidirectional argument to True in the LSTM constructor), with a configurable number of layers (see Figure 10.2; the number of layers is set through the num_layers argument of the constructor) and (c) a linear layer on top of each hidden state, which is used to predict the scores for each of the POS tags for the corresponding token. The forward() method receives the padded mini-batch and the list of lengths for the (unpadded) examples in this mini-batch. The first step in the function is to retrieve the embeddings for all words referenced in this mini-batch. We then apply dropout over these embedding vectors. Next, before passing the data to the LSTM, we pack the padded data. Note that the PyTorch PackedSequence5 class, which is the output of the pack_padded_sequence() function, stores a batch of sequences that had different lengths before padding. One important advantage of using PackedSequence is that its internal data structure removes the padding tokens (which is why we had to keep track of the example lengths before padding in x_lengths), and, thus, the recurrent neural network will not back-propagate over the padded elements.6 Once we have a PackedSequence, we pass it to the LSTM.
 . 5  https://pytorch.org/docs/stable/generated/torch.nn.utils.rnn.PackedSequence.html 
 . 6  The astute reader might ask at this point, “Why did we pad the mini-batch examples in the first place, if we are removing the padding later?” The padding is needed because this allows us to store the mini-batch as a single three-dimensional tensor. 
Since the output of the LSTM is also packed, we then unpack it using pad_packed_sequence(). Next we apply dropout to this unpacked LSTM output. Finally, we pass
this to the linear layer to predict the tag scores for the tokens. Despite the small number of lines of code, the code of the forward() method, which switches between embedding vectors, padded tensors, and packed sequences, is not trivial. To clarify it, let us walk through an example. Imagine that the input to the forward() method is a batch, x_padded, with shape (10, 20), corresponding to 10 examples, each with 20 word ids (some of which are padding). Then we retrieve the embeddings. Assuming our word embeddings, i.e., the input vectors xi in Chapter 10, are of dimension 300, the new tensor will have a shape of (10, 20, 300), corresponding to 10 examples, each with 20 embeddings, each with dimension 300. After dropout the shape hasn’t changed, but some of the elements have been zeroed out. After unpacking the output of the LSTM, we will have a tensor of shape (10, 20, hidden_size), where hidden_size is the size of the LSTM hidden state, i.e., the ht vector in Equation 10.6, (hidden_size is a hyper parameter we will set later on). After passing this tensor to the linear layer, we will obtain a tensor of shape (10, 20, tag_vocab_size), where tag_vocab_size is the number of POS tags in our vocabulary. Thus, for each token in each example, we will have a distribution of POS tag scores. For each token, the assigned POS tag will be the one corresponding to the highest score. We next initialize all the hyper parameters and all the required components: The training procedure is very similar to the one implemented in Chapter 7. One notable difference is that the output of this model has three dimensions instead of two: number of examples, number of tokens, and number of POS tag scores. Thus, we have to reshape the output to pass it to the loss function. Additionally, we need to discard the padding before computing the loss. We reshape the gold tag ids using the torch.flatten() function, to transform the 2-dimensional tensor of shape (n_examples, n_tokens) to a 1-dimensional tensor with n_examples * n_tokens elements. The predictions are reshaped using the view(-1, output_size) method. By passing two arguments we are stipulating that we want two dimensions. The second dimension will be of size output_size. The -1 indicates that the first dimension should be inferred from the size of the tensor. This means that for a tensor of shape (n_examples, n_tokens, output_size) we will get a tensor of shape (n_examples * n_tokens, output_size). Then, we use a Boolean mask to discard the elements corresponding to the padding. This way, the loss 11.2 Summary 167 function will consider each actual word individually, as if the whole batch was just one big sentence. Note that treating a mini-batch as a single virtual sentence does affect the evaluation results. Lastly, we evaluate the performance of our POS tagger on the test set, similarly to how we have done it before: The results indicate that our POS tagger obtains an overall accuracy of 97%, which is in line with state-of-the-art approaches! This is encouraging considering that our approach does not include the CRF layer we discussed in Chapter 10. We challenge the reader to add this layer,7 and experiment with this architecture for other sequence tasks such as named entity recognition. 11.2 Summary In this chapter we have implemented a Spanish part-of-speech tagger using a bidirectional LSTM and a set of pretrained, static word embeddings. 
Through this process, we have also introduced several new PyTorch features such as the pad_sequence, pack_padded_sequence, and pad_packed_sequence functions, which allow us to work more efficiently with variable length sequences for recurrent neural networks. 7 See, for example, the LSTM-CRF implementation from the PyTorch tutorial: https://pytorch.org/tutorials/beginner/nlp/advanced_tutorial.html
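To make the reshape-and-mask step described in the training procedure above concrete, here is a minimal, self-contained sketch with toy tensors; the flatten, view(-1, output_size), and padding-mask logic follows the notebook code, while the specific sizes and values are invented purely for illustration:

import torch

n_examples, n_tokens, output_size = 2, 3, 5   # toy dimensions
pad_tag_id = 4                                 # assume the last tag id is padding

# gold tag ids for a padded mini-batch, and the model's output scores
y_padded = torch.tensor([[0, 1, pad_tag_id],
                         [2, 3, 1]])
y_scores = torch.randn(n_examples, n_tokens, output_size)

# flatten so that each token is one row
y_true = torch.flatten(y_padded)               # shape: (6,)
y_pred = y_scores.view(-1, output_size)        # shape: (6, 5)

# discard positions that correspond to padding
mask = y_true != pad_tag_id
y_true, y_pred = y_true[mask], y_pred[mask]    # shapes: (5,) and (5, 5)

# the loss now considers only real tokens, as if the batch were one long sentence
loss = torch.nn.functional.cross_entropy(y_pred, y_true)
print(loss.item())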
4,229
4,744
#!/usr/bin/env python # coding: utf-8 # # Part-of-speech Tagging Using RNNs # Some initialization: # In[4]: import random import torch import numpy as np import pandas as pd from tqdm.notebook import tqdm # enable tqdm in pandas tqdm.pandas() # set to True to use the gpu (if there is one available) use_gpu = True # select device device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu') print(f'device: {device.type}') # random seed seed = 1234 # set random seed if seed is not None: print(f'random seed: {seed}') random.seed(seed) np.random.seed(seed) torch.manual_seed(seed) # Next, let's read the words and their POS tags from the CoNLLUP format: # In[5]: from conllu import parse_incr def read_tags(filename): data = {'words': [], 'tags': []} with open(filename) as f: for sent in parse_incr(f): words = [tok['form'] for tok in sent] tags = [tok['upos'] for tok in sent] data['words'].append(words) data['tags'].append(tags) return pd.DataFrame(data) # In[6]: train_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-train.conllup') train_df # We now load the GloVe embeddings for Spanish, which include a representation for the unknown token: # In[7]: from gensim.models import KeyedVectors glove = KeyedVectors.load_word2vec_format('glove-sbwc.i25.vec') glove.vectors.shape # In[8]: # these embeddings already include <unk> unk_tok = '<unk>' unk_id = glove.key_to_index[unk_tok] unk_tok, unk_id # In[9]: # add padding embedding pad_tok = '<pad>' pad_emb = np.zeros(300) glove.add_vector(pad_tok, pad_emb) pad_tok_id = glove.key_to_index[pad_tok] pad_tok, pad_tok_id # Preprocessing: lower case all words, and replace all numbers with '0': # In[10]: def preprocess(words): result = [] for w in words: w = w.lower() if w.isdecimal(): w = '0' result.append(w) return result train_df['words'] = train_df['words'].progress_map(preprocess) train_df # Next, construct actual PyTorch `Dataset` and `DataLoader` objects for the train/dev/test partitions: # In[11]: def get_ids(tokens, key_to_index, unk_id=None): return [key_to_index.get(tok, unk_id) for tok in tokens] def get_word_ids(tokens): return get_ids(tokens, glove.key_to_index, unk_id) # add new column to the dataframe train_df['word ids'] = train_df['words'].progress_map(get_word_ids) train_df # In[12]: pad_tag = '<pad>' index_to_tag = train_df['tags'].explode().unique().tolist() + [pad_tag] tag_to_index = {t:i for i,t in enumerate(index_to_tag)} pad_tag_id = tag_to_index[pad_tag] pad_tag, pad_tag_id # In[13]: index_to_tag # In[14]: def get_tag_ids(tags): return get_ids(tags, tag_to_index) train_df['tag ids'] = train_df['tags'].progress_map(get_tag_ids) train_df # In[15]: dev_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-dev.conllup') dev_df['words'] = dev_df['words'].progress_map(preprocess) dev_df['word ids'] = dev_df['words'].progress_map(lambda x: get_ids(x, glove.key_to_index, unk_id)) dev_df['tag ids'] = dev_df['tags'].progress_map(lambda x: get_ids(x, tag_to_index)) dev_df # In[16]: from torch.utils.data import Dataset class MyDataset(Dataset): def __init__(self, x, y): self.x = x self.y = y def __len__(self): return len(self.y) def __getitem__(self, index): x = torch.tensor(self.x[index]) y = torch.tensor(self.y[index]) return x, y # `collate_fn` will be used by `DataLoader` to pad all sentences in the same batch to the same length. 
# In[17]: from torch.nn.utils.rnn import pad_sequence def collate_fn(batch): # separate xs and ys xs, ys = zip(*batch) # get lengths lengths = [len(x) for x in xs] # pad sequences x_padded = pad_sequence(xs, batch_first=True, padding_value=pad_tok_id) y_padded = pad_sequence(ys, batch_first=True, padding_value=pad_tag_id) # return padded return x_padded, y_padded, lengths # Now construct our PyTorch model: # In[18]: from torch import nn from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence class MyModel(nn.Module): def __init__(self, vectors, hidden_size, num_layers, bidirectional, dropout, output_size): super().__init__() # ensure vectors is a tensor if not torch.is_tensor(vectors): vectors = torch.tensor(vectors) # init embedding layer self.embedding = nn.Embedding.from_pretrained(embeddings=vectors) # init lstm self.lstm = nn.LSTM( input_size=vectors.shape[1], hidden_size=hidden_size, num_layers=num_layers, bidirectional=bidirectional, dropout=dropout, batch_first=True) # init dropout self.dropout = nn.Dropout(dropout) # init classifier self.classifier = nn.Linear( in_features=hidden_size * 2 if bidirectional else hidden_size, out_features=output_size) def forward(self, x_padded, x_lengths): # get embeddings output = self.embedding(x_padded) output = self.dropout(output) # pack data before lstm packed = pack_padded_sequence(output, x_lengths, batch_first=True, enforce_sorted=False) packed, _ = self.lstm(packed) # unpack data before rest of model output, _ = pad_packed_sequence(packed, batch_first=True) output = self.dropout(output) output = self.classifier(output) return output # In[19]: from torch import optim from torch.utils.data import DataLoader from sklearn.metrics import accuracy_score # hyperparameters lr = 1e-3 weight_decay = 1e-5 batch_size = 100 shuffle = True n_epochs = 10 vectors = glove.vectors hidden_size = 100 num_layers = 2 bidirectional = True dropout = 0.1 output_size = len(index_to_tag) # initialize the model, loss function, optimizer, and data-loader model = MyModel(vectors, hidden_size, num_layers, bidirectional, dropout, output_size).to(device) loss_func = nn.CrossEntropyLoss() optimizer = optim.Adam(model.parameters(), lr=lr, weight_decay=weight_decay) train_ds = MyDataset(train_df['word ids'], train_df['tag ids']) train_dl = DataLoader(train_ds, batch_size=batch_size, shuffle=shuffle, collate_fn=collate_fn) dev_ds = MyDataset(dev_df['word ids'], dev_df['tag ids']) dev_dl = DataLoader(dev_ds, batch_size=batch_size, shuffle=shuffle, collate_fn=collate_fn) train_loss, train_acc = [], [] dev_loss, dev_acc = [], [] # We are now ready to train! 
# In[20]: # train the model for epoch in range(n_epochs): losses, acc = [], [] model.train() for x_padded, y_padded, lengths in tqdm(train_dl, desc=f'epoch {epoch+1} (train)'): # clear gradients model.zero_grad() # send batch to right device x_padded = x_padded.to(device) y_padded = y_padded.to(device) # predict label scores y_pred = model(x_padded, lengths) # reshape output y_true = torch.flatten(y_padded) y_pred = y_pred.view(-1, output_size) mask = y_true != pad_tag_id y_true = y_true[mask] y_pred = y_pred[mask] # compute loss loss = loss_func(y_pred, y_true) # accumulate for plotting gold = y_true.detach().cpu().numpy() pred = np.argmax(y_pred.detach().cpu().numpy(), axis=1) losses.append(loss.detach().cpu().item()) acc.append(accuracy_score(gold, pred)) # backpropagate loss.backward() # optimize model parameters optimizer.step() train_loss.append(np.mean(losses)) train_acc.append(np.mean(acc)) model.eval() with torch.no_grad(): losses, acc = [], [] for x_padded, y_padded, lengths in tqdm(dev_dl, desc=f'epoch {epoch+1} (dev)'): x_padded = x_padded.to(device) y_padded = y_padded.to(device) y_pred = model(x_padded, lengths) y_true = torch.flatten(y_padded) y_pred = y_pred.view(-1, output_size) mask = y_true != pad_tag_id y_true = y_true[mask] y_pred = y_pred[mask] loss = loss_func(y_pred, y_true) gold = y_true.cpu().numpy() pred = np.argmax(y_pred.cpu().numpy(), axis=1) losses.append(loss.cpu().item()) acc.append(accuracy_score(gold, pred)) dev_loss.append(np.mean(losses)) dev_acc.append(np.mean(acc)) # Plot loss and accuracy on dev after each epoch: # In[21]: import matplotlib.pyplot as plt x = np.arange(n_epochs) + 1 plt.plot(x, train_loss) plt.plot(x, dev_loss) plt.legend(['train', 'dev']) plt.xlabel('epoch') plt.ylabel('loss') plt.grid(True) # In[22]: plt.plot(x, train_acc) plt.plot(x, dev_acc) plt.legend(['train', 'dev']) plt.xlabel('epoch') plt.ylabel('accuracy') plt.grid(True) # In[23]: test_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-test.conllup') test_df['words'] = test_df['words'].progress_map(preprocess) test_df['word ids'] = test_df['words'].progress_map(lambda x: get_ids(x, glove.key_to_index, unk_id)) test_df['tag ids'] = test_df['tags'].progress_map(lambda x: get_ids(x, tag_to_index)) test_df # Now let's evaluate on the test partition: # In[24]: from sklearn.metrics import classification_report model.eval() test_ds = MyDataset(test_df['word ids'], test_df['tag ids']) test_dl = DataLoader(test_ds, batch_size=batch_size, shuffle=shuffle, collate_fn=collate_fn) all_y_true = [] all_y_pred = [] with torch.no_grad(): for x_padded, y_padded, lengths in tqdm(test_dl): x_padded = x_padded.to(device) y_pred = model(x_padded, lengths) y_true = torch.flatten(y_padded) y_pred = y_pred.view(-1, output_size) mask = y_true != pad_tag_id y_true = y_true[mask] y_pred = torch.argmax(y_pred[mask], dim=1) all_y_true.append(y_true.cpu().numpy()) all_y_pred.append(y_pred.cpu().numpy()) y_true = np.concatenate(all_y_true) y_pred = np.concatenate(all_y_pred) target_names = index_to_tag[:-2] print(classification_report(y_true, y_pred, target_names=target_names)) # Let's generate a confusion matrix for all POS tags in the data: # In[25]: from sklearn.metrics import ConfusionMatrixDisplay, confusion_matrix cm = confusion_matrix(y_true, y_pred, normalize='true') disp = ConfusionMatrixDisplay( confusion_matrix=cm, display_labels=target_names, ) fig, ax = plt.subplots(figsize=(10,10)) disp.plot( cmap='Blues', values_format='.2f', colorbar=False, ax=ax, xticks_rotation=45, ) # In[ ]:
1,578
1,655
5
chap11-6
chap11-6
11 Implementing Part-of-speech Tagging Using Recurrent Neural Networks The previous chapter was our first exposure to recurrent neural networks, which included intuitions for why they are useful for natural language processing, various architectures, and training algorithms. In this chapter we will put them to use, to implement a common sequence modeling task. 11.1 Part-of-speech Tagging The task we will use as an example for this chapter is part-of-speech (POS) tagging, an NLP application that, as we discussed in the previous chapter, benefits from word order. Please see Chapter 16 for a more thorough discussion of POS tagging. The entire code presented in this chapter is available in the chap11_pos_tagging Jupyter notebook. To take a break from NLP applications for English, in this chapter we use the AnCora corpus (Taulé et al., 2008), which primarily consists of newspaper texts in Spanish and Catalan with different linguistic annotations. In this chapter we work with the Spanish portion of the corpus, and the annotations for Universal POS tags (see Chapter 16 for a description of these tags). The Spanish portion of the corpus is divided into a training set with 14,305 sentences, a development set with 1,654 sentences, and a test set with 1,721 sentences. The data is distributed in the CoNLL-U format. In this format, all sentences in a dataset are stored in the same file, separated by a blank line. Each individual token in a sentence is represented in a line, which contains 10 annotation fields separated by tabs: ID, FORM, LEMMA, UPOS, XPOS, FEATS, HEAD, DEPREL, DEPS, and MISC. A comprehensive explanation of this format and the meaning of the different fields is beyond the goal of this chapter; however, the curious reader can find one at the CoNLL-U website.1 Here, we are only concerned with the fields FORM (the raw word), and UPOS (the Universal part-of-speech tag). As in previous chapters, we use pandas to preprocess the data. For parsing the CoNLL-U files, we rely on the conllu Python module.2 We implement a function called read_tags that reads the CoNLL-U file corresponding to a dataset and returns a pandas dataframe that combines all tokens in a sentence into a single row with two columns, one for the words, and one for the POS tags in the corresponding sentence:

     words                                              tags
0    [El, presidente, de, el, órgano, regulador, de...  [DET, NOUN, ADP, DET, NOUN, ADJ, ADP, DET, PRO...
1    [Afirmó, que, sigue, el, criterio, europeo, y,...  [VERB, SCONJ, VERB, DET, NOUN, ADJ, CCONJ, SCO...
2    [Durante, la, presentación, de, el, libro, ", ...  [ADP, DET, NOUN, ADP, DET, NOUN, PUNCT, DET, P...
3    [Y, todas, las, miradas, convergen, en, la, lu...  [CCONJ, DET, DET, NOUN, VERB, ADP, DET, NOUN, ...
4    [Cambiar, las, formas, parece, de, rigor, ,, p...  [VERB, DET, NOUN, VERB, ADP, NOUN, PUNCT, CCON...
...  ...                                                ...

14305 rows × 2 columns

In order to implement our POS tagging application, we need word embeddings that have been pretrained for Spanish.
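As a reference for the description above, here is a sketch of the read_tags() helper; it mirrors the implementation in the accompanying chap11_pos_tagging notebook code included in this section, and only the inline comments and the commented-out usage line are new:

import pandas as pd
from conllu import parse_incr

def read_tags(filename):
    # one row per sentence: a list of words and a parallel list of UPOS tags
    data = {'words': [], 'tags': []}
    with open(filename) as f:
        for sent in parse_incr(f):
            data['words'].append([tok['form'] for tok in sent])
            data['tags'].append([tok['upos'] for tok in sent])
    return pd.DataFrame(data)

# usage, with the AnCora training split used throughout the chapter:
# train_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-train.conllup')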
Here we use the publicly-available GloVe embeddings trained on the Spanish Billion Word Corpus3 by the Departamento de Ciencias de la Computación of Universidad de Chile.4 In contrast to the GloVe embeddings used in Chapter 9, these do include a header that stores meta data about the embeddings (i.e., size of the vocabulary and the dimension of the embedding vectors), so in this case we do not use the no_header=True argument: Another difference between these GloVe embeddings and the ones we used in Chapter 9 is that these already include an embedding for unknown words. Therefore, there is no need to introduce our own. However, we do need to include a new embedding for padding, which . 1  https://universaldependencies.org/format.html 
 . 2  https://github.com/EmilStenstrom/conllu/ 
 . 3  https://crscardellino.ar/SBWCE/ 
 . 4  https://github.com/dccuchile/spanish-word-embeddings#glove-embeddings-from-sbwc 
 164 Implementing POS Tagging Using RNNs we will use later to guarantee that all sentences in the same mini-batch have the same length. We add a vector of zeros for the padding token in the same way as before: Next, we need to preprocess our tokens to match the vocabulary of the embeddings. In particular, these embeddings were trained on words that were lowercased and on sequences of digits that were replaced with a single 0. We will apply the same modifications to our tokens: (From now on we will omit the pandas tables for readability, but, as usual, the corresponding Jupyter notebook contains all necessary information.) Next, we add a new column to the dataframe that stores the word ids corresponding to the embedding vocabulary. Note that at this point we are not padding the sequences of word ids. We will address padding later. We also need to generate the ids for the POS tags. To this end, we first need to construct a vocabulary of POS tags. Once again, we generate a list of tags using explode(), which linearizes our sequence of sequences of tags, and remove repeated tags using unique(). We also add a special tag for the padding token: We now use this POS tag vocabulary to construct a new dataframe column that stores the POS tag ids: The implementation of the Dataset class that stores our POS dataset is trivial: we simply return the lists of word and tag ids, converted to PyTorch tensors. Now it’s time to handle padding. This time we will use some features of PyTorch that we have not seen before. The DataLoader object can receive an optional argument, collate_fn, which expects a function that can be used to form a mini-batch. We will implement this function using PyTorch’s torch.nn.utils.rnn.pad_sequence() function, which, unsurprisingly, pads a group of tensors. We will take advantage of this function to pad the tensors while forming the mini-batch itself. The advantage of this strategy is that, rather than needing to pad all the examples to be the same length as the largest sentence in the corpus, we will instead pad them to the same length as the largest sentence in the minibatch. The latter strategy reduces the amount of padding necessary, which should yield more efficient code. The collate_fn() function takes a single argument, batch, which is a list of tuples. Each tuple has two elements: the list of word ids and the list of tag ids corresponding to a single example. We first unzip this list of tuples into two lists; the first list has all the word ids, and the 11.1 Part-of-speech Tagging 165 second has the tag ids. An explanation of how zip(*batch) works is provided in Appendix A. Next, we compute the lengths of each of the examples in the batch, which we will use later to inform the recurrent neural network where padding starts for each example. We then use the pad_sequence() function to add padding. This function will find the longest sequence in the batch and pad all examples accordingly using the provided padding value. This method is designed to work with PyTorch’s recurrent neural networks, which by default assume the batch index is in the second dimension. However, we will be organizing our tensors such that the batch index is always in the first dimension, which we feel to be more intuitive. For this reason, we also need to provide the batch_first=True argument to pad_sequence. Finally, we return the padded data, as well as the original lengths of the examples. Next, we implement our POS tagging model class. 
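Before turning to the model class, here is a sketch of the collate function just described; it matches the collate_fn in the accompanying notebook code, and it assumes that pad_tok_id and pad_tag_id are the padding ids added earlier to the embedding vocabulary and the tag vocabulary, respectively:

from torch.nn.utils.rnn import pad_sequence

def collate_fn(batch):
    # batch is a list of (word ids, tag ids) tensor pairs produced by the Dataset
    xs, ys = zip(*batch)
    # keep the original (unpadded) lengths; the LSTM needs them for packing
    lengths = [len(x) for x in xs]
    # pad only up to the longest sequence in this mini-batch
    # (pad_tok_id and pad_tag_id are defined earlier in the notebook)
    x_padded = pad_sequence(xs, batch_first=True, padding_value=pad_tok_id)
    y_padded = pad_sequence(ys, batch_first=True, padding_value=pad_tag_id)
    return x_padded, y_padded, lengths

Because padding is decided per mini-batch, shorter batches carry less padding than they would if every sentence were padded to the length of the longest sentence in the corpus.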
The model consists of: (a) an embedding layer for our Spanish pretrained embeddings; (b) an LSTM that can be set to be uni- or bi-directional (see Figure 10.3; the RNN is configured to be bidirectional by setting the bidirectional argument to True in the LSTM constructor), with a configurable number of layers (see Figure 10.2; the number of layers is set through the num_layers argument of the constructor) and (c) a linear layer on top of each hidden state, which is used to predict the scores for each of the POS tags for the corresponding token. The forward() method receives the padded mini-batch and the list of lengths for the (unpadded) examples in this mini-batch. The first step in the function is to retrieve the embeddings for all words referenced in this mini-batch. We then apply dropout over these embedding vectors. Next, before passing the data to the LSTM, we pack the padded data. Note that the PyTorch PackedSequence5 class, which is the output of the pack_padded_sequence() function, stores a batch of sequences that had different lengths before padding. One important advantage of using PackedSequence is that its internal data structure removes the padding tokens (which is why we had to keep track of the example lengths before padding in x_lengths), and, thus, the recurrent neural network will not back-propagate over the padded elements.6 Once we have a PackedSequence, we pass it to the LSTM.
 . 5  https://pytorch.org/docs/stable/generated/torch.nn.utils.rnn.PackedSequence.html 
 . 6  The astute reader might ask at this point, “Why did we pad the mini-batch examples in the first place, if we are removing the padding later?” The padding is needed because this allows us to store the mini-batch as a single three-dimensional tensor. 
Since the output of the LSTM is also packed, we then unpack it using pad_packed_sequence(). Next we apply dropout to this unpacked LSTM output. Finally, we pass
this to the linear layer to predict the tag scores for the tokens. Despite the small number of lines of code, the code of the forward() method, which switches between embedding vectors, padded tensors, and packed sequences, is not trivial. To clarify it, let us walk through an example. Imagine that the input to the forward() method is a batch, x_padded, with shape (10, 20), corresponding to 10 examples, each with 20 word ids (some of which are padding). Then we retrieve the embeddings. Assuming our word embeddings, i.e., the input vectors xi in Chapter 10, are of dimension 300, the new tensor will have a shape of (10, 20, 300), corresponding to 10 examples, each with 20 embeddings, each with dimension 300. After dropout the shape hasn’t changed, but some of the elements have been zeroed out. After unpacking the output of the LSTM, we will have a tensor of shape (10, 20, hidden_size), where hidden_size is the size of the LSTM hidden state, i.e., the ht vector in Equation 10.6, (hidden_size is a hyper parameter we will set later on). After passing this tensor to the linear layer, we will obtain a tensor of shape (10, 20, tag_vocab_size), where tag_vocab_size is the number of POS tags in our vocabulary. Thus, for each token in each example, we will have a distribution of POS tag scores. For each token, the assigned POS tag will be the one corresponding to the highest score. We next initialize all the hyper parameters and all the required components: The training procedure is very similar to the one implemented in Chapter 7. One notable difference is that the output of this model has three dimensions instead of two: number of examples, number of tokens, and number of POS tag scores. Thus, we have to reshape the output to pass it to the loss function. Additionally, we need to discard the padding before computing the loss. We reshape the gold tag ids using the torch.flatten() function, to transform the 2-dimensional tensor of shape (n_examples, n_tokens) to a 1-dimensional tensor with n_examples * n_tokens elements. The predictions are reshaped using the view(-1, output_size) method. By passing two arguments we are stipulating that we want two dimensions. The second dimension will be of size output_size. The -1 indicates that the first dimension should be inferred from the size of the tensor. This means that for a tensor of shape (n_examples, n_tokens, output_size) we will get a tensor of shape (n_examples * n_tokens, output_size). Then, we use a Boolean mask to discard the elements corresponding to the padding. This way, the loss 11.2 Summary 167 function will consider each actual word individually, as if the whole batch was just one big sentence. Note that treating a mini-batch as a single virtual sentence does affect the evaluation results. Lastly, we evaluate the performance of our POS tagger on the test set, similarly to how we have done it before: The results indicate that our POS tagger obtains an overall accuracy of 97%, which is in line with state-of-the-art approaches! This is encouraging considering that our approach does not include the CRF layer we discussed in Chapter 10. We challenge the reader to add this layer,7 and experiment with this architecture for other sequence tasks such as named entity recognition. 11.2 Summary In this chapter we have implemented a Spanish part-of-speech tagger using a bidirectional LSTM and a set of pretrained, static word embeddings. 
Through this process, we have also introduced several new PyTorch features such as the pad_sequence, pack_padded_sequence, and pad_packed_sequence functions, which allow us to work more efficiently with variable length sequences for recurrent neural networks. 7 See, for example, the LSTM-CRF implementation from the PyTorch tutorial: https://pytorch.org/tutorials/beginner/nlp/advanced_tutorial.html
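To make the reshape-and-mask step described in the training procedure above concrete, here is a minimal, self-contained sketch with toy tensors; the flatten, view(-1, output_size), and padding-mask logic follows the notebook code, while the specific sizes and values are invented purely for illustration:

import torch

n_examples, n_tokens, output_size = 2, 3, 5   # toy dimensions
pad_tag_id = 4                                 # assume the last tag id is padding

# gold tag ids for a padded mini-batch, and the model's output scores
y_padded = torch.tensor([[0, 1, pad_tag_id],
                         [2, 3, 1]])
y_scores = torch.randn(n_examples, n_tokens, output_size)

# flatten so that each token is one row
y_true = torch.flatten(y_padded)               # shape: (6,)
y_pred = y_scores.view(-1, output_size)        # shape: (6, 5)

# discard positions that correspond to padding
mask = y_true != pad_tag_id
y_true, y_pred = y_true[mask], y_pred[mask]    # shapes: (5,) and (5, 5)

# the loss now considers only real tokens, as if the batch were one long sentence
loss = torch.nn.functional.cross_entropy(y_pred, y_true)
print(loss.item())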
9,958
10,009
#!/usr/bin/env python # coding: utf-8 # # Part-of-speech Tagging Using RNNs # Some initialization: # In[4]: import random import torch import numpy as np import pandas as pd from tqdm.notebook import tqdm # enable tqdm in pandas tqdm.pandas() # set to True to use the gpu (if there is one available) use_gpu = True # select device device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu') print(f'device: {device.type}') # random seed seed = 1234 # set random seed if seed is not None: print(f'random seed: {seed}') random.seed(seed) np.random.seed(seed) torch.manual_seed(seed) # Next, let's read the words and their POS tags from the CoNLLUP format: # In[5]: from conllu import parse_incr def read_tags(filename): data = {'words': [], 'tags': []} with open(filename) as f: for sent in parse_incr(f): words = [tok['form'] for tok in sent] tags = [tok['upos'] for tok in sent] data['words'].append(words) data['tags'].append(tags) return pd.DataFrame(data) # In[6]: train_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-train.conllup') train_df # We now load the GloVe embeddings for Spanish, which include a representation for the unknown token: # In[7]: from gensim.models import KeyedVectors glove = KeyedVectors.load_word2vec_format('glove-sbwc.i25.vec') glove.vectors.shape # In[8]: # these embeddings already include <unk> unk_tok = '<unk>' unk_id = glove.key_to_index[unk_tok] unk_tok, unk_id # In[9]: # add padding embedding pad_tok = '<pad>' pad_emb = np.zeros(300) glove.add_vector(pad_tok, pad_emb) pad_tok_id = glove.key_to_index[pad_tok] pad_tok, pad_tok_id # Preprocessing: lower case all words, and replace all numbers with '0': # In[10]: def preprocess(words): result = [] for w in words: w = w.lower() if w.isdecimal(): w = '0' result.append(w) return result train_df['words'] = train_df['words'].progress_map(preprocess) train_df # Next, construct actual PyTorch `Dataset` and `DataLoader` objects for the train/dev/test partitions: # In[11]: def get_ids(tokens, key_to_index, unk_id=None): return [key_to_index.get(tok, unk_id) for tok in tokens] def get_word_ids(tokens): return get_ids(tokens, glove.key_to_index, unk_id) # add new column to the dataframe train_df['word ids'] = train_df['words'].progress_map(get_word_ids) train_df # In[12]: pad_tag = '<pad>' index_to_tag = train_df['tags'].explode().unique().tolist() + [pad_tag] tag_to_index = {t:i for i,t in enumerate(index_to_tag)} pad_tag_id = tag_to_index[pad_tag] pad_tag, pad_tag_id # In[13]: index_to_tag # In[14]: def get_tag_ids(tags): return get_ids(tags, tag_to_index) train_df['tag ids'] = train_df['tags'].progress_map(get_tag_ids) train_df # In[15]: dev_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-dev.conllup') dev_df['words'] = dev_df['words'].progress_map(preprocess) dev_df['word ids'] = dev_df['words'].progress_map(lambda x: get_ids(x, glove.key_to_index, unk_id)) dev_df['tag ids'] = dev_df['tags'].progress_map(lambda x: get_ids(x, tag_to_index)) dev_df # In[16]: from torch.utils.data import Dataset class MyDataset(Dataset): def __init__(self, x, y): self.x = x self.y = y def __len__(self): return len(self.y) def __getitem__(self, index): x = torch.tensor(self.x[index]) y = torch.tensor(self.y[index]) return x, y # `collate_fn` will be used by `DataLoader` to pad all sentences in the same batch to the same length. 
# In[17]: from torch.nn.utils.rnn import pad_sequence def collate_fn(batch): # separate xs and ys xs, ys = zip(*batch) # get lengths lengths = [len(x) for x in xs] # pad sequences x_padded = pad_sequence(xs, batch_first=True, padding_value=pad_tok_id) y_padded = pad_sequence(ys, batch_first=True, padding_value=pad_tag_id) # return padded return x_padded, y_padded, lengths # Now construct our PyTorch model: # In[18]: from torch import nn from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence class MyModel(nn.Module): def __init__(self, vectors, hidden_size, num_layers, bidirectional, dropout, output_size): super().__init__() # ensure vectors is a tensor if not torch.is_tensor(vectors): vectors = torch.tensor(vectors) # init embedding layer self.embedding = nn.Embedding.from_pretrained(embeddings=vectors) # init lstm self.lstm = nn.LSTM( input_size=vectors.shape[1], hidden_size=hidden_size, num_layers=num_layers, bidirectional=bidirectional, dropout=dropout, batch_first=True) # init dropout self.dropout = nn.Dropout(dropout) # init classifier self.classifier = nn.Linear( in_features=hidden_size * 2 if bidirectional else hidden_size, out_features=output_size) def forward(self, x_padded, x_lengths): # get embeddings output = self.embedding(x_padded) output = self.dropout(output) # pack data before lstm packed = pack_padded_sequence(output, x_lengths, batch_first=True, enforce_sorted=False) packed, _ = self.lstm(packed) # unpack data before rest of model output, _ = pad_packed_sequence(packed, batch_first=True) output = self.dropout(output) output = self.classifier(output) return output # In[19]: from torch import optim from torch.utils.data import DataLoader from sklearn.metrics import accuracy_score # hyperparameters lr = 1e-3 weight_decay = 1e-5 batch_size = 100 shuffle = True n_epochs = 10 vectors = glove.vectors hidden_size = 100 num_layers = 2 bidirectional = True dropout = 0.1 output_size = len(index_to_tag) # initialize the model, loss function, optimizer, and data-loader model = MyModel(vectors, hidden_size, num_layers, bidirectional, dropout, output_size).to(device) loss_func = nn.CrossEntropyLoss() optimizer = optim.Adam(model.parameters(), lr=lr, weight_decay=weight_decay) train_ds = MyDataset(train_df['word ids'], train_df['tag ids']) train_dl = DataLoader(train_ds, batch_size=batch_size, shuffle=shuffle, collate_fn=collate_fn) dev_ds = MyDataset(dev_df['word ids'], dev_df['tag ids']) dev_dl = DataLoader(dev_ds, batch_size=batch_size, shuffle=shuffle, collate_fn=collate_fn) train_loss, train_acc = [], [] dev_loss, dev_acc = [], [] # We are now ready to train! 
# In[20]: # train the model for epoch in range(n_epochs): losses, acc = [], [] model.train() for x_padded, y_padded, lengths in tqdm(train_dl, desc=f'epoch {epoch+1} (train)'): # clear gradients model.zero_grad() # send batch to right device x_padded = x_padded.to(device) y_padded = y_padded.to(device) # predict label scores y_pred = model(x_padded, lengths) # reshape output y_true = torch.flatten(y_padded) y_pred = y_pred.view(-1, output_size) mask = y_true != pad_tag_id y_true = y_true[mask] y_pred = y_pred[mask] # compute loss loss = loss_func(y_pred, y_true) # accumulate for plotting gold = y_true.detach().cpu().numpy() pred = np.argmax(y_pred.detach().cpu().numpy(), axis=1) losses.append(loss.detach().cpu().item()) acc.append(accuracy_score(gold, pred)) # backpropagate loss.backward() # optimize model parameters optimizer.step() train_loss.append(np.mean(losses)) train_acc.append(np.mean(acc)) model.eval() with torch.no_grad(): losses, acc = [], [] for x_padded, y_padded, lengths in tqdm(dev_dl, desc=f'epoch {epoch+1} (dev)'): x_padded = x_padded.to(device) y_padded = y_padded.to(device) y_pred = model(x_padded, lengths) y_true = torch.flatten(y_padded) y_pred = y_pred.view(-1, output_size) mask = y_true != pad_tag_id y_true = y_true[mask] y_pred = y_pred[mask] loss = loss_func(y_pred, y_true) gold = y_true.cpu().numpy() pred = np.argmax(y_pred.cpu().numpy(), axis=1) losses.append(loss.cpu().item()) acc.append(accuracy_score(gold, pred)) dev_loss.append(np.mean(losses)) dev_acc.append(np.mean(acc)) # Plot loss and accuracy on dev after each epoch: # In[21]: import matplotlib.pyplot as plt x = np.arange(n_epochs) + 1 plt.plot(x, train_loss) plt.plot(x, dev_loss) plt.legend(['train', 'dev']) plt.xlabel('epoch') plt.ylabel('loss') plt.grid(True) # In[22]: plt.plot(x, train_acc) plt.plot(x, dev_acc) plt.legend(['train', 'dev']) plt.xlabel('epoch') plt.ylabel('accuracy') plt.grid(True) # In[23]: test_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-test.conllup') test_df['words'] = test_df['words'].progress_map(preprocess) test_df['word ids'] = test_df['words'].progress_map(lambda x: get_ids(x, glove.key_to_index, unk_id)) test_df['tag ids'] = test_df['tags'].progress_map(lambda x: get_ids(x, tag_to_index)) test_df # Now let's evaluate on the test partition: # In[24]: from sklearn.metrics import classification_report model.eval() test_ds = MyDataset(test_df['word ids'], test_df['tag ids']) test_dl = DataLoader(test_ds, batch_size=batch_size, shuffle=shuffle, collate_fn=collate_fn) all_y_true = [] all_y_pred = [] with torch.no_grad(): for x_padded, y_padded, lengths in tqdm(test_dl): x_padded = x_padded.to(device) y_pred = model(x_padded, lengths) y_true = torch.flatten(y_padded) y_pred = y_pred.view(-1, output_size) mask = y_true != pad_tag_id y_true = y_true[mask] y_pred = torch.argmax(y_pred[mask], dim=1) all_y_true.append(y_true.cpu().numpy()) all_y_pred.append(y_pred.cpu().numpy()) y_true = np.concatenate(all_y_true) y_pred = np.concatenate(all_y_pred) target_names = index_to_tag[:-2] print(classification_report(y_true, y_pred, target_names=target_names)) # Let's generate a confusion matrix for all POS tags in the data: # In[25]: from sklearn.metrics import ConfusionMatrixDisplay, confusion_matrix cm = confusion_matrix(y_true, y_pred, normalize='true') disp = ConfusionMatrixDisplay( confusion_matrix=cm, display_labels=target_names, ) fig, ax = plt.subplots(figsize=(10,10)) disp.plot( cmap='Blues', values_format='.2f', colorbar=False, ax=ax, xticks_rotation=45, ) # In[ ]:
5,517
5,555
6
chap11-7
chap11-7
11 Implementing Part-of-speech Tagging Using Recurrent Neural Networks The previous chapter was our first exposure to recurrent neural networks, which included intuitions for why they are useful for natural language processing, various architectures, and training algorithms. In this chapter we will put them to use, to implement a common sequence modeling task. 11.1 Part-of-speech Tagging The task we will use as an example for this chapter is part-of-speech (POS) tagging, an NLP application that, as we discussed in the previous chapter, benefits from word order. Please see Chapter 16 for a more thorough discussion of POS tagging. The entire code presented in this chapter is available in the chap11_pos_tagging Jupyter notebook. To take a break from NLP applications for English, in this chapter we use the AnCora corpus (Taulé et al., 2008), which primarily consists of newspaper texts in Spanish and Catalan with different linguistic annotations. In this chapter we work with the Spanish portion of the corpus, and the annotations for Universal POS tags (see Chapter 16 for a description of these tags). The Spanish portion of the corpus is divided into a training set with 14,305 sentences, a development set with 1,654 sentences, and a test set with 1,721 sentences. The data is distributed in the CoNLL-U format. In this format, all sentences in a dataset are stored in the same file, separated by a blank line. Each individual token in a sentence is represented in a line, which contains 10 annotation fields separated by tabs: ID, FORM, LEMMA, UPOS, XPOS, FEATS, HEAD, DEPREL, DEPS, and MISC. A comprehensive explanation of this format and the meaning of the different fields is beyond the goal of this chapter; however, the curious reader can find one at the CoNLL-U website.1 Here, we are only concerned with the fields FORM (the raw word), and UPOS (the Universal part-of-speech tag). As in previous chapters, we use pandas to preprocess the data. For parsing the CoNLL-U files, we rely on the conllu Python module.2 We implement a function called read_tags that reads the CoNLL-U file corresponding to a dataset and returns a pandas dataframe that combines all tokens in a sentence into a single row with two columns, one for the words, and one for the POS tags in the corresponding sentence:

     words                                              tags
0    [El, presidente, de, el, órgano, regulador, de...  [DET, NOUN, ADP, DET, NOUN, ADJ, ADP, DET, PRO...
1    [Afirmó, que, sigue, el, criterio, europeo, y,...  [VERB, SCONJ, VERB, DET, NOUN, ADJ, CCONJ, SCO...
2    [Durante, la, presentación, de, el, libro, ", ...  [ADP, DET, NOUN, ADP, DET, NOUN, PUNCT, DET, P...
3    [Y, todas, las, miradas, convergen, en, la, lu...  [CCONJ, DET, DET, NOUN, VERB, ADP, DET, NOUN, ...
4    [Cambiar, las, formas, parece, de, rigor, ,, p...  [VERB, DET, NOUN, VERB, ADP, NOUN, PUNCT, CCON...
...  ...                                                ...

14305 rows × 2 columns

In order to implement our POS tagging application, we need word embeddings that have been pretrained for Spanish.
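As a reference for the description above, here is a sketch of the read_tags() helper; it mirrors the implementation in the accompanying chap11_pos_tagging notebook code included in this section, and only the inline comments and the commented-out usage line are new:

import pandas as pd
from conllu import parse_incr

def read_tags(filename):
    # one row per sentence: a list of words and a parallel list of UPOS tags
    data = {'words': [], 'tags': []}
    with open(filename) as f:
        for sent in parse_incr(f):
            data['words'].append([tok['form'] for tok in sent])
            data['tags'].append([tok['upos'] for tok in sent])
    return pd.DataFrame(data)

# usage, with the AnCora training split used throughout the chapter:
# train_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-train.conllup')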
Here we use the publicly-available GloVe embeddings trained on the Spanish Billion Word Corpus3 by the Departamento de Ciencias de la Computación of Universidad de Chile.4 In contrast to the GloVe embeddings used in Chapter 9, these do include a header that stores meta data about the embeddings (i.e., size of the vocabulary and the dimension of the embedding vectors), so in this case we do not use the no_header=True argument: Another difference between these GloVe embeddings and the ones we used in Chapter 9 is that these already include an embedding for unknown words. Therefore, there is no need to introduce our own. However, we do need to include a new embedding for padding, which . 1  https://universaldependencies.org/format.html 
 . 2  https://github.com/EmilStenstrom/conllu/ 
 . 3  https://crscardellino.ar/SBWCE/ 
 . 4  https://github.com/dccuchile/spanish-word-embeddings#glove-embeddings-from-sbwc 
 164 Implementing POS Tagging Using RNNs we will use later to guarantee that all sentences in the same mini-batch have the same length. We add a vector of zeros for the padding token in the same way as before: Next, we need to preprocess our tokens to match the vocabulary of the embeddings. In particular, these embeddings were trained on words that were lowercased and on sequences of digits that were replaced with a single 0. We will apply the same modifications to our tokens: (From now on we will omit the pandas tables for readability, but, as usual, the corresponding Jupyter notebook contains all necessary information.) Next, we add a new column to the dataframe that stores the word ids corresponding to the embedding vocabulary. Note that at this point we are not padding the sequences of word ids. We will address padding later. We also need to generate the ids for the POS tags. To this end, we first need to construct a vocabulary of POS tags. Once again, we generate a list of tags using explode(), which linearizes our sequence of sequences of tags, and remove repeated tags using unique(). We also add a special tag for the padding token: We now use this POS tag vocabulary to construct a new dataframe column that stores the POS tag ids: The implementation of the Dataset class that stores our POS dataset is trivial: we simply return the lists of word and tag ids, converted to PyTorch tensors. Now it’s time to handle padding. This time we will use some features of PyTorch that we have not seen before. The DataLoader object can receive an optional argument, collate_fn, which expects a function that can be used to form a mini-batch. We will implement this function using PyTorch’s torch.nn.utils.rnn.pad_sequence() function, which, unsurprisingly, pads a group of tensors. We will take advantage of this function to pad the tensors while forming the mini-batch itself. The advantage of this strategy is that, rather than needing to pad all the examples to be the same length as the largest sentence in the corpus, we will instead pad them to the same length as the largest sentence in the minibatch. The latter strategy reduces the amount of padding necessary, which should yield more efficient code. The collate_fn() function takes a single argument, batch, which is a list of tuples. Each tuple has two elements: the list of word ids and the list of tag ids corresponding to a single example. We first unzip this list of tuples into two lists; the first list has all the word ids, and the 11.1 Part-of-speech Tagging 165 second has the tag ids. An explanation of how zip(*batch) works is provided in Appendix A. Next, we compute the lengths of each of the examples in the batch, which we will use later to inform the recurrent neural network where padding starts for each example. We then use the pad_sequence() function to add padding. This function will find the longest sequence in the batch and pad all examples accordingly using the provided padding value. This method is designed to work with PyTorch’s recurrent neural networks, which by default assume the batch index is in the second dimension. However, we will be organizing our tensors such that the batch index is always in the first dimension, which we feel to be more intuitive. For this reason, we also need to provide the batch_first=True argument to pad_sequence. Finally, we return the padded data, as well as the original lengths of the examples. Next, we implement our POS tagging model class. 
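Before turning to the model class, here is a sketch of the collate function just described; it matches the collate_fn in the accompanying notebook code, and it assumes that pad_tok_id and pad_tag_id are the padding ids added earlier to the embedding vocabulary and the tag vocabulary, respectively:

from torch.nn.utils.rnn import pad_sequence

def collate_fn(batch):
    # batch is a list of (word ids, tag ids) tensor pairs produced by the Dataset
    xs, ys = zip(*batch)
    # keep the original (unpadded) lengths; the LSTM needs them for packing
    lengths = [len(x) for x in xs]
    # pad only up to the longest sequence in this mini-batch
    # (pad_tok_id and pad_tag_id are defined earlier in the notebook)
    x_padded = pad_sequence(xs, batch_first=True, padding_value=pad_tok_id)
    y_padded = pad_sequence(ys, batch_first=True, padding_value=pad_tag_id)
    return x_padded, y_padded, lengths

Because padding is decided per mini-batch, shorter batches carry less padding than they would if every sentence were padded to the length of the longest sentence in the corpus.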
The model consists of: (a) an embedding layer for our Spanish pretrained embeddings; (b) an LSTM that can be set to be uni- or bi-directional (see Figure 10.3; the RNN is configured to be bidirectional by setting the bidirectional argument to True in the LSTM constructor), with a configurable number of layers (see Figure 10.2; the number of layers is set through the num_layers argument of the constructor); and (c) a linear layer on top of each hidden state, which is used to predict the scores for each of the POS tags for the corresponding token. The forward() method receives the padded mini-batch and the list of lengths for the (unpadded) examples in this mini-batch. The first step in the function is to retrieve the embeddings for all words referenced in this mini-batch. We then apply dropout over these embedding vectors. Next, before passing the data to the LSTM, we pack the padded data. Note that the PyTorch PackedSequence5 class, which is the output of the pack_padded_sequence() function, stores a batch of sequences that had different lengths before padding. One important advantage of using PackedSequence is that its internal data structure removes the padding tokens (which is why we had to keep track of the example lengths before padding in x_lengths), and, thus, the recurrent neural network will not back-propagate over the padded elements.6 Once we have a PackedSequence, we pass it to the LSTM. Since the . 5  https://pytorch.org/docs/stable/generated/torch.nn.utils.rnn.PackedSequence.html
 . 6  The astute reader might ask at this point, “Why did we pad the mini-batch examples in the first place, if we are removing the padding later?” The padding is needed because this allows us to store the mini-batch as a single three-dimensional tensor. 
 166 Implementing POS Tagging Using RNNs output of the LSTM is also packed, we then unpack it using pad_packed_sequence(). Next we apply dropout to this unpacked LSTM output. Finally, we pass
this to the linear layer to predict the tag scores for the tokens. Despite the small number of lines of code, the code of the forward() method, which switches between embedding vectors, padded tensors, and packed sequences, is not trivial. To clarify it, let us walk through an example. Imagine that the input to the forward() method is a batch, x_padded, with shape (10, 20), corresponding to 10 examples, each with 20 word ids (some of which are padding). Then we retrieve the embeddings. Assuming our word embeddings, i.e., the input vectors xi in Chapter 10, are of dimension 300, the new tensor will have a shape of (10, 20, 300), corresponding to 10 examples, each with 20 embeddings, each with dimension 300. After dropout the shape hasn’t changed, but some of the elements have been zeroed out. After unpacking the output of the LSTM, we will have a tensor of shape (10, 20, hidden_size), where hidden_size is the size of the LSTM hidden state, i.e., the ht vector in Equation 10.6, (hidden_size is a hyper parameter we will set later on). After passing this tensor to the linear layer, we will obtain a tensor of shape (10, 20, tag_vocab_size), where tag_vocab_size is the number of POS tags in our vocabulary. Thus, for each token in each example, we will have a distribution of POS tag scores. For each token, the assigned POS tag will be the one corresponding to the highest score. We next initialize all the hyper parameters and all the required components: The training procedure is very similar to the one implemented in Chapter 7. One notable difference is that the output of this model has three dimensions instead of two: number of examples, number of tokens, and number of POS tag scores. Thus, we have to reshape the output to pass it to the loss function. Additionally, we need to discard the padding before computing the loss. We reshape the gold tag ids using the torch.flatten() function, to transform the 2-dimensional tensor of shape (n_examples, n_tokens) to a 1-dimensional tensor with n_examples * n_tokens elements. The predictions are reshaped using the view(-1, output_size) method. By passing two arguments we are stipulating that we want two dimensions. The second dimension will be of size output_size. The -1 indicates that the first dimension should be inferred from the size of the tensor. This means that for a tensor of shape (n_examples, n_tokens, output_size) we will get a tensor of shape (n_examples * n_tokens, output_size). Then, we use a Boolean mask to discard the elements corresponding to the padding. This way, the loss 11.2 Summary 167 function will consider each actual word individually, as if the whole batch was just one big sentence. Note that treating a mini-batch as a single virtual sentence does affect the evaluation results. Lastly, we evaluate the performance of our POS tagger on the test set, similarly to how we have done it before: The results indicate that our POS tagger obtains an overall accuracy of 97%, which is in line with state-of-the-art approaches! This is encouraging considering that our approach does not include the CRF layer we discussed in Chapter 10. We challenge the reader to add this layer,7 and experiment with this architecture for other sequence tasks such as named entity recognition. 11.2 Summary In this chapter we have implemented a Spanish part-of-speech tagger using a bidirectional LSTM and a set of pretrained, static word embeddings. 
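To recap the loss computation described earlier in this section, the reshaping and masking steps can be illustrated with a small stand-alone sketch; the tensor sizes and the padding id below are invented for illustration and do not come from the actual corpus:

import torch
from torch import nn

n_examples, n_tokens, output_size = 10, 20, 18
pad_tag_id = 17

scores = torch.randn(n_examples, n_tokens, output_size)        # model output: one score per tag per token
gold = torch.randint(0, output_size, (n_examples, n_tokens))   # gold tag ids, some of them padding

y_true = torch.flatten(gold)              # shape: (200,)
y_pred = scores.view(-1, output_size)     # shape: (200, 18)

# discard positions that correspond to padding before computing the loss
mask = y_true != pad_tag_id
loss = nn.CrossEntropyLoss()(y_pred[mask], y_true[mask])
print(loss.item())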
Through this process, we have also introduced several new PyTorch features such as the pad_sequence, pack_padded_sequence, and pad_packed_sequence functions, which allow us to work more efficiently with variable length sequences for recurrent neural networks. 7 See, for example, the LSTM-CRF implementation from the PyTorch tutorial: https://pytorch.org/tutorials/beginner/nlp/advanced_tutorial.html
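As a complement to the summary, the following minimal sketch shows the pack_padded_sequence / pad_packed_sequence round trip on its own; the embedding dimension, hidden size, and lengths are arbitrary example values:

import torch
from torch import nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

emb = torch.randn(3, 4, 300)   # a padded mini-batch: 3 examples, 4 tokens each, 300-dimensional embeddings
lengths = [3, 2, 4]            # true lengths before padding

lstm = nn.LSTM(input_size=300, hidden_size=100, bidirectional=True, batch_first=True)

packed = pack_padded_sequence(emb, lengths, batch_first=True, enforce_sorted=False)
packed_output, _ = lstm(packed)
output, output_lengths = pad_packed_sequence(packed_output, batch_first=True)

print(output.shape)    # torch.Size([3, 4, 200]); 200 = 2 * hidden_size because the LSTM is bidirectional
print(output_lengths)  # tensor([3, 2, 4])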
8,461
8,594
#!/usr/bin/env python # coding: utf-8 # # Part-of-speech Tagging Using RNNs # Some initialization: # In[4]: import random import torch import numpy as np import pandas as pd from tqdm.notebook import tqdm # enable tqdm in pandas tqdm.pandas() # set to True to use the gpu (if there is one available) use_gpu = True # select device device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu') print(f'device: {device.type}') # random seed seed = 1234 # set random seed if seed is not None: print(f'random seed: {seed}') random.seed(seed) np.random.seed(seed) torch.manual_seed(seed) # Next, let's read the words and their POS tags from the CoNLLUP format: # In[5]: from conllu import parse_incr def read_tags(filename): data = {'words': [], 'tags': []} with open(filename) as f: for sent in parse_incr(f): words = [tok['form'] for tok in sent] tags = [tok['upos'] for tok in sent] data['words'].append(words) data['tags'].append(tags) return pd.DataFrame(data) # In[6]: train_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-train.conllup') train_df # We now load the GloVe embeddings for Spanish, which include a representation for the unknown token: # In[7]: from gensim.models import KeyedVectors glove = KeyedVectors.load_word2vec_format('glove-sbwc.i25.vec') glove.vectors.shape # In[8]: # these embeddings already include <unk> unk_tok = '<unk>' unk_id = glove.key_to_index[unk_tok] unk_tok, unk_id # In[9]: # add padding embedding pad_tok = '<pad>' pad_emb = np.zeros(300) glove.add_vector(pad_tok, pad_emb) pad_tok_id = glove.key_to_index[pad_tok] pad_tok, pad_tok_id # Preprocessing: lower case all words, and replace all numbers with '0': # In[10]: def preprocess(words): result = [] for w in words: w = w.lower() if w.isdecimal(): w = '0' result.append(w) return result train_df['words'] = train_df['words'].progress_map(preprocess) train_df # Next, construct actual PyTorch `Dataset` and `DataLoader` objects for the train/dev/test partitions: # In[11]: def get_ids(tokens, key_to_index, unk_id=None): return [key_to_index.get(tok, unk_id) for tok in tokens] def get_word_ids(tokens): return get_ids(tokens, glove.key_to_index, unk_id) # add new column to the dataframe train_df['word ids'] = train_df['words'].progress_map(get_word_ids) train_df # In[12]: pad_tag = '<pad>' index_to_tag = train_df['tags'].explode().unique().tolist() + [pad_tag] tag_to_index = {t:i for i,t in enumerate(index_to_tag)} pad_tag_id = tag_to_index[pad_tag] pad_tag, pad_tag_id # In[13]: index_to_tag # In[14]: def get_tag_ids(tags): return get_ids(tags, tag_to_index) train_df['tag ids'] = train_df['tags'].progress_map(get_tag_ids) train_df # In[15]: dev_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-dev.conllup') dev_df['words'] = dev_df['words'].progress_map(preprocess) dev_df['word ids'] = dev_df['words'].progress_map(lambda x: get_ids(x, glove.key_to_index, unk_id)) dev_df['tag ids'] = dev_df['tags'].progress_map(lambda x: get_ids(x, tag_to_index)) dev_df # In[16]: from torch.utils.data import Dataset class MyDataset(Dataset): def __init__(self, x, y): self.x = x self.y = y def __len__(self): return len(self.y) def __getitem__(self, index): x = torch.tensor(self.x[index]) y = torch.tensor(self.y[index]) return x, y # `collate_fn` will be used by `DataLoader` to pad all sentences in the same batch to the same length. 
# In[17]: from torch.nn.utils.rnn import pad_sequence def collate_fn(batch): # separate xs and ys xs, ys = zip(*batch) # get lengths lengths = [len(x) for x in xs] # pad sequences x_padded = pad_sequence(xs, batch_first=True, padding_value=pad_tok_id) y_padded = pad_sequence(ys, batch_first=True, padding_value=pad_tag_id) # return padded return x_padded, y_padded, lengths # Now construct our PyTorch model: # In[18]: from torch import nn from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence class MyModel(nn.Module): def __init__(self, vectors, hidden_size, num_layers, bidirectional, dropout, output_size): super().__init__() # ensure vectors is a tensor if not torch.is_tensor(vectors): vectors = torch.tensor(vectors) # init embedding layer self.embedding = nn.Embedding.from_pretrained(embeddings=vectors) # init lstm self.lstm = nn.LSTM( input_size=vectors.shape[1], hidden_size=hidden_size, num_layers=num_layers, bidirectional=bidirectional, dropout=dropout, batch_first=True) # init dropout self.dropout = nn.Dropout(dropout) # init classifier self.classifier = nn.Linear( in_features=hidden_size * 2 if bidirectional else hidden_size, out_features=output_size) def forward(self, x_padded, x_lengths): # get embeddings output = self.embedding(x_padded) output = self.dropout(output) # pack data before lstm packed = pack_padded_sequence(output, x_lengths, batch_first=True, enforce_sorted=False) packed, _ = self.lstm(packed) # unpack data before rest of model output, _ = pad_packed_sequence(packed, batch_first=True) output = self.dropout(output) output = self.classifier(output) return output # In[19]: from torch import optim from torch.utils.data import DataLoader from sklearn.metrics import accuracy_score # hyperparameters lr = 1e-3 weight_decay = 1e-5 batch_size = 100 shuffle = True n_epochs = 10 vectors = glove.vectors hidden_size = 100 num_layers = 2 bidirectional = True dropout = 0.1 output_size = len(index_to_tag) # initialize the model, loss function, optimizer, and data-loader model = MyModel(vectors, hidden_size, num_layers, bidirectional, dropout, output_size).to(device) loss_func = nn.CrossEntropyLoss() optimizer = optim.Adam(model.parameters(), lr=lr, weight_decay=weight_decay) train_ds = MyDataset(train_df['word ids'], train_df['tag ids']) train_dl = DataLoader(train_ds, batch_size=batch_size, shuffle=shuffle, collate_fn=collate_fn) dev_ds = MyDataset(dev_df['word ids'], dev_df['tag ids']) dev_dl = DataLoader(dev_ds, batch_size=batch_size, shuffle=shuffle, collate_fn=collate_fn) train_loss, train_acc = [], [] dev_loss, dev_acc = [], [] # We are now ready to train! 
# In[20]: # train the model for epoch in range(n_epochs): losses, acc = [], [] model.train() for x_padded, y_padded, lengths in tqdm(train_dl, desc=f'epoch {epoch+1} (train)'): # clear gradients model.zero_grad() # send batch to right device x_padded = x_padded.to(device) y_padded = y_padded.to(device) # predict label scores y_pred = model(x_padded, lengths) # reshape output y_true = torch.flatten(y_padded) y_pred = y_pred.view(-1, output_size) mask = y_true != pad_tag_id y_true = y_true[mask] y_pred = y_pred[mask] # compute loss loss = loss_func(y_pred, y_true) # accumulate for plotting gold = y_true.detach().cpu().numpy() pred = np.argmax(y_pred.detach().cpu().numpy(), axis=1) losses.append(loss.detach().cpu().item()) acc.append(accuracy_score(gold, pred)) # backpropagate loss.backward() # optimize model parameters optimizer.step() train_loss.append(np.mean(losses)) train_acc.append(np.mean(acc)) model.eval() with torch.no_grad(): losses, acc = [], [] for x_padded, y_padded, lengths in tqdm(dev_dl, desc=f'epoch {epoch+1} (dev)'): x_padded = x_padded.to(device) y_padded = y_padded.to(device) y_pred = model(x_padded, lengths) y_true = torch.flatten(y_padded) y_pred = y_pred.view(-1, output_size) mask = y_true != pad_tag_id y_true = y_true[mask] y_pred = y_pred[mask] loss = loss_func(y_pred, y_true) gold = y_true.cpu().numpy() pred = np.argmax(y_pred.cpu().numpy(), axis=1) losses.append(loss.cpu().item()) acc.append(accuracy_score(gold, pred)) dev_loss.append(np.mean(losses)) dev_acc.append(np.mean(acc)) # Plot loss and accuracy on dev after each epoch: # In[21]: import matplotlib.pyplot as plt x = np.arange(n_epochs) + 1 plt.plot(x, train_loss) plt.plot(x, dev_loss) plt.legend(['train', 'dev']) plt.xlabel('epoch') plt.ylabel('loss') plt.grid(True) # In[22]: plt.plot(x, train_acc) plt.plot(x, dev_acc) plt.legend(['train', 'dev']) plt.xlabel('epoch') plt.ylabel('accuracy') plt.grid(True) # In[23]: test_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-test.conllup') test_df['words'] = test_df['words'].progress_map(preprocess) test_df['word ids'] = test_df['words'].progress_map(lambda x: get_ids(x, glove.key_to_index, unk_id)) test_df['tag ids'] = test_df['tags'].progress_map(lambda x: get_ids(x, tag_to_index)) test_df # Now let's evaluate on the test partition: # In[24]: from sklearn.metrics import classification_report model.eval() test_ds = MyDataset(test_df['word ids'], test_df['tag ids']) test_dl = DataLoader(test_ds, batch_size=batch_size, shuffle=shuffle, collate_fn=collate_fn) all_y_true = [] all_y_pred = [] with torch.no_grad(): for x_padded, y_padded, lengths in tqdm(test_dl): x_padded = x_padded.to(device) y_pred = model(x_padded, lengths) y_true = torch.flatten(y_padded) y_pred = y_pred.view(-1, output_size) mask = y_true != pad_tag_id y_true = y_true[mask] y_pred = torch.argmax(y_pred[mask], dim=1) all_y_true.append(y_true.cpu().numpy()) all_y_pred.append(y_pred.cpu().numpy()) y_true = np.concatenate(all_y_true) y_pred = np.concatenate(all_y_pred) target_names = index_to_tag[:-2] print(classification_report(y_true, y_pred, target_names=target_names)) # Let's generate a confusion matrix for all POS tags in the data: # In[25]: from sklearn.metrics import ConfusionMatrixDisplay, confusion_matrix cm = confusion_matrix(y_true, y_pred, normalize='true') disp = ConfusionMatrixDisplay( confusion_matrix=cm, display_labels=target_names, ) fig, ax = plt.subplots(figsize=(10,10)) disp.plot( cmap='Blues', values_format='.2f', colorbar=False, ax=ax, xticks_rotation=45, ) # In[ ]:
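As a small optional diagnostic that could be run after the notebook cells above (it only uses objects the notebook already defines, such as train_df and unk_id), one can check how many training tokens fall back to the unknown embedding after preprocessing:

# fraction of training tokens mapped to the <unk> embedding
total_tokens = train_df['word ids'].map(len).sum()
unk_tokens = train_df['word ids'].map(lambda ids: sum(i == unk_id for i in ids)).sum()
print(f'unknown tokens: {unk_tokens / total_tokens:.2%}')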
4,933
5,083
7
chap11-8
chap11-8
11 Implementing Part-of-speech Tagging Using Recurrent Neural Networks The previous chapter was our first exposure to recurrent neural networks, which included intuitions for why they are useful for natural language processing, various architectures, and training algorithms. In this chapter we will put them to use, to implement a common sequence modeling task. 11.1 Part-of-speech Tagging The task we will use as an example for this chapter is part-of-speech (POS) tagging, an NLP application that, as we discussed in the previous chapter, benefits from word order. Please see Chapter 16 for a more thorough discussion of POS tagging. The entire code presented in this chapter is available in the chap11_pos_tagging Jupyter notebook. To take a break from NLP applications for English, in this chapter we use the AnCora corpus (Taulé et al., 2008), which primarily consists of newspaper texts in Spanish and Catalan with different linguistic annotations. In this chapter we work with the Spanish portion of the corpus, and the annotations for Universal POS tags (see Chapter 16 for a description of these tags). The Spanish portion of the corpus is divided into a training set with 14,305 sentences, a development set with 1,654 sentences, and a test set with 1,721 sentences. The data is distributed in the CoNLL-U format. In this format, all sentences in a dataset are stored in the same file, separated by a blank line. Each individual token in a sentence is represented in a line, which contains 10 annotation fields separated by tabs: ID, FORM, LEMMA, UPOS, XPOS, FEATS, HEAD, DEPREL, DEPS, and MISC. A comprehensive explanation of this format and the meaning of the different fields is beyond the goal of this chapter; however, the curious reader can find one at the CoNLL-U website.1 Here, we are only concerned with the fields FORM (the raw word), and UPOS (the Universal part-of-speech tag). As in previous chapters, we use pandas to preprocess the data. For parsing the CoNLL-U files, we rely on the conllu Python module.2 We implement a function called read_tags that reads the CoNLL-U file corresponding to a dataset and returns a pandas dataframe that combines all tokens in a sentence into a single row with two columns, one for the words, and one for the POS tags in the corresponding sentence: [pandas dataframe output, truncated: 14,305 rows × 2 columns (words, tags); for example, the first row holds words = [El, presidente, de, el, órgano, regulador, de, ...] and tags = [DET, NOUN, ADP, DET, NOUN, ADJ, ADP, DET, PRO, ...].] In order to implement our POS tagging application, we need word embeddings that have been pretrained for Spanish.
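Before moving on to the embeddings, here is a compact sketch of the read_tags function described above; it follows the accompanying notebook, with one dataframe row per sentence holding the parallel lists of surface forms and UPOS tags, and the file path is the one used in this chapter:

import pandas as pd
from conllu import parse_incr

def read_tags(filename):
    data = {'words': [], 'tags': []}
    with open(filename) as f:
        for sent in parse_incr(f):
            data['words'].append([tok['form'] for tok in sent])
            data['tags'].append([tok['upos'] for tok in sent])
    return pd.DataFrame(data)

train_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-train.conllup')
print(train_df.shape)  # (14305, 2)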
Here we use the publicly-available GloVe embeddings trained on the Spanish Billion Word Corpus3 by the Departamento de Ciencias de la Computación of Universidad de Chile.4 In contrast to the GloVe embeddings used in Chapter 9, these do include a header that stores meta data about the embeddings (i.e., size of the vocabulary and the dimension of the embedding vectors), so in this case we do not use the no_header=True argument: Another difference between these GloVe embeddings and the ones we used in Chapter 9 is that these already include an embedding for unknown words. Therefore, there is no need to introduce our own. However, we do need to include a new embedding for padding, which . 1  https://universaldependencies.org/format.html 
 . 2  https://github.com/EmilStenstrom/conllu/ 
 . 3  https://crscardellino.ar/SBWCE/ 
. 4  https://github.com/dccuchile/spanishwordembeddings#gloveembeddingsfromsbwc
 164 Implementing POS Tagging Using RNNs we will use later to guarantee that all sentences in the same mini-batch have the same length. We add a vector of zeros for the padding token in the same way as before: Next, we need to preprocess our tokens to match the vocabulary of the embeddings. In particular, these embeddings were trained on words that were lowercased and on sequences of digits that were replaced with a single 0. We will apply the same modifications to our tokens: (From now on we will omit the pandas tables for readability, but, as usual, the corresponding Jupyter notebook contains all necessary information.) Next, we add a new column to the dataframe that stores the word ids corresponding to the embedding vocabulary. Note that at this point we are not padding the sequences of word ids. We will address padding later. We also need to generate the ids for the POS tags. To this end, we first need to construct a vocabulary of POS tags. Once again, we generate a list of tags using explode(), which linearizes our sequence of sequences of tags, and remove repeated tags using unique(). We also add a special tag for the padding token: We now use this POS tag vocabulary to construct a new dataframe column that stores the POS tag ids: The implementation of the Dataset class that stores our POS dataset is trivial: we simply return the lists of word and tag ids, converted to PyTorch tensors. Now it’s time to handle padding. This time we will use some features of PyTorch that we have not seen before. The DataLoader object can receive an optional argument, collate_fn, which expects a function that can be used to form a mini-batch. We will implement this function using PyTorch’s torch.nn.utils.rnn.pad_sequence() function, which, unsurprisingly, pads a group of tensors. We will take advantage of this function to pad the tensors while forming the mini-batch itself. The advantage of this strategy is that, rather than needing to pad all the examples to be the same length as the largest sentence in the corpus, we will instead pad them to the same length as the largest sentence in the minibatch. The latter strategy reduces the amount of padding necessary, which should yield more efficient code. The collate_fn() function takes a single argument, batch, which is a list of tuples. Each tuple has two elements: the list of word ids and the list of tag ids corresponding to a single example. We first unzip this list of tuples into two lists; the first list has all the word ids, and the 11.1 Part-of-speech Tagging 165 second has the tag ids. An explanation of how zip(*batch) works is provided in Appendix A. Next, we compute the lengths of each of the examples in the batch, which we will use later to inform the recurrent neural network where padding starts for each example. We then use the pad_sequence() function to add padding. This function will find the longest sequence in the batch and pad all examples accordingly using the provided padding value. This method is designed to work with PyTorch’s recurrent neural networks, which by default assume the batch index is in the second dimension. However, we will be organizing our tensors such that the batch index is always in the first dimension, which we feel to be more intuitive. For this reason, we also need to provide the batch_first=True argument to pad_sequence. Finally, we return the padded data, as well as the original lengths of the examples. Next, we implement our POS tagging model class. 
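Before looking at the model, here is a tiny self-contained example of the explode()/unique() pattern described above for building the tag vocabulary, using a toy dataframe instead of the real corpus:

import pandas as pd

df = pd.DataFrame({'tags': [['DET', 'NOUN', 'VERB'], ['NOUN', 'PUNCT']]})

# explode() linearizes the lists of tags; unique() removes repeated tags
index_to_tag = df['tags'].explode().unique().tolist() + ['<pad>']
tag_to_index = {t: i for i, t in enumerate(index_to_tag)}

print(index_to_tag)           # ['DET', 'NOUN', 'VERB', 'PUNCT', '<pad>']
print(tag_to_index['<pad>'])  # 4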
The model consists of: (a) an embedding layer for our Spanish pretrained embeddings; (b) an LSTM that can be set to be uni- or bi-directional (see Figure 10.3; the RNN is configured to be bidirectional by setting the bidirectional argument to True in the LSTM constructor), with a configurable number of layers (see Figure 10.2; the number of layers is set through the num_layers argument of the constructor); and (c) a linear layer on top of each hidden state, which is used to predict the scores for each of the POS tags for the corresponding token. The forward() method receives the padded mini-batch and the list of lengths for the (unpadded) examples in this mini-batch. The first step in the function is to retrieve the embeddings for all words referenced in this mini-batch. We then apply dropout over these embedding vectors. Next, before passing the data to the LSTM, we pack the padded data. Note that the PyTorch PackedSequence5 class, which is the output of the pack_padded_sequence() function, stores a batch of sequences that had different lengths before padding. One important advantage of using PackedSequence is that its internal data structure removes the padding tokens (which is why we had to keep track of the example lengths before padding in x_lengths), and, thus, the recurrent neural network will not back-propagate over the padded elements.6 Once we have a PackedSequence, we pass it to the LSTM. Since the . 5  https://pytorch.org/docs/stable/generated/torch.nn.utils.rnn.PackedSequence.html
 . 6  The astute reader might ask at this point, “Why did we pad the mini-batch examples in the first place, if we are removing the padding later?” The padding is needed because this allows us to store the mini-batch as a single three-dimensional tensor. 
 166 Implementing POS Tagging Using RNNs output of the LSTM is also packed, we then unpack it using pad_packed_sequence(). Next we apply dropout to this unpacked LSTM output. Finally, we pass
this to the linear layer to predict the tag scores for the tokens. Despite the small number of lines of code, the code of the forward() method, which switches between embedding vectors, padded tensors, and packed sequences, is not trivial. To clarify it, let us walk through an example. Imagine that the input to the forward() method is a batch, x_padded, with shape (10, 20), corresponding to 10 examples, each with 20 word ids (some of which are padding). Then we retrieve the embeddings. Assuming our word embeddings, i.e., the input vectors xi in Chapter 10, are of dimension 300, the new tensor will have a shape of (10, 20, 300), corresponding to 10 examples, each with 20 embeddings, each with dimension 300. After dropout the shape hasn’t changed, but some of the elements have been zeroed out. After unpacking the output of the LSTM, we will have a tensor of shape (10, 20, hidden_size), where hidden_size is the size of the LSTM hidden state, i.e., the ht vector in Equation 10.6, (hidden_size is a hyper parameter we will set later on). After passing this tensor to the linear layer, we will obtain a tensor of shape (10, 20, tag_vocab_size), where tag_vocab_size is the number of POS tags in our vocabulary. Thus, for each token in each example, we will have a distribution of POS tag scores. For each token, the assigned POS tag will be the one corresponding to the highest score. We next initialize all the hyper parameters and all the required components: The training procedure is very similar to the one implemented in Chapter 7. One notable difference is that the output of this model has three dimensions instead of two: number of examples, number of tokens, and number of POS tag scores. Thus, we have to reshape the output to pass it to the loss function. Additionally, we need to discard the padding before computing the loss. We reshape the gold tag ids using the torch.flatten() function, to transform the 2-dimensional tensor of shape (n_examples, n_tokens) to a 1-dimensional tensor with n_examples * n_tokens elements. The predictions are reshaped using the view(-1, output_size) method. By passing two arguments we are stipulating that we want two dimensions. The second dimension will be of size output_size. The -1 indicates that the first dimension should be inferred from the size of the tensor. This means that for a tensor of shape (n_examples, n_tokens, output_size) we will get a tensor of shape (n_examples * n_tokens, output_size). Then, we use a Boolean mask to discard the elements corresponding to the padding. This way, the loss 11.2 Summary 167 function will consider each actual word individually, as if the whole batch was just one big sentence. Note that treating a mini-batch as a single virtual sentence does affect the evaluation results. Lastly, we evaluate the performance of our POS tagger on the test set, similarly to how we have done it before: The results indicate that our POS tagger obtains an overall accuracy of 97%, which is in line with state-of-the-art approaches! This is encouraging considering that our approach does not include the CRF layer we discussed in Chapter 10. We challenge the reader to add this layer,7 and experiment with this architecture for other sequence tasks such as named entity recognition. 11.2 Summary In this chapter we have implemented a Spanish part-of-speech tagger using a bidirectional LSTM and a set of pretrained, static word embeddings. 
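As a brief usage illustration before closing, the sketch below tags a single new sentence with the trained model; it assumes that glove, unk_id, preprocess, model, device, and index_to_tag have already been created by the notebook code, and the Spanish example sentence is arbitrary:

import torch

sentence = ['El', 'gato', 'come', 'pescado', '.']
tokens = preprocess(sentence)
word_ids = torch.tensor([[glove.key_to_index.get(t, unk_id) for t in tokens]])

model.eval()
with torch.no_grad():
    scores = model(word_ids.to(device), [len(tokens)])

# for each token, pick the tag with the highest score
pred_ids = scores.argmax(dim=-1).squeeze(0).tolist()
print(list(zip(sentence, [index_to_tag[i] for i in pred_ids])))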
Through this process, we have also introduced several new PyTorch features such as the pad_sequence, pack_padded_sequence, and pad_packed_sequence functions, which allow us to work more efficiently with variable length sequences for recurrent neural networks. 7 See, for example, the LSTM-CRF implementation from the PyTorch tutorial: https://pytorch.org/tutorials/beginner/nlp/advanced_tutorial.html
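Finally, as a compact restatement of the evaluation logic from Section 11.1, the sketch below computes token-level accuracy from gold and predicted tag ids while ignoring padding positions; all values are invented toy data:

import torch
from sklearn.metrics import accuracy_score

pad_tag_id = 17
y_gold = torch.tensor([[3, 5, 2, 17, 17], [1, 4, 17, 17, 17]])  # padded gold tag ids
y_hat = torch.tensor([[3, 5, 0, 9, 9], [1, 4, 2, 2, 2]])        # padded predictions

mask = y_gold != pad_tag_id
acc = accuracy_score(y_gold[mask].numpy(), y_hat[mask].numpy())
print(f'accuracy: {acc:.2f}')  # 0.80 on this toy example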
6,775
6,968
3,702
3,725
8
chap11-9
chap11-9
3,599
3,769
1,332
1,396
9
chap11-10
chap11-10
11 Implementing Part-of-speech Tagging Using Recurrent Neural Networks The previous chapter was our first exposure to recurrent neural networks, which included intuitions for why they are useful for natural language processing, various architectures, and training algorithms. In this chapter we will put them to use, to implement a common sequence modeling task. 11.1 Part-of-speech Tagging The task we will use as an example for this chapter is part-of-speech (POS) tagging, an NLP application that, as we discussed in the previous chapter, benefits from word order. Please see Chapter 16 for a more thorough discussion of POS tagging. The entire code presented in this chapter is available in the chap11_pos_tagging Jupyter notebook. To take a break from NLP applications for English, in this chapter we use the AnCora corpus (Taulé et al., 2008), which primarily consists of newspaper texts in Spanish and Catalan with different linguistic annotations. In this chapter we work with the Spanish portion of the corpus, and the annotations for Universal POS tags (see Chapter 16 for a description of these tags). The Spanish portion of the corpus is divided into a training set with 14,305 sentences, a development set with 1,654 sentences, and a test set with 1,721 sentences. The data is distributed in the CoNLL-U format. In this format, all sentences in a dataset are stored in the same file, separated by a blank line. Each individual token in a sentence is represented in a line, which contains 10 annotation fields separated by tabs: ID, FORM, LEMMA, UPOS, XPOS, FEATS, HEAD, DEPREL, DEPS, and MISC. A comprehensive explanation of this format and the meaning of the different fields is beyond the goal of this chapter; however, the curious reader can find one at the CoNLL-U website.1 Here, we are only concerned with the fields FORM (the raw word), and UPOS (the Universal part-of-speech tag). As in previous chapters, we use pandas to preprocess the data. For parsing the CoNLL-U files, we rely on the conllu Python module.2 We implement a function called read_tags that reads the CoNLL-U file corresponding to a dataset and returns a pandas dataframe that combines all tokens in a sentence into a single row with two columns, one for the words, and one for the POS tags in the corresponding sentence. For example, the first row of the resulting dataframe pairs words = [El, presidente, de, el, órgano, regulador, de, ...] with tags = [DET, NOUN, ADP, DET, NOUN, ADJ, ADP, ...]; the full dataframe has 14305 rows × 2 columns. In order to implement our POS tagging application, we need word embeddings that have been pretrained for Spanish.
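The read_tags() function described above is only a few lines long. A minimal sketch, mirroring the notebook code included in this chunk (it assumes the conllu and pandas packages are installed and that the AnCora files sit under data/UD_Spanish-AnCora/):

import pandas as pd
from conllu import parse_incr

def read_tags(filename):
    # one row per sentence: the list of word forms and the list of UPOS tags
    data = {'words': [], 'tags': []}
    with open(filename) as f:
        for sent in parse_incr(f):
            data['words'].append([tok['form'] for tok in sent])
            data['tags'].append([tok['upos'] for tok in sent])
    return pd.DataFrame(data)

train_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-train.conllup')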
Here we use the publicly-available GloVe embeddings trained on the Spanish Billion Word Corpus3 by the Departamento de Ciencias de la Computación of Universidad de Chile.4 In contrast to the GloVe embeddings used in Chapter 9, these do include a header that stores meta data about the embeddings (i.e., size of the vocabulary and the dimension of the embedding vectors), so in this case we do not use the no_header=True argument. Another difference between these GloVe embeddings and the ones we used in Chapter 9 is that these already include an embedding for unknown words. Therefore, there is no need to introduce our own. However, we do need to include a new embedding for padding, which we will use later to guarantee that all sentences in the same mini-batch have the same length. We add a vector of zeros for the padding token in the same way as before:
1 https://universaldependencies.org/format.html
2 https://github.com/EmilStenstrom/conllu/
3 https://crscardellino.ar/SBWCE/
4 https://github.com/dccuchile/spanishwordembeddings#gloveembeddingsfromsbwc
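A minimal sketch of the embedding-loading steps just described, mirroring the notebook code in this chunk (the file name glove-sbwc.i25.vec and the 300-dimensional vectors come from that notebook):

import numpy as np
from gensim.models import KeyedVectors

# the SBWC GloVe file ships with a word2vec-style header, so no_header is not needed
glove = KeyedVectors.load_word2vec_format('glove-sbwc.i25.vec')

# an <unk> vector is already part of the vocabulary
unk_id = glove.key_to_index['<unk>']

# add an all-zeros vector for the padding token
glove.add_vector('<pad>', np.zeros(300))
pad_tok_id = glove.key_to_index['<pad>']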
Next, we need to preprocess our tokens to match the vocabulary of the embeddings. In particular, these embeddings were trained on words that were lowercased and on sequences of digits that were replaced with a single 0. We will apply the same modifications to our tokens: (From now on we will omit the pandas tables for readability, but, as usual, the corresponding Jupyter notebook contains all necessary information.) Next, we add a new column to the dataframe that stores the word ids corresponding to the embedding vocabulary. Note that at this point we are not padding the sequences of word ids. We will address padding later. We also need to generate the ids for the POS tags. To this end, we first need to construct a vocabulary of POS tags. Once again, we generate a list of tags using explode(), which linearizes our sequence of sequences of tags, and remove repeated tags using unique(). We also add a special tag for the padding token: We now use this POS tag vocabulary to construct a new dataframe column that stores the POS tag ids: The implementation of the Dataset class that stores our POS dataset is trivial: we simply return the lists of word and tag ids, converted to PyTorch tensors. Now it's time to handle padding. This time we will use some features of PyTorch that we have not seen before. The DataLoader object can receive an optional argument, collate_fn, which expects a function that can be used to form a mini-batch. We will implement this function using PyTorch's torch.nn.utils.rnn.pad_sequence() function, which, unsurprisingly, pads a group of tensors. We will take advantage of this function to pad the tensors while forming the mini-batch itself. The advantage of this strategy is that, rather than needing to pad all the examples to be the same length as the largest sentence in the corpus, we will instead pad them to the same length as the largest sentence in the mini-batch. The latter strategy reduces the amount of padding necessary, which should yield more efficient code. The collate_fn() function takes a single argument, batch, which is a list of tuples. Each tuple has two elements: the list of word ids and the list of tag ids corresponding to a single example. We first unzip this list of tuples into two lists; the first list has all the word ids, and the second has the tag ids. An explanation of how zip(*batch) works is provided in Appendix A. Next, we compute the lengths of each of the examples in the batch, which we will use later to inform the recurrent neural network where padding starts for each example. We then use the pad_sequence() function to add padding. This function will find the longest sequence in the batch and pad all examples accordingly using the provided padding value. This method is designed to work with PyTorch's recurrent neural networks, which by default assume the batch index is in the second dimension. However, we will be organizing our tensors such that the batch index is always in the first dimension, which we feel to be more intuitive. For this reason, we also need to provide the batch_first=True argument to pad_sequence. Finally, we return the padded data, as well as the original lengths of the examples. Next, we implement our POS tagging model class.
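Before moving on to the model class, here is a small, self-contained demonstration of the padding behavior described above; the word ids and the padding id below are made up for illustration (in the notebook, pad_tok_id comes from the GloVe vocabulary):

import torch
from torch.nn.utils.rnn import pad_sequence

pad_tok_id = 0  # hypothetical id for '<pad>'

# two "sentences" of different lengths, as tensors of word ids
xs = [torch.tensor([5, 8, 2]), torch.tensor([7, 1])]

x_padded = pad_sequence(xs, batch_first=True, padding_value=pad_tok_id)
print(x_padded)
# tensor([[5, 8, 2],
#         [7, 1, 0]])  shape: (batch size 2, longest sentence in the batch 3)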
The model consists of: (a) an embedding layer for our Spanish pretrained embeddings; (b) an LSTM that can be set to be uni- or bi-directional (see Figure 10.3; the RNN is configured to be bidirectional by setting the bidirectional argument to True in the LSTM constructor), with a configurable number of layers (see Figure 10.2; the number of layers is set through the num_layers argument of the constructor); and (c) a linear layer on top of each hidden state, which is used to predict the scores for each of the POS tags for the corresponding token. The forward() method receives the padded mini-batch and the list of lengths for the (unpadded) examples in this mini-batch. The first step in the function is to retrieve the embeddings for all words referenced in this mini-batch. We then apply dropout over these embedding vectors. Next, before passing the data to the LSTM, we pack the padded data. Note that the PyTorch PackedSequence5 class, which is the output of the pack_padded_sequence() function, stores a batch of sequences that had different lengths before padding. One important advantage of using PackedSequence is that its internal data structure removes the padding tokens (which is why we had to keep track of the example lengths before padding in x_lengths), and, thus, the recurrent neural network will not back-propagate over the padded elements.6 Once we have a PackedSequence, we pass it to the LSTM.
5 https://pytorch.org/docs/stable/generated/torch.nn.utils.rnn.PackedSequence.html
6 The astute reader might ask at this point, “Why did we pad the mini-batch examples in the first place, if we are removing the padding later?” The padding is needed because this allows us to store the mini-batch as a single three-dimensional tensor.
Since the output of the LSTM is also packed, we then unpack it using pad_packed_sequence(). Next we apply dropout to this unpacked LSTM output. Finally, we pass this to the linear layer to predict the tag scores for the tokens.
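The pack/unpack round trip, and the reason the classifier's in_features is hidden_size * 2 for a bidirectional LSTM, can be seen in isolation with a toy example; the sizes below are made up, and only the API calls match the model in this chapter:

import torch
from torch import nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

# toy batch: 2 examples padded to length 4, embedding dimension 300
emb = torch.randn(2, 4, 300)
lengths = [4, 2]  # real (unpadded) lengths

lstm = nn.LSTM(input_size=300, hidden_size=100, bidirectional=True, batch_first=True)

packed = pack_padded_sequence(emb, lengths, batch_first=True, enforce_sorted=False)
packed_out, _ = lstm(packed)
out, out_lengths = pad_packed_sequence(packed_out, batch_first=True)

print(out.shape)  # torch.Size([2, 4, 200]): 2 * hidden_size because the LSTM is bidirectional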
Despite the small number of lines of code, the code of the forward() method, which switches between embedding vectors, padded tensors, and packed sequences, is not trivial. To clarify it, let us walk through an example. Imagine that the input to the forward() method is a batch, x_padded, with shape (10, 20), corresponding to 10 examples, each with 20 word ids (some of which are padding). Then we retrieve the embeddings. Assuming our word embeddings, i.e., the input vectors x_i in Chapter 10, are of dimension 300, the new tensor will have a shape of (10, 20, 300), corresponding to 10 examples, each with 20 embeddings, each with dimension 300. After dropout the shape hasn't changed, but some of the elements have been zeroed out. After unpacking the output of the LSTM, we will have a tensor of shape (10, 20, hidden_size), where hidden_size is the size of the LSTM hidden state, i.e., the h_t vector in Equation 10.6 (hidden_size is a hyper parameter we will set later on); because our LSTM is bidirectional, the last dimension is actually 2 * hidden_size, since the forward and backward hidden states are concatenated. After passing this tensor to the linear layer, we will obtain a tensor of shape (10, 20, tag_vocab_size), where tag_vocab_size is the number of POS tags in our vocabulary. Thus, for each token in each example, we will have a distribution of POS tag scores. For each token, the assigned POS tag will be the one corresponding to the highest score. We next initialize all the hyper parameters and all the required components: The training procedure is very similar to the one implemented in Chapter 7. One notable difference is that the output of this model has three dimensions instead of two: number of examples, number of tokens, and number of POS tag scores. Thus, we have to reshape the output to pass it to the loss function. Additionally, we need to discard the padding before computing the loss. We reshape the gold tag ids using the torch.flatten() function, to transform the 2-dimensional tensor of shape (n_examples, n_tokens) to a 1-dimensional tensor with n_examples * n_tokens elements. The predictions are reshaped using the view(-1, output_size) method. By passing two arguments we are stipulating that we want two dimensions. The second dimension will be of size output_size. The -1 indicates that the first dimension should be inferred from the size of the tensor. This means that for a tensor of shape (n_examples, n_tokens, output_size) we will get a tensor of shape (n_examples * n_tokens, output_size). Then, we use a Boolean mask to discard the elements corresponding to the padding. This way, the loss function will consider each actual word individually, as if the whole batch was just one big sentence (a minimal sketch of this reshape-and-mask step appears at the end of this section). Note that treating a mini-batch as a single virtual sentence does not affect the evaluation results, since these are computed at the token level. Lastly, we evaluate the performance of our POS tagger on the test set, similarly to how we have done it before: The results indicate that our POS tagger obtains an overall accuracy of 97%, which is in line with state-of-the-art approaches! This is encouraging considering that our approach does not include the CRF layer we discussed in Chapter 10. We challenge the reader to add this layer,7 and experiment with this architecture for other sequence tasks such as named entity recognition.
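The flatten/view/mask sequence used in the training loop can be illustrated with made-up tensors; the shapes and the pad id are arbitrary here (in the notebook, output_size and pad_tag_id come from the POS tag vocabulary):

import torch
from torch import nn

n_examples, n_tokens, output_size = 2, 3, 5
pad_tag_id = 4  # hypothetical id of the '<pad>' tag

y_pred = torch.randn(n_examples, n_tokens, output_size)  # model output
y_gold = torch.tensor([[0, 2, pad_tag_id],
                       [1, pad_tag_id, pad_tag_id]])      # gold tag ids, with padding

y_true = torch.flatten(y_gold)         # shape: (n_examples * n_tokens,)
y_pred = y_pred.view(-1, output_size)  # shape: (n_examples * n_tokens, output_size)

mask = y_true != pad_tag_id            # keep only real (non-padding) tokens
loss = nn.CrossEntropyLoss()(y_pred[mask], y_true[mask])
print(loss.item())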
11.2 Summary In this chapter we have implemented a Spanish part-of-speech tagger using a bidirectional LSTM and a set of pretrained, static word embeddings. Through this process, we have also introduced several new PyTorch features such as the pad_sequence, pack_padded_sequence, and pad_packed_sequence functions, which allow us to work more efficiently with variable length sequences for recurrent neural networks.
7 See, for example, the LSTM-CRF implementation from the PyTorch tutorial: https://pytorch.org/tutorials/beginner/nlp/advanced_tutorial.html
8,073
8,129
4,505
4,579
10
chap11-11
chap11-11
8,720
8,825
5,161
5,203
11
chap11-12
chap11-12
11 Implementing Part-of-speech Tagging Using Recurrent Neural Networks The previous chapter was our first exposure to recurrent neural networks, which included intuitions for why they are useful for natural language processing, various architectures, and training algorithms. In this chapter we will put them to use, to implement a common sequence modeling task. 11.1 Part-of-speech Tagging The task we will use as an example for this chapter is part-of-speech (POS) tagging, an NLP application that, as we discussed in the previous chapter, benefits from word order. Please see Chapter 16 for a more thorough discussion of POS tagging. The entire code presented in this chapter is available in the chap11_pos_tagging Jupyter notebook. To take a break from NLP applications for English, in this chapter we use the AnCora corpus (Taulé et al., 2008), which primarily consists of newspaper texts in Spanish and Catalan with different linguistic annotations. In this chapter we work with the Spanish portion of the corpus, and the annotations for Universal POS tags (see Chapter 16 for a description of these tags). The Spanish portion of the corpus is divided into a training set with 14,305 sentences, a development set with 1,654 sentences, and a test set with 1,721 sentences. The data is distributed in the CoNLL-U format. In this format, all sentences in a dataset are stored in the same file, separated by a blank line. Each individual token in a sentence is represented in a line, which contains 10 annotation fields separated by tabs: ID, FORM, LEMMA, UPOS, XPOS, FEATS, HEAD, DEPREL, DEPS, and MISC. A comprehensive explanation of this format and the 162 11.1 Part-of-speech Tagging 163 meaning of the different fields is beyond the goal of this chapter; however, the curious reader can find one at the CoNLL-U website.1 Here, we are only concerned with the fields FORM (the raw word), and UPOS (the Universal part-of-speech tag). As in previous chapters, we use pandas to preprocess the data. For parsing the CoNLL-U files, we rely on the conllu Python module.2 We implement a function called read_tags that reads the CoNLL-U file corresponding to a dataset and returns a pandas dataframe that combines all tokens in a sentence into a single row with two columns, one for the words, and one for the POS tags in the corresponding sentence: 0 1 2 3 4 ... 14300 14301 14302 14303 14304 words [El, presidente, de, el, órgano, regulador, de... [Afirmó, que, sigue, el, criterio, europeo, y,... [Durante, la, presentación, de, el, libro, ", ... [Y, todas, las, miradas, convergen, en, la, lu... [Cambiar, las, formas, parece, de, rigor, ,, p... tags [DET, NOUN, ADP, DET, NOUN, ADJ, ADP, DET, PRO... [VERB, SCONJ, VERB, DET, NOUN, ADJ, CCONJ, SCO... [ADP, DET, NOUN, ADP, DET, NOUN, PUNCT, DET, P... [CCONJ, DET, DET, NOUN, VERB, ADP, DET, NOUN, ... [VERB, DET, NOUN, VERB, ADP, NOUN, PUNCT, CCON... [PRON, VERB, ADP, VERB, PRON, ADP, DET, NOUN, ... [Sobre, la, oferta, de, interconexión, con, Te... [ADP, DET, NOUN, ADP, NOUN, ADP, PROPN, ADP, D... [La, inversión, en, investigación, básica, es,... [DET, NOUN, ADP, NOUN, ADJ, AUX, DET, NOUN, AD... ... ... [Conviene, que, ahora, ,, en, plena, apoteosis... [VERB, SCONJ, ADV, PUNCT, ADP, ADJ, NOUN, ADP,... [Carlos, y, Fayna, se, enzarzan, en, una, bron... [PROPN, CCONJ, PROPN, PRON, VERB, ADP, DET, NO... [Él, llega, a, tirar, la, sobre, la, cama, y, ... 14305 rows × 2 columns In order to implement our POS tagging application, we need word embeddings that have been pretrained for Spanish. 
Here we use the publicly-available GloVe embeddings trained on the Spanish Billion Word Corpus3 by the Departamento de Ciencias de la Computación of Universidad de Chile.4 In contrast to the GloVe embeddings used in Chapter 9, these do include a header that stores meta data about the embeddings (i.e., size of the vocabulary and the dimension of the embedding vectors), so in this case we do not use the no_header=True argument: Another difference between these GloVe embeddings and the ones we used in Chapter 9 is that these already include an embedding for unknown words. Therefore, there is no need to introduce our own. However, we do need to include a new embedding for padding, which . 1  https://universaldependencies.org/format.html 
 . 2  https://github.com/EmilStenstrom/conllu/ 
 . 3  https://crscardellino.ar/SBWCE/ 
 . 4  https://github.com/dccuchile/spanishwordembeddings# 
gloveembeddingsfromsbwc 
 164 Implementing POS Tagging Using RNNs we will use later to guarantee that all sentences in the same mini-batch have the same length. We add a vector of zeros for the padding token in the same way as before: Next, we need to preprocess our tokens to match the vocabulary of the embeddings. In particular, these embeddings were trained on words that were lowercased and on sequences of digits that were replaced with a single 0. We will apply the same modifications to our tokens: (From now on we will omit the pandas tables for readability, but, as usual, the corresponding Jupyter notebook contains all necessary information.) Next, we add a new column to the dataframe that stores the word ids corresponding to the embedding vocabulary. Note that at this point we are not padding the sequences of word ids. We will address padding later. We also need to generate the ids for the POS tags. To this end, we first need to construct a vocabulary of POS tags. Once again, we generate a list of tags using explode(), which linearizes our sequence of sequences of tags, and remove repeated tags using unique(). We also add a special tag for the padding token: We now use this POS tag vocabulary to construct a new dataframe column that stores the POS tag ids: The implementation of the Dataset class that stores our POS dataset is trivial: we simply return the lists of word and tag ids, converted to PyTorch tensors. Now it’s time to handle padding. This time we will use some features of PyTorch that we have not seen before. The DataLoader object can receive an optional argument, collate_fn, which expects a function that can be used to form a mini-batch. We will implement this function using PyTorch’s torch.nn.utils.rnn.pad_sequence() function, which, unsurprisingly, pads a group of tensors. We will take advantage of this function to pad the tensors while forming the mini-batch itself. The advantage of this strategy is that, rather than needing to pad all the examples to be the same length as the largest sentence in the corpus, we will instead pad them to the same length as the largest sentence in the minibatch. The latter strategy reduces the amount of padding necessary, which should yield more efficient code. The collate_fn() function takes a single argument, batch, which is a list of tuples. Each tuple has two elements: the list of word ids and the list of tag ids corresponding to a single example. We first unzip this list of tuples into two lists; the first list has all the word ids, and the 11.1 Part-of-speech Tagging 165 second has the tag ids. An explanation of how zip(*batch) works is provided in Appendix A. Next, we compute the lengths of each of the examples in the batch, which we will use later to inform the recurrent neural network where padding starts for each example. We then use the pad_sequence() function to add padding. This function will find the longest sequence in the batch and pad all examples accordingly using the provided padding value. This method is designed to work with PyTorch’s recurrent neural networks, which by default assume the batch index is in the second dimension. However, we will be organizing our tensors such that the batch index is always in the first dimension, which we feel to be more intuitive. For this reason, we also need to provide the batch_first=True argument to pad_sequence. Finally, we return the padded data, as well as the original lengths of the examples. Next, we implement our POS tagging model class. 
Next, we implement our POS tagging model class. The model consists of: (a) an embedding layer for our Spanish pretrained embeddings; (b) an LSTM that can be set to be uni- or bi-directional (see Figure 10.3; the RNN is configured to be bidirectional by setting the bidirectional argument to True in the LSTM constructor), with a configurable number of layers (see Figure 10.2; the number of layers is set through the num_layers argument of the constructor); and (c) a linear layer on top of each hidden state, which is used to predict the scores for each of the POS tags for the corresponding token. The forward() method receives the padded mini-batch and the list of lengths for the (unpadded) examples in this mini-batch. The first step in the function is to retrieve the embeddings for all words referenced in this mini-batch. We then apply dropout over these embedding vectors. Next, before passing the data to the LSTM, we pack the padded data. Note that the PyTorch PackedSequence class,5 which is the output of the pack_padded_sequence() function, stores a batch of sequences that had different lengths before padding. One important advantage of using PackedSequence is that its internal data structure removes the padding tokens (which is why we had to keep track of the example lengths before padding in x_lengths), and, thus, the recurrent neural network will not back-propagate over the padded elements.6 Once we have a PackedSequence, we pass it to the LSTM. Since the output of the LSTM is also packed, we then unpack it using pad_packed_sequence(). Next we apply dropout to this unpacked LSTM output. Finally, we pass the unpacked output to the linear layer to predict the tag scores for the tokens.
5. https://pytorch.org/docs/stable/generated/torch.nn.utils.rnn.PackedSequence.html
6. The astute reader might ask at this point, "Why did we pad the mini-batch examples in the first place, if we are removing the padding later?" The padding is needed because it allows us to store the mini-batch as a single three-dimensional tensor.
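To make the packing and unpacking steps concrete, here is a simplified sketch of such a model (a stripped-down variant of the notebook's MyModel, with a single bidirectional LSTM layer; the shapes in the comments are illustrative):

from torch import nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

class TaggerSketch(nn.Module):
    def __init__(self, vectors, hidden_size, output_size, dropout=0.1):
        super().__init__()
        # vectors: tensor of pretrained embeddings, shape (vocab_size, emb_dim)
        self.embedding = nn.Embedding.from_pretrained(vectors)
        self.lstm = nn.LSTM(vectors.shape[1], hidden_size,
                            bidirectional=True, batch_first=True)
        self.dropout = nn.Dropout(dropout)
        self.classifier = nn.Linear(2 * hidden_size, output_size)

    def forward(self, x_padded, x_lengths):
        output = self.dropout(self.embedding(x_padded))            # (batch, seq, emb_dim)
        packed = pack_padded_sequence(output, x_lengths,
                                      batch_first=True, enforce_sorted=False)
        packed, _ = self.lstm(packed)                              # padded positions are skipped
        output, _ = pad_packed_sequence(packed, batch_first=True)  # (batch, seq, 2 * hidden_size)
        return self.classifier(self.dropout(output))               # (batch, seq, output_size)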
Despite its small number of lines, the forward() method, which switches between embedding vectors, padded tensors, and packed sequences, is not trivial. To clarify it, let us walk through an example. Imagine that the input to the forward() method is a batch, x_padded, with shape (10, 20), corresponding to 10 examples, each with 20 word ids (some of which are padding). Then we retrieve the embeddings. Assuming our word embeddings, i.e., the input vectors x_i in Chapter 10, are of dimension 300, the new tensor will have a shape of (10, 20, 300), corresponding to 10 examples, each with 20 embeddings, each of dimension 300. After dropout the shape has not changed, but some of the elements have been zeroed out. After unpacking the output of the LSTM, we will have a tensor of shape (10, 20, hidden_size), where hidden_size is the size of the LSTM hidden state, i.e., the h_t vector in Equation 10.6 (hidden_size is a hyper parameter we will set later on; note that for a bidirectional LSTM the last dimension is 2 * hidden_size, because the forward and backward hidden states are concatenated). After passing this tensor to the linear layer, we will obtain a tensor of shape (10, 20, tag_vocab_size), where tag_vocab_size is the number of POS tags in our vocabulary. Thus, for each token in each example, we will have a distribution of POS tag scores. For each token, the assigned POS tag will be the one corresponding to the highest score. We next initialize all the hyper parameters and all the required components: The training procedure is very similar to the one implemented in Chapter 7. One notable difference is that the output of this model has three dimensions instead of two: number of examples, number of tokens, and number of POS tag scores. Thus, we have to reshape the output to pass it to the loss function. Additionally, we need to discard the padding before computing the loss. We reshape the gold tag ids using the torch.flatten() function, to transform the 2-dimensional tensor of shape (n_examples, n_tokens) into a 1-dimensional tensor with n_examples * n_tokens elements. The predictions are reshaped using the view(-1, output_size) method. By passing two arguments we are stipulating that we want two dimensions. The second dimension will be of size output_size. The -1 indicates that the first dimension should be inferred from the size of the tensor. This means that for a tensor of shape (n_examples, n_tokens, output_size) we will get a tensor of shape (n_examples * n_tokens, output_size). Then, we use a Boolean mask to discard the elements corresponding to the padding (a compact sketch of this masking step appears at the end of the chapter). This way, the loss function will consider each actual word individually, as if the whole batch was just one big sentence. Note that treating a mini-batch as a single virtual sentence does affect the evaluation results. Lastly, we evaluate the performance of our POS tagger on the test set, similarly to how we have done it before: The results indicate that our POS tagger obtains an overall accuracy of 97%, which is in line with state-of-the-art approaches! This is encouraging considering that our approach does not include the CRF layer we discussed in Chapter 10. We challenge the reader to add this layer,7 and to experiment with this architecture for other sequence tasks such as named entity recognition. 11.2 Summary In this chapter we have implemented a Spanish part-of-speech tagger using a bidirectional LSTM and a set of pretrained, static word embeddings.
Through this process, we have also introduced several new PyTorch features, such as the pad_sequence, pack_padded_sequence, and pad_packed_sequence functions, which allow us to work more efficiently with variable-length sequences in recurrent neural networks.
7. See, for example, the LSTM-CRF implementation from the PyTorch tutorial: https://pytorch.org/tutorials/beginner/nlp/advanced_tutorial.html
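As a recap of the reshaping and masking logic described above, the following sketch shows how the loss can be computed while ignoring the padded positions (variable names follow the notebook; pad_tag_id is the id of the padding tag):

import torch
from torch import nn

loss_func = nn.CrossEntropyLoss()

def masked_loss(scores, gold_tags, pad_tag_id):
    # scores: (n_examples, n_tokens, output_size); gold_tags: (n_examples, n_tokens)
    output_size = scores.shape[-1]
    y_true = torch.flatten(gold_tags)        # (n_examples * n_tokens,)
    y_pred = scores.view(-1, output_size)    # (n_examples * n_tokens, output_size)
    mask = y_true != pad_tag_id              # True only for real (non-padding) tokens
    return loss_func(y_pred[mask], y_true[mask])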
12,827
12,897
#!/usr/bin/env python # coding: utf-8 # # Part-of-speech Tagging Using RNNs # Some initialization: # In[4]: import random import torch import numpy as np import pandas as pd from tqdm.notebook import tqdm # enable tqdm in pandas tqdm.pandas() # set to True to use the gpu (if there is one available) use_gpu = True # select device device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu') print(f'device: {device.type}') # random seed seed = 1234 # set random seed if seed is not None: print(f'random seed: {seed}') random.seed(seed) np.random.seed(seed) torch.manual_seed(seed) # Next, let's read the words and their POS tags from the CoNLLUP format: # In[5]: from conllu import parse_incr def read_tags(filename): data = {'words': [], 'tags': []} with open(filename) as f: for sent in parse_incr(f): words = [tok['form'] for tok in sent] tags = [tok['upos'] for tok in sent] data['words'].append(words) data['tags'].append(tags) return pd.DataFrame(data) # In[6]: train_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-train.conllup') train_df # We now load the GloVe embeddings for Spanish, which include a representation for the unknown token: # In[7]: from gensim.models import KeyedVectors glove = KeyedVectors.load_word2vec_format('glove-sbwc.i25.vec') glove.vectors.shape # In[8]: # these embeddings already include <unk> unk_tok = '<unk>' unk_id = glove.key_to_index[unk_tok] unk_tok, unk_id # In[9]: # add padding embedding pad_tok = '<pad>' pad_emb = np.zeros(300) glove.add_vector(pad_tok, pad_emb) pad_tok_id = glove.key_to_index[pad_tok] pad_tok, pad_tok_id # Preprocessing: lower case all words, and replace all numbers with '0': # In[10]: def preprocess(words): result = [] for w in words: w = w.lower() if w.isdecimal(): w = '0' result.append(w) return result train_df['words'] = train_df['words'].progress_map(preprocess) train_df # Next, construct actual PyTorch `Dataset` and `DataLoader` objects for the train/dev/test partitions: # In[11]: def get_ids(tokens, key_to_index, unk_id=None): return [key_to_index.get(tok, unk_id) for tok in tokens] def get_word_ids(tokens): return get_ids(tokens, glove.key_to_index, unk_id) # add new column to the dataframe train_df['word ids'] = train_df['words'].progress_map(get_word_ids) train_df # In[12]: pad_tag = '<pad>' index_to_tag = train_df['tags'].explode().unique().tolist() + [pad_tag] tag_to_index = {t:i for i,t in enumerate(index_to_tag)} pad_tag_id = tag_to_index[pad_tag] pad_tag, pad_tag_id # In[13]: index_to_tag # In[14]: def get_tag_ids(tags): return get_ids(tags, tag_to_index) train_df['tag ids'] = train_df['tags'].progress_map(get_tag_ids) train_df # In[15]: dev_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-dev.conllup') dev_df['words'] = dev_df['words'].progress_map(preprocess) dev_df['word ids'] = dev_df['words'].progress_map(lambda x: get_ids(x, glove.key_to_index, unk_id)) dev_df['tag ids'] = dev_df['tags'].progress_map(lambda x: get_ids(x, tag_to_index)) dev_df # In[16]: from torch.utils.data import Dataset class MyDataset(Dataset): def __init__(self, x, y): self.x = x self.y = y def __len__(self): return len(self.y) def __getitem__(self, index): x = torch.tensor(self.x[index]) y = torch.tensor(self.y[index]) return x, y # `collate_fn` will be used by `DataLoader` to pad all sentences in the same batch to the same length. 
# In[17]: from torch.nn.utils.rnn import pad_sequence def collate_fn(batch): # separate xs and ys xs, ys = zip(*batch) # get lengths lengths = [len(x) for x in xs] # pad sequences x_padded = pad_sequence(xs, batch_first=True, padding_value=pad_tok_id) y_padded = pad_sequence(ys, batch_first=True, padding_value=pad_tag_id) # return padded return x_padded, y_padded, lengths # Now construct our PyTorch model: # In[18]: from torch import nn from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence class MyModel(nn.Module): def __init__(self, vectors, hidden_size, num_layers, bidirectional, dropout, output_size): super().__init__() # ensure vectors is a tensor if not torch.is_tensor(vectors): vectors = torch.tensor(vectors) # init embedding layer self.embedding = nn.Embedding.from_pretrained(embeddings=vectors) # init lstm self.lstm = nn.LSTM( input_size=vectors.shape[1], hidden_size=hidden_size, num_layers=num_layers, bidirectional=bidirectional, dropout=dropout, batch_first=True) # init dropout self.dropout = nn.Dropout(dropout) # init classifier self.classifier = nn.Linear( in_features=hidden_size * 2 if bidirectional else hidden_size, out_features=output_size) def forward(self, x_padded, x_lengths): # get embeddings output = self.embedding(x_padded) output = self.dropout(output) # pack data before lstm packed = pack_padded_sequence(output, x_lengths, batch_first=True, enforce_sorted=False) packed, _ = self.lstm(packed) # unpack data before rest of model output, _ = pad_packed_sequence(packed, batch_first=True) output = self.dropout(output) output = self.classifier(output) return output # In[19]: from torch import optim from torch.utils.data import DataLoader from sklearn.metrics import accuracy_score # hyperparameters lr = 1e-3 weight_decay = 1e-5 batch_size = 100 shuffle = True n_epochs = 10 vectors = glove.vectors hidden_size = 100 num_layers = 2 bidirectional = True dropout = 0.1 output_size = len(index_to_tag) # initialize the model, loss function, optimizer, and data-loader model = MyModel(vectors, hidden_size, num_layers, bidirectional, dropout, output_size).to(device) loss_func = nn.CrossEntropyLoss() optimizer = optim.Adam(model.parameters(), lr=lr, weight_decay=weight_decay) train_ds = MyDataset(train_df['word ids'], train_df['tag ids']) train_dl = DataLoader(train_ds, batch_size=batch_size, shuffle=shuffle, collate_fn=collate_fn) dev_ds = MyDataset(dev_df['word ids'], dev_df['tag ids']) dev_dl = DataLoader(dev_ds, batch_size=batch_size, shuffle=shuffle, collate_fn=collate_fn) train_loss, train_acc = [], [] dev_loss, dev_acc = [], [] # We are now ready to train! 
# In[20]: # train the model for epoch in range(n_epochs): losses, acc = [], [] model.train() for x_padded, y_padded, lengths in tqdm(train_dl, desc=f'epoch {epoch+1} (train)'): # clear gradients model.zero_grad() # send batch to right device x_padded = x_padded.to(device) y_padded = y_padded.to(device) # predict label scores y_pred = model(x_padded, lengths) # reshape output y_true = torch.flatten(y_padded) y_pred = y_pred.view(-1, output_size) mask = y_true != pad_tag_id y_true = y_true[mask] y_pred = y_pred[mask] # compute loss loss = loss_func(y_pred, y_true) # accumulate for plotting gold = y_true.detach().cpu().numpy() pred = np.argmax(y_pred.detach().cpu().numpy(), axis=1) losses.append(loss.detach().cpu().item()) acc.append(accuracy_score(gold, pred)) # backpropagate loss.backward() # optimize model parameters optimizer.step() train_loss.append(np.mean(losses)) train_acc.append(np.mean(acc)) model.eval() with torch.no_grad(): losses, acc = [], [] for x_padded, y_padded, lengths in tqdm(dev_dl, desc=f'epoch {epoch+1} (dev)'): x_padded = x_padded.to(device) y_padded = y_padded.to(device) y_pred = model(x_padded, lengths) y_true = torch.flatten(y_padded) y_pred = y_pred.view(-1, output_size) mask = y_true != pad_tag_id y_true = y_true[mask] y_pred = y_pred[mask] loss = loss_func(y_pred, y_true) gold = y_true.cpu().numpy() pred = np.argmax(y_pred.cpu().numpy(), axis=1) losses.append(loss.cpu().item()) acc.append(accuracy_score(gold, pred)) dev_loss.append(np.mean(losses)) dev_acc.append(np.mean(acc)) # Plot loss and accuracy on dev after each epoch: # In[21]: import matplotlib.pyplot as plt x = np.arange(n_epochs) + 1 plt.plot(x, train_loss) plt.plot(x, dev_loss) plt.legend(['train', 'dev']) plt.xlabel('epoch') plt.ylabel('loss') plt.grid(True) # In[22]: plt.plot(x, train_acc) plt.plot(x, dev_acc) plt.legend(['train', 'dev']) plt.xlabel('epoch') plt.ylabel('accuracy') plt.grid(True) # In[23]: test_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-test.conllup') test_df['words'] = test_df['words'].progress_map(preprocess) test_df['word ids'] = test_df['words'].progress_map(lambda x: get_ids(x, glove.key_to_index, unk_id)) test_df['tag ids'] = test_df['tags'].progress_map(lambda x: get_ids(x, tag_to_index)) test_df # Now let's evaluate on the test partition: # In[24]: from sklearn.metrics import classification_report model.eval() test_ds = MyDataset(test_df['word ids'], test_df['tag ids']) test_dl = DataLoader(test_ds, batch_size=batch_size, shuffle=shuffle, collate_fn=collate_fn) all_y_true = [] all_y_pred = [] with torch.no_grad(): for x_padded, y_padded, lengths in tqdm(test_dl): x_padded = x_padded.to(device) y_pred = model(x_padded, lengths) y_true = torch.flatten(y_padded) y_pred = y_pred.view(-1, output_size) mask = y_true != pad_tag_id y_true = y_true[mask] y_pred = torch.argmax(y_pred[mask], dim=1) all_y_true.append(y_true.cpu().numpy()) all_y_pred.append(y_pred.cpu().numpy()) y_true = np.concatenate(all_y_true) y_pred = np.concatenate(all_y_pred) target_names = index_to_tag[:-2] print(classification_report(y_true, y_pred, target_names=target_names)) # Let's generate a confusion matrix for all POS tags in the data: # In[25]: from sklearn.metrics import ConfusionMatrixDisplay, confusion_matrix cm = confusion_matrix(y_true, y_pred, normalize='true') disp = ConfusionMatrixDisplay( confusion_matrix=cm, display_labels=target_names, ) fig, ax = plt.subplots(figsize=(10,10)) disp.plot( cmap='Blues', values_format='.2f', colorbar=False, ax=ax, xticks_rotation=45, ) # In[ ]:
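The notebook above stops at evaluation. As a possible extension, the following sketch shows how one might tag a new, raw sentence with the trained model; tag_sentence is a hypothetical helper that is not part of the notebook, and it reuses preprocess, get_word_ids, index_to_tag, model, and device defined above:

def tag_sentence(tokens):
    # tokens: a list of raw word strings, e.g. ['El', 'gato', 'come', 'pescado']
    model.eval()
    word_ids = get_word_ids(preprocess(tokens))
    x = torch.tensor([word_ids]).to(device)      # mini-batch with a single sentence
    with torch.no_grad():
        scores = model(x, [len(word_ids)])       # (1, n_tokens, n_tags)
    tag_ids = scores.argmax(dim=-1).squeeze(0).tolist()
    return [index_to_tag[i] for i in tag_ids]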
7,781
8,577
12
chap11-13
chap11-13
11 Implementing Part-of-speech Tagging Using Recurrent Neural Networks The previous chapter was our first exposure to recurrent neural networks, which included intuitions for why they are useful for natural language processing, various architectures, and training algorithms. In this chapter we will put them to use, to implement a common sequence modeling task. 11.1 Part-of-speech Tagging The task we will use as an example for this chapter is part-of-speech (POS) tagging, an NLP application that, as we discussed in the previous chapter, benefits from word order. Please see Chapter 16 for a more thorough discussion of POS tagging. The entire code presented in this chapter is available in the chap11_pos_tagging Jupyter notebook. To take a break from NLP applications for English, in this chapter we use the AnCora corpus (Taulé et al., 2008), which primarily consists of newspaper texts in Spanish and Catalan with different linguistic annotations. In this chapter we work with the Spanish portion of the corpus, and the annotations for Universal POS tags (see Chapter 16 for a description of these tags). The Spanish portion of the corpus is divided into a training set with 14,305 sentences, a development set with 1,654 sentences, and a test set with 1,721 sentences. The data is distributed in the CoNLL-U format. In this format, all sentences in a dataset are stored in the same file, separated by a blank line. Each individual token in a sentence is represented in a line, which contains 10 annotation fields separated by tabs: ID, FORM, LEMMA, UPOS, XPOS, FEATS, HEAD, DEPREL, DEPS, and MISC. A comprehensive explanation of this format and the 162 11.1 Part-of-speech Tagging 163 meaning of the different fields is beyond the goal of this chapter; however, the curious reader can find one at the CoNLL-U website.1 Here, we are only concerned with the fields FORM (the raw word), and UPOS (the Universal part-of-speech tag). As in previous chapters, we use pandas to preprocess the data. For parsing the CoNLL-U files, we rely on the conllu Python module.2 We implement a function called read_tags that reads the CoNLL-U file corresponding to a dataset and returns a pandas dataframe that combines all tokens in a sentence into a single row with two columns, one for the words, and one for the POS tags in the corresponding sentence: 0 1 2 3 4 ... 14300 14301 14302 14303 14304 words [El, presidente, de, el, órgano, regulador, de... [Afirmó, que, sigue, el, criterio, europeo, y,... [Durante, la, presentación, de, el, libro, ", ... [Y, todas, las, miradas, convergen, en, la, lu... [Cambiar, las, formas, parece, de, rigor, ,, p... tags [DET, NOUN, ADP, DET, NOUN, ADJ, ADP, DET, PRO... [VERB, SCONJ, VERB, DET, NOUN, ADJ, CCONJ, SCO... [ADP, DET, NOUN, ADP, DET, NOUN, PUNCT, DET, P... [CCONJ, DET, DET, NOUN, VERB, ADP, DET, NOUN, ... [VERB, DET, NOUN, VERB, ADP, NOUN, PUNCT, CCON... [PRON, VERB, ADP, VERB, PRON, ADP, DET, NOUN, ... [Sobre, la, oferta, de, interconexión, con, Te... [ADP, DET, NOUN, ADP, NOUN, ADP, PROPN, ADP, D... [La, inversión, en, investigación, básica, es,... [DET, NOUN, ADP, NOUN, ADJ, AUX, DET, NOUN, AD... ... ... [Conviene, que, ahora, ,, en, plena, apoteosis... [VERB, SCONJ, ADV, PUNCT, ADP, ADJ, NOUN, ADP,... [Carlos, y, Fayna, se, enzarzan, en, una, bron... [PROPN, CCONJ, PROPN, PRON, VERB, ADP, DET, NO... [Él, llega, a, tirar, la, sobre, la, cama, y, ... 14305 rows × 2 columns In order to implement our POS tagging application, we need word embeddings that have been pretrained for Spanish. 
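As an aside, a minimal sketch of the read_tags function described above might look as follows (it mirrors the version in the chapter's notebook and relies on the conllu package's parse_incr):

import pandas as pd
from conllu import parse_incr

def read_tags(filename):
    # collect the FORM (raw word) and UPOS (Universal POS tag) fields per sentence
    data = {'words': [], 'tags': []}
    with open(filename) as f:
        for sent in parse_incr(f):
            data['words'].append([tok['form'] for tok in sent])
            data['tags'].append([tok['upos'] for tok in sent])
    return pd.DataFrame(data)

For example, read_tags('data/UD_Spanish-AnCora/es_ancora-ud-train.conllup') returns the training dataframe shown above.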
Here we use the publicly-available GloVe embeddings trained on the Spanish Billion Word Corpus3 by the Departamento de Ciencias de la Computación of Universidad de Chile.4 In contrast to the GloVe embeddings used in Chapter 9, these do include a header that stores meta data about the embeddings (i.e., size of the vocabulary and the dimension of the embedding vectors), so in this case we do not use the no_header=True argument: Another difference between these GloVe embeddings and the ones we used in Chapter 9 is that these already include an embedding for unknown words. Therefore, there is no need to introduce our own. However, we do need to include a new embedding for padding, which . 1  https://universaldependencies.org/format.html 
 . 2  https://github.com/EmilStenstrom/conllu/ 
 . 3  https://crscardellino.ar/SBWCE/ 
4. https://github.com/dccuchile/spanishwordembeddings#gloveembeddingsfromsbwc
 164 Implementing POS Tagging Using RNNs we will use later to guarantee that all sentences in the same mini-batch have the same length. We add a vector of zeros for the padding token in the same way as before: Next, we need to preprocess our tokens to match the vocabulary of the embeddings. In particular, these embeddings were trained on words that were lowercased and on sequences of digits that were replaced with a single 0. We will apply the same modifications to our tokens: (From now on we will omit the pandas tables for readability, but, as usual, the corresponding Jupyter notebook contains all necessary information.) Next, we add a new column to the dataframe that stores the word ids corresponding to the embedding vocabulary. Note that at this point we are not padding the sequences of word ids. We will address padding later. We also need to generate the ids for the POS tags. To this end, we first need to construct a vocabulary of POS tags. Once again, we generate a list of tags using explode(), which linearizes our sequence of sequences of tags, and remove repeated tags using unique(). We also add a special tag for the padding token: We now use this POS tag vocabulary to construct a new dataframe column that stores the POS tag ids: The implementation of the Dataset class that stores our POS dataset is trivial: we simply return the lists of word and tag ids, converted to PyTorch tensors. Now it’s time to handle padding. This time we will use some features of PyTorch that we have not seen before. The DataLoader object can receive an optional argument, collate_fn, which expects a function that can be used to form a mini-batch. We will implement this function using PyTorch’s torch.nn.utils.rnn.pad_sequence() function, which, unsurprisingly, pads a group of tensors. We will take advantage of this function to pad the tensors while forming the mini-batch itself. The advantage of this strategy is that, rather than needing to pad all the examples to be the same length as the largest sentence in the corpus, we will instead pad them to the same length as the largest sentence in the minibatch. The latter strategy reduces the amount of padding necessary, which should yield more efficient code. The collate_fn() function takes a single argument, batch, which is a list of tuples. Each tuple has two elements: the list of word ids and the list of tag ids corresponding to a single example. We first unzip this list of tuples into two lists; the first list has all the word ids, and the 11.1 Part-of-speech Tagging 165 second has the tag ids. An explanation of how zip(*batch) works is provided in Appendix A. Next, we compute the lengths of each of the examples in the batch, which we will use later to inform the recurrent neural network where padding starts for each example. We then use the pad_sequence() function to add padding. This function will find the longest sequence in the batch and pad all examples accordingly using the provided padding value. This method is designed to work with PyTorch’s recurrent neural networks, which by default assume the batch index is in the second dimension. However, we will be organizing our tensors such that the batch index is always in the first dimension, which we feel to be more intuitive. For this reason, we also need to provide the batch_first=True argument to pad_sequence. Finally, we return the padded data, as well as the original lengths of the examples. Next, we implement our POS tagging model class. 
The model consists of: (a) an embedding layer for our Spanish pretrained embeddings; (b) an LSTM that can be set to be unior bi-directional (see Figure 10.3; the RNN is configured to be bidirectional by setting the bidirectional argument to True in the LSTM constructor), with a configurable number of layers (see Figure 10.2; the number of layers is set through the num_layers argument of the constructor) and (c) a linear layer on top of each hidden state, which is used to predict the scores for each of the POS tags for the corresponding token. The forward() method receives the padded minibatch and the list of lengths for the (unpadded) examples in this mini-batch. The first step in the function is to retrieve the embeddings for all words referenced in this mini-batch. We then apply dropout over these embedding vectors. Next, before passing the data to the LSTM, we pack the padded data. Note that the PyTorch PackedSequence5 class, which is the output of the pack_padded_sequence() function, stores a batch of sequences that had different lengths before padding. One important advantage of using PackedSequence is that its internal data structure removes the padding tokens (which is why we had to keep track of the example lengths before padding in x_lengths), and, thus, the recurrent neural network will not back-propagate over the padded elements.6 Once we have a PackedSequence, we pass it to the LSTM. Since the . 5  https://pytorch.org/docs/stable/generated/torch.nn.utils.rnn. PackedSequence.html 
 . 6  The astute reader might ask at this point, “Why did we pad the mini-batch examples in the first place, if we are removing the padding later?” The padding is needed because this allows us to store the mini-batch as a single three-dimensional tensor. 
 166 Implementing POS Tagging Using RNNs output of the LSTM is also packed, we then unpack it using pad_packed_sequence(). Next we apply dropout to this unpacked LSTM output. Finally, we pass
this to the linear layer to predict the tag scores for the tokens. Despite the small number of lines of code, the code of the forward() method, which switches between embedding vectors, padded tensors, and packed sequences, is not trivial. To clarify it, let us walk through an example. Imagine that the input to the forward() method is a batch, x_padded, with shape (10, 20), corresponding to 10 examples, each with 20 word ids (some of which are padding). Then we retrieve the embeddings. Assuming our word embeddings, i.e., the input vectors xi in Chapter 10, are of dimension 300, the new tensor will have a shape of (10, 20, 300), corresponding to 10 examples, each with 20 embeddings, each with dimension 300. After dropout the shape hasn’t changed, but some of the elements have been zeroed out. After unpacking the output of the LSTM, we will have a tensor of shape (10, 20, hidden_size), where hidden_size is the size of the LSTM hidden state, i.e., the ht vector in Equation 10.6, (hidden_size is a hyper parameter we will set later on). After passing this tensor to the linear layer, we will obtain a tensor of shape (10, 20, tag_vocab_size), where tag_vocab_size is the number of POS tags in our vocabulary. Thus, for each token in each example, we will have a distribution of POS tag scores. For each token, the assigned POS tag will be the one corresponding to the highest score. We next initialize all the hyper parameters and all the required components: The training procedure is very similar to the one implemented in Chapter 7. One notable difference is that the output of this model has three dimensions instead of two: number of examples, number of tokens, and number of POS tag scores. Thus, we have to reshape the output to pass it to the loss function. Additionally, we need to discard the padding before computing the loss. We reshape the gold tag ids using the torch.flatten() function, to transform the 2-dimensional tensor of shape (n_examples, n_tokens) to a 1-dimensional tensor with n_examples * n_tokens elements. The predictions are reshaped using the view(-1, output_size) method. By passing two arguments we are stipulating that we want two dimensions. The second dimension will be of size output_size. The -1 indicates that the first dimension should be inferred from the size of the tensor. This means that for a tensor of shape (n_examples, n_tokens, output_size) we will get a tensor of shape (n_examples * n_tokens, output_size). Then, we use a Boolean mask to discard the elements corresponding to the padding. This way, the loss 11.2 Summary 167 function will consider each actual word individually, as if the whole batch was just one big sentence. Note that treating a mini-batch as a single virtual sentence does affect the evaluation results. Lastly, we evaluate the performance of our POS tagger on the test set, similarly to how we have done it before: The results indicate that our POS tagger obtains an overall accuracy of 97%, which is in line with state-of-the-art approaches! This is encouraging considering that our approach does not include the CRF layer we discussed in Chapter 10. We challenge the reader to add this layer,7 and experiment with this architecture for other sequence tasks such as named entity recognition. 11.2 Summary In this chapter we have implemented a Spanish part-of-speech tagger using a bidirectional LSTM and a set of pretrained, static word embeddings. 
Through this process, we have also introduced several new PyTorch features such as the pad_sequence, pack_padded_sequence, and pad_packed_sequence functions, which allow us to work more efficiently with variable length sequences for recurrent neural networks. 7 See, for example, the LSTM-CRF implementation from the PyTorch tutorial: https://pytorch.org/tutorials/beginner/nlp/advanced_tutorial.html
5,804
5,961
3,247
3,273
13
chap11-14
chap11-14
9,414
9,468
#!/usr/bin/env python # coding: utf-8 # # Part-of-speech Tagging Using RNNs # Some initialization: # In[4]: import random import torch import numpy as np import pandas as pd from tqdm.notebook import tqdm # enable tqdm in pandas tqdm.pandas() # set to True to use the gpu (if there is one available) use_gpu = True # select device device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu') print(f'device: {device.type}') # random seed seed = 1234 # set random seed if seed is not None: print(f'random seed: {seed}') random.seed(seed) np.random.seed(seed) torch.manual_seed(seed) # Next, let's read the words and their POS tags from the CoNLLUP format: # In[5]: from conllu import parse_incr def read_tags(filename): data = {'words': [], 'tags': []} with open(filename) as f: for sent in parse_incr(f): words = [tok['form'] for tok in sent] tags = [tok['upos'] for tok in sent] data['words'].append(words) data['tags'].append(tags) return pd.DataFrame(data) # In[6]: train_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-train.conllup') train_df # We now load the GloVe embeddings for Spanish, which include a representation for the unknown token: # In[7]: from gensim.models import KeyedVectors glove = KeyedVectors.load_word2vec_format('glove-sbwc.i25.vec') glove.vectors.shape # In[8]: # these embeddings already include <unk> unk_tok = '<unk>' unk_id = glove.key_to_index[unk_tok] unk_tok, unk_id # In[9]: # add padding embedding pad_tok = '<pad>' pad_emb = np.zeros(300) glove.add_vector(pad_tok, pad_emb) pad_tok_id = glove.key_to_index[pad_tok] pad_tok, pad_tok_id # Preprocessing: lower case all words, and replace all numbers with '0': # In[10]: def preprocess(words): result = [] for w in words: w = w.lower() if w.isdecimal(): w = '0' result.append(w) return result train_df['words'] = train_df['words'].progress_map(preprocess) train_df # Next, construct actual PyTorch `Dataset` and `DataLoader` objects for the train/dev/test partitions: # In[11]: def get_ids(tokens, key_to_index, unk_id=None): return [key_to_index.get(tok, unk_id) for tok in tokens] def get_word_ids(tokens): return get_ids(tokens, glove.key_to_index, unk_id) # add new column to the dataframe train_df['word ids'] = train_df['words'].progress_map(get_word_ids) train_df # In[12]: pad_tag = '<pad>' index_to_tag = train_df['tags'].explode().unique().tolist() + [pad_tag] tag_to_index = {t:i for i,t in enumerate(index_to_tag)} pad_tag_id = tag_to_index[pad_tag] pad_tag, pad_tag_id # In[13]: index_to_tag # In[14]: def get_tag_ids(tags): return get_ids(tags, tag_to_index) train_df['tag ids'] = train_df['tags'].progress_map(get_tag_ids) train_df # In[15]: dev_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-dev.conllup') dev_df['words'] = dev_df['words'].progress_map(preprocess) dev_df['word ids'] = dev_df['words'].progress_map(lambda x: get_ids(x, glove.key_to_index, unk_id)) dev_df['tag ids'] = dev_df['tags'].progress_map(lambda x: get_ids(x, tag_to_index)) dev_df # In[16]: from torch.utils.data import Dataset class MyDataset(Dataset): def __init__(self, x, y): self.x = x self.y = y def __len__(self): return len(self.y) def __getitem__(self, index): x = torch.tensor(self.x[index]) y = torch.tensor(self.y[index]) return x, y # `collate_fn` will be used by `DataLoader` to pad all sentences in the same batch to the same length. 
# In[17]: from torch.nn.utils.rnn import pad_sequence def collate_fn(batch): # separate xs and ys xs, ys = zip(*batch) # get lengths lengths = [len(x) for x in xs] # pad sequences x_padded = pad_sequence(xs, batch_first=True, padding_value=pad_tok_id) y_padded = pad_sequence(ys, batch_first=True, padding_value=pad_tag_id) # return padded return x_padded, y_padded, lengths # Now construct our PyTorch model: # In[18]: from torch import nn from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence class MyModel(nn.Module): def __init__(self, vectors, hidden_size, num_layers, bidirectional, dropout, output_size): super().__init__() # ensure vectors is a tensor if not torch.is_tensor(vectors): vectors = torch.tensor(vectors) # init embedding layer self.embedding = nn.Embedding.from_pretrained(embeddings=vectors) # init lstm self.lstm = nn.LSTM( input_size=vectors.shape[1], hidden_size=hidden_size, num_layers=num_layers, bidirectional=bidirectional, dropout=dropout, batch_first=True) # init dropout self.dropout = nn.Dropout(dropout) # init classifier self.classifier = nn.Linear( in_features=hidden_size * 2 if bidirectional else hidden_size, out_features=output_size) def forward(self, x_padded, x_lengths): # get embeddings output = self.embedding(x_padded) output = self.dropout(output) # pack data before lstm packed = pack_padded_sequence(output, x_lengths, batch_first=True, enforce_sorted=False) packed, _ = self.lstm(packed) # unpack data before rest of model output, _ = pad_packed_sequence(packed, batch_first=True) output = self.dropout(output) output = self.classifier(output) return output # In[19]: from torch import optim from torch.utils.data import DataLoader from sklearn.metrics import accuracy_score # hyperparameters lr = 1e-3 weight_decay = 1e-5 batch_size = 100 shuffle = True n_epochs = 10 vectors = glove.vectors hidden_size = 100 num_layers = 2 bidirectional = True dropout = 0.1 output_size = len(index_to_tag) # initialize the model, loss function, optimizer, and data-loader model = MyModel(vectors, hidden_size, num_layers, bidirectional, dropout, output_size).to(device) loss_func = nn.CrossEntropyLoss() optimizer = optim.Adam(model.parameters(), lr=lr, weight_decay=weight_decay) train_ds = MyDataset(train_df['word ids'], train_df['tag ids']) train_dl = DataLoader(train_ds, batch_size=batch_size, shuffle=shuffle, collate_fn=collate_fn) dev_ds = MyDataset(dev_df['word ids'], dev_df['tag ids']) dev_dl = DataLoader(dev_ds, batch_size=batch_size, shuffle=shuffle, collate_fn=collate_fn) train_loss, train_acc = [], [] dev_loss, dev_acc = [], [] # We are now ready to train! 
# In[20]: # train the model for epoch in range(n_epochs): losses, acc = [], [] model.train() for x_padded, y_padded, lengths in tqdm(train_dl, desc=f'epoch {epoch+1} (train)'): # clear gradients model.zero_grad() # send batch to right device x_padded = x_padded.to(device) y_padded = y_padded.to(device) # predict label scores y_pred = model(x_padded, lengths) # reshape output y_true = torch.flatten(y_padded) y_pred = y_pred.view(-1, output_size) mask = y_true != pad_tag_id y_true = y_true[mask] y_pred = y_pred[mask] # compute loss loss = loss_func(y_pred, y_true) # accumulate for plotting gold = y_true.detach().cpu().numpy() pred = np.argmax(y_pred.detach().cpu().numpy(), axis=1) losses.append(loss.detach().cpu().item()) acc.append(accuracy_score(gold, pred)) # backpropagate loss.backward() # optimize model parameters optimizer.step() train_loss.append(np.mean(losses)) train_acc.append(np.mean(acc)) model.eval() with torch.no_grad(): losses, acc = [], [] for x_padded, y_padded, lengths in tqdm(dev_dl, desc=f'epoch {epoch+1} (dev)'): x_padded = x_padded.to(device) y_padded = y_padded.to(device) y_pred = model(x_padded, lengths) y_true = torch.flatten(y_padded) y_pred = y_pred.view(-1, output_size) mask = y_true != pad_tag_id y_true = y_true[mask] y_pred = y_pred[mask] loss = loss_func(y_pred, y_true) gold = y_true.cpu().numpy() pred = np.argmax(y_pred.cpu().numpy(), axis=1) losses.append(loss.cpu().item()) acc.append(accuracy_score(gold, pred)) dev_loss.append(np.mean(losses)) dev_acc.append(np.mean(acc)) # Plot loss and accuracy on dev after each epoch: # In[21]: import matplotlib.pyplot as plt x = np.arange(n_epochs) + 1 plt.plot(x, train_loss) plt.plot(x, dev_loss) plt.legend(['train', 'dev']) plt.xlabel('epoch') plt.ylabel('loss') plt.grid(True) # In[22]: plt.plot(x, train_acc) plt.plot(x, dev_acc) plt.legend(['train', 'dev']) plt.xlabel('epoch') plt.ylabel('accuracy') plt.grid(True) # In[23]: test_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-test.conllup') test_df['words'] = test_df['words'].progress_map(preprocess) test_df['word ids'] = test_df['words'].progress_map(lambda x: get_ids(x, glove.key_to_index, unk_id)) test_df['tag ids'] = test_df['tags'].progress_map(lambda x: get_ids(x, tag_to_index)) test_df # Now let's evaluate on the test partition: # In[24]: from sklearn.metrics import classification_report model.eval() test_ds = MyDataset(test_df['word ids'], test_df['tag ids']) test_dl = DataLoader(test_ds, batch_size=batch_size, shuffle=shuffle, collate_fn=collate_fn) all_y_true = [] all_y_pred = [] with torch.no_grad(): for x_padded, y_padded, lengths in tqdm(test_dl): x_padded = x_padded.to(device) y_pred = model(x_padded, lengths) y_true = torch.flatten(y_padded) y_pred = y_pred.view(-1, output_size) mask = y_true != pad_tag_id y_true = y_true[mask] y_pred = torch.argmax(y_pred[mask], dim=1) all_y_true.append(y_true.cpu().numpy()) all_y_pred.append(y_pred.cpu().numpy()) y_true = np.concatenate(all_y_true) y_pred = np.concatenate(all_y_pred) target_names = index_to_tag[:-2] print(classification_report(y_true, y_pred, target_names=target_names)) # Let's generate a confusion matrix for all POS tags in the data: # In[25]: from sklearn.metrics import ConfusionMatrixDisplay, confusion_matrix cm = confusion_matrix(y_true, y_pred, normalize='true') disp = ConfusionMatrixDisplay( confusion_matrix=cm, display_labels=target_names, ) fig, ax = plt.subplots(figsize=(10,10)) disp.plot( cmap='Blues', values_format='.2f', colorbar=False, ax=ax, xticks_rotation=45, ) # In[ ]:
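# As a sketch that is not part of the original notebook, the trained tagger can be
# applied to a new sentence by reusing preprocess(), get_word_ids(), index_to_tag,
# and the model defined above (the example sentence below is hypothetical):

def tag_sentence(words):
    # words: a list of raw tokens, e.g. ['El', 'gato', 'duerme', '.']
    word_ids = get_word_ids(preprocess(words))
    # build a mini-batch containing a single sentence
    x = torch.tensor([word_ids]).to(device)
    model.eval()
    with torch.no_grad():
        # scores has shape (1, n_tokens, n_tags)
        scores = model(x, [len(word_ids)])
    tag_ids = scores.argmax(dim=-1).squeeze(0).tolist()
    return [index_to_tag[i] for i in tag_ids]

# tag_sentence(['El', 'gato', 'duerme', '.'])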
11 Implementing Part-of-speech Tagging Using Recurrent Neural Networks

The previous chapter was our first exposure to recurrent neural networks, which included intuitions for why they are useful for natural language processing, various architectures, and training algorithms. In this chapter we will put them to use, to implement a common sequence modeling task.

11.1 Part-of-speech Tagging

The task we will use as an example for this chapter is part-of-speech (POS) tagging, an NLP application that, as we discussed in the previous chapter, benefits from word order. Please see Chapter 16 for a more thorough discussion of POS tagging. The entire code presented in this chapter is available in the chap11_pos_tagging Jupyter notebook.

To take a break from NLP applications for English, in this chapter we use the AnCora corpus (Taulé et al., 2008), which primarily consists of newspaper texts in Spanish and Catalan with different linguistic annotations. In this chapter we work with the Spanish portion of the corpus, and the annotations for Universal POS tags (see Chapter 16 for a description of these tags). The Spanish portion of the corpus is divided into a training set with 14,305 sentences, a development set with 1,654 sentences, and a test set with 1,721 sentences.

The data is distributed in the CoNLL-U format. In this format, all sentences in a dataset are stored in the same file, separated by a blank line. Each individual token in a sentence is represented in a line, which contains 10 annotation fields separated by tabs: ID, FORM, LEMMA, UPOS, XPOS, FEATS, HEAD, DEPREL, DEPS, and MISC. A comprehensive explanation of this format and the meaning of the different fields is beyond the goal of this chapter; however, the curious reader can find one at the CoNLL-U website.1 Here, we are only concerned with the fields FORM (the raw word) and UPOS (the Universal part-of-speech tag).

As in previous chapters, we use pandas to preprocess the data. For parsing the CoNLL-U files, we rely on the conllu Python module.2 We implement a function called read_tags that reads the CoNLL-U file corresponding to a dataset and returns a pandas dataframe that combines all tokens in a sentence into a single row with two columns, one for the words, and one for the POS tags in the corresponding sentence:

       words                                               tags
0      [El, presidente, de, el, órgano, regulador, de...  [DET, NOUN, ADP, DET, NOUN, ADJ, ADP, DET, PRO...
1      [Afirmó, que, sigue, el, criterio, europeo, y,...  [VERB, SCONJ, VERB, DET, NOUN, ADJ, CCONJ, SCO...
2      [Durante, la, presentación, de, el, libro, ", ...  [ADP, DET, NOUN, ADP, DET, NOUN, PUNCT, DET, P...
3      [Y, todas, las, miradas, convergen, en, la, lu...  [CCONJ, DET, DET, NOUN, VERB, ADP, DET, NOUN, ...
4      [Cambiar, las, formas, parece, de, rigor, ,, p...  [VERB, DET, NOUN, VERB, ADP, NOUN, PUNCT, CCON...
...    ...                                                 ...
14300  [Sobre, la, oferta, de, interconexión, con, Te...  [ADP, DET, NOUN, ADP, NOUN, ADP, PROPN, ADP, D...
14301  [La, inversión, en, investigación, básica, es,...  [DET, NOUN, ADP, NOUN, ADJ, AUX, DET, NOUN, AD...
14302  [Conviene, que, ahora, ,, en, plena, apoteosis...  [VERB, SCONJ, ADV, PUNCT, ADP, ADJ, NOUN, ADP,...
14303  [Carlos, y, Fayna, se, enzarzan, en, una, bron...  [PROPN, CCONJ, PROPN, PRON, VERB, ADP, DET, NO...
14304  [Él, llega, a, tirar, la, sobre, la, cama, y, ...  [PRON, VERB, ADP, VERB, PRON, ADP, DET, NOUN, ...

14305 rows × 2 columns

1 https://universaldependencies.org/format.html
2 https://github.com/EmilStenstrom/conllu/
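For reference, this is the read_tags() helper as it appears in the notebook listing above (imports included for completeness):

import pandas as pd
from conllu import parse_incr

def read_tags(filename):
    data = {'words': [], 'tags': []}
    with open(filename) as f:
        for sent in parse_incr(f):
            words = [tok['form'] for tok in sent]
            tags = [tok['upos'] for tok in sent]
            data['words'].append(words)
            data['tags'].append(tags)
    return pd.DataFrame(data)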
In order to implement our POS tagging application, we need word embeddings that have been pretrained for Spanish. Here we use the publicly-available GloVe embeddings trained on the Spanish Billion Word Corpus3 by the Departamento de Ciencias de la Computación of Universidad de Chile.4 In contrast to the GloVe embeddings used in Chapter 9, these do include a header that stores meta data about the embeddings (i.e., the size of the vocabulary and the dimension of the embedding vectors), so in this case we do not use the no_header=True argument. Another difference from the embeddings we used in Chapter 9 is that these already include an embedding for unknown words, so there is no need to introduce our own. However, we do need to include a new embedding for padding, which we will use later to guarantee that all sentences in the same mini-batch have the same length. We add a vector of zeros for the padding token in the same way as before.

3 https://crscardellino.ar/SBWCE/
4 https://github.com/dccuchile/spanishwordembeddings#gloveembeddingsfromsbwc

Next, we need to preprocess our tokens to match the vocabulary of the embeddings. In particular, these embeddings were trained on words that were lowercased and on sequences of digits that were replaced with a single 0. We apply the same modifications to our tokens. (From now on we will omit the pandas tables for readability, but, as usual, the corresponding Jupyter notebook contains all the necessary information.) Next, we add a new column to the dataframe that stores the word ids corresponding to the embedding vocabulary. Note that at this point we are not padding the sequences of word ids; we will address padding later.

We also need to generate the ids for the POS tags. To this end, we first construct a vocabulary of POS tags. Once again, we generate a list of tags using explode(), which linearizes our sequence of sequences of tags, and remove repeated tags using unique(). We also add a special tag for the padding token. We then use this POS tag vocabulary to construct a new dataframe column that stores the POS tag ids.

The implementation of the Dataset class that stores our POS dataset is trivial: we simply return the lists of word and tag ids, converted to PyTorch tensors.

Now it's time to handle padding. This time we will use some features of PyTorch that we have not seen before. The DataLoader object can receive an optional argument, collate_fn, which expects a function that can be used to form a mini-batch. We implement this function using PyTorch's torch.nn.utils.rnn.pad_sequence() function, which, unsurprisingly, pads a group of tensors. We take advantage of this function to pad the tensors while forming the mini-batch itself. The advantage of this strategy is that, rather than padding all the examples to the length of the longest sentence in the corpus, we pad them only to the length of the longest sentence in the mini-batch. The latter strategy reduces the amount of padding necessary, which should yield more efficient code.

The collate_fn() function takes a single argument, batch, which is a list of tuples. Each tuple has two elements: the list of word ids and the list of tag ids corresponding to a single example. We first unzip this list of tuples into two lists; the first list has all the word ids, and the second has the tag ids. An explanation of how zip(*batch) works is provided in Appendix A. Next, we compute the lengths of each of the examples in the batch, which we will use later to inform the recurrent neural network where padding starts for each example. We then use the pad_sequence() function to add padding. This function finds the longest sequence in the batch and pads all examples accordingly using the provided padding value. This function is designed to work with PyTorch's recurrent neural networks, which by default assume the batch index is in the second dimension. However, we will be organizing our tensors such that the batch index is always in the first dimension, which we feel is more intuitive. For this reason, we also need to provide the batch_first=True argument to pad_sequence(). Finally, we return the padded data, as well as the original lengths of the examples, as shown in the sketch below.
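For reference, this is the collate_fn() function as it appears in the notebook listing above:

from torch.nn.utils.rnn import pad_sequence

def collate_fn(batch):
    # separate xs and ys
    xs, ys = zip(*batch)
    # get lengths
    lengths = [len(x) for x in xs]
    # pad sequences
    x_padded = pad_sequence(xs, batch_first=True, padding_value=pad_tok_id)
    y_padded = pad_sequence(ys, batch_first=True, padding_value=pad_tag_id)
    # return padded
    return x_padded, y_padded, lengths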
Next, we implement our POS tagging model class. The model consists of: (a) an embedding layer for our Spanish pretrained embeddings; (b) an LSTM that can be set to be uni- or bi-directional (see Figure 10.3; the RNN is configured to be bidirectional by setting the bidirectional argument to True in the LSTM constructor), with a configurable number of layers (see Figure 10.2; the number of layers is set through the num_layers argument of the constructor); and (c) a linear layer on top of each hidden state, which is used to predict the scores for each of the POS tags for the corresponding token.

The forward() method receives the padded mini-batch and the list of lengths for the (unpadded) examples in this mini-batch. The first step in the function is to retrieve the embeddings for all words referenced in this mini-batch. We then apply dropout over these embedding vectors. Next, before passing the data to the LSTM, we pack the padded data. Note that the PyTorch PackedSequence5 class, which is the output of the pack_padded_sequence() function, stores a batch of sequences that had different lengths before padding. One important advantage of using PackedSequence is that its internal data structure removes the padding tokens (which is why we had to keep track of the example lengths before padding in x_lengths), and, thus, the recurrent neural network will not back-propagate over the padded elements.6 Once we have a PackedSequence, we pass it to the LSTM. Since the output of the LSTM is also packed, we then unpack it using pad_packed_sequence(). Next, we apply dropout to this unpacked LSTM output. Finally, we pass this to the linear layer to predict the tag scores for the tokens.

5 https://pytorch.org/docs/stable/generated/torch.nn.utils.rnn.PackedSequence.html
6 The astute reader might ask at this point, "Why did we pad the mini-batch examples in the first place, if we are removing the padding later?" The padding is needed because it allows us to store the mini-batch as a single three-dimensional tensor.
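For reference, this is the MyModel class as it appears in the notebook listing above (imports included for completeness):

import torch
from torch import nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

class MyModel(nn.Module):
    def __init__(self, vectors, hidden_size, num_layers, bidirectional, dropout, output_size):
        super().__init__()
        # ensure vectors is a tensor
        if not torch.is_tensor(vectors):
            vectors = torch.tensor(vectors)
        # init embedding layer
        self.embedding = nn.Embedding.from_pretrained(embeddings=vectors)
        # init lstm
        self.lstm = nn.LSTM(
            input_size=vectors.shape[1],
            hidden_size=hidden_size,
            num_layers=num_layers,
            bidirectional=bidirectional,
            dropout=dropout,
            batch_first=True)
        # init dropout
        self.dropout = nn.Dropout(dropout)
        # init classifier
        self.classifier = nn.Linear(
            in_features=hidden_size * 2 if bidirectional else hidden_size,
            out_features=output_size)

    def forward(self, x_padded, x_lengths):
        # get embeddings
        output = self.embedding(x_padded)
        output = self.dropout(output)
        # pack data before lstm
        packed = pack_padded_sequence(output, x_lengths, batch_first=True, enforce_sorted=False)
        packed, _ = self.lstm(packed)
        # unpack data before rest of model
        output, _ = pad_packed_sequence(packed, batch_first=True)
        output = self.dropout(output)
        output = self.classifier(output)
        return output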
Despite the small number of lines of code, the code of the forward() method, which switches between embedding vectors, padded tensors, and packed sequences, is not trivial. To clarify it, let us walk through an example. Imagine that the input to the forward() method is a batch, x_padded, with shape (10, 20), corresponding to 10 examples, each with 20 word ids (some of which are padding). Then we retrieve the embeddings. Assuming our word embeddings, i.e., the input vectors xi in Chapter 10, are of dimension 300, the new tensor will have a shape of (10, 20, 300), corresponding to 10 examples, each with 20 embeddings, each of dimension 300. After dropout the shape hasn't changed, but some of the elements have been zeroed out. After unpacking the output of the LSTM, we will have a tensor of shape (10, 20, hidden_size), where hidden_size is the size of the LSTM hidden state, i.e., the ht vector in Equation 10.6 (hidden_size is a hyper parameter we will set later on). After passing this tensor to the linear layer, we will obtain a tensor of shape (10, 20, tag_vocab_size), where tag_vocab_size is the number of POS tags in our vocabulary. Thus, for each token in each example, we will have a distribution of POS tag scores. For each token, the assigned POS tag will be the one corresponding to the highest score.

We next initialize all the hyper parameters and all the required components (see the notebook listing above). The training procedure is very similar to the one implemented in Chapter 7. One notable difference is that the output of this model has three dimensions instead of two: number of examples, number of tokens, and number of POS tag scores. Thus, we have to reshape the output to pass it to the loss function. Additionally, we need to discard the padding before computing the loss. We reshape the gold tag ids using the torch.flatten() function, to transform the 2-dimensional tensor of shape (n_examples, n_tokens) into a 1-dimensional tensor with n_examples * n_tokens elements. The predictions are reshaped using the view(-1, output_size) method. By passing two arguments we are stipulating that we want two dimensions. The second dimension will be of size output_size. The -1 indicates that the first dimension should be inferred from the size of the tensor. This means that for a tensor of shape (n_examples, n_tokens, output_size) we will get a tensor of shape (n_examples * n_tokens, output_size). Then, we use a Boolean mask to discard the elements corresponding to the padding. This way, the loss function will consider each actual word individually, as if the whole batch were just one big sentence. Note that treating a mini-batch as a single virtual sentence does affect the evaluation results. A sketch of this reshape-and-mask step is shown below.
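As a minimal sketch (the notebook performs these steps inline in the training loop; masked_loss is a hypothetical name introduced here for illustration), the reshape-and-mask step could be wrapped in a small helper:

import torch

def masked_loss(y_pred, y_padded, pad_tag_id, loss_func):
    # y_pred: (n_examples, n_tokens, output_size) scores from the model
    # y_padded: (n_examples, n_tokens) gold tag ids, including padding
    output_size = y_pred.shape[-1]
    y_true = torch.flatten(y_padded)          # (n_examples * n_tokens,)
    y_pred = y_pred.view(-1, output_size)     # (n_examples * n_tokens, output_size)
    mask = y_true != pad_tag_id               # keep only the real (non-padding) tokens
    return loss_func(y_pred[mask], y_true[mask])

Under this sketch, the training loop would call masked_loss(y_pred, y_padded, pad_tag_id, loss_func) instead of performing the reshaping inline.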
Lastly, we evaluate the performance of our POS tagger on the test set, similarly to how we have done it before. The results indicate that our POS tagger obtains an overall accuracy of 97%, which is in line with state-of-the-art approaches! This is encouraging considering that our approach does not include the CRF layer we discussed in Chapter 10. We challenge the reader to add this layer,7 and to experiment with this architecture for other sequence tasks such as named entity recognition.

7 See, for example, the LSTM-CRF implementation from the PyTorch tutorial: https://pytorch.org/tutorials/beginner/nlp/advanced_tutorial.html

11.2 Summary

In this chapter we have implemented a Spanish part-of-speech tagger using a bidirectional LSTM and a set of pretrained, static word embeddings. Through this process, we have also introduced several new PyTorch features, such as the pad_sequence, pack_padded_sequence, and pad_packed_sequence functions, which allow us to work more efficiently with variable-length sequences in recurrent neural networks.
4,747
4,966
#!/usr/bin/env python # coding: utf-8 # # Part-of-speech Tagging Using RNNs # Some initialization: # In[4]: import random import torch import numpy as np import pandas as pd from tqdm.notebook import tqdm # enable tqdm in pandas tqdm.pandas() # set to True to use the gpu (if there is one available) use_gpu = True # select device device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu') print(f'device: {device.type}') # random seed seed = 1234 # set random seed if seed is not None: print(f'random seed: {seed}') random.seed(seed) np.random.seed(seed) torch.manual_seed(seed) # Next, let's read the words and their POS tags from the CoNLLUP format: # In[5]: from conllu import parse_incr def read_tags(filename): data = {'words': [], 'tags': []} with open(filename) as f: for sent in parse_incr(f): words = [tok['form'] for tok in sent] tags = [tok['upos'] for tok in sent] data['words'].append(words) data['tags'].append(tags) return pd.DataFrame(data) # In[6]: train_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-train.conllup') train_df # We now load the GloVe embeddings for Spanish, which include a representation for the unknown token: # In[7]: from gensim.models import KeyedVectors glove = KeyedVectors.load_word2vec_format('glove-sbwc.i25.vec') glove.vectors.shape # In[8]: # these embeddings already include <unk> unk_tok = '<unk>' unk_id = glove.key_to_index[unk_tok] unk_tok, unk_id # In[9]: # add padding embedding pad_tok = '<pad>' pad_emb = np.zeros(300) glove.add_vector(pad_tok, pad_emb) pad_tok_id = glove.key_to_index[pad_tok] pad_tok, pad_tok_id # Preprocessing: lower case all words, and replace all numbers with '0': # In[10]: def preprocess(words): result = [] for w in words: w = w.lower() if w.isdecimal(): w = '0' result.append(w) return result train_df['words'] = train_df['words'].progress_map(preprocess) train_df # Next, construct actual PyTorch `Dataset` and `DataLoader` objects for the train/dev/test partitions: # In[11]: def get_ids(tokens, key_to_index, unk_id=None): return [key_to_index.get(tok, unk_id) for tok in tokens] def get_word_ids(tokens): return get_ids(tokens, glove.key_to_index, unk_id) # add new column to the dataframe train_df['word ids'] = train_df['words'].progress_map(get_word_ids) train_df # In[12]: pad_tag = '<pad>' index_to_tag = train_df['tags'].explode().unique().tolist() + [pad_tag] tag_to_index = {t:i for i,t in enumerate(index_to_tag)} pad_tag_id = tag_to_index[pad_tag] pad_tag, pad_tag_id # In[13]: index_to_tag # In[14]: def get_tag_ids(tags): return get_ids(tags, tag_to_index) train_df['tag ids'] = train_df['tags'].progress_map(get_tag_ids) train_df # In[15]: dev_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-dev.conllup') dev_df['words'] = dev_df['words'].progress_map(preprocess) dev_df['word ids'] = dev_df['words'].progress_map(lambda x: get_ids(x, glove.key_to_index, unk_id)) dev_df['tag ids'] = dev_df['tags'].progress_map(lambda x: get_ids(x, tag_to_index)) dev_df # In[16]: from torch.utils.data import Dataset class MyDataset(Dataset): def __init__(self, x, y): self.x = x self.y = y def __len__(self): return len(self.y) def __getitem__(self, index): x = torch.tensor(self.x[index]) y = torch.tensor(self.y[index]) return x, y # `collate_fn` will be used by `DataLoader` to pad all sentences in the same batch to the same length. 
# In[17]: from torch.nn.utils.rnn import pad_sequence def collate_fn(batch): # separate xs and ys xs, ys = zip(*batch) # get lengths lengths = [len(x) for x in xs] # pad sequences x_padded = pad_sequence(xs, batch_first=True, padding_value=pad_tok_id) y_padded = pad_sequence(ys, batch_first=True, padding_value=pad_tag_id) # return padded return x_padded, y_padded, lengths # Now construct our PyTorch model: # In[18]: from torch import nn from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence class MyModel(nn.Module): def __init__(self, vectors, hidden_size, num_layers, bidirectional, dropout, output_size): super().__init__() # ensure vectors is a tensor if not torch.is_tensor(vectors): vectors = torch.tensor(vectors) # init embedding layer self.embedding = nn.Embedding.from_pretrained(embeddings=vectors) # init lstm self.lstm = nn.LSTM( input_size=vectors.shape[1], hidden_size=hidden_size, num_layers=num_layers, bidirectional=bidirectional, dropout=dropout, batch_first=True) # init dropout self.dropout = nn.Dropout(dropout) # init classifier self.classifier = nn.Linear( in_features=hidden_size * 2 if bidirectional else hidden_size, out_features=output_size) def forward(self, x_padded, x_lengths): # get embeddings output = self.embedding(x_padded) output = self.dropout(output) # pack data before lstm packed = pack_padded_sequence(output, x_lengths, batch_first=True, enforce_sorted=False) packed, _ = self.lstm(packed) # unpack data before rest of model output, _ = pad_packed_sequence(packed, batch_first=True) output = self.dropout(output) output = self.classifier(output) return output # In[19]: from torch import optim from torch.utils.data import DataLoader from sklearn.metrics import accuracy_score # hyperparameters lr = 1e-3 weight_decay = 1e-5 batch_size = 100 shuffle = True n_epochs = 10 vectors = glove.vectors hidden_size = 100 num_layers = 2 bidirectional = True dropout = 0.1 output_size = len(index_to_tag) # initialize the model, loss function, optimizer, and data-loader model = MyModel(vectors, hidden_size, num_layers, bidirectional, dropout, output_size).to(device) loss_func = nn.CrossEntropyLoss() optimizer = optim.Adam(model.parameters(), lr=lr, weight_decay=weight_decay) train_ds = MyDataset(train_df['word ids'], train_df['tag ids']) train_dl = DataLoader(train_ds, batch_size=batch_size, shuffle=shuffle, collate_fn=collate_fn) dev_ds = MyDataset(dev_df['word ids'], dev_df['tag ids']) dev_dl = DataLoader(dev_ds, batch_size=batch_size, shuffle=shuffle, collate_fn=collate_fn) train_loss, train_acc = [], [] dev_loss, dev_acc = [], [] # We are now ready to train! 
# In[20]: # train the model for epoch in range(n_epochs): losses, acc = [], [] model.train() for x_padded, y_padded, lengths in tqdm(train_dl, desc=f'epoch {epoch+1} (train)'): # clear gradients model.zero_grad() # send batch to right device x_padded = x_padded.to(device) y_padded = y_padded.to(device) # predict label scores y_pred = model(x_padded, lengths) # reshape output y_true = torch.flatten(y_padded) y_pred = y_pred.view(-1, output_size) mask = y_true != pad_tag_id y_true = y_true[mask] y_pred = y_pred[mask] # compute loss loss = loss_func(y_pred, y_true) # accumulate for plotting gold = y_true.detach().cpu().numpy() pred = np.argmax(y_pred.detach().cpu().numpy(), axis=1) losses.append(loss.detach().cpu().item()) acc.append(accuracy_score(gold, pred)) # backpropagate loss.backward() # optimize model parameters optimizer.step() train_loss.append(np.mean(losses)) train_acc.append(np.mean(acc)) model.eval() with torch.no_grad(): losses, acc = [], [] for x_padded, y_padded, lengths in tqdm(dev_dl, desc=f'epoch {epoch+1} (dev)'): x_padded = x_padded.to(device) y_padded = y_padded.to(device) y_pred = model(x_padded, lengths) y_true = torch.flatten(y_padded) y_pred = y_pred.view(-1, output_size) mask = y_true != pad_tag_id y_true = y_true[mask] y_pred = y_pred[mask] loss = loss_func(y_pred, y_true) gold = y_true.cpu().numpy() pred = np.argmax(y_pred.cpu().numpy(), axis=1) losses.append(loss.cpu().item()) acc.append(accuracy_score(gold, pred)) dev_loss.append(np.mean(losses)) dev_acc.append(np.mean(acc)) # Plot loss and accuracy on dev after each epoch: # In[21]: import matplotlib.pyplot as plt x = np.arange(n_epochs) + 1 plt.plot(x, train_loss) plt.plot(x, dev_loss) plt.legend(['train', 'dev']) plt.xlabel('epoch') plt.ylabel('loss') plt.grid(True) # In[22]: plt.plot(x, train_acc) plt.plot(x, dev_acc) plt.legend(['train', 'dev']) plt.xlabel('epoch') plt.ylabel('accuracy') plt.grid(True) # In[23]: test_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-test.conllup') test_df['words'] = test_df['words'].progress_map(preprocess) test_df['word ids'] = test_df['words'].progress_map(lambda x: get_ids(x, glove.key_to_index, unk_id)) test_df['tag ids'] = test_df['tags'].progress_map(lambda x: get_ids(x, tag_to_index)) test_df # Now let's evaluate on the test partition: # In[24]: from sklearn.metrics import classification_report model.eval() test_ds = MyDataset(test_df['word ids'], test_df['tag ids']) test_dl = DataLoader(test_ds, batch_size=batch_size, shuffle=shuffle, collate_fn=collate_fn) all_y_true = [] all_y_pred = [] with torch.no_grad(): for x_padded, y_padded, lengths in tqdm(test_dl): x_padded = x_padded.to(device) y_pred = model(x_padded, lengths) y_true = torch.flatten(y_padded) y_pred = y_pred.view(-1, output_size) mask = y_true != pad_tag_id y_true = y_true[mask] y_pred = torch.argmax(y_pred[mask], dim=1) all_y_true.append(y_true.cpu().numpy()) all_y_pred.append(y_pred.cpu().numpy()) y_true = np.concatenate(all_y_true) y_pred = np.concatenate(all_y_pred) target_names = index_to_tag[:-2] print(classification_report(y_true, y_pred, target_names=target_names)) # Let's generate a confusion matrix for all POS tags in the data: # In[25]: from sklearn.metrics import ConfusionMatrixDisplay, confusion_matrix cm = confusion_matrix(y_true, y_pred, normalize='true') disp = ConfusionMatrixDisplay( confusion_matrix=cm, display_labels=target_names, ) fig, ax = plt.subplots(figsize=(10,10)) disp.plot( cmap='Blues', values_format='.2f', colorbar=False, ax=ax, xticks_rotation=45, ) # In[ ]:
1,804
1,827
15
chap11-16
chap11-16
11 Implementing Part-of-speech Tagging Using Recurrent Neural Networks The previous chapter was our first exposure to recurrent neural networks, which included intuitions for why they are useful for natural language processing, various architectures, and training algorithms. In this chapter we will put them to use, to implement a common sequence modeling task. 11.1 Part-of-speech Tagging The task we will use as an example for this chapter is part-of-speech (POS) tagging, an NLP application that, as we discussed in the previous chapter, benefits from word order. Please see Chapter 16 for a more thorough discussion of POS tagging. The entire code presented in this chapter is available in the chap11_pos_tagging Jupyter notebook. To take a break from NLP applications for English, in this chapter we use the AnCora corpus (Taulé et al., 2008), which primarily consists of newspaper texts in Spanish and Catalan with different linguistic annotations. In this chapter we work with the Spanish portion of the corpus, and the annotations for Universal POS tags (see Chapter 16 for a description of these tags). The Spanish portion of the corpus is divided into a training set with 14,305 sentences, a development set with 1,654 sentences, and a test set with 1,721 sentences. The data is distributed in the CoNLL-U format. In this format, all sentences in a dataset are stored in the same file, separated by a blank line. Each individual token in a sentence is represented in a line, which contains 10 annotation fields separated by tabs: ID, FORM, LEMMA, UPOS, XPOS, FEATS, HEAD, DEPREL, DEPS, and MISC. A comprehensive explanation of this format and the 162 11.1 Part-of-speech Tagging 163 meaning of the different fields is beyond the goal of this chapter; however, the curious reader can find one at the CoNLL-U website.1 Here, we are only concerned with the fields FORM (the raw word), and UPOS (the Universal part-of-speech tag). As in previous chapters, we use pandas to preprocess the data. For parsing the CoNLL-U files, we rely on the conllu Python module.2 We implement a function called read_tags that reads the CoNLL-U file corresponding to a dataset and returns a pandas dataframe that combines all tokens in a sentence into a single row with two columns, one for the words, and one for the POS tags in the corresponding sentence: 0 1 2 3 4 ... 14300 14301 14302 14303 14304 words [El, presidente, de, el, órgano, regulador, de... [Afirmó, que, sigue, el, criterio, europeo, y,... [Durante, la, presentación, de, el, libro, ", ... [Y, todas, las, miradas, convergen, en, la, lu... [Cambiar, las, formas, parece, de, rigor, ,, p... tags [DET, NOUN, ADP, DET, NOUN, ADJ, ADP, DET, PRO... [VERB, SCONJ, VERB, DET, NOUN, ADJ, CCONJ, SCO... [ADP, DET, NOUN, ADP, DET, NOUN, PUNCT, DET, P... [CCONJ, DET, DET, NOUN, VERB, ADP, DET, NOUN, ... [VERB, DET, NOUN, VERB, ADP, NOUN, PUNCT, CCON... [PRON, VERB, ADP, VERB, PRON, ADP, DET, NOUN, ... [Sobre, la, oferta, de, interconexión, con, Te... [ADP, DET, NOUN, ADP, NOUN, ADP, PROPN, ADP, D... [La, inversión, en, investigación, básica, es,... [DET, NOUN, ADP, NOUN, ADJ, AUX, DET, NOUN, AD... ... ... [Conviene, que, ahora, ,, en, plena, apoteosis... [VERB, SCONJ, ADV, PUNCT, ADP, ADJ, NOUN, ADP,... [Carlos, y, Fayna, se, enzarzan, en, una, bron... [PROPN, CCONJ, PROPN, PRON, VERB, ADP, DET, NO... [Él, llega, a, tirar, la, sobre, la, cama, y, ... 14305 rows × 2 columns In order to implement our POS tagging application, we need word embeddings that have been pretrained for Spanish. 
Here we use the publicly-available GloVe embeddings trained on the Spanish Billion Word Corpus3 by the Departamento de Ciencias de la Computación of Universidad de Chile.4 In contrast to the GloVe embeddings used in Chapter 9, these do include a header that stores meta data about the embeddings (i.e., size of the vocabulary and the dimension of the embedding vectors), so in this case we do not use the no_header=True argument: Another difference between these GloVe embeddings and the ones we used in Chapter 9 is that these already include an embedding for unknown words. Therefore, there is no need to introduce our own. However, we do need to include a new embedding for padding, which . 1  https://universaldependencies.org/format.html 
 . 2  https://github.com/EmilStenstrom/conllu/ 
 . 3  https://crscardellino.ar/SBWCE/ 
 . 4  https://github.com/dccuchile/spanishwordembeddings# 
gloveembeddingsfromsbwc 
 164 Implementing POS Tagging Using RNNs we will use later to guarantee that all sentences in the same mini-batch have the same length. We add a vector of zeros for the padding token in the same way as before: Next, we need to preprocess our tokens to match the vocabulary of the embeddings. In particular, these embeddings were trained on words that were lowercased and on sequences of digits that were replaced with a single 0. We will apply the same modifications to our tokens: (From now on we will omit the pandas tables for readability, but, as usual, the corresponding Jupyter notebook contains all necessary information.) Next, we add a new column to the dataframe that stores the word ids corresponding to the embedding vocabulary. Note that at this point we are not padding the sequences of word ids. We will address padding later. We also need to generate the ids for the POS tags. To this end, we first need to construct a vocabulary of POS tags. Once again, we generate a list of tags using explode(), which linearizes our sequence of sequences of tags, and remove repeated tags using unique(). We also add a special tag for the padding token: We now use this POS tag vocabulary to construct a new dataframe column that stores the POS tag ids: The implementation of the Dataset class that stores our POS dataset is trivial: we simply return the lists of word and tag ids, converted to PyTorch tensors. Now it’s time to handle padding. This time we will use some features of PyTorch that we have not seen before. The DataLoader object can receive an optional argument, collate_fn, which expects a function that can be used to form a mini-batch. We will implement this function using PyTorch’s torch.nn.utils.rnn.pad_sequence() function, which, unsurprisingly, pads a group of tensors. We will take advantage of this function to pad the tensors while forming the mini-batch itself. The advantage of this strategy is that, rather than needing to pad all the examples to be the same length as the largest sentence in the corpus, we will instead pad them to the same length as the largest sentence in the minibatch. The latter strategy reduces the amount of padding necessary, which should yield more efficient code. The collate_fn() function takes a single argument, batch, which is a list of tuples. Each tuple has two elements: the list of word ids and the list of tag ids corresponding to a single example. We first unzip this list of tuples into two lists; the first list has all the word ids, and the 11.1 Part-of-speech Tagging 165 second has the tag ids. An explanation of how zip(*batch) works is provided in Appendix A. Next, we compute the lengths of each of the examples in the batch, which we will use later to inform the recurrent neural network where padding starts for each example. We then use the pad_sequence() function to add padding. This function will find the longest sequence in the batch and pad all examples accordingly using the provided padding value. This method is designed to work with PyTorch’s recurrent neural networks, which by default assume the batch index is in the second dimension. However, we will be organizing our tensors such that the batch index is always in the first dimension, which we feel to be more intuitive. For this reason, we also need to provide the batch_first=True argument to pad_sequence. Finally, we return the padded data, as well as the original lengths of the examples. Next, we implement our POS tagging model class. 
The model consists of: (a) an embedding layer for our Spanish pretrained embeddings; (b) an LSTM that can be set to be unior bi-directional (see Figure 10.3; the RNN is configured to be bidirectional by setting the bidirectional argument to True in the LSTM constructor), with a configurable number of layers (see Figure 10.2; the number of layers is set through the num_layers argument of the constructor) and (c) a linear layer on top of each hidden state, which is used to predict the scores for each of the POS tags for the corresponding token. The forward() method receives the padded minibatch and the list of lengths for the (unpadded) examples in this mini-batch. The first step in the function is to retrieve the embeddings for all words referenced in this mini-batch. We then apply dropout over these embedding vectors. Next, before passing the data to the LSTM, we pack the padded data. Note that the PyTorch PackedSequence5 class, which is the output of the pack_padded_sequence() function, stores a batch of sequences that had different lengths before padding. One important advantage of using PackedSequence is that its internal data structure removes the padding tokens (which is why we had to keep track of the example lengths before padding in x_lengths), and, thus, the recurrent neural network will not back-propagate over the padded elements.6 Once we have a PackedSequence, we pass it to the LSTM. Since the . 5  https://pytorch.org/docs/stable/generated/torch.nn.utils.rnn. PackedSequence.html 
 . 6  The astute reader might ask at this point, “Why did we pad the mini-batch examples in the first place, if we are removing the padding later?” The padding is needed because this allows us to store the mini-batch as a single three-dimensional tensor. 
 166 Implementing POS Tagging Using RNNs output of the LSTM is also packed, we then unpack it using pad_packed_sequence(). Next we apply dropout to this unpacked LSTM output. Finally, we pass
this to the linear layer to predict the tag scores for the tokens. Despite the small number of lines of code, the code of the forward() method, which switches between embedding vectors, padded tensors, and packed sequences, is not trivial. To clarify it, let us walk through an example. Imagine that the input to the forward() method is a batch, x_padded, with shape (10, 20), corresponding to 10 examples, each with 20 word ids (some of which are padding). Then we retrieve the embeddings. Assuming our word embeddings, i.e., the input vectors xi in Chapter 10, are of dimension 300, the new tensor will have a shape of (10, 20, 300), corresponding to 10 examples, each with 20 embeddings, each with dimension 300. After dropout the shape hasn’t changed, but some of the elements have been zeroed out. After unpacking the output of the LSTM, we will have a tensor of shape (10, 20, hidden_size), where hidden_size is the size of the LSTM hidden state, i.e., the ht vector in Equation 10.6, (hidden_size is a hyper parameter we will set later on). After passing this tensor to the linear layer, we will obtain a tensor of shape (10, 20, tag_vocab_size), where tag_vocab_size is the number of POS tags in our vocabulary. Thus, for each token in each example, we will have a distribution of POS tag scores. For each token, the assigned POS tag will be the one corresponding to the highest score. We next initialize all the hyper parameters and all the required components: The training procedure is very similar to the one implemented in Chapter 7. One notable difference is that the output of this model has three dimensions instead of two: number of examples, number of tokens, and number of POS tag scores. Thus, we have to reshape the output to pass it to the loss function. Additionally, we need to discard the padding before computing the loss. We reshape the gold tag ids using the torch.flatten() function, to transform the 2-dimensional tensor of shape (n_examples, n_tokens) to a 1-dimensional tensor with n_examples * n_tokens elements. The predictions are reshaped using the view(-1, output_size) method. By passing two arguments we are stipulating that we want two dimensions. The second dimension will be of size output_size. The -1 indicates that the first dimension should be inferred from the size of the tensor. This means that for a tensor of shape (n_examples, n_tokens, output_size) we will get a tensor of shape (n_examples * n_tokens, output_size). Then, we use a Boolean mask to discard the elements corresponding to the padding. This way, the loss 11.2 Summary 167 function will consider each actual word individually, as if the whole batch was just one big sentence. Note that treating a mini-batch as a single virtual sentence does affect the evaluation results. Lastly, we evaluate the performance of our POS tagger on the test set, similarly to how we have done it before: The results indicate that our POS tagger obtains an overall accuracy of 97%, which is in line with state-of-the-art approaches! This is encouraging considering that our approach does not include the CRF layer we discussed in Chapter 10. We challenge the reader to add this layer,7 and experiment with this architecture for other sequence tasks such as named entity recognition. 11.2 Summary In this chapter we have implemented a Spanish part-of-speech tagger using a bidirectional LSTM and a set of pretrained, static word embeddings. 
Through this process, we have also introduced several new PyTorch features such as the pad_sequence, pack_padded_sequence, and pad_packed_sequence functions, which allow us to work more efficiently with variable length sequences for recurrent neural networks. 7 See, for example, the LSTM-CRF implementation from the PyTorch tutorial: https://pytorch.org/tutorials/beginner/nlp/advanced_tutorial.html
5,171
5,281
#!/usr/bin/env python # coding: utf-8 # # Part-of-speech Tagging Using RNNs # Some initialization: # In[4]: import random import torch import numpy as np import pandas as pd from tqdm.notebook import tqdm # enable tqdm in pandas tqdm.pandas() # set to True to use the gpu (if there is one available) use_gpu = True # select device device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu') print(f'device: {device.type}') # random seed seed = 1234 # set random seed if seed is not None: print(f'random seed: {seed}') random.seed(seed) np.random.seed(seed) torch.manual_seed(seed) # Next, let's read the words and their POS tags from the CoNLLUP format: # In[5]: from conllu import parse_incr def read_tags(filename): data = {'words': [], 'tags': []} with open(filename) as f: for sent in parse_incr(f): words = [tok['form'] for tok in sent] tags = [tok['upos'] for tok in sent] data['words'].append(words) data['tags'].append(tags) return pd.DataFrame(data) # In[6]: train_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-train.conllup') train_df # We now load the GloVe embeddings for Spanish, which include a representation for the unknown token: # In[7]: from gensim.models import KeyedVectors glove = KeyedVectors.load_word2vec_format('glove-sbwc.i25.vec') glove.vectors.shape # In[8]: # these embeddings already include <unk> unk_tok = '<unk>' unk_id = glove.key_to_index[unk_tok] unk_tok, unk_id # In[9]: # add padding embedding pad_tok = '<pad>' pad_emb = np.zeros(300) glove.add_vector(pad_tok, pad_emb) pad_tok_id = glove.key_to_index[pad_tok] pad_tok, pad_tok_id # Preprocessing: lower case all words, and replace all numbers with '0': # In[10]: def preprocess(words): result = [] for w in words: w = w.lower() if w.isdecimal(): w = '0' result.append(w) return result train_df['words'] = train_df['words'].progress_map(preprocess) train_df # Next, construct actual PyTorch `Dataset` and `DataLoader` objects for the train/dev/test partitions: # In[11]: def get_ids(tokens, key_to_index, unk_id=None): return [key_to_index.get(tok, unk_id) for tok in tokens] def get_word_ids(tokens): return get_ids(tokens, glove.key_to_index, unk_id) # add new column to the dataframe train_df['word ids'] = train_df['words'].progress_map(get_word_ids) train_df # In[12]: pad_tag = '<pad>' index_to_tag = train_df['tags'].explode().unique().tolist() + [pad_tag] tag_to_index = {t:i for i,t in enumerate(index_to_tag)} pad_tag_id = tag_to_index[pad_tag] pad_tag, pad_tag_id # In[13]: index_to_tag # In[14]: def get_tag_ids(tags): return get_ids(tags, tag_to_index) train_df['tag ids'] = train_df['tags'].progress_map(get_tag_ids) train_df # In[15]: dev_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-dev.conllup') dev_df['words'] = dev_df['words'].progress_map(preprocess) dev_df['word ids'] = dev_df['words'].progress_map(lambda x: get_ids(x, glove.key_to_index, unk_id)) dev_df['tag ids'] = dev_df['tags'].progress_map(lambda x: get_ids(x, tag_to_index)) dev_df # In[16]: from torch.utils.data import Dataset class MyDataset(Dataset): def __init__(self, x, y): self.x = x self.y = y def __len__(self): return len(self.y) def __getitem__(self, index): x = torch.tensor(self.x[index]) y = torch.tensor(self.y[index]) return x, y # `collate_fn` will be used by `DataLoader` to pad all sentences in the same batch to the same length. 
# In[17]: from torch.nn.utils.rnn import pad_sequence def collate_fn(batch): # separate xs and ys xs, ys = zip(*batch) # get lengths lengths = [len(x) for x in xs] # pad sequences x_padded = pad_sequence(xs, batch_first=True, padding_value=pad_tok_id) y_padded = pad_sequence(ys, batch_first=True, padding_value=pad_tag_id) # return padded return x_padded, y_padded, lengths # Now construct our PyTorch model: # In[18]: from torch import nn from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence class MyModel(nn.Module): def __init__(self, vectors, hidden_size, num_layers, bidirectional, dropout, output_size): super().__init__() # ensure vectors is a tensor if not torch.is_tensor(vectors): vectors = torch.tensor(vectors) # init embedding layer self.embedding = nn.Embedding.from_pretrained(embeddings=vectors) # init lstm self.lstm = nn.LSTM( input_size=vectors.shape[1], hidden_size=hidden_size, num_layers=num_layers, bidirectional=bidirectional, dropout=dropout, batch_first=True) # init dropout self.dropout = nn.Dropout(dropout) # init classifier self.classifier = nn.Linear( in_features=hidden_size * 2 if bidirectional else hidden_size, out_features=output_size) def forward(self, x_padded, x_lengths): # get embeddings output = self.embedding(x_padded) output = self.dropout(output) # pack data before lstm packed = pack_padded_sequence(output, x_lengths, batch_first=True, enforce_sorted=False) packed, _ = self.lstm(packed) # unpack data before rest of model output, _ = pad_packed_sequence(packed, batch_first=True) output = self.dropout(output) output = self.classifier(output) return output # In[19]: from torch import optim from torch.utils.data import DataLoader from sklearn.metrics import accuracy_score # hyperparameters lr = 1e-3 weight_decay = 1e-5 batch_size = 100 shuffle = True n_epochs = 10 vectors = glove.vectors hidden_size = 100 num_layers = 2 bidirectional = True dropout = 0.1 output_size = len(index_to_tag) # initialize the model, loss function, optimizer, and data-loader model = MyModel(vectors, hidden_size, num_layers, bidirectional, dropout, output_size).to(device) loss_func = nn.CrossEntropyLoss() optimizer = optim.Adam(model.parameters(), lr=lr, weight_decay=weight_decay) train_ds = MyDataset(train_df['word ids'], train_df['tag ids']) train_dl = DataLoader(train_ds, batch_size=batch_size, shuffle=shuffle, collate_fn=collate_fn) dev_ds = MyDataset(dev_df['word ids'], dev_df['tag ids']) dev_dl = DataLoader(dev_ds, batch_size=batch_size, shuffle=shuffle, collate_fn=collate_fn) train_loss, train_acc = [], [] dev_loss, dev_acc = [], [] # We are now ready to train! 
# In[20]: # train the model for epoch in range(n_epochs): losses, acc = [], [] model.train() for x_padded, y_padded, lengths in tqdm(train_dl, desc=f'epoch {epoch+1} (train)'): # clear gradients model.zero_grad() # send batch to right device x_padded = x_padded.to(device) y_padded = y_padded.to(device) # predict label scores y_pred = model(x_padded, lengths) # reshape output y_true = torch.flatten(y_padded) y_pred = y_pred.view(-1, output_size) mask = y_true != pad_tag_id y_true = y_true[mask] y_pred = y_pred[mask] # compute loss loss = loss_func(y_pred, y_true) # accumulate for plotting gold = y_true.detach().cpu().numpy() pred = np.argmax(y_pred.detach().cpu().numpy(), axis=1) losses.append(loss.detach().cpu().item()) acc.append(accuracy_score(gold, pred)) # backpropagate loss.backward() # optimize model parameters optimizer.step() train_loss.append(np.mean(losses)) train_acc.append(np.mean(acc)) model.eval() with torch.no_grad(): losses, acc = [], [] for x_padded, y_padded, lengths in tqdm(dev_dl, desc=f'epoch {epoch+1} (dev)'): x_padded = x_padded.to(device) y_padded = y_padded.to(device) y_pred = model(x_padded, lengths) y_true = torch.flatten(y_padded) y_pred = y_pred.view(-1, output_size) mask = y_true != pad_tag_id y_true = y_true[mask] y_pred = y_pred[mask] loss = loss_func(y_pred, y_true) gold = y_true.cpu().numpy() pred = np.argmax(y_pred.cpu().numpy(), axis=1) losses.append(loss.cpu().item()) acc.append(accuracy_score(gold, pred)) dev_loss.append(np.mean(losses)) dev_acc.append(np.mean(acc)) # Plot loss and accuracy on dev after each epoch: # In[21]: import matplotlib.pyplot as plt x = np.arange(n_epochs) + 1 plt.plot(x, train_loss) plt.plot(x, dev_loss) plt.legend(['train', 'dev']) plt.xlabel('epoch') plt.ylabel('loss') plt.grid(True) # In[22]: plt.plot(x, train_acc) plt.plot(x, dev_acc) plt.legend(['train', 'dev']) plt.xlabel('epoch') plt.ylabel('accuracy') plt.grid(True) # In[23]: test_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-test.conllup') test_df['words'] = test_df['words'].progress_map(preprocess) test_df['word ids'] = test_df['words'].progress_map(lambda x: get_ids(x, glove.key_to_index, unk_id)) test_df['tag ids'] = test_df['tags'].progress_map(lambda x: get_ids(x, tag_to_index)) test_df # Now let's evaluate on the test partition: # In[24]: from sklearn.metrics import classification_report model.eval() test_ds = MyDataset(test_df['word ids'], test_df['tag ids']) test_dl = DataLoader(test_ds, batch_size=batch_size, shuffle=shuffle, collate_fn=collate_fn) all_y_true = [] all_y_pred = [] with torch.no_grad(): for x_padded, y_padded, lengths in tqdm(test_dl): x_padded = x_padded.to(device) y_pred = model(x_padded, lengths) y_true = torch.flatten(y_padded) y_pred = y_pred.view(-1, output_size) mask = y_true != pad_tag_id y_true = y_true[mask] y_pred = torch.argmax(y_pred[mask], dim=1) all_y_true.append(y_true.cpu().numpy()) all_y_pred.append(y_pred.cpu().numpy()) y_true = np.concatenate(all_y_true) y_pred = np.concatenate(all_y_pred) target_names = index_to_tag[:-2] print(classification_report(y_true, y_pred, target_names=target_names)) # Let's generate a confusion matrix for all POS tags in the data: # In[25]: from sklearn.metrics import ConfusionMatrixDisplay, confusion_matrix cm = confusion_matrix(y_true, y_pred, normalize='true') disp = ConfusionMatrixDisplay( confusion_matrix=cm, display_labels=target_names, ) fig, ax = plt.subplots(figsize=(10,10)) disp.plot( cmap='Blues', values_format='.2f', colorbar=False, ax=ax, xticks_rotation=45, ) # In[ ]:
2,391
2,459
16
chap11-17
chap11-17
11 Implementing Part-of-speech Tagging Using Recurrent Neural Networks The previous chapter was our first exposure to recurrent neural networks, which included intuitions for why they are useful for natural language processing, various architectures, and training algorithms. In this chapter we will put them to use, to implement a common sequence modeling task. 11.1 Part-of-speech Tagging The task we will use as an example for this chapter is part-of-speech (POS) tagging, an NLP application that, as we discussed in the previous chapter, benefits from word order. Please see Chapter 16 for a more thorough discussion of POS tagging. The entire code presented in this chapter is available in the chap11_pos_tagging Jupyter notebook. To take a break from NLP applications for English, in this chapter we use the AnCora corpus (Taulé et al., 2008), which primarily consists of newspaper texts in Spanish and Catalan with different linguistic annotations. In this chapter we work with the Spanish portion of the corpus, and the annotations for Universal POS tags (see Chapter 16 for a description of these tags). The Spanish portion of the corpus is divided into a training set with 14,305 sentences, a development set with 1,654 sentences, and a test set with 1,721 sentences. The data is distributed in the CoNLL-U format. In this format, all sentences in a dataset are stored in the same file, separated by a blank line. Each individual token in a sentence is represented in a line, which contains 10 annotation fields separated by tabs: ID, FORM, LEMMA, UPOS, XPOS, FEATS, HEAD, DEPREL, DEPS, and MISC. A comprehensive explanation of this format and the 162 11.1 Part-of-speech Tagging 163 meaning of the different fields is beyond the goal of this chapter; however, the curious reader can find one at the CoNLL-U website.1 Here, we are only concerned with the fields FORM (the raw word), and UPOS (the Universal part-of-speech tag). As in previous chapters, we use pandas to preprocess the data. For parsing the CoNLL-U files, we rely on the conllu Python module.2 We implement a function called read_tags that reads the CoNLL-U file corresponding to a dataset and returns a pandas dataframe that combines all tokens in a sentence into a single row with two columns, one for the words, and one for the POS tags in the corresponding sentence: 0 1 2 3 4 ... 14300 14301 14302 14303 14304 words [El, presidente, de, el, órgano, regulador, de... [Afirmó, que, sigue, el, criterio, europeo, y,... [Durante, la, presentación, de, el, libro, ", ... [Y, todas, las, miradas, convergen, en, la, lu... [Cambiar, las, formas, parece, de, rigor, ,, p... tags [DET, NOUN, ADP, DET, NOUN, ADJ, ADP, DET, PRO... [VERB, SCONJ, VERB, DET, NOUN, ADJ, CCONJ, SCO... [ADP, DET, NOUN, ADP, DET, NOUN, PUNCT, DET, P... [CCONJ, DET, DET, NOUN, VERB, ADP, DET, NOUN, ... [VERB, DET, NOUN, VERB, ADP, NOUN, PUNCT, CCON... [PRON, VERB, ADP, VERB, PRON, ADP, DET, NOUN, ... [Sobre, la, oferta, de, interconexión, con, Te... [ADP, DET, NOUN, ADP, NOUN, ADP, PROPN, ADP, D... [La, inversión, en, investigación, básica, es,... [DET, NOUN, ADP, NOUN, ADJ, AUX, DET, NOUN, AD... ... ... [Conviene, que, ahora, ,, en, plena, apoteosis... [VERB, SCONJ, ADV, PUNCT, ADP, ADJ, NOUN, ADP,... [Carlos, y, Fayna, se, enzarzan, en, una, bron... [PROPN, CCONJ, PROPN, PRON, VERB, ADP, DET, NO... [Él, llega, a, tirar, la, sobre, la, cama, y, ... 14305 rows × 2 columns In order to implement our POS tagging application, we need word embeddings that have been pretrained for Spanish. 
Here we use the publicly-available GloVe embeddings trained on the Spanish Billion Word Corpus3 by the Departamento de Ciencias de la Computación of Universidad de Chile.4 In contrast to the GloVe embeddings used in Chapter 9, these do include a header that stores meta data about the embeddings (i.e., size of the vocabulary and the dimension of the embedding vectors), so in this case we do not use the no_header=True argument: Another difference between these GloVe embeddings and the ones we used in Chapter 9 is that these already include an embedding for unknown words. Therefore, there is no need to introduce our own. However, we do need to include a new embedding for padding, which . 1  https://universaldependencies.org/format.html 
 . 2  https://github.com/EmilStenstrom/conllu/ 
 . 3  https://crscardellino.ar/SBWCE/ 
 . 4  https://github.com/dccuchile/spanishwordembeddings# 
gloveembeddingsfromsbwc 
 164 Implementing POS Tagging Using RNNs we will use later to guarantee that all sentences in the same mini-batch have the same length. We add a vector of zeros for the padding token in the same way as before: Next, we need to preprocess our tokens to match the vocabulary of the embeddings. In particular, these embeddings were trained on words that were lowercased and on sequences of digits that were replaced with a single 0. We will apply the same modifications to our tokens: (From now on we will omit the pandas tables for readability, but, as usual, the corresponding Jupyter notebook contains all necessary information.) Next, we add a new column to the dataframe that stores the word ids corresponding to the embedding vocabulary. Note that at this point we are not padding the sequences of word ids. We will address padding later. We also need to generate the ids for the POS tags. To this end, we first need to construct a vocabulary of POS tags. Once again, we generate a list of tags using explode(), which linearizes our sequence of sequences of tags, and remove repeated tags using unique(). We also add a special tag for the padding token: We now use this POS tag vocabulary to construct a new dataframe column that stores the POS tag ids: The implementation of the Dataset class that stores our POS dataset is trivial: we simply return the lists of word and tag ids, converted to PyTorch tensors. Now it’s time to handle padding. This time we will use some features of PyTorch that we have not seen before. The DataLoader object can receive an optional argument, collate_fn, which expects a function that can be used to form a mini-batch. We will implement this function using PyTorch’s torch.nn.utils.rnn.pad_sequence() function, which, unsurprisingly, pads a group of tensors. We will take advantage of this function to pad the tensors while forming the mini-batch itself. The advantage of this strategy is that, rather than needing to pad all the examples to be the same length as the largest sentence in the corpus, we will instead pad them to the same length as the largest sentence in the minibatch. The latter strategy reduces the amount of padding necessary, which should yield more efficient code. The collate_fn() function takes a single argument, batch, which is a list of tuples. Each tuple has two elements: the list of word ids and the list of tag ids corresponding to a single example. We first unzip this list of tuples into two lists; the first list has all the word ids, and the 11.1 Part-of-speech Tagging 165 second has the tag ids. An explanation of how zip(*batch) works is provided in Appendix A. Next, we compute the lengths of each of the examples in the batch, which we will use later to inform the recurrent neural network where padding starts for each example. We then use the pad_sequence() function to add padding. This function will find the longest sequence in the batch and pad all examples accordingly using the provided padding value. This method is designed to work with PyTorch’s recurrent neural networks, which by default assume the batch index is in the second dimension. However, we will be organizing our tensors such that the batch index is always in the first dimension, which we feel to be more intuitive. For this reason, we also need to provide the batch_first=True argument to pad_sequence. Finally, we return the padded data, as well as the original lengths of the examples. Next, we implement our POS tagging model class. 
Next, we implement our POS tagging model class. The model consists of: (a) an embedding layer for our Spanish pretrained embeddings; (b) an LSTM that can be set to be uni- or bi-directional (see Figure 10.3; the RNN is configured to be bidirectional by setting the bidirectional argument to True in the LSTM constructor), with a configurable number of layers (see Figure 10.2; the number of layers is set through the num_layers argument of the constructor); and (c) a linear layer on top of each hidden state, which is used to predict the scores for each of the POS tags for the corresponding token.

The forward() method receives the padded mini-batch and the list of lengths for the (unpadded) examples in this mini-batch. The first step in the function is to retrieve the embeddings for all words referenced in this mini-batch. We then apply dropout over these embedding vectors. Next, before passing the data to the LSTM, we pack the padded data. Note that the PyTorch PackedSequence5 class, which is the output of the pack_padded_sequence() function, stores a batch of sequences that had different lengths before padding. One important advantage of using PackedSequence is that its internal data structure removes the padding tokens (which is why we had to keep track of the example lengths before padding in x_lengths), and, thus, the recurrent neural network will not back-propagate over the padded elements.6 Once we have a PackedSequence, we pass it to the LSTM. Since the output of the LSTM is also packed, we then unpack it using pad_packed_sequence(). Next we apply dropout to this unpacked LSTM output. Finally, we pass this to the linear layer to predict the tag scores for the tokens.

5 https://pytorch.org/docs/stable/generated/torch.nn.utils.rnn.PackedSequence.html
6 The astute reader might ask at this point, “Why did we pad the mini-batch examples in the first place, if we are removing the padding later?” The padding is needed because this allows us to store the mini-batch as a single three-dimensional tensor.
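The full model class, as implemented in the accompanying notebook:

import torch
from torch import nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

class MyModel(nn.Module):
    def __init__(self, vectors, hidden_size, num_layers, bidirectional, dropout, output_size):
        super().__init__()
        # ensure the pretrained vectors are a tensor
        if not torch.is_tensor(vectors):
            vectors = torch.tensor(vectors)
        # embedding layer initialized with the pretrained GloVe vectors
        self.embedding = nn.Embedding.from_pretrained(embeddings=vectors)
        # (bi)LSTM with a configurable number of layers
        self.lstm = nn.LSTM(
            input_size=vectors.shape[1],
            hidden_size=hidden_size,
            num_layers=num_layers,
            bidirectional=bidirectional,
            dropout=dropout,
            batch_first=True)
        self.dropout = nn.Dropout(dropout)
        # linear layer that maps each hidden state to POS tag scores
        self.classifier = nn.Linear(
            in_features=hidden_size * 2 if bidirectional else hidden_size,
            out_features=output_size)

    def forward(self, x_padded, x_lengths):
        # look up embeddings for the padded batch and apply dropout
        output = self.embedding(x_padded)
        output = self.dropout(output)
        # pack the padded batch so the LSTM skips the padded positions
        packed = pack_padded_sequence(output, x_lengths, batch_first=True, enforce_sorted=False)
        packed, _ = self.lstm(packed)
        # unpack the LSTM output back into a padded tensor
        output, _ = pad_packed_sequence(packed, batch_first=True)
        output = self.dropout(output)
        # predict tag scores for every token
        output = self.classifier(output)
        return output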
Despite the small number of lines of code, the code of the forward() method, which switches between embedding vectors, padded tensors, and packed sequences, is not trivial. To clarify it, let us walk through an example. Imagine that the input to the forward() method is a batch, x_padded, with shape (10, 20), corresponding to 10 examples, each with 20 word ids (some of which are padding). Then we retrieve the embeddings. Assuming our word embeddings, i.e., the input vectors x_i in Chapter 10, are of dimension 300, the new tensor will have a shape of (10, 20, 300), corresponding to 10 examples, each with 20 embeddings, each with dimension 300. After dropout the shape hasn't changed, but some of the elements have been zeroed out. After unpacking the output of the LSTM, we will have a tensor of shape (10, 20, hidden_size), where hidden_size is the size of the LSTM hidden state, i.e., the h_t vector in Equation 10.6 (hidden_size is a hyper parameter we will set later on). After passing this tensor to the linear layer, we will obtain a tensor of shape (10, 20, tag_vocab_size), where tag_vocab_size is the number of POS tags in our vocabulary. Thus, for each token in each example, we will have a distribution of POS tag scores. For each token, the assigned POS tag will be the one corresponding to the highest score.

We next initialize all the hyper parameters and all the required components.

The training procedure is very similar to the one implemented in Chapter 7. One notable difference is that the output of this model has three dimensions instead of two: number of examples, number of tokens, and number of POS tag scores. Thus, we have to reshape the output to pass it to the loss function. Additionally, we need to discard the padding before computing the loss. We reshape the gold tag ids using the torch.flatten() function, to transform the 2-dimensional tensor of shape (n_examples, n_tokens) to a 1-dimensional tensor with n_examples * n_tokens elements. The predictions are reshaped using the view(-1, output_size) method. By passing two arguments we are stipulating that we want two dimensions. The second dimension will be of size output_size. The -1 indicates that the first dimension should be inferred from the size of the tensor. This means that for a tensor of shape (n_examples, n_tokens, output_size) we will get a tensor of shape (n_examples * n_tokens, output_size). Then, we use a Boolean mask to discard the elements corresponding to the padding. This way, the loss function will consider each actual word individually, as if the whole batch was just one big sentence. Note that treating a mini-batch as a single virtual sentence does affect the evaluation results.
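The reshaping and masking just described correspond to this fragment of the notebook's training loop (output_size is the number of POS tags, including the padding tag, and loss_func is nn.CrossEntropyLoss()):

# predict tag scores for the padded mini-batch
y_pred = model(x_padded, lengths)
# flatten the gold tags: (n_examples, n_tokens) -> (n_examples * n_tokens,)
y_true = torch.flatten(y_padded)
# reshape the predictions: (n_examples, n_tokens, output_size) -> (n_examples * n_tokens, output_size)
y_pred = y_pred.view(-1, output_size)
# discard the positions that correspond to padding
mask = y_true != pad_tag_id
y_true = y_true[mask]
y_pred = y_pred[mask]
# compute the loss over the remaining (real) tokens only
loss = loss_func(y_pred, y_true)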
Lastly, we evaluate the performance of our POS tagger on the test set, similarly to how we have done it before. The results indicate that our POS tagger obtains an overall accuracy of 97%, which is in line with state-of-the-art approaches! This is encouraging considering that our approach does not include the CRF layer we discussed in Chapter 10. We challenge the reader to add this layer,7 and experiment with this architecture for other sequence tasks such as named entity recognition.

11.2 Summary

In this chapter we have implemented a Spanish part-of-speech tagger using a bidirectional LSTM and a set of pretrained, static word embeddings. Through this process, we have also introduced several new PyTorch features such as the pad_sequence, pack_padded_sequence, and pad_packed_sequence functions, which allow us to work more efficiently with variable-length sequences for recurrent neural networks.

7 See, for example, the LSTM-CRF implementation from the PyTorch tutorial: https://pytorch.org/tutorials/beginner/nlp/advanced_tutorial.html
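The complete chap11_pos_tagging notebook, from which the listings above are taken, follows: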
#!/usr/bin/env python
# coding: utf-8

# # Part-of-speech Tagging Using RNNs

# Some initialization:

# In[4]:

import random
import torch
import numpy as np
import pandas as pd
from tqdm.notebook import tqdm

# enable tqdm in pandas
tqdm.pandas()

# set to True to use the gpu (if there is one available)
use_gpu = True

# select device
device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu')
print(f'device: {device.type}')

# random seed
seed = 1234

# set random seed
if seed is not None:
    print(f'random seed: {seed}')
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)

# Next, let's read the words and their POS tags from the CoNLLUP format:

# In[5]:

from conllu import parse_incr

def read_tags(filename):
    data = {'words': [], 'tags': []}
    with open(filename) as f:
        for sent in parse_incr(f):
            words = [tok['form'] for tok in sent]
            tags = [tok['upos'] for tok in sent]
            data['words'].append(words)
            data['tags'].append(tags)
    return pd.DataFrame(data)

# In[6]:

train_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-train.conllup')
train_df

# We now load the GloVe embeddings for Spanish, which include a representation for the unknown token:

# In[7]:

from gensim.models import KeyedVectors

glove = KeyedVectors.load_word2vec_format('glove-sbwc.i25.vec')
glove.vectors.shape

# In[8]:

# these embeddings already include <unk>
unk_tok = '<unk>'
unk_id = glove.key_to_index[unk_tok]
unk_tok, unk_id

# In[9]:

# add padding embedding
pad_tok = '<pad>'
pad_emb = np.zeros(300)
glove.add_vector(pad_tok, pad_emb)
pad_tok_id = glove.key_to_index[pad_tok]
pad_tok, pad_tok_id

# Preprocessing: lower case all words, and replace all numbers with '0':

# In[10]:

def preprocess(words):
    result = []
    for w in words:
        w = w.lower()
        if w.isdecimal():
            w = '0'
        result.append(w)
    return result

train_df['words'] = train_df['words'].progress_map(preprocess)
train_df

# Next, construct actual PyTorch `Dataset` and `DataLoader` objects for the train/dev/test partitions:

# In[11]:

def get_ids(tokens, key_to_index, unk_id=None):
    return [key_to_index.get(tok, unk_id) for tok in tokens]

def get_word_ids(tokens):
    return get_ids(tokens, glove.key_to_index, unk_id)

# add new column to the dataframe
train_df['word ids'] = train_df['words'].progress_map(get_word_ids)
train_df

# In[12]:

pad_tag = '<pad>'
index_to_tag = train_df['tags'].explode().unique().tolist() + [pad_tag]
tag_to_index = {t: i for i, t in enumerate(index_to_tag)}
pad_tag_id = tag_to_index[pad_tag]
pad_tag, pad_tag_id

# In[13]:

index_to_tag

# In[14]:

def get_tag_ids(tags):
    return get_ids(tags, tag_to_index)

train_df['tag ids'] = train_df['tags'].progress_map(get_tag_ids)
train_df

# In[15]:

dev_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-dev.conllup')
dev_df['words'] = dev_df['words'].progress_map(preprocess)
dev_df['word ids'] = dev_df['words'].progress_map(lambda x: get_ids(x, glove.key_to_index, unk_id))
dev_df['tag ids'] = dev_df['tags'].progress_map(lambda x: get_ids(x, tag_to_index))
dev_df

# In[16]:

from torch.utils.data import Dataset

class MyDataset(Dataset):
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __len__(self):
        return len(self.y)

    def __getitem__(self, index):
        x = torch.tensor(self.x[index])
        y = torch.tensor(self.y[index])
        return x, y

# `collate_fn` will be used by `DataLoader` to pad all sentences in the same batch to the same length.

# In[17]:

from torch.nn.utils.rnn import pad_sequence

def collate_fn(batch):
    # separate xs and ys
    xs, ys = zip(*batch)
    # get lengths
    lengths = [len(x) for x in xs]
    # pad sequences
    x_padded = pad_sequence(xs, batch_first=True, padding_value=pad_tok_id)
    y_padded = pad_sequence(ys, batch_first=True, padding_value=pad_tag_id)
    # return padded
    return x_padded, y_padded, lengths

# Now construct our PyTorch model:

# In[18]:

from torch import nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

class MyModel(nn.Module):
    def __init__(self, vectors, hidden_size, num_layers, bidirectional, dropout, output_size):
        super().__init__()
        # ensure vectors is a tensor
        if not torch.is_tensor(vectors):
            vectors = torch.tensor(vectors)
        # init embedding layer
        self.embedding = nn.Embedding.from_pretrained(embeddings=vectors)
        # init lstm
        self.lstm = nn.LSTM(
            input_size=vectors.shape[1],
            hidden_size=hidden_size,
            num_layers=num_layers,
            bidirectional=bidirectional,
            dropout=dropout,
            batch_first=True)
        # init dropout
        self.dropout = nn.Dropout(dropout)
        # init classifier
        self.classifier = nn.Linear(
            in_features=hidden_size * 2 if bidirectional else hidden_size,
            out_features=output_size)

    def forward(self, x_padded, x_lengths):
        # get embeddings
        output = self.embedding(x_padded)
        output = self.dropout(output)
        # pack data before lstm
        packed = pack_padded_sequence(output, x_lengths, batch_first=True, enforce_sorted=False)
        packed, _ = self.lstm(packed)
        # unpack data before rest of model
        output, _ = pad_packed_sequence(packed, batch_first=True)
        output = self.dropout(output)
        output = self.classifier(output)
        return output

# In[19]:

from torch import optim
from torch.utils.data import DataLoader
from sklearn.metrics import accuracy_score

# hyperparameters
lr = 1e-3
weight_decay = 1e-5
batch_size = 100
shuffle = True
n_epochs = 10
vectors = glove.vectors
hidden_size = 100
num_layers = 2
bidirectional = True
dropout = 0.1
output_size = len(index_to_tag)

# initialize the model, loss function, optimizer, and data-loader
model = MyModel(vectors, hidden_size, num_layers, bidirectional, dropout, output_size).to(device)
loss_func = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=lr, weight_decay=weight_decay)
train_ds = MyDataset(train_df['word ids'], train_df['tag ids'])
train_dl = DataLoader(train_ds, batch_size=batch_size, shuffle=shuffle, collate_fn=collate_fn)
dev_ds = MyDataset(dev_df['word ids'], dev_df['tag ids'])
dev_dl = DataLoader(dev_ds, batch_size=batch_size, shuffle=shuffle, collate_fn=collate_fn)
train_loss, train_acc = [], []
dev_loss, dev_acc = [], []

# We are now ready to train!

# In[20]:

# train the model
for epoch in range(n_epochs):
    losses, acc = [], []
    model.train()
    for x_padded, y_padded, lengths in tqdm(train_dl, desc=f'epoch {epoch+1} (train)'):
        # clear gradients
        model.zero_grad()
        # send batch to right device
        x_padded = x_padded.to(device)
        y_padded = y_padded.to(device)
        # predict label scores
        y_pred = model(x_padded, lengths)
        # reshape output
        y_true = torch.flatten(y_padded)
        y_pred = y_pred.view(-1, output_size)
        mask = y_true != pad_tag_id
        y_true = y_true[mask]
        y_pred = y_pred[mask]
        # compute loss
        loss = loss_func(y_pred, y_true)
        # accumulate for plotting
        gold = y_true.detach().cpu().numpy()
        pred = np.argmax(y_pred.detach().cpu().numpy(), axis=1)
        losses.append(loss.detach().cpu().item())
        acc.append(accuracy_score(gold, pred))
        # backpropagate
        loss.backward()
        # optimize model parameters
        optimizer.step()
    train_loss.append(np.mean(losses))
    train_acc.append(np.mean(acc))
    model.eval()
    with torch.no_grad():
        losses, acc = [], []
        for x_padded, y_padded, lengths in tqdm(dev_dl, desc=f'epoch {epoch+1} (dev)'):
            x_padded = x_padded.to(device)
            y_padded = y_padded.to(device)
            y_pred = model(x_padded, lengths)
            y_true = torch.flatten(y_padded)
            y_pred = y_pred.view(-1, output_size)
            mask = y_true != pad_tag_id
            y_true = y_true[mask]
            y_pred = y_pred[mask]
            loss = loss_func(y_pred, y_true)
            gold = y_true.cpu().numpy()
            pred = np.argmax(y_pred.cpu().numpy(), axis=1)
            losses.append(loss.cpu().item())
            acc.append(accuracy_score(gold, pred))
        dev_loss.append(np.mean(losses))
        dev_acc.append(np.mean(acc))

# Plot loss and accuracy on dev after each epoch:

# In[21]:

import matplotlib.pyplot as plt

x = np.arange(n_epochs) + 1
plt.plot(x, train_loss)
plt.plot(x, dev_loss)
plt.legend(['train', 'dev'])
plt.xlabel('epoch')
plt.ylabel('loss')
plt.grid(True)

# In[22]:

plt.plot(x, train_acc)
plt.plot(x, dev_acc)
plt.legend(['train', 'dev'])
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.grid(True)

# In[23]:

test_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-test.conllup')
test_df['words'] = test_df['words'].progress_map(preprocess)
test_df['word ids'] = test_df['words'].progress_map(lambda x: get_ids(x, glove.key_to_index, unk_id))
test_df['tag ids'] = test_df['tags'].progress_map(lambda x: get_ids(x, tag_to_index))
test_df

# Now let's evaluate on the test partition:

# In[24]:

from sklearn.metrics import classification_report

model.eval()
test_ds = MyDataset(test_df['word ids'], test_df['tag ids'])
test_dl = DataLoader(test_ds, batch_size=batch_size, shuffle=shuffle, collate_fn=collate_fn)
all_y_true = []
all_y_pred = []
with torch.no_grad():
    for x_padded, y_padded, lengths in tqdm(test_dl):
        x_padded = x_padded.to(device)
        y_pred = model(x_padded, lengths)
        y_true = torch.flatten(y_padded)
        y_pred = y_pred.view(-1, output_size)
        mask = y_true != pad_tag_id
        y_true = y_true[mask]
        y_pred = torch.argmax(y_pred[mask], dim=1)
        all_y_true.append(y_true.cpu().numpy())
        all_y_pred.append(y_pred.cpu().numpy())
y_true = np.concatenate(all_y_true)
y_pred = np.concatenate(all_y_pred)
target_names = index_to_tag[:-2]
print(classification_report(y_true, y_pred, target_names=target_names))

# Let's generate a confusion matrix for all POS tags in the data:

# In[25]:

from sklearn.metrics import ConfusionMatrixDisplay, confusion_matrix

cm = confusion_matrix(y_true, y_pred, normalize='true')
disp = ConfusionMatrixDisplay(
    confusion_matrix=cm,
    display_labels=target_names,
)
fig, ax = plt.subplots(figsize=(10, 10))
disp.plot(
    cmap='Blues',
    values_format='.2f',
    colorbar=False,
    ax=ax,
    xticks_rotation=45,
)

# In[ ]:
11 Implementing Part-of-speech Tagging Using Recurrent Neural Networks The previous chapter was our first exposure to recurrent neural networks, which included intuitions for why they are useful for natural language processing, various architectures, and training algorithms. In this chapter we will put them to use, to implement a common sequence modeling task. 11.1 Part-of-speech Tagging The task we will use as an example for this chapter is part-of-speech (POS) tagging, an NLP application that, as we discussed in the previous chapter, benefits from word order. Please see Chapter 16 for a more thorough discussion of POS tagging. The entire code presented in this chapter is available in the chap11_pos_tagging Jupyter notebook. To take a break from NLP applications for English, in this chapter we use the AnCora corpus (Taulé et al., 2008), which primarily consists of newspaper texts in Spanish and Catalan with different linguistic annotations. In this chapter we work with the Spanish portion of the corpus, and the annotations for Universal POS tags (see Chapter 16 for a description of these tags). The Spanish portion of the corpus is divided into a training set with 14,305 sentences, a development set with 1,654 sentences, and a test set with 1,721 sentences. The data is distributed in the CoNLL-U format. In this format, all sentences in a dataset are stored in the same file, separated by a blank line. Each individual token in a sentence is represented in a line, which contains 10 annotation fields separated by tabs: ID, FORM, LEMMA, UPOS, XPOS, FEATS, HEAD, DEPREL, DEPS, and MISC. A comprehensive explanation of this format and the 162 11.1 Part-of-speech Tagging 163 meaning of the different fields is beyond the goal of this chapter; however, the curious reader can find one at the CoNLL-U website.1 Here, we are only concerned with the fields FORM (the raw word), and UPOS (the Universal part-of-speech tag). As in previous chapters, we use pandas to preprocess the data. For parsing the CoNLL-U files, we rely on the conllu Python module.2 We implement a function called read_tags that reads the CoNLL-U file corresponding to a dataset and returns a pandas dataframe that combines all tokens in a sentence into a single row with two columns, one for the words, and one for the POS tags in the corresponding sentence: 0 1 2 3 4 ... 14300 14301 14302 14303 14304 words [El, presidente, de, el, órgano, regulador, de... [Afirmó, que, sigue, el, criterio, europeo, y,... [Durante, la, presentación, de, el, libro, ", ... [Y, todas, las, miradas, convergen, en, la, lu... [Cambiar, las, formas, parece, de, rigor, ,, p... tags [DET, NOUN, ADP, DET, NOUN, ADJ, ADP, DET, PRO... [VERB, SCONJ, VERB, DET, NOUN, ADJ, CCONJ, SCO... [ADP, DET, NOUN, ADP, DET, NOUN, PUNCT, DET, P... [CCONJ, DET, DET, NOUN, VERB, ADP, DET, NOUN, ... [VERB, DET, NOUN, VERB, ADP, NOUN, PUNCT, CCON... [PRON, VERB, ADP, VERB, PRON, ADP, DET, NOUN, ... [Sobre, la, oferta, de, interconexión, con, Te... [ADP, DET, NOUN, ADP, NOUN, ADP, PROPN, ADP, D... [La, inversión, en, investigación, básica, es,... [DET, NOUN, ADP, NOUN, ADJ, AUX, DET, NOUN, AD... ... ... [Conviene, que, ahora, ,, en, plena, apoteosis... [VERB, SCONJ, ADV, PUNCT, ADP, ADJ, NOUN, ADP,... [Carlos, y, Fayna, se, enzarzan, en, una, bron... [PROPN, CCONJ, PROPN, PRON, VERB, ADP, DET, NO... [Él, llega, a, tirar, la, sobre, la, cama, y, ... 14305 rows × 2 columns In order to implement our POS tagging application, we need word embeddings that have been pretrained for Spanish. 
Here we use the publicly-available GloVe embeddings trained on the Spanish Billion Word Corpus3 by the Departamento de Ciencias de la Computación of Universidad de Chile.4 In contrast to the GloVe embeddings used in Chapter 9, these do include a header that stores meta data about the embeddings (i.e., size of the vocabulary and the dimension of the embedding vectors), so in this case we do not use the no_header=True argument: Another difference between these GloVe embeddings and the ones we used in Chapter 9 is that these already include an embedding for unknown words. Therefore, there is no need to introduce our own. However, we do need to include a new embedding for padding, which . 1  https://universaldependencies.org/format.html 
 . 2  https://github.com/EmilStenstrom/conllu/ 
 . 3  https://crscardellino.ar/SBWCE/ 
 . 4  https://github.com/dccuchile/spanishwordembeddings# 
gloveembeddingsfromsbwc 
 164 Implementing POS Tagging Using RNNs we will use later to guarantee that all sentences in the same mini-batch have the same length. We add a vector of zeros for the padding token in the same way as before: Next, we need to preprocess our tokens to match the vocabulary of the embeddings. In particular, these embeddings were trained on words that were lowercased and on sequences of digits that were replaced with a single 0. We will apply the same modifications to our tokens: (From now on we will omit the pandas tables for readability, but, as usual, the corresponding Jupyter notebook contains all necessary information.) Next, we add a new column to the dataframe that stores the word ids corresponding to the embedding vocabulary. Note that at this point we are not padding the sequences of word ids. We will address padding later. We also need to generate the ids for the POS tags. To this end, we first need to construct a vocabulary of POS tags. Once again, we generate a list of tags using explode(), which linearizes our sequence of sequences of tags, and remove repeated tags using unique(). We also add a special tag for the padding token: We now use this POS tag vocabulary to construct a new dataframe column that stores the POS tag ids: The implementation of the Dataset class that stores our POS dataset is trivial: we simply return the lists of word and tag ids, converted to PyTorch tensors. Now it’s time to handle padding. This time we will use some features of PyTorch that we have not seen before. The DataLoader object can receive an optional argument, collate_fn, which expects a function that can be used to form a mini-batch. We will implement this function using PyTorch’s torch.nn.utils.rnn.pad_sequence() function, which, unsurprisingly, pads a group of tensors. We will take advantage of this function to pad the tensors while forming the mini-batch itself. The advantage of this strategy is that, rather than needing to pad all the examples to be the same length as the largest sentence in the corpus, we will instead pad them to the same length as the largest sentence in the minibatch. The latter strategy reduces the amount of padding necessary, which should yield more efficient code. The collate_fn() function takes a single argument, batch, which is a list of tuples. Each tuple has two elements: the list of word ids and the list of tag ids corresponding to a single example. We first unzip this list of tuples into two lists; the first list has all the word ids, and the 11.1 Part-of-speech Tagging 165 second has the tag ids. An explanation of how zip(*batch) works is provided in Appendix A. Next, we compute the lengths of each of the examples in the batch, which we will use later to inform the recurrent neural network where padding starts for each example. We then use the pad_sequence() function to add padding. This function will find the longest sequence in the batch and pad all examples accordingly using the provided padding value. This method is designed to work with PyTorch’s recurrent neural networks, which by default assume the batch index is in the second dimension. However, we will be organizing our tensors such that the batch index is always in the first dimension, which we feel to be more intuitive. For this reason, we also need to provide the batch_first=True argument to pad_sequence. Finally, we return the padded data, as well as the original lengths of the examples. Next, we implement our POS tagging model class. 
The model consists of: (a) an embedding layer for our Spanish pretrained embeddings; (b) an LSTM that can be set to be unior bi-directional (see Figure 10.3; the RNN is configured to be bidirectional by setting the bidirectional argument to True in the LSTM constructor), with a configurable number of layers (see Figure 10.2; the number of layers is set through the num_layers argument of the constructor) and (c) a linear layer on top of each hidden state, which is used to predict the scores for each of the POS tags for the corresponding token. The forward() method receives the padded minibatch and the list of lengths for the (unpadded) examples in this mini-batch. The first step in the function is to retrieve the embeddings for all words referenced in this mini-batch. We then apply dropout over these embedding vectors. Next, before passing the data to the LSTM, we pack the padded data. Note that the PyTorch PackedSequence5 class, which is the output of the pack_padded_sequence() function, stores a batch of sequences that had different lengths before padding. One important advantage of using PackedSequence is that its internal data structure removes the padding tokens (which is why we had to keep track of the example lengths before padding in x_lengths), and, thus, the recurrent neural network will not back-propagate over the padded elements.6 Once we have a PackedSequence, we pass it to the LSTM. Since the . 5  https://pytorch.org/docs/stable/generated/torch.nn.utils.rnn. PackedSequence.html 
 . 6  The astute reader might ask at this point, “Why did we pad the mini-batch examples in the first place, if we are removing the padding later?” The padding is needed because this allows us to store the mini-batch as a single three-dimensional tensor. 
 166 Implementing POS Tagging Using RNNs output of the LSTM is also packed, we then unpack it using pad_packed_sequence(). Next we apply dropout to this unpacked LSTM output. Finally, we pass
this to the linear layer to predict the tag scores for the tokens. Despite the small number of lines of code, the code of the forward() method, which switches between embedding vectors, padded tensors, and packed sequences, is not trivial. To clarify it, let us walk through an example. Imagine that the input to the forward() method is a batch, x_padded, with shape (10, 20), corresponding to 10 examples, each with 20 word ids (some of which are padding). Then we retrieve the embeddings. Assuming our word embeddings, i.e., the input vectors xi in Chapter 10, are of dimension 300, the new tensor will have a shape of (10, 20, 300), corresponding to 10 examples, each with 20 embeddings, each with dimension 300. After dropout the shape hasn’t changed, but some of the elements have been zeroed out. After unpacking the output of the LSTM, we will have a tensor of shape (10, 20, hidden_size), where hidden_size is the size of the LSTM hidden state, i.e., the ht vector in Equation 10.6, (hidden_size is a hyper parameter we will set later on). After passing this tensor to the linear layer, we will obtain a tensor of shape (10, 20, tag_vocab_size), where tag_vocab_size is the number of POS tags in our vocabulary. Thus, for each token in each example, we will have a distribution of POS tag scores. For each token, the assigned POS tag will be the one corresponding to the highest score. We next initialize all the hyper parameters and all the required components: The training procedure is very similar to the one implemented in Chapter 7. One notable difference is that the output of this model has three dimensions instead of two: number of examples, number of tokens, and number of POS tag scores. Thus, we have to reshape the output to pass it to the loss function. Additionally, we need to discard the padding before computing the loss. We reshape the gold tag ids using the torch.flatten() function, to transform the 2-dimensional tensor of shape (n_examples, n_tokens) to a 1-dimensional tensor with n_examples * n_tokens elements. The predictions are reshaped using the view(-1, output_size) method. By passing two arguments we are stipulating that we want two dimensions. The second dimension will be of size output_size. The -1 indicates that the first dimension should be inferred from the size of the tensor. This means that for a tensor of shape (n_examples, n_tokens, output_size) we will get a tensor of shape (n_examples * n_tokens, output_size). Then, we use a Boolean mask to discard the elements corresponding to the padding. This way, the loss 11.2 Summary 167 function will consider each actual word individually, as if the whole batch was just one big sentence. Note that treating a mini-batch as a single virtual sentence does affect the evaluation results. Lastly, we evaluate the performance of our POS tagger on the test set, similarly to how we have done it before: The results indicate that our POS tagger obtains an overall accuracy of 97%, which is in line with state-of-the-art approaches! This is encouraging considering that our approach does not include the CRF layer we discussed in Chapter 10. We challenge the reader to add this layer,7 and experiment with this architecture for other sequence tasks such as named entity recognition. 11.2 Summary In this chapter we have implemented a Spanish part-of-speech tagger using a bidirectional LSTM and a set of pretrained, static word embeddings. 
Through this process, we have also introduced several new PyTorch features such as the pad_sequence, pack_padded_sequence, and pad_packed_sequence functions, which allow us to work more efficiently with variable length sequences for recurrent neural networks. 7 See, for example, the LSTM-CRF implementation from the PyTorch tutorial: https://pytorch.org/tutorials/beginner/nlp/advanced_tutorial.html
10,010
10,093
#!/usr/bin/env python # coding: utf-8 # # Part-of-speech Tagging Using RNNs # Some initialization: # In[4]: import random import torch import numpy as np import pandas as pd from tqdm.notebook import tqdm # enable tqdm in pandas tqdm.pandas() # set to True to use the gpu (if there is one available) use_gpu = True # select device device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu') print(f'device: {device.type}') # random seed seed = 1234 # set random seed if seed is not None: print(f'random seed: {seed}') random.seed(seed) np.random.seed(seed) torch.manual_seed(seed) # Next, let's read the words and their POS tags from the CoNLLUP format: # In[5]: from conllu import parse_incr def read_tags(filename): data = {'words': [], 'tags': []} with open(filename) as f: for sent in parse_incr(f): words = [tok['form'] for tok in sent] tags = [tok['upos'] for tok in sent] data['words'].append(words) data['tags'].append(tags) return pd.DataFrame(data) # In[6]: train_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-train.conllup') train_df # We now load the GloVe embeddings for Spanish, which include a representation for the unknown token: # In[7]: from gensim.models import KeyedVectors glove = KeyedVectors.load_word2vec_format('glove-sbwc.i25.vec') glove.vectors.shape # In[8]: # these embeddings already include <unk> unk_tok = '<unk>' unk_id = glove.key_to_index[unk_tok] unk_tok, unk_id # In[9]: # add padding embedding pad_tok = '<pad>' pad_emb = np.zeros(300) glove.add_vector(pad_tok, pad_emb) pad_tok_id = glove.key_to_index[pad_tok] pad_tok, pad_tok_id # Preprocessing: lower case all words, and replace all numbers with '0': # In[10]: def preprocess(words): result = [] for w in words: w = w.lower() if w.isdecimal(): w = '0' result.append(w) return result train_df['words'] = train_df['words'].progress_map(preprocess) train_df # Next, construct actual PyTorch `Dataset` and `DataLoader` objects for the train/dev/test partitions: # In[11]: def get_ids(tokens, key_to_index, unk_id=None): return [key_to_index.get(tok, unk_id) for tok in tokens] def get_word_ids(tokens): return get_ids(tokens, glove.key_to_index, unk_id) # add new column to the dataframe train_df['word ids'] = train_df['words'].progress_map(get_word_ids) train_df # In[12]: pad_tag = '<pad>' index_to_tag = train_df['tags'].explode().unique().tolist() + [pad_tag] tag_to_index = {t:i for i,t in enumerate(index_to_tag)} pad_tag_id = tag_to_index[pad_tag] pad_tag, pad_tag_id # In[13]: index_to_tag # In[14]: def get_tag_ids(tags): return get_ids(tags, tag_to_index) train_df['tag ids'] = train_df['tags'].progress_map(get_tag_ids) train_df # In[15]: dev_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-dev.conllup') dev_df['words'] = dev_df['words'].progress_map(preprocess) dev_df['word ids'] = dev_df['words'].progress_map(lambda x: get_ids(x, glove.key_to_index, unk_id)) dev_df['tag ids'] = dev_df['tags'].progress_map(lambda x: get_ids(x, tag_to_index)) dev_df # In[16]: from torch.utils.data import Dataset class MyDataset(Dataset): def __init__(self, x, y): self.x = x self.y = y def __len__(self): return len(self.y) def __getitem__(self, index): x = torch.tensor(self.x[index]) y = torch.tensor(self.y[index]) return x, y # `collate_fn` will be used by `DataLoader` to pad all sentences in the same batch to the same length. 
# In[17]: from torch.nn.utils.rnn import pad_sequence def collate_fn(batch): # separate xs and ys xs, ys = zip(*batch) # get lengths lengths = [len(x) for x in xs] # pad sequences x_padded = pad_sequence(xs, batch_first=True, padding_value=pad_tok_id) y_padded = pad_sequence(ys, batch_first=True, padding_value=pad_tag_id) # return padded return x_padded, y_padded, lengths # Now construct our PyTorch model: # In[18]: from torch import nn from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence class MyModel(nn.Module): def __init__(self, vectors, hidden_size, num_layers, bidirectional, dropout, output_size): super().__init__() # ensure vectors is a tensor if not torch.is_tensor(vectors): vectors = torch.tensor(vectors) # init embedding layer self.embedding = nn.Embedding.from_pretrained(embeddings=vectors) # init lstm self.lstm = nn.LSTM( input_size=vectors.shape[1], hidden_size=hidden_size, num_layers=num_layers, bidirectional=bidirectional, dropout=dropout, batch_first=True) # init dropout self.dropout = nn.Dropout(dropout) # init classifier self.classifier = nn.Linear( in_features=hidden_size * 2 if bidirectional else hidden_size, out_features=output_size) def forward(self, x_padded, x_lengths): # get embeddings output = self.embedding(x_padded) output = self.dropout(output) # pack data before lstm packed = pack_padded_sequence(output, x_lengths, batch_first=True, enforce_sorted=False) packed, _ = self.lstm(packed) # unpack data before rest of model output, _ = pad_packed_sequence(packed, batch_first=True) output = self.dropout(output) output = self.classifier(output) return output # In[19]: from torch import optim from torch.utils.data import DataLoader from sklearn.metrics import accuracy_score # hyperparameters lr = 1e-3 weight_decay = 1e-5 batch_size = 100 shuffle = True n_epochs = 10 vectors = glove.vectors hidden_size = 100 num_layers = 2 bidirectional = True dropout = 0.1 output_size = len(index_to_tag) # initialize the model, loss function, optimizer, and data-loader model = MyModel(vectors, hidden_size, num_layers, bidirectional, dropout, output_size).to(device) loss_func = nn.CrossEntropyLoss() optimizer = optim.Adam(model.parameters(), lr=lr, weight_decay=weight_decay) train_ds = MyDataset(train_df['word ids'], train_df['tag ids']) train_dl = DataLoader(train_ds, batch_size=batch_size, shuffle=shuffle, collate_fn=collate_fn) dev_ds = MyDataset(dev_df['word ids'], dev_df['tag ids']) dev_dl = DataLoader(dev_ds, batch_size=batch_size, shuffle=shuffle, collate_fn=collate_fn) train_loss, train_acc = [], [] dev_loss, dev_acc = [], [] # We are now ready to train! 
# In[20]: # train the model for epoch in range(n_epochs): losses, acc = [], [] model.train() for x_padded, y_padded, lengths in tqdm(train_dl, desc=f'epoch {epoch+1} (train)'): # clear gradients model.zero_grad() # send batch to right device x_padded = x_padded.to(device) y_padded = y_padded.to(device) # predict label scores y_pred = model(x_padded, lengths) # reshape output y_true = torch.flatten(y_padded) y_pred = y_pred.view(-1, output_size) mask = y_true != pad_tag_id y_true = y_true[mask] y_pred = y_pred[mask] # compute loss loss = loss_func(y_pred, y_true) # accumulate for plotting gold = y_true.detach().cpu().numpy() pred = np.argmax(y_pred.detach().cpu().numpy(), axis=1) losses.append(loss.detach().cpu().item()) acc.append(accuracy_score(gold, pred)) # backpropagate loss.backward() # optimize model parameters optimizer.step() train_loss.append(np.mean(losses)) train_acc.append(np.mean(acc)) model.eval() with torch.no_grad(): losses, acc = [], [] for x_padded, y_padded, lengths in tqdm(dev_dl, desc=f'epoch {epoch+1} (dev)'): x_padded = x_padded.to(device) y_padded = y_padded.to(device) y_pred = model(x_padded, lengths) y_true = torch.flatten(y_padded) y_pred = y_pred.view(-1, output_size) mask = y_true != pad_tag_id y_true = y_true[mask] y_pred = y_pred[mask] loss = loss_func(y_pred, y_true) gold = y_true.cpu().numpy() pred = np.argmax(y_pred.cpu().numpy(), axis=1) losses.append(loss.cpu().item()) acc.append(accuracy_score(gold, pred)) dev_loss.append(np.mean(losses)) dev_acc.append(np.mean(acc)) # Plot loss and accuracy on dev after each epoch: # In[21]: import matplotlib.pyplot as plt x = np.arange(n_epochs) + 1 plt.plot(x, train_loss) plt.plot(x, dev_loss) plt.legend(['train', 'dev']) plt.xlabel('epoch') plt.ylabel('loss') plt.grid(True) # In[22]: plt.plot(x, train_acc) plt.plot(x, dev_acc) plt.legend(['train', 'dev']) plt.xlabel('epoch') plt.ylabel('accuracy') plt.grid(True) # In[23]: test_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-test.conllup') test_df['words'] = test_df['words'].progress_map(preprocess) test_df['word ids'] = test_df['words'].progress_map(lambda x: get_ids(x, glove.key_to_index, unk_id)) test_df['tag ids'] = test_df['tags'].progress_map(lambda x: get_ids(x, tag_to_index)) test_df # Now let's evaluate on the test partition: # In[24]: from sklearn.metrics import classification_report model.eval() test_ds = MyDataset(test_df['word ids'], test_df['tag ids']) test_dl = DataLoader(test_ds, batch_size=batch_size, shuffle=shuffle, collate_fn=collate_fn) all_y_true = [] all_y_pred = [] with torch.no_grad(): for x_padded, y_padded, lengths in tqdm(test_dl): x_padded = x_padded.to(device) y_pred = model(x_padded, lengths) y_true = torch.flatten(y_padded) y_pred = y_pred.view(-1, output_size) mask = y_true != pad_tag_id y_true = y_true[mask] y_pred = torch.argmax(y_pred[mask], dim=1) all_y_true.append(y_true.cpu().numpy()) all_y_pred.append(y_pred.cpu().numpy()) y_true = np.concatenate(all_y_true) y_pred = np.concatenate(all_y_pred) target_names = index_to_tag[:-2] print(classification_report(y_true, y_pred, target_names=target_names)) # Let's generate a confusion matrix for all POS tags in the data: # In[25]: from sklearn.metrics import ConfusionMatrixDisplay, confusion_matrix cm = confusion_matrix(y_true, y_pred, normalize='true') disp = ConfusionMatrixDisplay( confusion_matrix=cm, display_labels=target_names, ) fig, ax = plt.subplots(figsize=(10,10)) disp.plot( cmap='Blues', values_format='.2f', colorbar=False, ax=ax, xticks_rotation=45, ) # In[ ]:
5,555
5,596
18
chap11-19
chap11-19
11 Implementing Part-of-speech Tagging Using Recurrent Neural Networks The previous chapter was our first exposure to recurrent neural networks, which included intuitions for why they are useful for natural language processing, various architectures, and training algorithms. In this chapter we will put them to use, to implement a common sequence modeling task. 11.1 Part-of-speech Tagging The task we will use as an example for this chapter is part-of-speech (POS) tagging, an NLP application that, as we discussed in the previous chapter, benefits from word order. Please see Chapter 16 for a more thorough discussion of POS tagging. The entire code presented in this chapter is available in the chap11_pos_tagging Jupyter notebook. To take a break from NLP applications for English, in this chapter we use the AnCora corpus (Taulé et al., 2008), which primarily consists of newspaper texts in Spanish and Catalan with different linguistic annotations. In this chapter we work with the Spanish portion of the corpus, and the annotations for Universal POS tags (see Chapter 16 for a description of these tags). The Spanish portion of the corpus is divided into a training set with 14,305 sentences, a development set with 1,654 sentences, and a test set with 1,721 sentences. The data is distributed in the CoNLL-U format. In this format, all sentences in a dataset are stored in the same file, separated by a blank line. Each individual token in a sentence is represented in a line, which contains 10 annotation fields separated by tabs: ID, FORM, LEMMA, UPOS, XPOS, FEATS, HEAD, DEPREL, DEPS, and MISC. A comprehensive explanation of this format and the 162 11.1 Part-of-speech Tagging 163 meaning of the different fields is beyond the goal of this chapter; however, the curious reader can find one at the CoNLL-U website.1 Here, we are only concerned with the fields FORM (the raw word), and UPOS (the Universal part-of-speech tag). As in previous chapters, we use pandas to preprocess the data. For parsing the CoNLL-U files, we rely on the conllu Python module.2 We implement a function called read_tags that reads the CoNLL-U file corresponding to a dataset and returns a pandas dataframe that combines all tokens in a sentence into a single row with two columns, one for the words, and one for the POS tags in the corresponding sentence: 0 1 2 3 4 ... 14300 14301 14302 14303 14304 words [El, presidente, de, el, órgano, regulador, de... [Afirmó, que, sigue, el, criterio, europeo, y,... [Durante, la, presentación, de, el, libro, ", ... [Y, todas, las, miradas, convergen, en, la, lu... [Cambiar, las, formas, parece, de, rigor, ,, p... tags [DET, NOUN, ADP, DET, NOUN, ADJ, ADP, DET, PRO... [VERB, SCONJ, VERB, DET, NOUN, ADJ, CCONJ, SCO... [ADP, DET, NOUN, ADP, DET, NOUN, PUNCT, DET, P... [CCONJ, DET, DET, NOUN, VERB, ADP, DET, NOUN, ... [VERB, DET, NOUN, VERB, ADP, NOUN, PUNCT, CCON... [PRON, VERB, ADP, VERB, PRON, ADP, DET, NOUN, ... [Sobre, la, oferta, de, interconexión, con, Te... [ADP, DET, NOUN, ADP, NOUN, ADP, PROPN, ADP, D... [La, inversión, en, investigación, básica, es,... [DET, NOUN, ADP, NOUN, ADJ, AUX, DET, NOUN, AD... ... ... [Conviene, que, ahora, ,, en, plena, apoteosis... [VERB, SCONJ, ADV, PUNCT, ADP, ADJ, NOUN, ADP,... [Carlos, y, Fayna, se, enzarzan, en, una, bron... [PROPN, CCONJ, PROPN, PRON, VERB, ADP, DET, NO... [Él, llega, a, tirar, la, sobre, la, cama, y, ... 14305 rows × 2 columns In order to implement our POS tagging application, we need word embeddings that have been pretrained for Spanish. 
Here we use the publicly-available GloVe embeddings trained on the Spanish Billion Word Corpus3 by the Departamento de Ciencias de la Computación of Universidad de Chile.4 In contrast to the GloVe embeddings used in Chapter 9, these do include a header that stores meta data about the embeddings (i.e., size of the vocabulary and the dimension of the embedding vectors), so in this case we do not use the no_header=True argument: Another difference between these GloVe embeddings and the ones we used in Chapter 9 is that these already include an embedding for unknown words. Therefore, there is no need to introduce our own. However, we do need to include a new embedding for padding, which . 1  https://universaldependencies.org/format.html 
 . 2  https://github.com/EmilStenstrom/conllu/ 
 . 3  https://crscardellino.ar/SBWCE/ 
 . 4  https://github.com/dccuchile/spanishwordembeddings# 
gloveembeddingsfromsbwc 
 164 Implementing POS Tagging Using RNNs we will use later to guarantee that all sentences in the same mini-batch have the same length. We add a vector of zeros for the padding token in the same way as before: Next, we need to preprocess our tokens to match the vocabulary of the embeddings. In particular, these embeddings were trained on words that were lowercased and on sequences of digits that were replaced with a single 0. We will apply the same modifications to our tokens: (From now on we will omit the pandas tables for readability, but, as usual, the corresponding Jupyter notebook contains all necessary information.) Next, we add a new column to the dataframe that stores the word ids corresponding to the embedding vocabulary. Note that at this point we are not padding the sequences of word ids. We will address padding later. We also need to generate the ids for the POS tags. To this end, we first need to construct a vocabulary of POS tags. Once again, we generate a list of tags using explode(), which linearizes our sequence of sequences of tags, and remove repeated tags using unique(). We also add a special tag for the padding token: We now use this POS tag vocabulary to construct a new dataframe column that stores the POS tag ids: The implementation of the Dataset class that stores our POS dataset is trivial: we simply return the lists of word and tag ids, converted to PyTorch tensors. Now it’s time to handle padding. This time we will use some features of PyTorch that we have not seen before. The DataLoader object can receive an optional argument, collate_fn, which expects a function that can be used to form a mini-batch. We will implement this function using PyTorch’s torch.nn.utils.rnn.pad_sequence() function, which, unsurprisingly, pads a group of tensors. We will take advantage of this function to pad the tensors while forming the mini-batch itself. The advantage of this strategy is that, rather than needing to pad all the examples to be the same length as the largest sentence in the corpus, we will instead pad them to the same length as the largest sentence in the minibatch. The latter strategy reduces the amount of padding necessary, which should yield more efficient code. The collate_fn() function takes a single argument, batch, which is a list of tuples. Each tuple has two elements: the list of word ids and the list of tag ids corresponding to a single example. We first unzip this list of tuples into two lists; the first list has all the word ids, and the 11.1 Part-of-speech Tagging 165 second has the tag ids. An explanation of how zip(*batch) works is provided in Appendix A. Next, we compute the lengths of each of the examples in the batch, which we will use later to inform the recurrent neural network where padding starts for each example. We then use the pad_sequence() function to add padding. This function will find the longest sequence in the batch and pad all examples accordingly using the provided padding value. This method is designed to work with PyTorch’s recurrent neural networks, which by default assume the batch index is in the second dimension. However, we will be organizing our tensors such that the batch index is always in the first dimension, which we feel to be more intuitive. For this reason, we also need to provide the batch_first=True argument to pad_sequence. Finally, we return the padded data, as well as the original lengths of the examples. Next, we implement our POS tagging model class. 
The model consists of: (a) an embedding layer for our Spanish pretrained embeddings; (b) an LSTM that can be set to be unior bi-directional (see Figure 10.3; the RNN is configured to be bidirectional by setting the bidirectional argument to True in the LSTM constructor), with a configurable number of layers (see Figure 10.2; the number of layers is set through the num_layers argument of the constructor) and (c) a linear layer on top of each hidden state, which is used to predict the scores for each of the POS tags for the corresponding token. The forward() method receives the padded minibatch and the list of lengths for the (unpadded) examples in this mini-batch. The first step in the function is to retrieve the embeddings for all words referenced in this mini-batch. We then apply dropout over these embedding vectors. Next, before passing the data to the LSTM, we pack the padded data. Note that the PyTorch PackedSequence5 class, which is the output of the pack_padded_sequence() function, stores a batch of sequences that had different lengths before padding. One important advantage of using PackedSequence is that its internal data structure removes the padding tokens (which is why we had to keep track of the example lengths before padding in x_lengths), and, thus, the recurrent neural network will not back-propagate over the padded elements.6 Once we have a PackedSequence, we pass it to the LSTM. Since the . 5  https://pytorch.org/docs/stable/generated/torch.nn.utils.rnn. PackedSequence.html 
 . 6  The astute reader might ask at this point, “Why did we pad the mini-batch examples in the first place, if we are removing the padding later?” The padding is needed because this allows us to store the mini-batch as a single three-dimensional tensor. 
 166 Implementing POS Tagging Using RNNs output of the LSTM is also packed, we then unpack it using pad_packed_sequence(). Next we apply dropout to this unpacked LSTM output. Finally, we pass
this to the linear layer to predict the tag scores for the tokens. Despite the small number of lines of code, the code of the forward() method, which switches between embedding vectors, padded tensors, and packed sequences, is not trivial. To clarify it, let us walk through an example. Imagine that the input to the forward() method is a batch, x_padded, with shape (10, 20), corresponding to 10 examples, each with 20 word ids (some of which are padding). Then we retrieve the embeddings. Assuming our word embeddings, i.e., the input vectors xi in Chapter 10, are of dimension 300, the new tensor will have a shape of (10, 20, 300), corresponding to 10 examples, each with 20 embeddings, each with dimension 300. After dropout the shape hasn’t changed, but some of the elements have been zeroed out. After unpacking the output of the LSTM, we will have a tensor of shape (10, 20, hidden_size), where hidden_size is the size of the LSTM hidden state, i.e., the ht vector in Equation 10.6, (hidden_size is a hyper parameter we will set later on). After passing this tensor to the linear layer, we will obtain a tensor of shape (10, 20, tag_vocab_size), where tag_vocab_size is the number of POS tags in our vocabulary. Thus, for each token in each example, we will have a distribution of POS tag scores. For each token, the assigned POS tag will be the one corresponding to the highest score. We next initialize all the hyper parameters and all the required components: The training procedure is very similar to the one implemented in Chapter 7. One notable difference is that the output of this model has three dimensions instead of two: number of examples, number of tokens, and number of POS tag scores. Thus, we have to reshape the output to pass it to the loss function. Additionally, we need to discard the padding before computing the loss. We reshape the gold tag ids using the torch.flatten() function, to transform the 2-dimensional tensor of shape (n_examples, n_tokens) to a 1-dimensional tensor with n_examples * n_tokens elements. The predictions are reshaped using the view(-1, output_size) method. By passing two arguments we are stipulating that we want two dimensions. The second dimension will be of size output_size. The -1 indicates that the first dimension should be inferred from the size of the tensor. This means that for a tensor of shape (n_examples, n_tokens, output_size) we will get a tensor of shape (n_examples * n_tokens, output_size). Then, we use a Boolean mask to discard the elements corresponding to the padding. This way, the loss 11.2 Summary 167 function will consider each actual word individually, as if the whole batch was just one big sentence. Note that treating a mini-batch as a single virtual sentence does affect the evaluation results. Lastly, we evaluate the performance of our POS tagger on the test set, similarly to how we have done it before: The results indicate that our POS tagger obtains an overall accuracy of 97%, which is in line with state-of-the-art approaches! This is encouraging considering that our approach does not include the CRF layer we discussed in Chapter 10. We challenge the reader to add this layer,7 and experiment with this architecture for other sequence tasks such as named entity recognition. 11.2 Summary In this chapter we have implemented a Spanish part-of-speech tagger using a bidirectional LSTM and a set of pretrained, static word embeddings. 
Through this process, we have also introduced several new PyTorch features such as the pad_sequence, pack_padded_sequence, and pad_packed_sequence functions, which allow us to work more efficiently with variable length sequences for recurrent neural networks. 7 See, for example, the LSTM-CRF implementation from the PyTorch tutorial: https://pytorch.org/tutorials/beginner/nlp/advanced_tutorial.html
2,093
2,369
#!/usr/bin/env python # coding: utf-8 # # Part-of-speech Tagging Using RNNs # Some initialization: # In[4]: import random import torch import numpy as np import pandas as pd from tqdm.notebook import tqdm # enable tqdm in pandas tqdm.pandas() # set to True to use the gpu (if there is one available) use_gpu = True # select device device = torch.device('cuda' if use_gpu and torch.cuda.is_available() else 'cpu') print(f'device: {device.type}') # random seed seed = 1234 # set random seed if seed is not None: print(f'random seed: {seed}') random.seed(seed) np.random.seed(seed) torch.manual_seed(seed) # Next, let's read the words and their POS tags from the CoNLLUP format: # In[5]: from conllu import parse_incr def read_tags(filename): data = {'words': [], 'tags': []} with open(filename) as f: for sent in parse_incr(f): words = [tok['form'] for tok in sent] tags = [tok['upos'] for tok in sent] data['words'].append(words) data['tags'].append(tags) return pd.DataFrame(data) # In[6]: train_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-train.conllup') train_df # We now load the GloVe embeddings for Spanish, which include a representation for the unknown token: # In[7]: from gensim.models import KeyedVectors glove = KeyedVectors.load_word2vec_format('glove-sbwc.i25.vec') glove.vectors.shape # In[8]: # these embeddings already include <unk> unk_tok = '<unk>' unk_id = glove.key_to_index[unk_tok] unk_tok, unk_id # In[9]: # add padding embedding pad_tok = '<pad>' pad_emb = np.zeros(300) glove.add_vector(pad_tok, pad_emb) pad_tok_id = glove.key_to_index[pad_tok] pad_tok, pad_tok_id # Preprocessing: lower case all words, and replace all numbers with '0': # In[10]: def preprocess(words): result = [] for w in words: w = w.lower() if w.isdecimal(): w = '0' result.append(w) return result train_df['words'] = train_df['words'].progress_map(preprocess) train_df # Next, construct actual PyTorch `Dataset` and `DataLoader` objects for the train/dev/test partitions: # In[11]: def get_ids(tokens, key_to_index, unk_id=None): return [key_to_index.get(tok, unk_id) for tok in tokens] def get_word_ids(tokens): return get_ids(tokens, glove.key_to_index, unk_id) # add new column to the dataframe train_df['word ids'] = train_df['words'].progress_map(get_word_ids) train_df # In[12]: pad_tag = '<pad>' index_to_tag = train_df['tags'].explode().unique().tolist() + [pad_tag] tag_to_index = {t:i for i,t in enumerate(index_to_tag)} pad_tag_id = tag_to_index[pad_tag] pad_tag, pad_tag_id # In[13]: index_to_tag # In[14]: def get_tag_ids(tags): return get_ids(tags, tag_to_index) train_df['tag ids'] = train_df['tags'].progress_map(get_tag_ids) train_df # In[15]: dev_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-dev.conllup') dev_df['words'] = dev_df['words'].progress_map(preprocess) dev_df['word ids'] = dev_df['words'].progress_map(lambda x: get_ids(x, glove.key_to_index, unk_id)) dev_df['tag ids'] = dev_df['tags'].progress_map(lambda x: get_ids(x, tag_to_index)) dev_df # In[16]: from torch.utils.data import Dataset class MyDataset(Dataset): def __init__(self, x, y): self.x = x self.y = y def __len__(self): return len(self.y) def __getitem__(self, index): x = torch.tensor(self.x[index]) y = torch.tensor(self.y[index]) return x, y # `collate_fn` will be used by `DataLoader` to pad all sentences in the same batch to the same length. 
# In[17]: from torch.nn.utils.rnn import pad_sequence def collate_fn(batch): # separate xs and ys xs, ys = zip(*batch) # get lengths lengths = [len(x) for x in xs] # pad sequences x_padded = pad_sequence(xs, batch_first=True, padding_value=pad_tok_id) y_padded = pad_sequence(ys, batch_first=True, padding_value=pad_tag_id) # return padded return x_padded, y_padded, lengths # Now construct our PyTorch model: # In[18]: from torch import nn from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence class MyModel(nn.Module): def __init__(self, vectors, hidden_size, num_layers, bidirectional, dropout, output_size): super().__init__() # ensure vectors is a tensor if not torch.is_tensor(vectors): vectors = torch.tensor(vectors) # init embedding layer self.embedding = nn.Embedding.from_pretrained(embeddings=vectors) # init lstm self.lstm = nn.LSTM( input_size=vectors.shape[1], hidden_size=hidden_size, num_layers=num_layers, bidirectional=bidirectional, dropout=dropout, batch_first=True) # init dropout self.dropout = nn.Dropout(dropout) # init classifier self.classifier = nn.Linear( in_features=hidden_size * 2 if bidirectional else hidden_size, out_features=output_size) def forward(self, x_padded, x_lengths): # get embeddings output = self.embedding(x_padded) output = self.dropout(output) # pack data before lstm packed = pack_padded_sequence(output, x_lengths, batch_first=True, enforce_sorted=False) packed, _ = self.lstm(packed) # unpack data before rest of model output, _ = pad_packed_sequence(packed, batch_first=True) output = self.dropout(output) output = self.classifier(output) return output # In[19]: from torch import optim from torch.utils.data import DataLoader from sklearn.metrics import accuracy_score # hyperparameters lr = 1e-3 weight_decay = 1e-5 batch_size = 100 shuffle = True n_epochs = 10 vectors = glove.vectors hidden_size = 100 num_layers = 2 bidirectional = True dropout = 0.1 output_size = len(index_to_tag) # initialize the model, loss function, optimizer, and data-loader model = MyModel(vectors, hidden_size, num_layers, bidirectional, dropout, output_size).to(device) loss_func = nn.CrossEntropyLoss() optimizer = optim.Adam(model.parameters(), lr=lr, weight_decay=weight_decay) train_ds = MyDataset(train_df['word ids'], train_df['tag ids']) train_dl = DataLoader(train_ds, batch_size=batch_size, shuffle=shuffle, collate_fn=collate_fn) dev_ds = MyDataset(dev_df['word ids'], dev_df['tag ids']) dev_dl = DataLoader(dev_ds, batch_size=batch_size, shuffle=shuffle, collate_fn=collate_fn) train_loss, train_acc = [], [] dev_loss, dev_acc = [], [] # We are now ready to train! 
# In[20]:

# train the model
for epoch in range(n_epochs):
    losses, acc = [], []
    model.train()
    for x_padded, y_padded, lengths in tqdm(train_dl, desc=f'epoch {epoch+1} (train)'):
        # clear gradients
        model.zero_grad()
        # send batch to right device
        x_padded = x_padded.to(device)
        y_padded = y_padded.to(device)
        # predict label scores
        y_pred = model(x_padded, lengths)
        # reshape output
        y_true = torch.flatten(y_padded)
        y_pred = y_pred.view(-1, output_size)
        mask = y_true != pad_tag_id
        y_true = y_true[mask]
        y_pred = y_pred[mask]
        # compute loss
        loss = loss_func(y_pred, y_true)
        # accumulate for plotting
        gold = y_true.detach().cpu().numpy()
        pred = np.argmax(y_pred.detach().cpu().numpy(), axis=1)
        losses.append(loss.detach().cpu().item())
        acc.append(accuracy_score(gold, pred))
        # backpropagate
        loss.backward()
        # optimize model parameters
        optimizer.step()
    train_loss.append(np.mean(losses))
    train_acc.append(np.mean(acc))
    model.eval()
    with torch.no_grad():
        losses, acc = [], []
        for x_padded, y_padded, lengths in tqdm(dev_dl, desc=f'epoch {epoch+1} (dev)'):
            x_padded = x_padded.to(device)
            y_padded = y_padded.to(device)
            y_pred = model(x_padded, lengths)
            y_true = torch.flatten(y_padded)
            y_pred = y_pred.view(-1, output_size)
            mask = y_true != pad_tag_id
            y_true = y_true[mask]
            y_pred = y_pred[mask]
            loss = loss_func(y_pred, y_true)
            gold = y_true.cpu().numpy()
            pred = np.argmax(y_pred.cpu().numpy(), axis=1)
            losses.append(loss.cpu().item())
            acc.append(accuracy_score(gold, pred))
        dev_loss.append(np.mean(losses))
        dev_acc.append(np.mean(acc))

# Plot loss and accuracy on dev after each epoch:

# In[21]:

import matplotlib.pyplot as plt

x = np.arange(n_epochs) + 1
plt.plot(x, train_loss)
plt.plot(x, dev_loss)
plt.legend(['train', 'dev'])
plt.xlabel('epoch')
plt.ylabel('loss')
plt.grid(True)

# In[22]:

plt.plot(x, train_acc)
plt.plot(x, dev_acc)
plt.legend(['train', 'dev'])
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.grid(True)

# In[23]:

test_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-test.conllup')
test_df['words'] = test_df['words'].progress_map(preprocess)
test_df['word ids'] = test_df['words'].progress_map(lambda x: get_ids(x, glove.key_to_index, unk_id))
test_df['tag ids'] = test_df['tags'].progress_map(lambda x: get_ids(x, tag_to_index))
test_df

# Now let's evaluate on the test partition:

# In[24]:

from sklearn.metrics import classification_report

model.eval()
test_ds = MyDataset(test_df['word ids'], test_df['tag ids'])
test_dl = DataLoader(test_ds, batch_size=batch_size, shuffle=shuffle, collate_fn=collate_fn)

all_y_true = []
all_y_pred = []
with torch.no_grad():
    for x_padded, y_padded, lengths in tqdm(test_dl):
        x_padded = x_padded.to(device)
        y_pred = model(x_padded, lengths)
        y_true = torch.flatten(y_padded)
        y_pred = y_pred.view(-1, output_size)
        mask = y_true != pad_tag_id
        y_true = y_true[mask]
        y_pred = torch.argmax(y_pred[mask], dim=1)
        all_y_true.append(y_true.cpu().numpy())
        all_y_pred.append(y_pred.cpu().numpy())

y_true = np.concatenate(all_y_true)
y_pred = np.concatenate(all_y_pred)

target_names = index_to_tag[:-2]
print(classification_report(y_true, y_pred, target_names=target_names))

# Let's generate a confusion matrix for all POS tags in the data:

# In[25]:

from sklearn.metrics import ConfusionMatrixDisplay, confusion_matrix

cm = confusion_matrix(y_true, y_pred, normalize='true')
disp = ConfusionMatrixDisplay(
    confusion_matrix=cm,
    display_labels=target_names,
)
fig, ax = plt.subplots(figsize=(10,10))
disp.plot(
    cmap='Blues',
    values_format='.2f',
    colorbar=False,
    ax=ax,
    xticks_rotation=45,
)

# In[ ]:
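The notebook listing above ends after the confusion matrix. As a small illustration that is not part of the original notebook, the sketch below shows how the trained model could be used to tag one new sentence; it assumes the objects defined above (model, device, preprocess, get_word_ids, index_to_tag) are still in scope, and the example sentence is made up.

# hypothetical usage sketch (not in the original notebook): tag one new sentence
sentence = ['El', 'gato', 'duerme', 'en', 'la', 'cama', '.']
word_ids = get_word_ids(preprocess(sentence))
x = torch.tensor([word_ids]).to(device)
model.eval()
with torch.no_grad():
    scores = model(x, [len(word_ids)])           # shape: (1, n_tokens, output_size)
    pred_ids = scores.argmax(dim=-1).squeeze(0)  # best tag id for each token
tags = [index_to_tag[i] for i in pred_ids.tolist()]
print(list(zip(sentence, tags)))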
11 Implementing Part-of-speech Tagging Using Recurrent Neural Networks

The previous chapter was our first exposure to recurrent neural networks, which included intuitions for why they are useful for natural language processing, various architectures, and training algorithms. In this chapter we will put them to use, to implement a common sequence modeling task.

11.1 Part-of-speech Tagging

The task we will use as an example for this chapter is part-of-speech (POS) tagging, an NLP application that, as we discussed in the previous chapter, benefits from word order. Please see Chapter 16 for a more thorough discussion of POS tagging. The entire code presented in this chapter is available in the chap11_pos_tagging Jupyter notebook.

To take a break from NLP applications for English, in this chapter we use the AnCora corpus (Taulé et al., 2008), which primarily consists of newspaper texts in Spanish and Catalan with different linguistic annotations. In this chapter we work with the Spanish portion of the corpus, and the annotations for Universal POS tags (see Chapter 16 for a description of these tags). The Spanish portion of the corpus is divided into a training set with 14,305 sentences, a development set with 1,654 sentences, and a test set with 1,721 sentences.

The data is distributed in the CoNLL-U format. In this format, all sentences in a dataset are stored in the same file, separated by a blank line. Each individual token in a sentence is represented in a line, which contains 10 annotation fields separated by tabs: ID, FORM, LEMMA, UPOS, XPOS, FEATS, HEAD, DEPREL, DEPS, and MISC. A comprehensive explanation of this format and the meaning of the different fields is beyond the goal of this chapter; however, the curious reader can find one at the CoNLL-U website.1 Here, we are only concerned with the fields FORM (the raw word), and UPOS (the Universal part-of-speech tag).

As in previous chapters, we use pandas to preprocess the data. For parsing the CoNLL-U files, we rely on the conllu Python module.2 We implement a function called read_tags that reads the CoNLL-U file corresponding to a dataset and returns a pandas dataframe that combines all tokens in a sentence into a single row with two columns, one for the words, and one for the POS tags in the corresponding sentence:

       words                                               tags
0      [El, presidente, de, el, órgano, regulador, de...   [DET, NOUN, ADP, DET, NOUN, ADJ, ADP, DET, PRO...
1      [Afirmó, que, sigue, el, criterio, europeo, y,...   [VERB, SCONJ, VERB, DET, NOUN, ADJ, CCONJ, SCO...
2      [Durante, la, presentación, de, el, libro, ", ...   [ADP, DET, NOUN, ADP, DET, NOUN, PUNCT, DET, P...
3      [Y, todas, las, miradas, convergen, en, la, lu...   [CCONJ, DET, DET, NOUN, VERB, ADP, DET, NOUN, ...
4      [Cambiar, las, formas, parece, de, rigor, ,, p...   [VERB, DET, NOUN, VERB, ADP, NOUN, PUNCT, CCON...
...    ...                                                  ...

[14305 rows × 2 columns]

1 https://universaldependencies.org/format.html
2 https://github.com/EmilStenstrom/conllu/
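For reference, the read_tags function that produces the dataframe above is defined in the following cell of the notebook listing earlier in this document (the pandas import and the short comment are additions here, to keep the fragment self-contained and readable):

import pandas as pd
from conllu import parse_incr

def read_tags(filename):
    # one row per sentence: a list of word forms and a list of UPOS tags
    data = {'words': [], 'tags': []}
    with open(filename) as f:
        for sent in parse_incr(f):
            words = [tok['form'] for tok in sent]
            tags = [tok['upos'] for tok in sent]
            data['words'].append(words)
            data['tags'].append(tags)
    return pd.DataFrame(data)

train_df = read_tags('data/UD_Spanish-AnCora/es_ancora-ud-train.conllup')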
In order to implement our POS tagging application, we need word embeddings that have been pretrained for Spanish. Here we use the publicly-available GloVe embeddings trained on the Spanish Billion Word Corpus3 by the Departamento de Ciencias de la Computación of Universidad de Chile.4 In contrast to the GloVe embeddings used in Chapter 9, these do include a header that stores metadata about the embeddings (i.e., the size of the vocabulary and the dimension of the embedding vectors), so in this case we do not use the no_header=True argument.

Another difference between these GloVe embeddings and the ones we used in Chapter 9 is that these already include an embedding for unknown words. Therefore, there is no need to introduce our own. However, we do need to include a new embedding for padding, which we will use later to guarantee that all sentences in the same mini-batch have the same length. We add a vector of zeros for the padding token in the same way as before:

3 https://crscardellino.ar/SBWCE/
4 https://github.com/dccuchile/spanish-word-embeddings#glove-embeddings-from-sbwc
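The corresponding cells from the notebook listing above load these embeddings with gensim, look up the existing unknown-word entry, and append the all-zeros padding vector (comments lightly expanded here):

import numpy as np
from gensim.models import KeyedVectors

# load the Spanish GloVe embeddings (word2vec text format, with header)
glove = KeyedVectors.load_word2vec_format('glove-sbwc.i25.vec')

# these embeddings already include <unk>
unk_tok = '<unk>'
unk_id = glove.key_to_index[unk_tok]

# add an all-zeros embedding for the padding token
pad_tok = '<pad>'
pad_emb = np.zeros(300)
glove.add_vector(pad_tok, pad_emb)
pad_tok_id = glove.key_to_index[pad_tok]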
Next, we need to preprocess our tokens to match the vocabulary of the embeddings. In particular, these embeddings were trained on words that were lowercased and on sequences of digits that were replaced with a single 0. We will apply the same modifications to our tokens. (From now on we will omit the pandas tables for readability, but, as usual, the corresponding Jupyter notebook contains all the necessary information.)

Next, we add a new column to the dataframe that stores the word ids corresponding to the embedding vocabulary. Note that at this point we are not padding the sequences of word ids. We will address padding later.

We also need to generate the ids for the POS tags. To this end, we first need to construct a vocabulary of POS tags. Once again, we generate a list of tags using explode(), which linearizes our sequence of sequences of tags, and remove repeated tags using unique(). We also add a special tag for the padding token. We then use this POS tag vocabulary to construct a new dataframe column that stores the POS tag ids.

The implementation of the Dataset class that stores our POS dataset is trivial: we simply return the lists of word and tag ids, converted to PyTorch tensors.

Now it's time to handle padding. This time we will use some features of PyTorch that we have not seen before. The DataLoader object can receive an optional argument, collate_fn, which expects a function that can be used to form a mini-batch. We will implement this function using PyTorch's torch.nn.utils.rnn.pad_sequence() function, which, unsurprisingly, pads a group of tensors. We will take advantage of this function to pad the tensors while forming the mini-batch itself. The advantage of this strategy is that, rather than needing to pad all the examples to be the same length as the largest sentence in the corpus, we will instead pad them to the same length as the largest sentence in the mini-batch. The latter strategy reduces the amount of padding necessary, which should yield more efficient code.

The collate_fn() function takes a single argument, batch, which is a list of tuples. Each tuple has two elements: the list of word ids and the list of tag ids corresponding to a single example. We first unzip this list of tuples into two lists; the first list has all the word ids, and the second has the tag ids. An explanation of how zip(*batch) works is provided in Appendix A. Next, we compute the lengths of each of the examples in the batch, which we will use later to inform the recurrent neural network where padding starts for each example. We then use the pad_sequence() function to add padding. This function will find the longest sequence in the batch and pad all examples accordingly using the provided padding value. This method is designed to work with PyTorch's recurrent neural networks, which by default assume the batch index is in the second dimension. However, we will be organizing our tensors such that the batch index is always in the first dimension, which we feel to be more intuitive. For this reason, we also need to provide the batch_first=True argument to pad_sequence. Finally, we return the padded data, as well as the original lengths of the examples.
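The collate_fn() just described is implemented in the notebook listing above as follows; pad_tok_id and pad_tag_id are the padding ids defined in the earlier cells:

from torch.nn.utils.rnn import pad_sequence

def collate_fn(batch):
    # separate word-id lists (xs) from tag-id lists (ys)
    xs, ys = zip(*batch)
    # remember the original (unpadded) lengths
    lengths = [len(x) for x in xs]
    # pad every example to the length of the longest one in this mini-batch
    x_padded = pad_sequence(xs, batch_first=True, padding_value=pad_tok_id)
    y_padded = pad_sequence(ys, batch_first=True, padding_value=pad_tag_id)
    return x_padded, y_padded, lengths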
Next, we implement our POS tagging model class. The model consists of: (a) an embedding layer for our Spanish pretrained embeddings; (b) an LSTM that can be set to be uni- or bi-directional (see Figure 10.3; the RNN is configured to be bidirectional by setting the bidirectional argument to True in the LSTM constructor), with a configurable number of layers (see Figure 10.2; the number of layers is set through the num_layers argument of the constructor); and (c) a linear layer on top of each hidden state, which is used to predict the scores for each of the POS tags for the corresponding token.
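The constructor that builds components (a), (b), and (c) is the following excerpt from the notebook listing above (the imports are repeated and the (a)-(c) comments are added here):

import torch
from torch import nn

class MyModel(nn.Module):
    def __init__(self, vectors, hidden_size, num_layers, bidirectional, dropout, output_size):
        super().__init__()
        # ensure vectors is a tensor
        if not torch.is_tensor(vectors):
            vectors = torch.tensor(vectors)
        # (a) embedding layer initialized with the pretrained Spanish vectors
        self.embedding = nn.Embedding.from_pretrained(embeddings=vectors)
        # (b) LSTM with configurable depth and directionality
        self.lstm = nn.LSTM(
            input_size=vectors.shape[1],
            hidden_size=hidden_size,
            num_layers=num_layers,
            bidirectional=bidirectional,
            dropout=dropout,
            batch_first=True)
        self.dropout = nn.Dropout(dropout)
        # (c) linear layer that scores each POS tag for each token
        self.classifier = nn.Linear(
            in_features=hidden_size * 2 if bidirectional else hidden_size,
            out_features=output_size)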
The forward() method receives the padded mini-batch and the list of lengths for the (unpadded) examples in this mini-batch. The first step in the function is to retrieve the embeddings for all words referenced in this mini-batch. We then apply dropout over these embedding vectors. Next, before passing the data to the LSTM, we pack the padded data. Note that the PyTorch PackedSequence5 class, which is the output of the pack_padded_sequence() function, stores a batch of sequences that had different lengths before padding. One important advantage of using PackedSequence is that its internal data structure removes the padding tokens (which is why we had to keep track of the example lengths before padding in x_lengths), and, thus, the recurrent neural network will not back-propagate over the padded elements.6 Once we have a PackedSequence, we pass it to the LSTM. Since the output of the LSTM is also packed, we then unpack it using pad_packed_sequence(). Next we apply dropout to this unpacked LSTM output. Finally, we pass this to the linear layer to predict the tag scores for the tokens.

5 https://pytorch.org/docs/stable/generated/torch.nn.utils.rnn.PackedSequence.html
6 The astute reader might ask at this point, "Why did we pad the mini-batch examples in the first place, if we are removing the padding later?" The padding is needed because this allows us to store the mini-batch as a single three-dimensional tensor.
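The forward() method itself, excerpted from the MyModel class in the notebook listing above (with brief comments added here), is:

    def forward(self, x_padded, x_lengths):
        # embedding lookup followed by dropout
        output = self.embedding(x_padded)
        output = self.dropout(output)
        # pack so the LSTM skips the padding positions
        packed = pack_padded_sequence(output, x_lengths, batch_first=True, enforce_sorted=False)
        packed, _ = self.lstm(packed)
        # unpack back to a padded tensor, then dropout and the linear classifier
        output, _ = pad_packed_sequence(packed, batch_first=True)
        output = self.dropout(output)
        output = self.classifier(output)
        return output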
Despite the small number of lines of code, the code of the forward() method, which switches between embedding vectors, padded tensors, and packed sequences, is not trivial. To clarify it, let us walk through an example. Imagine that the input to the forward() method is a batch, x_padded, with shape (10, 20), corresponding to 10 examples, each with 20 word ids (some of which are padding). Then we retrieve the embeddings. Assuming our word embeddings, i.e., the input vectors xi in Chapter 10, are of dimension 300, the new tensor will have a shape of (10, 20, 300), corresponding to 10 examples, each with 20 embeddings, each with dimension 300. After dropout the shape hasn't changed, but some of the elements have been zeroed out. After unpacking the output of the LSTM, we will have a tensor of shape (10, 20, hidden_size), where hidden_size is the size of the LSTM hidden state, i.e., the ht vector in Equation 10.6 (hidden_size is a hyper parameter we will set later on). After passing this tensor to the linear layer, we will obtain a tensor of shape (10, 20, tag_vocab_size), where tag_vocab_size is the number of POS tags in our vocabulary. Thus, for each token in each example, we will have a distribution of POS tag scores. For each token, the assigned POS tag will be the one corresponding to the highest score.

We next initialize all the hyper parameters and all the required components. The training procedure is very similar to the one implemented in Chapter 7. One notable difference is that the output of this model has three dimensions instead of two: number of examples, number of tokens, and number of POS tag scores. Thus, we have to reshape the output to pass it to the loss function. Additionally, we need to discard the padding before computing the loss. We reshape the gold tag ids using the torch.flatten() function, to transform the 2-dimensional tensor of shape (n_examples, n_tokens) to a 1-dimensional tensor with n_examples * n_tokens elements. The predictions are reshaped using the view(-1, output_size) method. By passing two arguments we are stipulating that we want two dimensions. The second dimension will be of size output_size. The -1 indicates that the first dimension should be inferred from the size of the tensor. This means that for a tensor of shape (n_examples, n_tokens, output_size) we will get a tensor of shape (n_examples * n_tokens, output_size). Then, we use a Boolean mask to discard the elements corresponding to the padding. This way, the loss function will consider each actual word individually, as if the whole batch was just one big sentence. Note that treating a mini-batch as a single virtual sentence does affect the evaluation results.

Lastly, we evaluate the performance of our POS tagger on the test set, similarly to how we have done it before. The results indicate that our POS tagger obtains an overall accuracy of 97%, which is in line with state-of-the-art approaches! This is encouraging considering that our approach does not include the CRF layer we discussed in Chapter 10. We challenge the reader to add this layer,7 and experiment with this architecture for other sequence tasks such as named entity recognition.
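For reference, the reshaping and masking described above appear in the training loop of the notebook listing as the following fragment (y_pred is the model output for the mini-batch, and pad_tag_id is the id of the padding tag):

# excerpt from the training loop above
y_true = torch.flatten(y_padded)         # (n_examples * n_tokens,)
y_pred = y_pred.view(-1, output_size)    # (n_examples * n_tokens, output_size)
mask = y_true != pad_tag_id              # True for real tokens, False for padding
y_true = y_true[mask]
y_pred = y_pred[mask]
loss = loss_func(y_pred, y_true)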
11.2 Summary

In this chapter we have implemented a Spanish part-of-speech tagger using a bidirectional LSTM and a set of pretrained, static word embeddings. Through this process, we have also introduced several new PyTorch features such as the pad_sequence, pack_padded_sequence, and pad_packed_sequence functions, which allow us to work more efficiently with variable length sequences for recurrent neural networks.

7 See, for example, the LSTM-CRF implementation from the PyTorch tutorial: https://pytorch.org/tutorials/beginner/nlp/advanced_tutorial.html