[ "We present a new human-human dialogue dataset PhotoChat, the first dataset that casts light on the photo sharing behavior in online messaging.", "PhotoChat contains 12k dialogues, each of which is paired with a user photo that is shared during the conversation.", "Based on this dataset, we propose two tasks to facilitate research on image-text modeling: a photo-sharing intent prediction task that predicts whether one intends to share a photo in the next conversation turn, and a photo retrieval task that retrieves the most relevant photo according to the dialogue context.", "In addition, for both tasks, we provide baseline models using the state-of-the-art models and report their benchmark performances.", "The best image retrieval model achieves 10.4% re-call@1 (out of 1000 candidates) and the best photo intent prediction model achieves 58.1% F1 score, indicating that the dataset presents interesting yet challenging real-world problems.", "We are releasing PhotoChat to facilitate future research work among the community.", "As instant messaging tools gain enormous popularity in the recent decades, sharing photos as an approach to enhance the engagement of an online messaging conversation has become a pervasive routine communicative act (Lobinger, 2016).", "A survey conducted in 2010 reveals that 74% of teenagers in the US reported messaging a photo or video using their cell phone (Lenhart et al., 2010).", "In Britain, almost 70% of the internet users shared photos in 2013 (Dutton and Blank, 2013).", "Considering the proliferation of photo sharing, it's desirable to have an intelligent system that can assist users efficiently engaging in this process, i.e. suggesting the most relevant photos in correct timings.", "In order to achieve this goal, the intelligent system is expected to not only understand how humans Research conducted while working at Google.", "communicate with each other, e.g. the natural language human speak, but also perceive images as human do.", "How to facilitate building such multimodal system is the goal of this paper.", "Though recently many image-text tasks have been proposed and are being actively studied to bridge language and vision, the majority of them are formulated as choosing or composing the text based on the understanding of given images, e.g. image captioning (Anderson et al., 2018), visual question answering (Antol et al., 2015), visual commonsense reasoning (Zellers et al., 2019), and image-grounded dialogue generation (Shuster et al., 2020).", "Contrary to these tasks, the photo sharing task focuses on the reverse process, i.e. 
Firstly, different from the popular multimodal tasks above, in the photo-sharing task the dialogue often does not explicitly mention the main visible content in the image. Instead of the main object of the photo, sometimes the background story, complemented by human imagination, can be the focus of the chat. Figure 1 shows such an example, in which the person who shares the photo describes the event location (court) and the occupation (attorney) instead of the main object (lady) in the image. Secondly, the dialogue is not guaranteed to be relevant to the image. For instance, it often contains greetings and chit-chat on other topics, as the first two turns in Figure 1 show. In order to suggest the relevant photo, a smart system needs to decide which part of the dialogue can be used for suggesting the image. In contrast, in traditional image-text tasks, the correct text is designed to be highly correlated with the image and has little distracting content. These photo-sharing characteristics make inferring the connection between the image and textual utterances challenging.

To highlight these challenges, we create PhotoChat, a human-human dialogue dataset in which one photo is shared from one person to the other during the conversation (released at https://github.com/google-research/google-research/tree/master/multimodalchat/). It is, as far as we know, the first dataset that captures photo-sharing activities. We selected images from the OpenImage V4 dataset (Kuznetsova et al., 2020) as shared photos and used crowdsourcing plugins to generate 12,286 dialogues with an average of 10 turns per dialogue. During the dialogue collection, the photo is only visible to the side who is instructed to share the photo, and then to both sides after it is shared. Based on the collected dataset, we propose two tasks that are essential for building a photo-suggest system: a photo-sharing intent prediction task that predicts whether one intends to share the photo in the next conversation turn, and a dialogue-based image retrieval task that retrieves the most relevant photo given the dialogue context. For both, we build baseline models, and we report and analyze their performances. The best photo-sharing intent prediction baseline model achieves a 58.1% F1 score with 58.2% precision and 57.9% recall. The best cross-attention image retrieval model achieves 10.4% recall@1 out of 1000 candidates. We also propose a dual-encoder model that leverages object labels to encode image features, which achieves the best performance among all the models without cross-attention mechanisms.

In summary, our main contributions are:
- We create the first human-human dialogue dataset with photo-sharing acts via crowdsourcing.
- We propose two new tasks to promote building an intelligent photo-suggest system.
- We build baseline models and provide benchmarks for the new tasks.
- Our proposed image retrieval model outperforms all the prior models without cross-attention mechanisms.
- We implement comprehensive analyses and an ablation study to provide more insights.

With the recent advances in deep learning, plenty of image-text datasets have been created and new image-text tasks have been proposed based on them. These datasets have greatly stimulated the development of joint image-text models. In this section, we review the widely used image-text datasets and the state-of-the-art (SOTA) approaches for solving image-text problems.
Image-captioning datasets were the first to be widely used for joint image-text modeling. MSCOCO (Lin et al., 2014) and Flickr30k (Young et al., 2014), which both contain five written caption descriptions for each image, are the representative ones used for automated caption generation and cross-modal retrieval tasks. Conceptual Captions (Sharma et al., 2018) is yet another popular image caption dataset, but it contains an order of magnitude more images than MSCOCO. Because image captions usually only describe the main objects in the image and omit details, to facilitate understanding the details of an image along with the reasoning behind them, Antol et al. (2015) introduced VQA, which contains three question-answer pairs for each image. A further work is VCR (Zellers et al., 2019), which not only requires a model to answer the question derived from the image but also provides a rationale explaining why its answer is right. It was created to teach the model to learn higher-order cognition and commonsense reasoning about the world.

Compared to the work above, Image-Chat (Shuster et al., 2020) and IGA (Mostafazadeh et al., 2017), which focus on dialogues grounded in an image, are the most related work to ours. IGA includes 4k dialogues, each of which contains an image with a textual description of it, along with the questions and responses around the image. Due to its small scale, IGA can only be used for evaluation. Image-Chat is a larger-scale dataset that consists of 202k image-grounded dialogues. However, both of them were created by asking crowd workers to talk about a shared image to generate an engaging conversation, which is different from the scenario of photo sharing where only one side can access the photo at the start of the conversation. Thus, neither can be used to build a photo-suggest system. In our work, we build a new dataset that highlights the challenges of building a photo-suggest system and is the first of its kind to the best of our knowledge.

As the challenge for the photo-suggest system is to retrieve the most relevant image based on the textual utterances, we only review the related work on cross-modal retrieval. Many models have been proposed for image-caption retrieval, where one is required to retrieve the most relevant caption given an image or vice versa. The typical architecture consists of two separate encoders for image and text that first generate visual and textual embeddings. On top of them, a fusion layer, which can simply be a dot product, is used to generate the relevance score for each pair (Frome et al., 2013; Kiros et al., 2014; Parekh et al., 2020; Karpathy and Fei-Fei, 2015; Faghri et al., 2018). Then a triplet ranking loss or cross-entropy loss is employed to learn the latent visual-semantic alignment. VSE++ (Faghri et al., 2018) emphasizes the hardest negatives by using the max of the hinge losses as the objective, yielding a significant performance improvement. Stacked Cross Attention Network (SCAN) (Lee et al., 2018) further improves the performance by introducing cross attention between image regions and word features. Recently, cross-modal Transformer-based architectures that are pretrained on large-scale image-text datasets via self-supervised learning have shown great advantages in bridging visual and textual embeddings. Multiple concurrent works (Lu et al., 2019; Chen et al., 2020; Li et al., 2019) have refreshed the best records on the benchmark datasets for image-text retrieval tasks.
We select photos from the Open Image Dataset V4 (OID) (Kuznetsova et al., 2020) and collect open-ended conversations on Amazon Mechanical Turk. Below we describe the detailed image filtering, conversation generation, and data verification steps taken to ensure data quality.

Since OID is large-scale and comprehensive, it contains images that are unlikely to be shared in daily dialogue, such as images only about remote controls or fire hydrants. To create a dataset that is close to reality, we filter images based on the annotated object labels provided with OID. Based on our investigation of image-grounded dialogues and daily experience, photos about four themes are commonly shared: people, food, animal, and product (in the shopping scenario), which are our focus in the dataset creation. From all the 600 object labels that appear in OID, we first enlist the labels that both belong to one of the four themes and have a high chance of appearing in commonly shared photos. Labels like traffic light, nail, and reptile are excluded, and labels like girl, bagel, and camera are included. This process selects 89 object labels (Appendix). We then generate an image pool by selecting the images that contain any of the objects in the list. Note that for objects of the people category, we add another criterion: the person must be the main object, i.e., neither positioned in the margin of the image (with the object center located within 0.1 of the image width/height from the border) nor extremely small, to exclude images that only have people in the background. Images are randomly selected from the image pool to generate conversations in the next step.

We randomly assigned two crowd workers to generate a conversation based on a given image. The image comes with an image description that presents the list of object labels in the image. When the image contains humans, we assign a random name and relationship to one of the humans to help the workers refer to it and unfold the story. The workers are instructed to imagine talking with their friend. At the start of the task, only one side has access to the image and is instructed to drive the dialogue until it is fit to share the image with the other (website interfaces are shown in the Appendix). The workers are not restricted to message alternately, but the worker with the photo cannot share the photo until the total number of conversation turns reaches five. After sharing the photo, they can continue to chat until they wish to end the conversation and submit the dialogue.

Lastly, we use another set of in-house professional crowd workers to filter out invalid dialogues generated in the above step. Dialogues are discarded if the association between the image and the dialogue is inevident before the photo-sharing act, or if the content is unnatural, contains inappropriate words, or has too many typos or broken English. Figure 2 displays examples of qualified and unqualified data. Note that the third unqualified dialogue can happen in a real conversation, yet the content/event of the image is not mentioned until the photo is shared, making it impossible for a model to learn the connection between the dialogue and the image and to suggest a photo in advance. Such dialogues are removed from the dataset in this step. The collected dataset consists of 10,917 unique images and 12,286 dialogues. One image is shared in each dialogue.
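To make the filtering criteria concrete, here is a minimal Python sketch of the label- and position-based image filtering described earlier in this section. The annotation schema (dicts with normalized bounding boxes), the label names, and the minimum-area threshold for people are illustrative assumptions; the paper does not specify the exact threshold for "extremely small" objects.

```python
# Hypothetical sketch of the OID image-filtering step. The annotation
# schema (dicts with normalized boxes) and MIN_PERSON_AREA are assumptions.
SELECTED_LABELS = {"Girl", "Bagel", "Camera"}   # the paper selects 89 labels
PEOPLE_LABELS = {"Girl", "Boy", "Man", "Woman"}  # assumed people subset
MIN_PERSON_AREA = 0.05   # assumed; the paper's exact threshold is not given

def is_main_object(box):
    """The object center must not lie within 0.1 of any image border."""
    x_center = (box["x_min"] + box["x_max"]) / 2
    y_center = (box["y_min"] + box["y_max"]) / 2
    return 0.1 <= x_center <= 0.9 and 0.1 <= y_center <= 0.9

def keep_image(annotations):
    """annotations: list of {"label": str, "x_min": ..., "x_max": ...,
    "y_min": ..., "y_max": ...} with coordinates normalized to [0, 1]."""
    for ann in annotations:
        if ann["label"] not in SELECTED_LABELS:
            continue
        if ann["label"] in PEOPLE_LABELS:
            area = (ann["x_max"] - ann["x_min"]) * (ann["y_max"] - ann["y_min"])
            # People must be the main object: centered and not too small.
            if not is_main_object(ann) or area < MIN_PERSON_AREA:
                continue
        return True
    return False
```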
Based on the object labels of the shared image, we classify the dialogues into four categories: people, food, animals, and daily products. We split the dialogues into 10,286 train, 1,000 dev, and 1,000 test sets while keeping roughly the same distribution of categories across the splits. The detailed statistics of each split, and in total, are shown in Table 1. Note that a dialogue can have multiple category labels. For instance, if the shared image is about a girl playing with dogs, the dialogue belongs to both the people and animals categories. Thus, the sum of the dialogues of each category (people/animal/food/product dial #) exceeds the total number of dialogues (dial #) in the table. In addition, some images in the training set are used in multiple dialogues. Based on the statistics in the table, the average number of turns per dialogue is 12.7 and the average number of tokens per turn is 6.3. Since the two sides are not restricted to speak alternately, if the consecutive turns from the same side are combined into one turn, which is the conventional setting of other dialogue datasets, the average number of turns per dialogue and the average number of tokens per turn become 9.5 and 8.5. On average, people converse for 7 turns before sharing the photo.

We decompose the problem of building a smart photo-suggest system into two separate tasks. The first is to detect whether the user has the intent to share the photo in the next turn, which we call the photo-sharing intent prediction task. The second is to retrieve the photo based on the dialogue context, which we call the image retrieval task. Below we describe the formal formulation of the problem settings. Let $P = \{p_1, p_2, \dots, p_M\}$ be the photo set, where each $p_i = (a_i, l_i)$, $i \in [1, M]$, consists of an image $a_i$ and a list of objects $l_i$ in it. Given the dialogue $D = \{t_1, \dots, t_h, p_k, t_{h+1}, \dots, t_N\}$, where two participants speak alternately, $t_j$ ($j \in [1, N]$) and $p_k \in P$ respectively represent the utterance of turn $j$ and the shared image. $t_h$ is the turn immediately before the photo-sharing act. We also define the speaker information $S = \{s_1, s_2, \dots, s_N\}$, where $s_j$ ($j \in [1, N]$), either 0 or 1, denotes the speaker of turn $j$.

Photo-sharing intent prediction: The goal of the intent prediction task is to predict whether a photo will be shared in the next turn for any $t_j$, given all the turns before. In equation form, it is a binary classification task:

$\forall j \in [1, h], \quad C(t_{1:j}, s_{1:j}) \in \{0, 1\}, \quad (1)$

where $C$ is the intent prediction model that takes the utterances and the speaker information of all the previous turns as input and outputs a binary value. In the above case, it should predict 1 only when $j = h$, and otherwise 0. Note that whether the model makes use of all the previous turns and the speaker information depends on the model design. We use F1 score, precision, and recall as the evaluation metrics for this task.

Image retrieval: Under the same settings, the model $R$ of the image retrieval task is expected to correctly retrieve $p_k$ from $P$ given the dialogue:

$R(t_{1:h}, s_{1:h}, P) \in [1, M]. \quad (2)$

During training, the candidate pool $P$ is usually comprised of in-batch images, while during evaluation, $P$ contains all images in the test set. Following Karpathy and Fei-Fei (2015), we use Recall@K (R@K), computed as the fraction of times the correct item is found among the top K results, as the evaluation metric.
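As a reference for the evaluation protocol, here is a small sketch of Recall@K; the (num_queries x num_candidates) similarity-matrix input is an assumed interface.

```python
import numpy as np

def recall_at_k(scores, gold_ids, k):
    """scores: (num_queries, num_candidates) similarity matrix;
    gold_ids: index of the correct photo for each query.
    Returns the fraction of queries whose gold photo is in the top k."""
    # Rank candidates for each query from highest to lowest score.
    top_k = np.argsort(-scores, axis=1)[:, :k]
    hits = [gold in row for gold, row in zip(gold_ids, top_k)]
    return float(np.mean(hits))

# Example aggregate used below, assuming a hypothetical scoring function:
# scores = model_similarities(dialogues, photos)
# total = sum(recall_at_k(scores, gold_ids, k) for k in (1, 5, 10))
```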
Specifically, we choose R@1, R@5, and R@10, as well as their sum, which we denote as sum(R@1, 5, 10), to evaluate the models.

To establish the baselines, we fine-tune three SOTA pretrained models, BERT (Devlin et al., 2018a), ALBERT (Lan et al., 2020), and T5 (Raffel et al., 2020), as these pretrained models have achieved remarkable performance on many NLP tasks. To adapt BERT and ALBERT to our settings, we concatenate all the previous turns ($t_{1:j}$ in Equation 1) with [SEP] and prepend the concatenated text with [CLS] to generate the input to the model. We use the speaker information $s_{1:j}$ as the segment ids of the input. The output of the [CLS] token is fed into two fully connected layers, whose output dimensions are respectively 128 and 2, to generate the final prediction. To utilize T5, we concatenate $t_{1:j}$ with [SEP] and prepend the text with "predict share intent:" as the model input. We use cross-entropy loss for all three models.

Dual encoder: We built a dual-encoder model similar to Parekh et al. (2020) and Gillick et al. (2018), which separately encodes image and text leveraging SOTA pre-trained models. Its entire architecture is shown in Figure 3. To encode the image, for each $p_i = (a_i, l_i)$ we first resize the image $a_i$ to 224 x 224 and feed it into a pretrained ResNet (He et al., 2016) to generate $A_i$. A pretrained BERT is used to encode $l_i$ into the label embedding $L_i$, which is the output of the [CLS] token. $L_i$ is concatenated with $A_i$ to generate the image embedding. For encoding the dialogue context, we use a second pretrained BERT (Devlin et al., 2018b). Its input is the concatenation of all the prior utterances of the speaker who shares the photo. The output of the [CLS] token is used as the contextual text embedding. Two fully connected layers are then used to separately project the image and text embeddings into a joint image-text embedding space of dimension $H$. Then, the dot product of the normalized image embedding $B_i$ and text embedding $T_j$ is used as the similarity score $S(B_i, T_j)$.
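Below is a condensed PyTorch sketch of this dual encoder. The encoder components and [CLS] pooling follow the description above, while the feature dimensions, the ReLU between the two projection layers, and the omission of attention masks are simplifying assumptions.

```python
import torch
import torch.nn.functional as F
from torch import nn

class DualEncoder(nn.Module):
    def __init__(self, image_encoder, label_encoder, text_encoder, h=512):
        super().__init__()
        self.image_encoder = image_encoder   # e.g., a pretrained ResNet trunk
        self.label_encoder = label_encoder   # BERT over the object-label string
        self.text_encoder = text_encoder     # second BERT over dialogue context
        img_dim = 2048 + 768                 # ResNet feature + BERT [CLS] (assumed)
        self.img_proj = nn.Sequential(nn.Linear(img_dim, h), nn.ReLU(), nn.Linear(h, h))
        self.txt_proj = nn.Sequential(nn.Linear(768, h), nn.ReLU(), nn.Linear(h, h))

    def forward(self, images, label_ids, text_ids):
        a = self.image_encoder(images)                             # A_i
        l = self.label_encoder(label_ids).last_hidden_state[:, 0]  # L_i ([CLS])
        b = F.normalize(self.img_proj(torch.cat([a, l], dim=-1)), dim=-1)  # B_i
        t = F.normalize(self.txt_proj(
            self.text_encoder(text_ids).last_hidden_state[:, 0]), dim=-1)  # T_j
        return b @ t.T   # S(B_i, T_j) for all in-batch pairs
```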
Following Young et al. (2014) and Gillick et al. (2018), a bidirectional in-batch sampled cross-entropy loss is employed:

$\ell_{sm}(B_i, T_j) = -\big(S(B_i, T_j) - \log \sum_{\bar{T}_j} e^{S(B_i, \bar{T}_j)}\big) - \big(S(B_i, T_j) - \log \sum_{\bar{B}_i} e^{S(\bar{B}_i, T_j)}\big),$

where $\bar{B}_i$ and $\bar{T}_j$ are the image embeddings and text embeddings of the other examples in the batch. We also experiment with a bidirectional in-batch hinge loss, defined as:

$\ell_{sh}(B_i, T_j) = \sum_{\bar{T}_j} [\alpha - S(B_i, T_j) + S(B_i, \bar{T}_j)]_+ + \sum_{\bar{B}_i} [\alpha - S(B_i, T_j) + S(\bar{B}_i, T_j)]_+,$

where $\alpha$ is the margin parameter and $[x]_+ \equiv \max(x, 0)$. In our preliminary experiments, we observe that the cross-entropy loss works better, and we run most experiments with the cross-entropy loss.

VSE++: VSE++ (Faghri et al., 2018) is a simple and effective dual-encoder model. It separately encodes the image and the text, which in our case is the concatenation of all the previous utterances of the person who shares the photo, with ResNet152 (He et al., 2016) and a GRU (Cho et al., 2014). This is followed by linear projections that map them into the joint embedding space. Finally, dot products of the normalized embeddings are used to compute the ranking scores. VSE++ innovatively makes use of the hardest negatives, which are the negatives closest to the query, in the ranking loss function:

$\ell_{mh}(B_i, T_j) = [\alpha - S(B_i, T_j) + S(B_i, \bar{T}^h_j)]_+ + [\alpha - S(B_i, T_j) + S(\bar{B}^h_i, T_j)]_+,$

where $\bar{T}^h_j = \arg\max_{\bar{T}_j} S(B_i, \bar{T}_j)$ and $\bar{B}^h_i = \arg\max_{\bar{B}_i} S(\bar{B}_i, T_j)$ are the hardest negatives.

SCAN: SCAN (Lee et al., 2018) is a full cross-attention model that captures the fine-grained interplay between image regions and text tokens to infer image-text similarity. It uses Faster R-CNN (Ren et al., 2017) in conjunction with ResNet-101 to compute image region embeddings and a bidirectional GRU to obtain text embeddings. As with VSE++, SCAN uses hard negatives in the triplet ranking loss function. Though it beats VSE++ on image captioning tasks, it does not scale well to large-scale retrieval problems due to the high computational cost of cross attention.

BM25: BM25 (Amati, 2009) is a probabilistic retrieval function widely used for document retrieval. To adapt it to our settings, we directly utilize the object labels of each image, $l_j$, $j \in [1, M]$, as the document terms. All the utterances before the photo is shared are concatenated, tokenized, and used as the query terms to retrieve the image.

The maximum sequence length of BERT, ALBERT, and T5 for the photo-sharing intent prediction task is 512. We choose the checkpoints that achieve the best F1 score on the dev set for evaluation on the test set. For our dual-encoder model, the maximum sequence length of BERT is 128, the dimension of the joint image-text embedding space $H$ is 512, and the margin parameter $\alpha$ is 0.2 for all the experiments. All parameters are trainable. We use the Adam optimizer ($\beta_1 = 0.9$, $\beta_2 = 0.999$) and a learning rate that starts at 5e-5 and decays by 0.1% every 1000 steps. The models are trained on 32-core pod slices of a Cloud TPU V3 Pod, with a per-replica batch size of 4.
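The in-batch objectives above can be written compactly against the B x B score matrix produced by the dual encoder; the following is an illustrative sketch rather than the exact training code (with hardest=True, the hinge variant reduces to the VSE++ max-of-hinges objective).

```python
import torch
import torch.nn.functional as F

def bidirectional_ce_loss(scores):
    """Bidirectional in-batch sampled cross-entropy: matched pairs lie
    on the diagonal of the (B, B) score matrix."""
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels) + F.cross_entropy(scores.t(), labels)

def bidirectional_hinge_loss(scores, margin=0.2, hardest=False):
    """Triplet hinge loss over in-batch negatives."""
    pos = scores.diag().unsqueeze(1)                   # S(B_i, T_i) per row
    cost_t = (margin - pos + scores).clamp(min=0)      # text negatives
    cost_b = (margin - pos.t() + scores).clamp(min=0)  # image negatives
    mask = torch.eye(scores.size(0), device=scores.device).bool()
    cost_t = cost_t.masked_fill(mask, 0)               # drop the positives
    cost_b = cost_b.masked_fill(mask, 0)
    if hardest:
        return cost_t.max(1).values.sum() + cost_b.max(0).values.sum()
    return cost_t.sum() + cost_b.sum()
```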
The loss is computed on item pairs aggregated from all replicas, i.e., over the global batch of 128 samples in this case. Training VSE++ and SCAN directly on PhotoChat yields unpleasant results. As such, we first train them on MSCOCO and finetune them on PhotoChat for 20 epochs. We utilize the same settings as the single models that are reported to perform best on the image retrieval task on MSCOCO; more specifically, VSE++ (ResNet, FT) and SCAN t-i AVG ($\lambda_1 = 9$), following the annotations in the original papers.

Table 2 presents model performance on the test set. [Table 2: Experimental results of the baseline models for the photo-sharing intent prediction task. All numbers are in percentage.] We observe that T5 outperforms BERT and ALBERT on all metrics. Note that our dataset suffers from class imbalance, in that the negative examples outnumber the positive examples, which we suspect causes the low precision across all the models. Figure 4 shows examples of predictions by the T5-3B model. Though a few turns are falsely predicted as positive (e.g., "They were really pretty." and the second-to-last turn in example 2), it is possible for the speaker to share the photo after these turns in real life, indicating that when to share a photo is subjective and that the model may be more viable than the low precision would suggest. We also anticipate that if the model had access to the set of photos the speaker can share, the accuracy could be elevated. In this case, the model would be able to infer that the photos in examples 1 and 2 of Figure 4 are more likely to follow utterances about food and statues.

Table 4 lists the experimental results on PhotoChat. Our dual-encoder model is denoted as DE. DE_img and DE_label are the ablation models that only take the image $a_i$ or the image labels $l_i$ as input, compared to the default architecture in Figure 3. CE, SH, and MH represent cross-entropy loss, hinge loss, and hinge loss using hard negatives, respectively. We attempt training DE on MSCOCO first and finetuning it on PhotoChat; these models are specially annotated with *. We also experiment with different image encoders, ResNet-50 and ResNet-152, in combination with different label encoders, Bert-base and Bert-tiny. They are annotated in the brackets after the model names in Table 4.
Among all the models, SCAN achieves the best performance with 10.4% R@1, 27% R@5, and 37.1% R@10, which is consistent with the prior work (Lee et al., 2018), demonstrating the power of bottom-up cross attention. Among all the models without cross-attention, our model DE*(ResNet-152, Bert-tiny) performs the best and beats the strong prior work VSE++, indicating the effectiveness of using image labels in the retrieval task. [Table 4: Experimental results of the baseline models on the image retrieval task.]

Ablation study: By comparing DE_label(Bert-base) and DE_img(ResNet-152), we find that using image features is more effective than using image label features, which is expected, as images contain more information. Compared to the model using only image pixel values (DE_img(ResNet-152)), adding the label features contributes an increase of 1.3% in sum(R@1, 5, 10), to 66.4% (DE(ResNet-152, Bert-base)). Pretraining the model on MSCOCO further boosts it by 3.5%, to 69.9% (DE*(ResNet-152, Bert-base)).

Effect of encoders: We observe that using a smaller model (Bert-tiny) to encode the image labels yields better performance regardless of the loss function. DE*(ResNet-152, Bert-tiny) improves sum(R@1, 5, 10) by 1.2% compared to DE*(ResNet-152, Bert-base) when using the cross-entropy loss, and by 2.4% when using the hinge loss. The reason might be that the labels are a compact list of tokens, and thus using a smaller model alleviates the problem of overfitting. On the other hand, using a larger image encoder, ResNet-152, produces better results: DE_img(ResNet-152) beats DE_img(ResNet-50) in sum(R@1, 5, 10) by 4.2%.

Effect of loss function: Our dual encoders work significantly better with the cross-entropy loss than with the hinge loss; the gap is about 8% in sum(R@1, 5, 10) when we compare the results of the DE*(ResNet-152, Bert-base) and DE*(ResNet-152, Bert-tiny) models under the different loss functions.

Error analysis: Figure 5 shows qualitative results of DE*(ResNet-152, Bert-tiny) given a text query. In the first example, the model ranks relevant images of wine glasses and black tea at the top instead of the ground-truth image, in which a man is holding a wine glass that is easily overlooked. In the second example, the model fails to distinguish puffins from ducks and to infer the background from the keyword "atlantic". This illustrates the challenge of the image retrieval task under the dialogue context: it requires a model to pay attention to the details and the event, as discussed in Section 1. Figure 6 presents more prediction results, including some wrong predictions by the model.

We collected a 12k high-quality dialogue dataset that contains photo-sharing activities via crowdsourcing. To facilitate research on building intelligent photo-suggest systems, we have introduced two new challenging tasks that aim at improving the photo-sharing experience: the photo-sharing intent prediction task and the image retrieval task. That is, when given a dialogue, the system should predict whether the user has the intention to share a photo and which photo is suitable to be shared. We built baseline models for both tasks and report their performance with detailed analysis. Besides the two proposed tasks, our dataset can potentially be used in other dialogue-related tasks, such as dialogue generation in multimodal dialogues, as well as inspiring new research topics, such as composing automatic replies to the photos sent from others.
We hope our dataset and modeling work can be beneficial for studies that focus on the interplay between image and dialogue.

We thank Pranav Khaitan and Blaise Aguera y Arcas for their support and assistance; Yinfei Yang and David Bieber for reviewing the draft and providing feedback; and Janel Thamkul and Tulsee Doshi for the legal review of the dataset.
[ "objective", "abstain", "objective", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "objective", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "method", "abstain", "objective", "method", "abstain", "abstain", "objective", "objective", "objective", "objective", "objective", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "other", "method", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "method", "objective", "abstain", "method", "other" ]
[ "Existing reference-free metrics have obvious limitations for evaluating controlled text generation models.", "Unsupervised metrics can only provide a task-agnostic evaluation result which correlates weakly with human judgments, whereas supervised ones may over-fit task-specific data with poor generalization ability to other datasets.", "In this paper, we propose an unsupervised reference-free metric called CTRLEval , which evaluates controlled text generation from different aspects by formulating each aspect into multiple text infilling tasks.", "On top of these tasks, the metric assembles the generation probabilities from a pre-trained language model without any model training.", "Experimental results show that our metric has higher correlations with human judgments than other baselines, while obtaining better generalization of evaluating generated texts from different models and with different qualities 1 .", "Controlled text generation aims to generate texts under some control variables, including pre-specified content prefixes and attribute labels (such as sentiments and topics).", "Controlled text generation has been significantly advanced by large-scale pre-trained models with respect to generation quality and various control variables (Keskar et al., 2019; Dathathri et al., 2020; Yang and Klein, 2021; Liu et al., 2021a; Chan et al., 2021).", "Despite the great success of these generation models, it becomes critical to evaluate the quality of generated texts accurately.", "Most of the existing studies adopt unsupervised and supervised metrics to measure the quality of generated texts Part of the work was done while Peng Li was working at Tencent.", "under different combinations of control variables (Dathathri et al., 2020; Chan et al., 2021).", "The evaluation is commonly conducted in a reference-free setting because it is challenging to collect sufficient high-quality references for each input of control variables in this open-ended text generation task (Dathathri et al., 2020).", "However, both unsupervised and supervised metrics have shown limitations in the evaluation of controlled text generation: 1) Unsupervised metrics such as perplexity (Brown et al., 1992) can only provide task-agnostic evaluation regarding the overall quality of generated texts.", "However, controlled text generation tasks typically involve multiple evaluation aspects (Deng et al., 2021), including the quality of generated texts themselves and the relationship between generated texts and control variables.", "It is thus not surprising that existing unsupervised metrics without multi-aspect interpretability have low correlations with human judgments (Hashimoto et al., 2019).", "2) Supervised metrics are commonly trained on the datasets of specific tasks to measure the corresponding aspects of generated texts (e.g., evaluating whether a generated text is accordant with the sentiment label) (Dathathri et al., 2020; Chan et al., 2021).", "This may cause over-fitting to task-specific data and degrade the generalization ability of metrics (Gar-bacea et al., 2019), thereby giving unstable evaluation of generated texts from different models or with different qualities (Guan and Huang, 2020).", "To deal with the above issues, we propose an unsupervised reference-free metric called CTRLEval for evaluating controlled text generation models.", "This metric performs evaluation from different aspects without any training on task-specific data.", "Specifically, we formulate the evaluation of each aspect into 
Then, we utilize a pre-trained model whose pre-training task is text infilling (such as PEGASUS (Zhang et al., 2020a)) as our base model, and fuse the generation probabilities from these fill-in-the-blank tasks into the evaluation result. To alleviate the potential bias caused by the task design (Zhao et al., 2021), we devise multiple text infilling tasks for each aspect and use the weighted sum of all the results as the final score. In this paper, we consider three aspects that are commonly used to measure the performance of controlled text generation models, including coherence (Yuan et al., 2021), consistency (Rashkin et al., 2020), and attribute relevance (Dathathri et al., 2020). These evaluation aspects cover both the quality of the generated texts and the relationship between the generated texts and the different control variables, which can provide a comprehensive evaluation result for controlled text generation. Experimental results show that our metric can maintain generalization ability and achieve stable performance in the face of model drift and quality drift.

Our main contributions are as follows:
- We propose an unsupervised reference-free metric called CTRLEval for evaluating controlled text generation. This metric formulates three evaluation aspects (i.e., coherence, consistency, and attribute relevance) into multiple text infilling tasks, and utilizes the ensemble of generation probabilities from a pre-trained language model as the evaluation results.
- We conduct experiments on two benchmark tasks, including sentiment-controlled and topic-controlled text generation, based on our collected evaluation set. Experimental results show that our proposed metric has higher correlations with human judgments, while obtaining better generalization in evaluating generated texts from different models and with different qualities.

Early studies on controlled text generation adopt attribute label embeddings (Ficler and Goldberg, 2017; Zhou et al., 2018) or latent variables (Hu et al., 2017; Ke et al., 2018; Zhou and Wang, 2018) to learn the complex relationship between control variables and generated texts. With the development of large-scale generative pre-trained models, it is costly to re-train or fine-tune pre-trained models on corpora with attribute annotations (Keskar et al., 2019). Recent works resort to decoding-time methods and directly make pre-trained models generate texts towards desired attributes during inference, including PPLM (Dathathri et al., 2020), GeDi (Krause et al., 2020), FUDGE (Yang and Klein, 2021) and DEXPERTS (Liu et al., 2021a). These works rely heavily on human evaluation, because existing reference-free metrics, both unsupervised and supervised ones, are shown to have evident limitations for evaluating controlled text generation (Dathathri et al., 2020).

Automatic evaluation metrics are important for natural language generation tasks, and they can be simply divided into referenced, reference-free (also known as unreferenced), and hybrid metrics: 1) Referenced metrics usually measure the relevance between generated texts and reference texts via lexical overlap (such as BLEU (Papineni et al., 2002), METEOR (Banerjee and Lavie, 2005) and ROUGE (Lin, 2004)) or embedding similarity (such as MoverScore (Zhao et al., 2019), BERTScore (Zhang et al., 2020b) and MARS (Liu et al., 2021b)). 2) Reference-free metrics directly evaluate the quality of generated texts without references.
Since unsupervised metrics like perplexity (Brown et al., 1992) and distinct n-grams (Li et al., 2016) can only provide a task-agnostic result that correlates weakly with human judgments (Hashimoto et al., 2019; Tevet and Berant, 2021), most of the reference-free metrics resort to supervised models. Specifically, they are trained to fit human-annotated ratings/labels (such as discriminator scores (Shen et al., 2017)) or to distinguish human-written texts from negative samples (such as UNION (Guan and Huang, 2020)). 3) Hybrid metrics contain both referenced and reference-free scores, such as RUBER (Tao et al., 2018; Ghazarian et al., 2019), BLEURT (Sellam et al., 2020) and BARTScore (Yuan et al., 2021).

Compared with existing reference-free metrics that are unsupervised, our metric can support the evaluation of generated texts from different aspects via the full utilization of pre-trained models and the formulation of text infilling tasks, which fits the evaluation protocol of controlled text generation well. Also, in contrast with supervised reference-free metrics, our metric can avoid overfitting task-specific data and maintain better generalization ability to evaluate generated texts from different models and with different qualities.

Given the input $I = (X, a, Y)$, which consists of a content prefix $X$, an attribute label $a$, and a generated text $Y$, our goal is to acquire three evaluation results for coherence, consistency, and attribute relevance, respectively. As shown in Figure 1, our main idea is to formulate each evaluation aspect into multiple text infilling tasks and utilize the ensemble of the scores from each task as the final evaluation result. [Figure 1: The framework of CTRLEval, where each evaluation aspect is formulated into pattern evaluators (text infilling tasks) solved by an encoder-decoder pre-trained language model, with an example of pattern design.] We denote each text infilling task as a pattern evaluator, meaning evaluation with different input and output patterns. Inspired by the recent works on pattern-exploiting training (Schick and Schütze, 2021a,b) and prompt tuning (Gu et al., 2021), we define each pattern evaluator as $E = (f, g)$, which consists of two pattern functions that build the input and output sequences of the text infilling task, respectively. The score of each pattern evaluator is acquired from the generation probability of an encoder-decoder pre-trained language model whose pre-training task is to generate the masked part from the remaining text of the input. For each aspect, we devise multiple pattern evaluators to alleviate the potential bias caused by the pattern design (Zhao et al., 2021), and weight the scores of all the evaluators to obtain the final result:

$S(I) = \sum_{j=1}^{N_E} \alpha_j(I)\, s_j(I), \quad (1)$

where $N_E$ is the number of pattern evaluators, $S(I)$ denotes the overall score for each aspect, $\alpha_j(I)$ is a factor to weight the pattern evaluators of the corresponding aspect, and $s_j(I)$ indicates the score of each pattern evaluator based on the generation probability of the pre-trained model.
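A minimal sketch of how the pattern evaluators and the ensemble of Equation (1) could be organized; the interface (callables for f and g, plus helper functions for the PLM score and the weighting factor) is an assumption for illustration.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class PatternEvaluator:
    """One text infilling task: f builds the masked input, g the target."""
    f: Callable[[dict], str]   # input pattern, e.g. "... [M] ..."
    g: Callable[[dict], str]   # output pattern (the masked span)

def ctrleval_score(sample, evaluators, score_fn, weight_fn):
    """Eq. (1): S(I) = sum_j alpha_j(I) * s_j(I).
    score_fn(f(I), g(I)) returns s_j from the PLM's generation probability;
    weight_fn(sample, evaluator) returns the weighting factor alpha_j."""
    weights = [weight_fn(sample, e) for e in evaluators]
    scores = [score_fn(e.f(sample), e.g(sample)) for e in evaluators]
    return sum(a * s for a, s in zip(weights, scores))
```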
Coherence aims to measure whether the sentences in the generated text are semantically relevant and compose a coherent body (Vakulenko et al., 2018; Yuan et al., 2021), which reflects the quality of the generated text itself. Assuming that the generated text $Y$ consists of $M$ sentences, i.e., $Y = (Y_1, Y_2, \dots, Y_M)$, we devise $M$ pattern evaluators $E_j = (f_j, g_j)$ $(1 \le j \le M)$ to measure the relevance between each sentence and all the remaining sentences:

$f_j(I) = Y_{\setminus j} = Y_1 \oplus \cdots \oplus Y_{j-1} \oplus \text{[M]} \oplus Y_{j+1} \oplus \cdots \oplus Y_M, \quad (2)$
$g_j(I) = Y_j, \quad (3)$

where $Y_{\setminus j}$ indicates the generated text $Y$ with the $j$-th sentence replaced by a mask token [M]. The score of each pattern evaluator $E_j$ can be computed via the log probability of the pre-trained model $P$:

$s_j(I) = \log P(g_j(I) \mid f_j(I)) = \log P(Y_j \mid Y_{\setminus j}). \quad (4)$

Since specific and informative sentences are more likely to impact the quality of the whole text, we adopt the normalized inverse sentence frequency (NISF) (Zhang et al., 2018) of the output sentence, which reflects its specificity, to weight each pattern evaluator:

$\alpha_j(I) = \text{NISF}(Y_j) = \frac{\text{ISF}(Y_j)}{\sum_{k=1}^{M} \text{ISF}(Y_k)}, \quad (5)$
$\text{ISF}(Y_j) = \max_{w \in Y_j} \text{IWF}(w), \quad (6)$

where the inverse sentence frequency (ISF) of $Y_j$ is computed as the maximum inverse word frequency (IWF) of the words in $Y_j$. We estimate IWF on a general corpus, BookCorpus (Zhu et al., 2015), which is commonly adopted as a pre-training dataset in existing works (Devlin et al., 2019):

$\text{IWF}(w) = \frac{\log(1 + |C|)}{f_w}, \quad (7)$

where $|C|$ indicates the total number of sentences in BookCorpus and $f_w$ denotes the number of sentences containing the word $w$. Thus, the evaluation result for coherence can be obtained as the ensemble of the scores from all the pattern evaluators:

$S_{coh}(I) = \sum_{j=1}^{M} \text{NISF}(Y_j) \log P(Y_j \mid Y_{\setminus j}). \quad (8)$

Consistency aims to evaluate whether the generated text is consistent with the content prefix (Celikyilmaz et al., 2020; Rashkin et al., 2020). We devise two symmetric pattern evaluators, $E_{X \to Y}$ and $E_{Y \to X}$, to evaluate the consistency between the content prefix and the generated text as follows:

$f_{X \to Y}(I) = X \oplus \text{[M]}, \quad g_{X \to Y}(I) = Y_{\setminus X}, \quad (9)$
$f_{Y \to X}(I) = \text{[M]} \oplus Y_{\setminus X}, \quad g_{Y \to X}(I) = X, \quad (10)$

where $Y_{\setminus X}$ denotes the remaining part of the generated text without the prefix. Similar to coherence, we adopt the log probability of the pre-trained model as each pattern evaluator's score and weight the scores with the normalized inverse sentence frequency to obtain the final result for consistency:

$S_{cons}(I) = \text{NISF}(Y_{\setminus X}) \log P(Y_{\setminus X} \mid X \oplus \text{[M]}) + \text{NISF}(X) \log P(X \mid \text{[M]} \oplus Y_{\setminus X}). \quad (11)$
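A sketch of the coherence score in Equation (8), using PEGASUS from the transformers library. Recovering the sequence log-probability from the mean token loss, and using "<mask_1>" as the sentence-level mask token (following PEGASUS's gap-sentence pre-training), are assumptions of this sketch.

```python
import torch
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

tok = PegasusTokenizer.from_pretrained("google/pegasus-large")
plm = PegasusForConditionalGeneration.from_pretrained("google/pegasus-large").eval()

def log_prob(src, tgt):
    """log P(tgt | src): the model returns the mean token cross entropy as
    `loss`, so the total log-probability is -loss * number_of_tokens."""
    enc = tok(src, return_tensors="pt")
    lab = tok(tgt, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = plm(**enc, labels=lab).loss
    return -loss.item() * lab.size(1)

def coherence(sentences, iwf):
    """Eq. (8): NISF-weighted log-prob of each sentence given the rest.
    `iwf` maps words to inverse word frequencies estimated on BookCorpus."""
    isf = [max(iwf.get(w, 0.0) for w in s.split()) for s in sentences]
    nisf = [v / sum(isf) for v in isf]
    score = 0.0
    for j, s in enumerate(sentences):
        masked = " ".join(sentences[:j] + ["<mask_1>"] + sentences[j + 1:])
        score += nisf[j] * log_prob(masked, s)
    return score
```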
Attribute relevance aims to measure whether the generated text satisfies the attribute label (Dathathri et al., 2020). To probe the relevance between generated texts and attribute labels, we first introduce a verbalizer $v(\cdot)$ which maps each attribute label $a$ in the attribute set $A$ to a corresponding word (Schick and Schütze, 2021a). Then, we design pattern evaluators $E_j = (f_j, g_j)$ $(1 \le j \le N_{AR})$, where $f_j(\cdot)$ adds a prompt and a mask token to the generated text and $g_j(\cdot)$ is set to be a verbalizer:

$f_j(I) = \text{Concat}(\text{Prompt}_j, \text{[M]}, Y), \quad (12)$
$g_j(I) = v_j(a), \quad (13)$

where $\text{Concat}(\cdot)$ indicates the concatenation of the prompt, the mask token, and the generated text in some order. We give an example of the pattern design for attribute relevance, which is also shown in Figure 1. In this example, the attribute is set to be the sentiment, $A = \{Positive, Negative\}$, while the patterns are designed as $f(I) = Y \oplus$ "It was [M]." and $g(I) = v(Positive/Negative) =$ "good"/"bad". Inspired by existing works (Schick and Schütze, 2021a), we use the generation probability of the corresponding label word, normalized over all the label words, as the score of the pattern evaluator:

$s_j(I) = \frac{P(v_j(a) \mid f_j(I))}{\sum_{a' \in A} P(v_j(a') \mid f_j(I))}. \quad (14)$

Based on the assumption that a pattern evaluator is adequate for measuring the data sample if the words of all the attribute labels are easily generated, we devise the unnormalized weight of each evaluator as the sum of the generation probabilities over all the attribute labels:

$w_j(I) = \sum_{a' \in A} P(v_j(a') \mid f_j(I)), \quad (15)$
$\alpha_j(I) = \frac{w_j(I)}{\sum_{k=1}^{N_{AR}} w_k(I)}. \quad (16)$

Similarly, the evaluation result for attribute relevance can be acquired as the weighted sum of all the pattern evaluators' scores:

$S_{AR}(I) = \sum_{j=1}^{N_{AR}} \alpha_j(I) s_j(I). \quad (17)$
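A sketch of the attribute-relevance score in Equations (14)-(17); the evaluator interface and the word_prob helper, which would query the PLM for the probability of a label word in the masked slot, are assumed for illustration.

```python
def attribute_relevance(text, label, evaluators, word_prob):
    """Eqs. (14)-(17). Each evaluator supplies a prompt and a verbalizer;
    word_prob(masked_input, word) is an assumed helper returning the PLM's
    probability of generating `word` for the [M] slot."""
    per_eval = []
    for e in evaluators:
        masked = e.build_input(text)   # Concat(prompt, [M], Y), assumed method
        probs = {a: word_prob(masked, e.verbalizer(a)) for a in e.labels}
        s_j = probs[label] / sum(probs.values())      # Eq. (14)
        w_j = sum(probs.values())                     # Eq. (15)
        per_eval.append((w_j, s_j))
    total_w = sum(w for w, _ in per_eval)             # Eq. (16) normalizer
    return sum(w / total_w * s for w, s in per_eval)  # Eq. (17)
```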
Since there is no standard benchmark dataset for evaluating controlled text generation, we construct an evaluation set to measure the correlation between automatic metrics and human judgments.

Task: We choose sentiment-controlled and topic-controlled text generation as the benchmark tasks, which are widely used in existing works (Dathathri et al., 2020; Chan et al., 2021). These two tasks require the models to generate texts conditioned on the given prefixes and sentiment/topic labels, respectively. In the task of sentiment-controlled text generation, we follow PPLM (Dathathri et al., 2020) and CoCon (Chan et al., 2021) in adopting 15 prefixes and 2 sentiment labels (i.e., positive and negative). As for topic-controlled text generation, we follow CoCon (Chan et al., 2021) in adopting 20 prefixes and 4 topic labels (i.e., computers, politics, religion, and science).

Generation Models: We consider various generation models, including CTRL (Keskar et al., 2019), PPLM (Dathathri et al., 2020), GeDi (Krause et al., 2020), and CoCon (Chan et al., 2021). These representative models support both the sentiment-controlled and topic-controlled text generation tasks and cover different levels of generation ability. We make these models generate 3 different samples for each unique pair of prefix and attribute label. We set the maximum length of generated texts to 80 and remove the last sentence if it is not complete. We directly use the generation results if they have been released by the original papers; otherwise, we run the original codes to obtain the generation results.

Human Annotation: We collect human ratings on the generated texts from Amazon Mechanical Turk (AMT). Each survey on AMT contains a prefix, an attribute label, and five generated texts, including (a) four generated texts from the above four models, respectively, and (b) one negative sample which is constructed by perturbing (e.g., sentence shuffling and dropping) another sample from the evaluation set (Guan et al., 2021). We ask annotators to rate these texts on a 1-5 Likert scale for each aspect. To control the annotation quality, we discard the submissions if the annotator assigns a higher rating to the negative sample than to the other texts. We ensure that each generated text contains 5 valid ratings for each aspect, where the average value of the valid ratings is used as the human judgment. We also calculate Krippendorff's α (Krippendorff, 2018) to show the agreement of the human ratings, which is 0.626 / 0.622 for the sentiment-controlled / topic-controlled text generation tasks, respectively.

Table 1: Statistics of the evaluation set, including the number of prefixes / attribute labels / generation models / samples / ratings (per sample), the average length of each sample, and Krippendorff's α.

Task       #Prefixes  #Labels  #Models  #Samples  #Ratings (per sample)  Length  Krippendorff's α
Sentiment  15         2        4        360       5                      54.2    0.626
Topic      20         4        4        960       5                      55.7    0.622

We choose PEGASUS (Zhang et al., 2020a) as our base model in the overall results and also explore other pre-trained models in Section 4.8. The hyper-parameters of the Transformer blocks are the same as PEGASUS-large, with 568M parameters. As for the pattern evaluators in attribute relevance, which involve prompts and verbalizers that need to be additionally designed, we follow BARTScore (Yuan et al., 2021) to first adopt manually devised seed prompts and verbalizers from existing works (Schick and Schütze, 2021a,b), and then collect paraphrases to automatically expand our evaluator set. The statistics of the pattern evaluators in attribute relevance are presented in Table 2. More details about the specific design of prompts and verbalizers are included in Appendix A.

Table 2: Statistics of the pattern evaluators in attribute relevance.

Task       #Seed Prompts  #Prompts  #Verbalizers  #Evaluators
Sentiment  3              24        3             72
Topic      4              32        1             32

We choose several state-of-the-art reference-free metrics as our baselines:

Perplexity (PPL) (Brown et al., 1992): This method calculates the perplexity of generated texts with a language model. We use GPT (Radford et al., 2018) and PEGASUS (Zhang et al., 2020a) as the base models, since GPT is commonly used in existing works (Dathathri et al., 2020) and PEGASUS is our base model. They are denoted as PPL-GPT and PPL-PEGASUS, respectively.
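A minimal sketch of the perplexity baseline; GPT-2 from the transformers library is used here as a stand-in for the original GPT, which is an assumption of this sketch.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")       # stand-in for GPT
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text):
    """exp of the mean negative log-likelihood of the text under the LM."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = lm(ids, labels=ids).loss   # mean token-level cross entropy
    return torch.exp(loss).item()
```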
Discriminator Score (DisScore) (Kannan and Vinyals, 2017; Chan et al., 2021): This method trains a discriminator with different objectives. We adopt the IMDB movie review dataset (Maas et al., 2011) / HuffPost News category dataset (https://www.kaggle.com/rmisra/news-category-dataset) for the sentiment-controlled / topic-controlled text generation tasks, respectively. For coherence and consistency, the discriminator is trained to distinguish human-written texts from manually constructed negative samples, where the ratio of positive and negative samples is 1:1. For attribute relevance, it is trained on the sentiment / topic classification task, respectively (Chan et al., 2021). Both the sentiment and topic discriminators are implemented based on BERT (Devlin et al., 2019), and they achieve 94.15% / 91.54% on the corresponding test sets, respectively.

UNION (Guan and Huang, 2020): This method is a self-supervised metric which is trained to distinguish human-written texts from automatically perturbed negative samples via well-designed negative sampling strategies and multi-task learning. We use the same datasets as for the discriminator score to train UNION.

BLEURT (Sellam et al., 2020): This method is a supervised metric which is pre-trained on synthetic examples and then fine-tuned to fit human ratings. We used the same instructions as in Section 4.1 to additionally annotate generated texts to construct the training set for BLEURT, whose size is the same as that of the evaluation set. There is no overlap between BLEURT's training set and the evaluation set.

BARTScore (Yuan et al., 2021): This method utilizes the generation probabilities of BART (Lewis et al., 2020) to measure the relationship among sources, hypotheses, and references. Since this metric simultaneously contains referenced and reference-free parts, we only use the reference-free score in our experiments. We also use PEGASUS (Zhang et al., 2020a) as the base model for a fair comparison, which is denoted as BARTScore-PEGASUS.

We follow the existing work (Guan and Huang, 2020; Yuan et al., 2021) in adopting the Pearson (r), Spearman (ρ), and Kendall (τ) correlation coefficients between automatic metrics and human judgments to measure the performance of different metrics.
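The three correlation coefficients can be computed directly with SciPy; a minimal sketch:

```python
from scipy.stats import kendalltau, pearsonr, spearmanr

def correlations(metric_scores, human_ratings):
    """Pearson r, Spearman rho, and Kendall tau between a metric's scores
    and the averaged human judgments over the same generated texts."""
    r, _ = pearsonr(metric_scores, human_ratings)
    rho, _ = spearmanr(metric_scores, human_ratings)
    tau, _ = kendalltau(metric_scores, human_ratings)
    return r, rho, tau
```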
The overall results on sentiment-controlled and topic-controlled text generation are shown in Tables 3 and 4.

Table 3: Pearson (r), Spearman (ρ), and Kendall (τ) correlations of coherence and consistency in sentiment-controlled and topic-controlled text generation.

                    |        Sentiment                          |          Topic
                    |  Coherence           Consistency          |  Coherence           Consistency
Metric              |  r      ρ      τ     r      ρ      τ      |  r      ρ      τ     r      ρ      τ
DisScore            |  0.2938 0.2329 0.1664 0.2010 0.1662 0.1178 |  0.1526 0.1315 0.0937 0.0053 0.0072 0.0051
UNION               |  0.2317 0.2571 0.1836 0.1925 0.1422 0.1009 |  0.1628 0.1300 0.0924 0.0664 0.0777 0.0553
BLEURT              |  0.2585 0.2606 0.1850 0.2382 0.2012 0.1445 |  0.1631 0.1428 0.1016 0.0433 0.0607 0.0443
PPL-GPT             |  0.3376 0.3310 0.2350 0.1881 0.1672 0.1203 |  0.1459 0.1316 0.0940 0.1013 0.0841 0.0595
PPL-PEGASUS         |  0.3901 0.3860 0.2743 0.2728 0.2513 0.1808 |  0.1420 0.1313 0.0929 0.1883 0.1771 0.1235
BARTScore           |  0.3880 0.3848 0.2736 0.2682 0.2533 0.1804 |  0.1599 0.1325 0.0939 0.1528 0.1408 0.0978
BARTScore-PEGASUS   |  0.3853 0.3712 0.2653 0.2480 0.2267 0.1630 |  0.1638 0.1493 0.1048 0.1539 0.1362 0.0953
CTRLEval (Ours)     |  0.4395 0.4208 0.3044 0.3226 0.3096 0.2235 |  0.2403 0.2245 0.1582 0.2342 0.2281 0.1595

We can observe that CTRLEval outperforms the other baselines by a large margin, indicating the effectiveness of our metric on the different evaluation aspects. In Table 4, the unsupervised baselines can hardly measure the relevance between generated texts and attribute labels, because they only provide a task-agnostic score which is weakly relevant to this specific aspect. In comparison, our metric, which supports the evaluation of different aspects of generated texts via the design of text infilling tasks, obtains much better performance and even outperforms the supervised baselines.

To further investigate the effect of each module, we conduct ablation studies on the weights of the pattern evaluators and the design of the pattern functions. For the weights of the evaluators, we use the mean, maximum, and minimum values over all the evaluators as the final result, rather than the weighted sum based on the factor α. As for the design of the pattern functions, we fix the base model and replace our input and output patterns (f & g) with those of PPL-GPT (Radford et al., 2018) and BARTScore (Yuan et al., 2021). The pattern functions of these ablation models are not designed for text infilling tasks: both of them remove the mask token from the input pattern, and PPL-GPT additionally places the input pattern at the beginning of the output pattern. The results in Table 5 show that each module in our metric contributes to the final performance. As for the weights of the evaluators, we can observe that our weighting factor performs better than the common aggregation functions, especially for consistency, indicating the necessity of a well-designed ensemble method when the number of pattern evaluators is small. Also, our pattern functions outperform those of the other baselines, showing the effectiveness of text infilling tasks, which can fully utilize pre-trained models in an unsupervised setting.

Generalization ability is essential for automatic metrics that evaluate open-ended text generation models. In this section, we test whether our metric generalizes to measuring generated texts in the face of model drift and quality drift. To measure whether CTRLEval is reliable for assessing the generated results of different models, we split the evaluation set into four subsets based on the generation model and calculate the Pearson correlation between each metric and human judgments. The results in Figure 2 show that our metric outperforms the other baselines on the generated texts of all the generation models. Simultaneously, CTRLEval achieves stable performance with smaller variances when evaluating different generation models, indicating that our metric generalizes better to model drift.

To evaluate the generalization ability of CTRLEval on generated texts of different qualities, we follow the existing work (Sellam et al., 2020; Guan and Huang, 2020) in constructing four biased subsets based on the coherence score of topic-controlled text generation. We first sort all the samples in the evaluation set and use the quartiles to split them into four subsets with indices from 0 to 3. Then, we create four biased subsets: for the $j$-th biased subset, we sample the generated texts belonging to the original $i$-th subset with a probability of $\frac{1}{|j-i|+1}$, where $i, j = 0, 1, 2, 3$. Thus, the four biased subsets have different distributions of generated texts with different qualities, as shown in Figure 3.
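A small sketch of this biased-subset construction, keeping each example from quartile i with probability 1/(|j - i| + 1):

```python
import random

def biased_subset(quartiles, j, seed=0):
    """quartiles: four lists of examples split by quality score.
    Keep an example from quartile i with probability 1 / (|j - i| + 1),
    yielding a subset biased toward quartile j."""
    rng = random.Random(seed)
    kept = []
    for i, subset in enumerate(quartiles):
        for example in subset:
            if rng.random() < 1.0 / (abs(j - i) + 1):
                kept.append(example)
    return kept
```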
"We then calculate the Pearson correlation between each metric and human judgments.", "The results in Figure 3 show that CTRLEval has higher correlations than the baselines on the evaluation subsets with different qualities.", "Also, our metric can achieve more stable performance on different subsets, which shows our better generalization ability to deal with quality drift.", "To investigate how the number of pattern evaluators affects the performance, we randomly sample the evaluators 20 times when evaluating attribute relevance in topic-controlled text generation, and illustrate the mean values and standard deviations for each number of evaluators in Figure 4.", "Figure 4 shows that as the number of evaluators increases, the mean value of our performance is persistently improved while the standard deviation is gradually reduced.", "This demonstrates the necessity of devising multiple pattern evaluators for each aspect, which can alleviate the bias brought by the pattern design.", "The comparison between the pattern functions of CTRLEval and other baselines (Figure 4: Pearson correlation of the models with different numbers of evaluators) indicates our superior performance for all numbers of evaluators.", "Since our method can adapt to different pre-trained models whose pre-training task is text infilling, we additionally choose BART (Lewis et al., 2020) and T5 (Raffel et al., 2020) as our base model, and present the results in Table 6.", "Table 6 shows that PEGASUS and T5 obtain comparable performance on all the evaluation aspects, which indicates that our well-designed text infilling tasks can be transferred to T5 without considerable modification.", "As for BART, which performs worse on consistency and attribute relevance, we conjecture that the fewer parameters and the form of the pre-training tasks may limit the performance.", "Since the pre-training task of BART is to generate the complete text rather than only the masked part of the input text, it may not be good at evaluation involving a short span of text, such as the prefix in the evaluation of consistency and the label word in attribute relevance.", "Extension to More Control Variables: In this paper, we evaluate the relationship between generated texts and two control variables (content prefixes and attribute labels) via consistency and attribute relevance, respectively.", "We can also extend our metric to other control variables by designing additional pattern evaluators to measure the relationship between generated texts and each variable, respectively.", "We will further investigate the extensibility of our metric in future work.", "Design of Pattern Evaluators: With the rapid development of prompt tuning, recent works have proposed new methods for the design of prompts and verbalizers (Gao et al., 2021; Lester et al., 2021), which provide alternatives to our metric in attribute relevance.", "Also, the weight factor of each evaluator can be set to diversity metrics (Hashimoto et al., 2019) besides NISF in coherence and consistency.", "We will leave the exploration of more settings for pattern evaluators as future work.", "We present an unsupervised reference-free metric called CTRLEval for evaluating controlled text generation.", "This metric formulates the evaluation of different aspects into multiple text infilling tasks, and utilizes the ensemble of generation probabilities from a pre-trained model in different tasks as the evaluation result.",
"Experimental results indicate that CTRLEval obtains higher correlations with human judgments and shows better generalization ability in addressing model drift and quality drift.", "This work was supported by the National Science Foundation for Distinguished Young Scholars (No. 62125604) and the NSFC projects (key project No. 61936010 and regular project No. 61876096).", "This work was also supported by the Guoqiang Institute of Tsinghua University, with Grant Nos. 2019GQG1 and 2020GQG0005.", "We construct an evaluation set for evaluating controlled text generation.", "The data samples in this set are all from existing works with open-source code, model checkpoints, and generated results.", "We directly use the generated results if the authors have released them.", "Otherwise, we adopt the same setting as the original papers to make these models generate texts.", "We do not apply extra selection strategies to the generated results.", "We resort to Amazon Mechanical Turk (AMT) for the annotation of this evaluation set.", "We do not invade the privacy or collect personal information of annotators.", "We pay each annotator $0.06 for each survey, which includes four generated texts and one negative sample.", "The payment is determined based on the length of the data samples.", "We additionally ask annotators to check whether there is a potential ethical problem in the data, and remove such problematic data from the evaluation set.", "After annotation on AMT, we manually review all the annotated samples from an ethical perspective.", "However, we admit that there may still exist unpredictable bias in this evaluation set." ]
[ "abstain", "abstain", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "result", "result", "method", "abstain", "result", "objective", "objective", "method", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "other", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "other", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain" ]
[ "Generating multi-sentence descriptions for videos is one of the most challenging captioning tasks due to its high requirements for not only visual relevance but also discourse-based coherence across the sentences in the paragraph.", "Towards this goal, we propose a new approach called Memory-Augmented Recurrent Transformer (MART), which uses a memory module to augment the transformer architecture.", "The memory module generates a highly summarized memory state from the video segments and the sentence history so as to help better prediction of the next sentence (w.r.t. coreference and repetition aspects), thus encouraging coherent paragraph generation.", "Extensive experiments, human evaluations, and qualitative analyses on two popular datasets ActivityNet Captions and YouCookII show that MART generates more coherent and less repetitive paragraph captions than baseline methods, while maintaining relevance to the input video events.", "1 1 Introduction In video captioning, the task is to generate a natural language description capturing the content of a video.", "Recently, dense video captioning (Krishna et al., 2017) has emerged as an important task in this field, where systems first generate a list of temporal event segments from a video, then decode a coherent paragraph (multi-sentence) description from the generated segments.", "Park et al. (2019) simplifies this task as generating a coherent paragraph from a provided list of segments, removing the requirements for generating the event segments, and focusing on decoding better paragraph captions from the segments.", "As noted by Xiong et al.", "(2018); Park et al. (2019), generating paragraph descriptions for videos can be very challenging due to the difficulties of having relevant, less redundant, as well as coherent generated sentences.", "Towards this goal, Xiong et al. (2018) proposed a variant of the LSTM network (Hochreiter and Schmidhuber, 1997) that generates a new sentence conditioned on previously generated sentences by passing the LSTM hidden states throughout the entire decoding process.", "Park et al. (2019) further augmented the above LSTM caption generator with a set of three discriminators that score generated sentences based on defined metrics, i.e., relevance, linguistic diversity, and inter-sentence coherence.", "Though different, both these methods use LSTMs as the language decoder.", "Recently, transformers (Vaswani et al., 2017) have proven to be more effective than RNNs (e.g., LSTM (Hochreiter and Schmidhuber, 1997), GRU (Chung et al., 2014),", "etc.), demonstrating superior performance in many sequential modeling tasks (Vaswani et al., 2017; Zhou et al., 2018; Devlin et al., 2019; Dai et al., 2019; Yang et al., 2019).", "Zhou et al. (2018) first introduced the transformer model to the video paragraph captioning task, with a transformer captioning module decoding natural language sentences from encoded video segment representations.", "This transformer captioning model is essentially the same as the original transformer (Vaswani et al., 2017) for machine translation, except that it takes a video representation rather than a source sentence representation as its encoder input.", "However, in such design, each video segment caption is decoded individually without knowing the context (i.e., previous video segments and the captions that have already been generated), thus often leading to inconsistent and redundant sentences w.r.t. previously generated sentences (see Figure 3 for examples).", "Dai et al. 
"Dai et al. (2019) recognize this problem as context fragmentation in the task of language modeling, where the transformers operate on separated fixed-length segments, without any information flow across segments.", "Therefore, to generate more coherent video paragraphs, it is imperative to build a model that can span over multiple video segments and capture longer range dependencies.", "Hence, in this work, we propose the Memory-Augmented Recurrent Transformer (MART) model (see Section 3 for details), a transformer-based model that uses a shared encoder-decoder architecture augmented with an external memory module to enable the modeling of the previous history of video segments and sentences.", "Compared to the vanilla transformer video paragraph captioning model (Zhou et al., 2018), our first architecture change is the unified encoder-decoder design, i.e., the encoder and decoder in MART use shared transformer layers rather than being separated as in Zhou et al. (2018); Vaswani et al. (2017).", "This unified encoder-decoder design is inspired by recent transformer language models (Devlin et al., 2019; Dai et al., 2019; Sun et al., 2019) to prevent overfitting and reduce memory usage.", "Additionally, the memory module works as a memory updater that updates its memory state using both the current inputs and the previous memory state.", "The memory state can be interpreted as a container of the highly summarized video segment and caption history information.", "At the encoding stage, the current video segment representation is enhanced with the memory state from the previous step using cross-attention (Vaswani et al., 2017).", "Hence, when generating a new sentence, MART is aware of the previous contextual information and can generate paragraph captions with higher coherence and lower repetition.", "Transformer-XL (Dai et al., 2019) is a recently proposed transformer language model that also uses recurrence, and is able to resolve context fragmentation for language modeling (Dai et al., 2019).", "Different from MART, which uses a highly-summarized memory to remember history information, Transformer-XL directly uses hidden states from previous segments.", "We modify the Transformer-XL framework for video paragraph captioning and present it as an additional comparison.", "We benchmark MART on two standard datasets: ActivityNet Captions (Krishna et al., 2017) and YouCookII (Zhou et al., 2017).", "Both automatic evaluation and human evaluation show that MART generates more satisfying results than previous LSTM-based approaches (Xiong et al., 2018; Zhou et al., 2019; Zhang et al., 2018) and transformer-based approaches (Zhou et al., 2018; Dai et al., 2019).", "In particular, MART can generate more coherent (e.g., coreference and order), less redundant paragraphs without losing paragraph accuracy (visual relevance).", "Video Captioning: Recently, video captioning has attracted much attention from both the computer vision and the natural language processing community.", "Methods for the task share the same intrinsic nature of taking a video as the input and outputting a language description that can best describe the content, though they differ from each other on whether a single sentence (Wang et al., 2019; Xu et al., 2016; Chen and Dolan, 2011; Pasunuru and Bansal, 2017a) or multiple sentences (Rohrbach et al., 2014; Krishna et al., 2017; Xiong et al., 2018; Zhou et al., 2018; Gella et al., 2018; Park et al., 2019) are generated for the given video.", "In this paper, our goal falls into the category of generating a paragraph (multiple sentences) conditioned on an input video with several pre-defined event segments.",
"One line of work (Zhou et al., 2018, 2019) addresses the video paragraph captioning task by decoding each video event segment separately into a sentence.", "The final paragraph description is obtained by concatenating the generated single sentence descriptions.", "Though individual sentences may precisely describe the corresponding event segments, when put together the sentences often become inconsistent and redundant.", "Another line of work (Xiong et al., 2018; Gella et al., 2018) uses the LSTM decoder's last (word) hidden state from the previous sentence as the initial hidden state for the next sentence decoding, thus enabling information flow from previous sentences to subsequent sentences.", "While these methods have shown better performance than their single sentence counterparts, they are still undesirable as the sentence-level recurrence is achieved at the word level, and the context history information quickly decays due to the vanishing gradient problem (Pascanu et al., 2013).", "Additionally, these designs also have difficulty modeling long-term dependencies (Hochreiter et al., 2001).", "In comparison, the recurrence in MART resides at the sentence or segment level and is thus more robust to the aforementioned problems.", "AdvInf (Park et al., 2019) augments the above LSTM word-level recurrence methods with adversarial inference, using a set of separately trained discriminators to re-rank the generated sentences.", "The techniques in AdvInf can be viewed as an orthogonal way of generating captions with better quality.", "Transformers: The transformer (Vaswani et al., 2017) is used as the basis of our approach.", "Different from RNNs (e.g., LSTM (Hochreiter and Schmidhuber, 1997), GRU (Chung et al., 2014), etc.) that use a recurrent structure to model long-term dependencies, the transformer relies on self-attention to learn the dependencies between input words.", "Transformers have proven to be more efficient and powerful than RNNs, with superior performance in many sequential modeling tasks, including machine translation (Vaswani et al., 2017), language modeling/pre-training (Devlin et al., 2019; Dai et al., 2019; Yang et al., 2019) and multi-modal representation learning (Tan and Bansal, 2019; Chen et al., 2019; Sun et al., 2019).", "Additionally, Zhou et al. (2018) have shown that a transformer model can generate better captions than the LSTM model.",
"However, transformer architectures are still unable to model history information well.", "This problem is identified in the task of language modeling as context fragmentation (Dai et al., 2019), i.e., each language segment is modeled individually without knowing its surrounding context, leading to inefficient optimization and inferior performance.", "To resolve this issue, Transformer-XL (Dai et al., 2019) introduces the idea of recurrence to the transformer language model.", "Specifically, the modeling of a new language segment in Transformer-XL is conditioned on hidden states from previous language segments.", "Experimental results show Transformer-XL has stronger language modeling capability than the non-recurrent transformer.", "Transformer-XL directly uses all the hidden states from the previous segment to enable recurrence.", "In comparison, our MART uses highly summarized memory states, making it more efficient in passing useful semantic or linguistic cues to future sentences.", "Though our method provides a general temporal multi-modal learning framework, we focus on the video paragraph captioning task in this paper.", "Given a video $V$ with several temporally ordered event segments $[e_1, e_2, ..., e_T]$, the task is to generate a coherent paragraph consisting of multiple sentences $[s_1, s_2, ..., s_T]$ to describe the whole video, where sentence $s_t$ should describe the content in the segment $e_t$.", "In the following, we first describe the baseline transformer that generates sentences without a recurrent architecture, then introduce our approach, the Memory-Augmented Recurrent Transformer (MART).", "Besides, we also compare MART with the recently proposed Transformer-XL (Dai et al., 2019) in detail.", "We start by introducing the vanilla transformer video paragraph captioning model proposed by Zhou et al. (2018), which is an application of the original transformer (Vaswani et al., 2017) model to video paragraph captioning.", "An overview of the model is shown in Figure 1.",
"The core of the architecture is the scaled dot-product attention.", "Given a query matrix $Q \in \mathbb{R}^{T_q \times d_k}$, key matrix $K \in \mathbb{R}^{T_v \times d_k}$, and value matrix $V \in \mathbb{R}^{T_v \times d_v}$, the attentional output is computed as $A(Q, K, V) = \mathrm{softmax}(QK^{\top} / \sqrt{d_k}, \mathrm{dim}{=}1)\,V$, where $\mathrm{softmax}(\cdot, \mathrm{dim}{=}1)$ denotes performing softmax over the second dimension of the input.", "[Figure 2: Overview of the Memory-Augmented Recurrent Transformer (left) and Transformer-XL (right) at step t.]", "Combining $h$ parallel scaled dot-product attentions, we obtain the multi-head attention (Vaswani et al., 2017), which we denote as MultiHeadAtt(Q, K, V).", "The attention formulation discussed above is quite general.", "It can be used for various purposes, such as self-attention (Vaswani et al., 2017), where the query, key, and value matrices are all the same, and cross-attention (Vaswani et al., 2017), where the query matrix is different from the key and value matrices.", "In this paper, we also use multi-head attention for memory aggregation and update, as discussed later.", "The vanilla transformer video paragraph captioning model has N encoder layers and N decoder layers.", "At the $l$-th encoder layer, the multi-head attention module takes the last layer's hidden states $H^{l-1}$ as inputs and performs self-attention.", "The attentional outputs are then projected by a feed-forward layer.", "At the $l$-th decoder layer, the model first encodes the last decoder layer's hidden states using masked multi-head attention, where the masking is used to prevent the model from seeing future words (Vaswani et al., 2017).", "It then uses multi-head attention, with the masked outputs as the query matrix and the hidden states $H^l$ from the $l$-th encoder layer as the key and value matrices, to gather information from the encoder side.", "Similarly, a feed-forward layer is used to encode the sentences further.", "Residual connections (He et al., 2016) and layer-normalization (Ba et al., 2016) are applied at each layer, for both encoder and decoder.", "The vanilla transformer captioning model follows the classical encoder-decoder architecture, where the encoder and decoder networks are separated.", "In comparison, the encoder and decoder are shared in MART, as shown in Figure 2 (left).", "The video and text inputs are first separately encoded and normalized.", "We denote the encoded video and text embeddings as $H^0_{video} \in \mathbb{R}^{T_{video} \times d}$ and $H^0_{text} \in \mathbb{R}^{T_{text} \times d}$, where $T_{video}$ and $T_{text}$ are the lengths of the video and text, respectively, and $d$ denotes the hidden size.", "We then concatenate these two embeddings as input to the transformer layers: $H^0 = [H^0_{video}; H^0_{text}] \in \mathbb{R}^{T_c \times d}$, where $[;]$ denotes concatenation and $T_c = T_{video} + T_{text}$.",
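A minimal, single-head, unbatched sketch of the attention formula above (an illustration, not the authors' implementation):

```python
import math
import torch

def scaled_dot_product_attention(q, k, v):
    # A(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V, softmax over the key axis
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)
    return torch.softmax(scores, dim=-1) @ v

# toy usage: T_q = 3 queries attend over T_v = 5 key/value pairs, d_k = d_v = 8
q, k, v = torch.randn(3, 8), torch.randn(5, 8), torch.randn(5, 8)
out = scaled_dot_product_attention(q, k, v)  # shape (3, 8)
```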
"This unified encoder-decoder design is inspired by recent works on multi-modal representation learning (Chen et al., 2019; Sun et al., 2019).", "We also use two trainable token type embedding vectors to indicate whether an input token is from video or text, similar to Devlin et al. (2019), where the token type embeddings are added to indicate different input sequences.", "We ignore the video token positions and only consider the text token positions when calculating the loss and generating words.", "While the aforementioned vanilla transformer is a powerful method, it is less suitable for video paragraph captioning due to its inability to utilize video segment and sentence history information.", "Thus, given the unified encoder-decoder transformer, we augment it with an external memory module, which helps it to utilize the video segments and the corresponding caption history to generate the next sentence.", "An overview of the memory module is shown in Figure 2 (left).", "At step $t$, i.e., decoding the $t$-th video segment, the $l$-th layer aggregates the information from both its intermediate hidden states $\bar{H}^l_t \in \mathbb{R}^{T_c \times d}$ and the memory states $M^l_{t-1} \in \mathbb{R}^{T_m \times d}$ from the last step ($T_m$ denotes the memory state length, or equivalently the number of slots in the memory), using multi-head attention.", "The input query matrix of the multi-head attention is $Q = \bar{H}^l_t$, and the key and value matrices are $K, V = [M^l_{t-1}; \bar{H}^l_t] \in \mathbb{R}^{(T_m + T_c) \times d}$.", "The memory-augmented hidden states are further encoded using a feed-forward layer and then merged with the intermediate hidden states $\bar{H}^l_t$ using a residual connection and layer norm to form the hidden states output $H^l_t \in \mathbb{R}^{T_c \times d}$.", "The memory state $M^l_{t-1}$ is updated to $M^l_t$ using the intermediate hidden states $\bar{H}^l_t$.", "This process is conducted in the Memory Updater module, illustrated in Figure 2.",
"We summarize the procedure below: $S^l_t = \mathrm{MultiHeadAtt}(M^l_{t-1}, \bar{H}^l_t, \bar{H}^l_t)$; $C^l_t = \tanh(W^l_{mc} M^l_{t-1} + W^l_{sc} S^l_t + b^l_c)$; $Z^l_t = \mathrm{sigmoid}(W^l_{mz} M^l_{t-1} + W^l_{sz} S^l_t + b^l_z)$; $M^l_t = (1 - Z^l_t) \odot C^l_t + Z^l_t \odot M^l_{t-1}$, where $\odot$ denotes the Hadamard product, $W^l_{mc}$, $W^l_{sc}$, $W^l_{mz}$, and $W^l_{sz}$ are trainable weights, and $b^l_c$ and $b^l_z$ are trainable biases.", "$C^l_t \in \mathbb{R}^{T_m \times d}$ is the internal cell state.", "$Z^l_t \in \mathbb{R}^{T_m \times d}$ is the update gate that controls which information to retain from the previous memory state, thus reducing redundancy and maintaining coherence in the generated paragraphs.", "This update strategy is conceptually similar to LSTM (Hochreiter and Schmidhuber, 1997) and GRU (Chung et al., 2014).", "It differs in that multi-head attention is used to encode the memory state, and thus multiple memory slots are supported instead of the single one in LSTM and GRU, which gives it a higher capacity for modeling complex relations.", "Recent works (Sukhbaatar et al., 2015; Graves et al., 2014; Xiong et al., 2016a) introduce a memory component into neural networks, where the memory is mainly designed to memorize facts in the input context to support downstream tasks, e.g., copy (Graves et al., 2014) or question answering (Sukhbaatar et al., 2015; Xiong et al., 2016a).", "In comparison, the memory in MART is designed to memorize the sequence generation history to support the coherent generation of the next sequence.",
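A minimal PyTorch sketch of the gated memory update above; it folds the bias terms b_c and b_z into the corresponding Linear layers and omits details of the authors' full implementation, so it should be read as an illustration rather than reference code.

```python
import torch
import torch.nn as nn

class MemoryUpdater(nn.Module):
    def __init__(self, d_model=768, n_heads=12):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.w_mc = nn.Linear(d_model, d_model, bias=False)
        self.w_sc = nn.Linear(d_model, d_model)  # its bias plays the role of b_c
        self.w_mz = nn.Linear(d_model, d_model, bias=False)
        self.w_sz = nn.Linear(d_model, d_model)  # its bias plays the role of b_z

    def forward(self, memory, hidden):
        # memory: (B, T_m, d) = M_{t-1}; hidden: (B, T_c, d) = intermediate states
        s, _ = self.attn(memory, hidden, hidden)             # S_t
        c = torch.tanh(self.w_mc(memory) + self.w_sc(s))     # internal cell state C_t
        z = torch.sigmoid(self.w_mz(memory) + self.w_sz(s))  # update gate Z_t
        return (1 - z) * c + z * memory                      # M_t
```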
paragraphs.", "Park et al. (2019) uses the same set of videos (though different segments) in val for both validation and test.", "To allow better evaluation of the models, we use splits provided by Zhou et al. (2019), where the original val set is split into two subsets: ae-val with 2,460 videos for validation and ae-test with 2,457 videos for test.", "This setup makes sure the videos used for test will not be seen in validation.", "YouCookII (Zhou et al., 2017) contains 1,333 training videos and 457 validation videos.", "Each video has a single reference paragraph.", "Both datasets come with temporal event segments annotated with human written natural language sentences.", "On average, there are 3.65 event segments for each video in ActivityNet Captions, 7.7 segments for each video in YouCookII.", "Data Preprocessing We use aligned appearance and optical flow features extracted at 2FPS to represent videos, provided by Zhou et al. (2018).", "Specifically, for appearance, 2048D feature vectors from the Flatten-673' layer in ResNet-200 (He et al., 2016) are used; for optical flow, 1024D feature vectors from the global pool' layer of BN-Inception (Ioffe and Szegedy, 2015) are used.", "Both networks are pre-trained on ActivityNet (Caba Heil-bron et al., 2015) for action recognition, provided by (Xiong et al., 2016b).", "We truncate sequences longer than 100 for video and 20 for text and set the maximum number of video segments to 6 for ActivityNet Captions and 12 for YouCookII.", "Finally, we build vocabularies based on words that occur at least 5 times for ActivityNet Captions and 3 times for YouCookII.", "The resulting vocabulary contains 3,544 words for ActivityNet Captions and 992 words for YouCookII.", "Evaluation Metrics (Automatic and Human) We evaluate the captioning performance at paragraph-level, following (Park et al., 2019; Xiong et al., 2018), reporting numbers on standard metrics, including BLEU@4 (Papineni et al., 2002), METEOR (Denkowski and Lavie, 2014), CIDEr-D (Vedantam et al., 2015).", "Since these metrics mainly focus on whether the generated paragraph matches the ground-truth paragraph, they fail to evaluate the redundancy of these multi-sentence paragraphs.", "Thus, we follow previous works (Park et al., 2019; Xiong et al., 2018) to evaluate repetition using R@4.", "It measures the degree of N-gram (N=4) repetition in the descriptions.", "Besides the automated metrics, we also conduct human evaluations to provide additional comparisons between the methods.", "We consider two aspects in human evaluation, relevance (i.e., how related is a generated paragraph caption to the content of the given video) and coherence (i.e., whether a generated paragraph caption reads fluently and is linguistically coherent over its multiple sentences).", "MART is implemented in PyTorch (Paszke et al., 2017).", "We set the hidden size to 768, the number of transformer layers to 2, and the number of attention heads to 12.", "For positional encoding, we follow Vaswani et al. (2017) to use the fixed scheme.", "For memory module, we set the length of recurrent memory state to 1, i.e., T m = 1 .", "We optimize the model following the strategy used by Devlin et al. 
(2019).", "Specifically, we use Adam (Kingma and Ba, 2014) with an initial learning rate of 1e-4, 1 = 0 .", "9 , 2 = 0 .", "999 , L2 weight decay of 0.01, and learning rate warmup over the first 5 epochs.", "We train the model for at most 50 epochs with early stopping using CIDEr-D and batch size 16.", "We use greedy decoding as we did not observe better performance using beam search.", "Vanilla Transformer This model originates from the transformer (Vaswani et al., 2017), proposed by Zhou et al. (2018) (more details in Section 3.1).", "It takes a single video segment as input and independently generates a single sentence describing the given segment.", "Note that Zhou et al. (2018) also have a separate proposal generation module, but here we only focus on its captioning module.", "To obtain paragraph-level captions, the independently generated single sentence captions are concatenated as the output paragraph.", "Transformer-XL Transformer-XL is proposed by Dai et al. (2019) for modeling long-term dependency in natural language.", "Here we adapt it for video paragraph captioning (more details in Section 3.3).", "The original design of Transformer-XL stops gradients from passing between different recurrent steps to save GPU memory and computation.", "To enable a more fair comparison with our model, we implemented a version that allows gradient flow through different steps, calling this Transformer-XLRG (Transformer-XL with Recurrent Gradient).", "AdvInf AdvInf (Park et al., 2019) uses a set of three discriminators to do adversarial inference on a strong LSTM captioning model.", "The input features of the LSTM model are the concatenation of image recognition, action recognition, and object detection features.", "To encourage temporal coherence between consecutive sentences, the last hidden state from the previous sentence is used as input to the decoder (Xiong et al., 2018; Gella et al., 2018).", "The three discriminators are trained adversarially and are specifically designed to reduce repetition and encourage fluency and relevance in the generated paragraph.", "GVD An LSTM based model for grounded video description (Zhou et al., 2019).", "It uses densely detected object regions as inputs, with a grounding module that grounds generated words to the regions.", "Additionally, we also consider a GVD variant ( GVDsup ) that uses grounding supervision from Zhou et al. 
(2019).", "MFT MFT (Xiong et al., 2018) uses an LSTM model with a similar sentence-level recurrence as in AdvInf (Park et al., 2019).", "HSE HSE (Zhang et al., 2018) is a hierarchical model designed to learn both clip-sentence and paragraph-video correspondences.", "Given the learned contextualized video embedding, HSE uses a 2-layer LSTM to generate captions.", "For AdvInf, MFT, HSE, GVD, and GVDsup, we obtain generated sentences from the authors.", "We only report their performance on ActivityNet Captions ae-val split to enable a fair comparison, as ( i ) AdvInf, MFT and HSE have different settings as ours, where ae-test videos are included as part of their validation set; ( ii ) we do not have access to the ae-test predictions of GVD and GVDsup.", "For vanilla transformer, Transformer-XL and Transformer-XLRG, we borrow/modify the model implementations from the original authors and train them under the same settings as MART.", "Automatic Evaluation Table 1 shows the results of MART and several transformer baseline methods.", "We observe stronger or comparable performance for the language metrics (B@4, M, C) for MART wins (%) VTransformer wins (%) Delta relevance 37 29.5 +7.5 coherence 42.8 26.3 +16.5 MART wins (%) Transformer-XL wins (%) Delta relevance 40.0 39.5 +0.5 coherence 39.2 36.2 +3.0 Table 3: Human evaluation on ActivityNet Captions ae-test set w.r.t. relevance and coherence.", "both ActivityNet Captions and YouCookII datasets.", "For R@4, MART produces significantly better results compared to the three transformer baselines, showing its effectiveness in reducing redundancy in the generated paragraphs.", "Table 2 shows the comparison of MART with state-of-the-art models on ActivityNet Captions.", "MART achieves the best scores for both CIDEr-D and R@4 and has a comparable performance for B@4 and METEOR.", "Note that the best B@4 model, GVDsup (Zhou et al., 2019), and the best METEOR model, AdvInf (Park et al., 2019), both use strong detection features, and GVDsup has also used grounding supervision.", "Regarding the repetition score R@4, MART has the highest score.", "It outperforms the strong adversarial model AvdInf (Park et al., 2019) even in an unfair comparison where AdvInf uses extra detection features.", "Additionally, AdvInf has a time-consuming adversarial training and decoding process where a set of discriminator models are trained and used to re-rank candidate sentences, while MART can do much faster inference with only greedy decoding and no further post-processing.", "The comparisons in Table 1 and Table 2 show that MART is able to generate less redundant (thus more coherent) paragraphs while maintaining relevance to the videos.", "Human Evaluation In addition to the automatic metrics, we also run human evaluation on Amazon Mechanical Turk (AMT) with 200 randomly sampled videos from ActivityNet Captions ae-test split, where each video was judged by three different AMT workers.", "We design a set of pairwise experiments (Pasunuru and Bansal, 2017b; Park et al., 2019), where we compare two models at a time.", "AMT workers are instructed to choose which caption is better or the two captions are not distinguishable based on relevance and coherence, respectively.", "The models are anonymized, and the predictions are shuffled.", "In total, we have 54 work-#hiddenlayers mem.len.", "ers participated the MART vs. vanilla transformer experiments, 47 workers participated the MART vs. 
"In Table 3 we show the human evaluation results, where the scores are calculated as the percentage of workers that voted for a certain option.", "With its sentence-level recurrence mechanism, MART is substantially better than the vanilla transformer model for both relevance and coherence.", "Compared to the strong baseline approach Transformer-XL, MART has similar performance in terms of relevance, but still reasonably better performance in terms of coherence.", "Model Ablation: We show the model ablation in Table 4.", "MART models with recurrence have better overall performance than the variant without, suggesting the effectiveness of our recurrent memory design.", "We choose to use the model with 2 hidden layers and memory state length 1, as it shows a good balance between performance and computation.", "Qualitative Examples: In Figure 3, we show paragraph captions generated by the vanilla transformer, Transformer-XL, and our method MART.", "Compared to the two baselines, MART produces more coherent and less redundant paragraphs.", "In particular, we noticed that the vanilla transformer often uses incoherent pronouns/person mentions, while MART and Transformer-XL are able to use suitable pronouns/person mentions across the sentences and thus improve the coherence of the paragraph.", "Compared with Transformer-XL, we found that the paragraphs generated by MART have much fewer cross-sentence repetitions.", "We attribute MART's success to its recurrence design: the previous memory states are highly summarized, with redundant information removed.", "While there is less redundancy between sentences generated by MART, in Figure 3 (left) we noticed that repetition still exists within a single sentence, suggesting further efforts on reducing the repetition in single sentence generation.", "More examples are in the appendix.", "Memory Ablation: To explore whether the learned memory state could store useful information about the videos and captions, we conducted a video retrieval experiment on the ActivityNet Captions train split with 10K videos, where we extract the last-step memory state in the first layer of a trained MART model for each video as its representation to perform nearest neighbor search with cosine similarity.", "Though not explicitly trained for the retrieval task, we observe some positive examples in the experiments.", "We show an example in Figure 4; the neighbors mostly show related activities.", "In this work, we present a new approach, the Memory-Augmented Recurrent Transformer (MART), for video paragraph captioning, where we designed an auxiliary memory module to enable recurrence in transformers.", "Experimental results on two standard datasets show that MART has better overall performance than the baseline methods.", "In particular, MART can generate more coherent, less redundant paragraphs without any degradation in relevance.", "We thank the anonymous reviewers for their helpful comments and discussions.", "This work was performed while Jie Lei was an intern at Tencent AI Lab, Seattle, USA.", "It was later partially supported by NSF Awards CAREER-1846185, 1562098, DARPA KAIROS Grant FA8750-19-2-1004, and ARO-YIP Award W911NF-18-1-0336.", "The views contained in this article are those of the authors and not of the funding agency." ]
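As a concrete illustration of the memory-state retrieval probe described above, here is a minimal sketch (NumPy; the array names are hypothetical, and this is not the authors' code):

```python
import numpy as np

def nearest_videos(query_mem, all_mems, k=5):
    # query_mem: (d,) memory-state vector of the query video
    # all_mems:  (N, d) memory states of the N candidate videos
    sims = all_mems @ query_mem / (
        np.linalg.norm(all_mems, axis=1) * np.linalg.norm(query_mem) + 1e-8)
    return np.argsort(-sims)[:k]  # indices of the k most similar videos
```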
[ "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "method", "other", "abstain", "method", "method", "abstain", "other", "other", "other", "other", "abstain", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "objective", "abstain", "abstain", "other", "other", "other", "other" ]
[ "Efficient word representations play an important role in solving various problems related to Natural Language Processing (NLP), data mining, text mining etc.", "The issue of data sparsity poses a great challenge in creating efficient word representation model for solving the underlying problem.", "The problem is more intensified in resource-poor scenario due to the absence of sufficient amount of corpus.", "In this work, we propose to minimize the effect of data sparsity by leveraging bilingual word embeddings learned through a parallel corpus.", "We train and evaluate Long Short Term Memory (LSTM) based architecture for aspect level sentiment classification.", "The neural network architecture is further assisted by the handcrafted features for the prediction.", "We show the efficacy of the proposed model against state-of-the-art methods in two experimental setups i.e. multi-lingual and cross-lingual.", "Sentiment analysis (Pang and Lee, 2005) tries to automatically extract the subjective information from a user written textual content and classifies it into one of the predefined set of classes, e.g. positive , negative , neutral or conflict .", "Sentiment analysis performed on coarser level (i.e. document or sentence level) does not provide enough information for a user who is critical of finer details such as battery life of a laptop or service of a restaurant etc.", "Aspect level sentiment analysis (ABSA) (Pontiki et al., 2014) serves such a purpose, which first identifies the features (or aspects) mentioned in the text and then classifies it into one of the target classes.", "For example, the following review is for a restaurant where the writer shares her/his experience.", "Though s/he likes the food but certainly not happy with the service .", "Analyzing such reviews on sentence level will reflect only an overall sentiment (i.e. conflict ) of the sentence ignoring critical information such as food and service qualities.", "However, ABSA will first identify all the aspects in the text (i.e. food and service ) and then associate positive with food and negative with service .", "Identification of aspect terms is also known as aspect term extraction or opinion target extraction.", "In this work, we focus on the second problem i.e. aspect level sentiment classification.", "Literature survey suggests a wide range of research on sentiment analysis (at the document or sentence level) is being carried out in recent years (Turney, 2002; Kim and Hovy, 2004; Jagtap and Pawar, 2013; Poria et al., 2016; Kaljahi and Foster, 2016; Gupta et al., 2015).", "However, most of these researches are focused on resource-rich language like English.", "Like many other Natural Language Processing (NLP) problems, research on sentiment analysis involving Indian languages (e.g. Hindi, Bengali etc.) are very limited (Joshi et al., 2010; Bakliwal et al., 2012; Kumar et al., 2015; Balamurali et al., 2012; Singhal and Bhattacharyya, 2016).", "Due to the scarcity of various qualitative resources and/or tools in such languages, the problems have become more challenging and non-trivial to solve.", "The research on ABSA involving Indian languages has started only very recently, for e.g. 
(Akhtar et al., 2016a,b).", "Indian languages are resource-constrained in nature, as there is a lack of readily available, qualitative lexical resources and tools.", "In a supervised machine learning framework, a good amount of training data always has a great impact on the overall system performance.", "Low-resource languages (such as the Indian ones [Hindi, etc.]) usually suffer due to the non-availability of sufficient training data instances.", "In order to solve the data and resource scarcity problem in one language, researchers often utilize a cross-lingual setup to leverage the resource-richness of other languages by projecting the task into a common problem space (Zhou et al., 2016; Balamurali et al., 2012; Singhal and Bhattacharyya, 2016; Barnes et al., 2016).", "The projection is often performed with the help of machine translation or bilingual dictionaries.", "In recent times, deep learning (DL) techniques have shown success in solving several NLP problems.", "A good word representation is the essence of any deep learning approach.", "In the absence of qualitative word embeddings, it turns out to be a non-trivial task for any DL framework to effectively learn hidden features (e.g., lexical, syntactic, semantic, etc.), which may affect the performance.", "The quality of word embeddings can be preserved by employing state-of-the-art distributed word representation models such as Word2Vec (Mikolov et al., 2013) or GloVe (Pennington et al., 2014), provided there is a huge corpus to train on.", "Due to this limitation, the quality of word embeddings in Indian languages is usually not at par with that of resource-rich languages like English.", "Data sparsity in word representation (i.e., the absence of a representation for a word) is another problem that often has to be dealt with.", "In order to solve any NLP task, out-of-vocabulary (OOV) words in a word embedding model pose a serious challenge to the underlying learning algorithm.", "For a missing word representation, the literature suggests two possible solutions:", "a) a zero vector (Bahdanau et al., 2017) or", "b) a random vector (Dhingra et al., 2017).", "However, in both cases the resultant vector could be completely out of context and often does not fit well with others.", "Further, the word embedding of a word in a source language has absolutely no correlation with the word embedding of the same word (translated) in the target language; hence, it cannot be directly used for training and/or testing in a cross-lingual setup.", "The prime motivation of this work is to minimize the effect of data sparsity, thereby enabling any deep learning framework to effectively learn its hidden features.", "In this paper, we propose to solve the data sparsity problem in a resource-scarce language scenario (here, primarily Hindi and also French embeddings) by leveraging the information of resource-rich languages (here, English embeddings); we use French to show how generic our proposed approach is, since, compared to English, French does not have enough sentiment annotated data.", "We hypothesize that addressing data sparsity in an intelligent manner would yield increased performance.", "We utilize bilingual word embeddings (Luong et al., 2015) trained on English-Hindi and English-French parallel corpora to bridge the language divergence in the vector space.", "The proposed method is based on a deep learning (DL) architecture named Long Short Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997).", "We try to establish our hypothesis through experiments on the aspect based sentiment classification task in both setups, i.e., multi-lingual and cross-lingual, for the English-Hindi and English-French language pairs.",
"Aspect based sentiment classification deals with assigning the sentiment polarity (i.e., positive, negative, neutral, or conflict) to the aspect terms.", "For evaluation, we use the datasets provided in (Akhtar et al., 2016a) for Hindi, the SemEval-2014 shared task on ABSA (Pontiki et al., 2014) for English, and the SemEval-2016 shared task on ABSA (Pontiki et al., 2016) dataset for French.", "Major contributions of our current work are as follows:", "a) we train and use bilingual embeddings on an Amazon product review corpus consisting of parallel sentences of English-Hindi and English-French, which serve as a bridge between the two languages;", "b) we propose to solve the problem of data sparsity in low-resource language word embeddings by utilizing the word embeddings created on a resource-rich language; and", "c) to further improve the system's prediction we extract and use various English-side semantic features of the machine translated words.", "As we already mentioned, the research on ABSA involving Indian languages is limited.", "Some of the recent works include the ones reported in (Akhtar et al., 2016a,b).", "The authors in (Barnes et al., 2016) employed bilingual word embeddings for sentiment classification in a cross-lingual setup.", "To the best of our knowledge, our current attempt is the very first of its kind to employ bilingual word embeddings for a multilingual scenario.", "Our proposed system differs from the existing systems in the following ways.", "1. The system of (Singhal and Bhattacharyya, 2016) is multi-lingual in nature.", "In contrast, our proposed system is applied to both multi-lingual and cross-lingual setups.", "2. Approach: The system of (Akhtar et al., 2016a) defines a classical feature driven approach, while the system of (Barnes et al., 2016) utilized bilingual word embeddings as feature values to train a Support Vector Machine (SVM) classifier.", "The rest of the systems (Akhtar et al., 2016b; Singhal and Bhattacharyya, 2016) (including the proposed one) are based on deep neural network architectures.", "However, the techniques employed are very much different.", "Akhtar et al. (2016b) is a CNN-SVM based system with the assistance of multi-objective optimized features, while Singhal and Bhattacharyya (2016) is a CNN based system that translates the source language texts into target language text (English) for training and evaluation.", "In comparison, our proposed method employs LSTM to solve the data sparsity problem in both multi-lingual as well as cross-lingual setups.", "3. Problem addressed: The authors in (Singhal and Bhattacharyya, 2016) focused on sentence level sentiment classification, while our present work focuses on fine-grained sentiment classification at the aspect level.", "4. Word Embeddings: The proposed system employs shared vector-space bilingual word embeddings for training and testing, while (Singhal and Bhattacharyya, 2016) projected the source language train & test data into the target language using machine translation and utilizes target-side pre-computed word vectors for training the system.", "Whereas, the system reported in (Akhtar et al., 2016b) employed mono-lingual word embeddings for training and evaluation.",
"5. Data Sparsity: The system of (Akhtar et al., 2016b) does not address the problem of data sparsity, while our proposed system tries to minimize the effect of data sparsity.", "Our proposed system tackles the data sparsity problem by replacing the OOV word with its translated form, which usually happens to be its closest neighbor in the shared vector space; hence, the semantic closeness is preserved to an extent.", "In contrast, the system of (Singhal and Bhattacharyya, 2016) addressed the data sparsity by translating every word of the source language into the target language, which may introduce loss of sentiment in the target language as a side-effect (Mohammad et al., 2016).", "6. Hand-crafted Features: The proposed system employs a much richer set of lexicon based features than that of (Singhal and Bhattacharyya, 2016).", "Also, we do not augment polar words in the training instances as done in (Singhal and Bhattacharyya, 2016); rather, we use the sentiment scores of these lexicons as features themselves in the training and testing instances.", "Whereas, the authors in (Akhtar et al., 2016b) obtained an optimized feature vector through the application of a multi-objective genetic algorithm.", "We propose to use a Long Short Term Memory (LSTM) architecture on top of bilingual word embeddings for the prediction.", "LSTM is a special kind of recurrent neural network (RNN) which efficiently captures long term dependencies.", "Bidirectional LSTM is an extended version of LSTM which takes both forward and backward sequences into account.", "Our model consists of two bidirectional LSTM layers followed by two fully-connected layers and an output layer.", "We employ bilingual word embeddings (Luong et al., 2015) trained on a parallel English-Hindi (and English-French) corpus.", "We generate a parallel corpus for the Amazon product review datasets (http://snap.stanford.edu/data/other.html), consisting of approx. 7.2M sentences, using an in-house product review domain based English-to-Hindi (English-to-French) Statistical Machine Translation (SMT) system (English-to-Hindi: 39.5 BLEU score; English-to-French: 37.9 BLEU score).", "We employ the widely used and standard machine translation tool Moses (Koehn et al., 2007) to train the phrase-based SMT system.", "The alignment information is obtained from the mosesdecoder (Koehn et al., 2007) during translation of the reviews.", "We train Skip-Gram word2vec (Mikolov et al., 2013) models which share a common vector space.", "If a word $W_S$ is aligned to a word $W_T$, then the context information $C_T$ of the target word $W_T$ is also used as context of the source word $W_S$, along with its own context $C_S$, for computing word vectors.", "By utilizing the context information of both the source and target sides, the resultant word embeddings of $W_S$ and $W_T$ are semantically closer to each other in the vector space.", "The bilingual skip-gram model creates two separate word embeddings, i.e., one each for the source (Hindi) and the target (English) language.",
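A simplified sketch of how such alignment-shared contexts can be materialized as skip-gram training pairs is shown below; the actual bilingual skip-gram model of Luong et al. (2015) differs in its details, and the function and its inputs are illustrative assumptions.

```python
def bilingual_skipgram_pairs(src_sent, tgt_sent, alignments, window=2):
    """Emit (word, context) pairs where each aligned word also predicts
    the context of its translation on the other side.
    alignments: list of (src_idx, tgt_idx) word-alignment links."""
    def context(sent, i):
        return sent[max(0, i - window):i] + sent[i + 1:i + 1 + window]

    pairs = []
    for i, j in alignments:
        shared_ctx = context(src_sent, i) + context(tgt_sent, j)
        for c in shared_ctx:
            pairs.append((src_sent[i], c))  # W_S predicts C_S and C_T
            pairs.append((tgt_sent[j], c))  # W_T predicts C_T and C_S
    return pairs
```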
"First, we extract word representations for all the words in a sentence from the Hindi bilingual word embeddings.", "Subsequently, in the second step, we translate all the OOV words (words whose representations are missing in the Hindi bilingual embeddings) into English and then perform another lookup in the English embeddings.", "For instance, if the embedding of the word 'achcha' is unknown, we translate it into English as 'good', and use its word embedding in place of the source word 'achcha'.", "Thus the missing representation of an OOV word is replaced by its translated target-side representation.", "Since both the English and Hindi word embeddings share a common vector space, this replacement strategy proves to be an effective technique.", "In our case, we observe a reduction of approximately 65% and 37% OOV words, respectively, for Hindi and French by our proposed replacement strategy.", "Consequently, an increase in accuracy value is observed during evaluation.", "Hindi is a morphologically rich language.", "Many inflected words in Hindi share a common translated word in English.", "For example, based on the gender of the subject, Hindi has two forms for the word 'goes': 'jAtA hai' (male) or 'jAtI hai' (female).", "Therefore, if the representation of one word ('jAtA hai') is missing in the Hindi embedding, we can still find its representation in English through its translation, i.e., 'goes'.", "Bilingual embedding also helps in addressing spelling variation cases.", "For example, two differently spelled words in Hindi such as 'kambineshana' and 'kaMbIneshana' translate to the single English word 'combination'.", "We repeat the above process for the English-French language pair to obtain two (English and French) word2vec models.", "We also released the computed bilingual word embeddings for the research community (available at http://www.iitp.ac.in/~ai-nlp-ml/resources.html).",
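The replacement strategy above reduces to a two-step dictionary lookup; a minimal sketch follows (the `translate` function, an MT system or bilingual dictionary, is an assumption):

```python
def lookup(word, hi_emb, en_emb, translate):
    """Return an embedding for `word` from the shared bilingual space,
    falling back to the English side when the Hindi side has no entry."""
    if word in hi_emb:
        return hi_emb[word]
    translation = translate(word)    # e.g., 'achcha' -> 'good'
    return en_emb.get(translation)   # None if still OOV (use zero/random vector)
```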
, "We evaluate our proposed approach in two setups, i.e. the multi-lingual and cross-lingual setups.", "In the multi-lingual setup, the proposed model is trained and evaluated on datasets of the same language, i.e. Hindi or French.", "We pre-process our datasets to reduce the effect of data sparsity by utilizing the resource-rich language, i.e. English.", "3 Bilingual word embeddings available at http://www.iitp.ac.in/~ai-nlp-ml/resources.html", "In contrast, the cross-lingual setup employs the dataset of the resource-rich language (i.e. English) for training, and during evaluation the Hindi or French dataset is used.", "Similar to the multi-lingual setup, we pre-process the test dataset to reduce the effect of data sparsity in the cross-lingual setup as well.", "An overall schema of the proposed methodology is depicted in Figure 1 for both the multi-lingual and cross-lingual setups.", "Figures 1a and 1b show the training architectures for the cross-lingual and multi-lingual scenarios, respectively.", "Since our test datasets for both variants are in Hindi (or French), the testing scenario for the cross-lingual and multi-lingual setups is also the same, as represented in Figure 1c.", "(c) Testing scenario in the cross-lingual and multi-lingual setups.", "For the successful marriage of word embeddings and extracted features, we try three different architectures, as depicted in Figure 2.", "In the first architecture (A1, Figure 2a), we concatenate the extracted features of each word of an instance with the corresponding word representations and pass them through an LSTM network followed by dense and output layers.", "In the second architecture (A2, Figure 2b), we do not combine the features and word representations together.", "Rather, we learn sentence embeddings through an LSTM network and then concatenate them with the extracted features before feeding them to the dense layer.", "Finally, in the third architecture (A3, Figure 2c), we train separate LSTMs for the extracted features and the word embeddings.", "Subsequently, we merge their representations at the dense layer.", "The choice of separate LSTMs for the hand-crafted features in architecture A3 is driven by the fact that the dimension of a word embedding is usually very high compared to that of its corresponding hand-crafted features.", "If trained together, as in architecture A1, the low-dimensional extracted features usually get overshadowed by the high-dimensional word embeddings, thus making it non-trivial for the network to learn from the extracted features.", "Further, to exploit the sequence information of words in a sentence, we pass the hand-crafted features of each word through a separate LSTM layer.", "For example, in the following review sentence, there are two positive words ('liking' and 'recommending') and only one negative word ('far').", "In a model that takes into account only the simple polar word score, the sentence would have high relevance towards the positive sentiment.", "However, the sequence information of the phrase 'far from liking and recommending' dictates the negative sentiment of the sentence.", "In contrast to A3, architecture A2 does not rely on the sequence information of the extracted features and lets the network learn on its own.", "We use 300-dimensional word embeddings for the experiments.", "Each LSTM layer contains 100 neurons, while the two dense layers contain 100 and 50 neurons, respectively."
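A minimal Keras sketch of architecture A3 as just described: one branch for the 300-dimensional word embeddings and a separate branch for the per-word hand-crafted features, each passed through its own (bidirectional) LSTM and merged at the dense layers. The maximum sentence length, the number of hand-crafted features per word, and the training-related choices (activations, dropout, optimizer, which the next section describes) are assumptions of this sketch rather than verbatim details of the original implementation.

```python
from tensorflow import keras
from tensorflow.keras import layers

MAX_LEN = 50   # assumed maximum sentence length
EMB_DIM = 300  # 300-dimensional bilingual word embeddings
FEAT_DIM = 6   # assumed number of hand-crafted features per word

# Branch 1: word embeddings through an LSTM.
word_in = keras.Input(shape=(MAX_LEN, EMB_DIM), name="word_embeddings")
word_repr = layers.Bidirectional(layers.LSTM(100))(word_in)

# Branch 2: hand-crafted features through a separate LSTM, so the
# low-dimensional features are not overshadowed by the embeddings.
feat_in = keras.Input(shape=(MAX_LEN, FEAT_DIM), name="handcrafted_features")
feat_repr = layers.Bidirectional(layers.LSTM(100))(feat_in)

# Merge the two representations at the dense layers.
merged = layers.concatenate([word_repr, feat_repr])
x = layers.Dense(100, activation="tanh")(merged)
x = layers.Dropout(0.45)(x)
x = layers.Dense(50, activation="tanh")(x)
out = layers.Dense(4, activation="softmax")(x)  # 4 classes for English-Hindi

model = keras.Model([word_in, feat_in], out)
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```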
, "In this section, we describe the datasets, the experimental setup and the results, and provide the necessary analysis.", "We use the Hindi ABSA dataset released by (Akhtar et al., 2016a) for our evaluation purposes.", "A total of 5,417 review sentences are present, along with 4,509 aspect terms.", "Each aspect term belongs to one of four sentiment classes: 'positive', 'negative', 'neutral' and 'conflict'.", "We split the dataset into 70%, 10% and 20% as training, development and test sets, respectively, for the experiment.", "For the French case, we use the restaurant dataset of the SemEval-2016 shared task on ABSA (Pontiki et al., 2016).", "It consists of 2,429 review sentences and 3,482 aspect terms.", "In the cross-lingual setup, we utilize the English dataset of the SemEval-2014 shared task on ABSA (Pontiki et al., 2014) for training and the Hindi ABSA dataset for testing.", "The English dataset comprises product reviews in two domains, i.e. restaurant and laptop.", "However, we only employ the laptop domain dataset, as most of the reviews in the Hindi ABSA dataset belong to the electronics domain.", "For training in the cross-lingual setup, we combine the training and gold test datasets together.", "In total, there are 3,845 review sentences comprising 3,012 aspect terms.", "For the English-French case, we use the English restaurant dataset of the SemEval-2016 shared task on ABSA (Pontiki et al., 2016) for training and the French ABSA dataset (Pontiki et al., 2016) for evaluation.", "The SemEval-2016 English restaurant dataset contains 3,365 aspect terms across 2,676 review sentences.", "We use the Python-based neural network library Keras 4 for implementation.", "4 http://keras.io", "For English-Hindi, all four classes (namely positive, negative, neutral and conflict) were considered, whereas for English-French three classes (all except the conflict class) were used for classification.", "Since there is no false class, we use the accuracy value as the metric to measure the performance of the system.", "Also, we utilize the accuracy value for direct comparison with the existing state-of-the-art systems.", "The LSTM network is trained with the early stopping criterion on (i.e. preserving the best learned parameters at each epoch).", "We set the number of epochs and the patience value to 100 & 20, respectively.", "In other words, we run the experiments for a maximum of 100 epochs, and if the validation loss does not reduce for 20 consecutive epochs, training stops and the best epoch attained so far is reported.", "As the activation function, we utilize 'tanh' at the intermediate layers, while for classification we use 'softmax' at the output layer.", "To prevent the network from over-fitting, we incorporate an efficient regularization technique called 'Dropout' (Srivastava et al., 2014).", "At each layer of training, dropout skips a few hidden neurons randomly.", "We fix the dropout rate to 45% during training, while for optimization we use the 'adam' optimizer (Kingma and Ba, 2014)."
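The training regime just described maps directly onto a Keras callback. The snippet below is a sketch reusing the `model` defined in the architecture sketch above; the input and label arrays are hypothetical placeholders.

```python
from tensorflow.keras.callbacks import EarlyStopping

# At most 100 epochs; stop if the validation loss does not improve for
# 20 consecutive epochs, keeping the best parameters seen so far.
early_stop = EarlyStopping(monitor="val_loss", patience=20,
                           restore_best_weights=True)

model.fit(
    [train_word_embeddings, train_features],  # hypothetical input arrays
    train_labels,
    validation_data=([dev_word_embeddings, dev_features], dev_labels),
    epochs=100,
    callbacks=[early_stop],
)
```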
, "Experimental results for aspect sentiment classification in the multi-lingual and cross-lingual setups are reported in Figure 3 for both language pairs.", "In total, we evaluate our model for four cases, i.e. a. En-Hi multi-lingual, b. En-Hi cross-lingual, c. En-Fr multi-lingual and d. En-Fr cross-lingual scenarios.", "The non-root four-boxed nodes report the performance of the respective methods for the four cases.", "The left subtree represents the LSTM-based baseline system that utilizes monolingual word embeddings (WE) (i.e. word2vec models trained only on the 7.2M Hindi and French sentences, respectively), whereas the right subtree represents the usage of bilingual word embeddings in all the cases.", "Comparison between monolingual WE and bilingual WE shows competing results.", "Monolingual WE (a_M: 63.64%) in the multi-lingual scenario performs better than bilingual WE (a_B: 62.51%) for the English-Hindi case, while bilingual WE (c_B: 70.89%) reports better performance compared with monolingual WE (c_M: 66.29%) for the English-French case.", "We observe a performance loss of approx. 1 point with bilingual embeddings for the English-Hindi case.", "However, after addressing the problem of data sparsity (i.e. when the OOV words are translated and the corresponding English word embeddings are used), the same LSTM network reports an improved accuracy value of 64.83% (a_BO) for the English-Hindi case, thus observing a performance increase of more than 2 points.", "Figure 3: Aspect classification in multi-lingual and cross-lingual setups for the English-Hindi and English-French scenarios (a: multi-lingual En-Hi, b: cross-lingual En-Hi, c: multi-lingual En-Fr, d: cross-lingual En-Fr); the left subtree represents the various baselines and their corresponding results. Baseline (monolingual WE): a_M: 63.64, b_M: 16.29, c_M: 66.29, d_M: 50.69. Monolingual WE + features (En): A1: a_MF1: 69.74, b_MF1: 50.12, c_MF1: 70.63, d_MF1: 55.23; A2: a_MF2: 71.25, b_MF2: 52.91, c_MF2: 72.28, d_MF2: 55.84; A3: a_MF3: 71.98, b_MF3: 56.49, c_MF3: 69.91, d_MF3: 61.14. Bilingual WE: a_B: 62.51, b_B: 48.94, c_B: 70.89, d_B: 63.64. Bilingual WE + OOV embeddings: a_BO: 64.83, b_BO: 50.79, c_BO: 72.42, d_BO: 65.32. Bilingual WE + OOV embeddings + features (En): A1: a_BOF1: 71.32, b_BOF1: 56.68, c_BOF1: 72.42, d_BOF1: 68.24; A2: a_BOF2: 73.50, b_BOF2: 56.90, c_BOF2: 72.14, d_BOF2: 68.66; A3: a_BOF3: 76.29, b_BOF3: 60.39, c_BOF3: 71.72, d_BOF3: 69.49.", "For the English-French case, we also observe an improvement with embeddings of OOVs.", "This suggests that the richness of the target language (English) word embeddings helps the system efficiently solve the problems encountered in the resource-poor source language.", "Since resources are limited for the resource-poor languages, we try to leverage the high-quality lexicon features of English in our system.", "Consequently, we introduce the extracted features of Section 3.2 to the network.", "For the English-Hindi multi-lingual scenario, the performance increments from A1 to A2 to A3 indicate that the resource-richness of the English language plays a crucial role in classification.", "When we incorporate the English-side lexicon features for the English-French multi-lingual scenario, however, we observe no performance improvement, unlike in the other cases.", "For this case, our system reports an accuracy of 72.42% both with (c_BOF1) and without (c_BO) the use of the extra features.", "Results of the cross-lingual setup for the English-Hindi case, where we train the network utilizing the English dataset and evaluate the model on the Hindi dataset, are reported in row 2 of the four-boxed nodes in Figure 3."
, "The baseline model for the cross-lingual setups (left subtree of Figure 3) employs monolingual word embeddings of English and Hindi for training and testing, respectively.", "Since the vector spaces of two different languages are completely unrelated, it is no surprise that the baseline system achieves merely 16.29% (b_M) accuracy.", "Using only the bilingual word embeddings, the system achieves 48.94% (b_B) accuracy.", "By increasing the coverage of the input word embeddings using machine translation, the proposed system obtains an increased accuracy of 50.79% (b_BO).", "This improvement in accuracy, again, justifies the use of translated words for obtaining the word embeddings.", "Further, with the inclusion of target-side lexicon-based features, our proposed approach reports a significant performance improvement of approximately 6-10 points for all three architectures (b_BOF1, b_BOF2 & b_BOF3).", "Results of the English-French cross-lingual scenario are reported in row 4 of the four-boxed nodes in Figure 3.", "We observe a similar phenomenon in the cross-lingual setup with the English-French case as well.", "The baseline system, where we utilize separate monolingual WE for training and testing in English and French respectively, reports an accuracy of 50.69% (d_M), while employing bilingual embeddings the system obtains a sharp jump of approx. 13 points with an accuracy value of 63.64% (d_B).", "Further, with the inclusion of OOV words and lexicon features, the performance of the system improves to 65.32% (d_BO) and 69.49% (d_BOF3), respectively.", "We observe four phenomena from these results: i) the use of lexicon-based features is the driving force in predicting the sentiment; ii) qualitative lexicons of the resource-rich language can assist in solving the problems of resource-poor languages; iii) embeddings of the OOV words improve the performance of the system with or without the assistance of extra features; and iv) the use of separate LSTMs (one for word embeddings and the other for features) helps the network efficiently extract relevant features for prediction without interfering with each other (except for the multi-lingual English-French scenario).", "Comparative results reported in Figure 4 show that our proposed system clearly outperforms the baseline model in both setups and for both language pairs.", "In the multi-lingual setup, we compare the proposed model against three state-of-the-art systems (Akhtar et al., 2016a; Singhal and Bhattacharyya, 2016; Akhtar et al., 2016b) for the English-Hindi case.", "An accuracy of 65.96% was reported by the system of (Akhtar et al., 2016b), while the system of (Singhal and Bhattacharyya, 2016) obtained an accuracy of 68.31%.", "However, our proposed system reports an accuracy of 76.29%, which is approx. 10% & 8% higher compared to the systems of (Akhtar et al., 2016b) and (Singhal and Bhattacharyya, 2016), respectively.", "In the English-French case, our proposed system reports an improvement of approx. 6 points over the baseline.", "For the cross-lingual setup in the English-Hindi case, we compare our proposed method with the state-of-the-art systems proposed in (Barnes et al., 2016; Singhal and Bhattacharyya, 2016).", "On the same dataset, their systems achieved accuracies of 39.47% & 56.22%, as compared to 60.39% of our proposed system.", "Figure 4: Comparison with the baseline and state-of-the-art methods (accuracy values). Multi-lingual En-Hi: baseline 63.64, Akhtar et al. (2016a) 54.05, Akhtar et al. (2016b) 65.96, Singhal and Bhattacharyya (2016) 68.31, proposed system 76.29. Cross-lingual En-Hi: baseline 16.29, Barnes et al. (2016) 39.47, Singhal and Bhattacharyya (2016) 56.22, proposed system 60.39. Multi-lingual En-Fr: baseline 66.29, proposed system 72.42. Cross-lingual En-Fr: baseline 50.69, Barnes et al. (2016) 55.64, proposed system 69.49."
, "In the English-French case, the system proposed in (Barnes et al., 2016) obtains an accuracy value of 55.64%, against 69.49% of our proposed architecture.", "Statistical significance tests (t-test) confirm that the performance increments of the proposed model are significant w.r.t. the state-of-the-art methods, with p-value=0.03 and p-value=0.01 in the multi-lingual and cross-lingual setups, respectively.", "The prime motivation of our current work is to minimize the effect of data sparsity while learning through a deep neural network architecture.", "For this, we propose to use bilingual embeddings computed from a parallel corpus, which is created utilizing an MT system.", "Similarly, the absence of a large aligned corpus in a resource-poor language can be addressed through the application of an MT system.", "Since the MT system is not fully accurate, some errors are inevitably introduced during translation.", "This, in turn, affects the bilingual word embeddings.", "Another limitation of our work is that 7.2M sentences is not a big number in terms of word embedding computation.", "However, the underlying method performs considerably better compared to the state-of-the-art systems, even with all these constraints.", "To show the effectiveness of bilingual embeddings in minimizing data sparsity, we also experiment with monolingual Hindi embeddings computed on 53M sentences.", "Following the proposed approach (except for computing embeddings for OOV words), we obtain an accuracy of 77.74% in the aspect classification task.", "Table 1 shows the comparison between the monolingual and multi-lingual approaches for classification.", "Despite all the limitations discussed above (i.e. SMT errors & corpus size), the proposed method with bilingual embeddings (76.29%) performs considerably at par with the monolingual embeddings created from a very large corpus of 53M sentences (77.74%)."
, "However, the monolingual WE computed using the same amount of corpus (i.e. 7.2M sentences) produces an accuracy of only 63.64%.", "Further, with the help of lexicon-based features, the accuracy of this system increases to 70.86% (compared to 76.29% for our proposed model).", "It is also to be observed that the performance of the system is improved by just including the representations of the OOV words.", "The performance of the proposed system would have been even better had we not had the above-mentioned limitations.", "We perform error analysis on the obtained results.", "Quantitatively, 'neutral' is the most problematic class in both the multi-lingual and cross-lingual setups.", "It is mainly confused with the 'positive' class.", "Approximately 20% & 40% of 'neutral' instances are tagged as 'positive' in the multi-lingual and cross-lingual setups, respectively.", "Our system does not predict the 'conflict' class at all, possibly due to the insufficient number of instances for training.", "Qualitatively, the following are a few cases where our system performs below par.", "Lack of polar information inside the context: Our system finds it challenging to classify the sentiment of aspect terms whose polar information lies outside the context window.", "In the following sentence, the aspect term is 'weight' ('vaZana') and the actual sentiment towards it is positive.", "The polar information 'about half as compared' and 'lighter' is far from the aspect term and hence is not captured within the context window.", "Transliteration: isakA vaZana nae AIpaiDa kI tulanA meM lagabhaga AdhA hai aura yaha anya upalabdha 7-iMcha TebaleTsa se bhI halkA hai.", "Translation: Its weight is about half as compared to the new iPad and it is lighter than other available 7-inch tablets.", "Implicit sentiment: The presence of implicit sentiment is not correctly classified by the proposed system.", "The following review contains 'built' as an aspect term, and its negative sentiment is derived from the phrase 'plastic feel'.", "Transliteration: isa TebaleTa kI banAvaTa kAphI plAsTika phIla detA hai.", "Translation: The build of this tablet gives a fairly plastic feel.", "In this paper, we present a deep learning based LSTM architecture built on top of bilingual word embeddings for aspect-level sentiment classification.", "Bilingual word embeddings try to bridge the language barrier between resource-rich and resource-poor languages in a shared vector space.", "We propose to reduce the effect of data sparsity in resource-poor language word embeddings by projecting OOV words onto the target side and utilizing the target-side word embeddings.", "In addition, we also exploit various resources of English to assist the proposed model.", "We show the effectiveness of the proposed method in two different setups, i.e. multi-lingual and cross-lingual.", "Experimental results show that the proposed system outperforms various state-of-the-art systems in both setups.", "In the future, we would like to explore the application of the proposed method to another aspect-level sentiment analysis task known as aspect term extraction or opinion target extraction.", "Asif Ekbal acknowledges the Young Faculty Research Fellowship (YFRF), supported by the Visvesvaraya PhD scheme for Electronics and IT, Ministry of Electronics and Information Technology (MeitY), Government of India, being implemented by Digital India Corporation (formerly Media Lab Asia)." ]
[ "abstain", "abstain", "abstain", "objective", "method", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "other", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "objective", "objective", "abstain", "objective", "other" ]
[ "We study the task of semantic parse correction with natural language feedback.", "Given a natural language utterance, most semantic parsing systems pose the problem as one-shot translation where the utterance is mapped to a corresponding logical form.", "In this paper, we investigate a more interactive scenario where humans can further interact with the system by providing free-form natural language feedback to correct the system when it generates an inaccurate interpretation of an initial utterance.", "We focus on natural language to SQL systems and construct, SPLASH , a dataset of utterances, incorrect SQL interpretations and the corresponding natural language feedback.", "We compare various reference models for the correction task and show that incorporating such a rich form of feedback can significantly improve the overall semantic parsing accuracy while retaining the flexibility of natural language interaction.", "While we estimated human correction accuracy is 81.5%, our best model achieves only 25.1%, which leaves a large gap for improvement in future research.", "SPLASH is publicly available at https:// aka.ms/Splash_dataset .", "Natural language interfaces (NLIs) have been the holy grail\" of natural language understating and human-computer interaction for decades (Woods et al., 1972; Codd, 1974; Hendrix et al., 1978; Zettlemoyer and Collins, 2005). However, early attempts in building NLIs to databases did not achieve the expected success due to limitations in language understanding capability, among other reasons (Androutsopoulos et al., 1995; Jones and Galliers, 1995). NLIs have been receiving increasing attention recently motivated by interest in developing virtual assistants, dialogue systems, and Most work was done while the first author was an intern", "semantic parsing systems. NLIs to databases were at the forefront of this wave with several studies focusing on parsing natural language utterances into an executable SQL queries (Text-to-SQL parsing).", "Most of the work addressing the Text-to-SQL problem (and semantic parsing in general) frames it as a one-shot mapping problem. We establish (Sec-tion 4.1) that the majority of parsing mistakes that recent neural text-to-SQL parsers make are minor. Hence, it is often feasible for humans to detect and suggest fixes for such mistakes. Su et al. (2018) make a similar observation about parsing text to API calls (Su et al., 2017) and show that parsing mistakes could be easily corrected if humans are afforded a means of providing precise feedback. Likewise, an input utterance might be underor mis-specified, thus extra interactions may be required to generate the desired output similarly to query refinements in information retrieval systems (Dang and Croft, 2010).", "Humans have the ability to learn new concepts or correct others based on natural language description or feedback. Similarly, previous work has explored how machines can learn from language in tasks such as playing games (Branavan et al., 2012), robot navigation (Karamcheti et al., 2017), concept learning (e.g., shape, size, etc.) classifiers (Srivas-tava et al., 2018), etc. Figure 1 shows an example of a text-to-SQL system that offers humans the af-fordance to provide feedback in natural language when the system misinterprets an input utterance. 
To enable this type of interactions, the system needs to: (1) provide an explanation of the underlying generated SQL, (2) provide a means for humans to provide feedback and (3) use the feedback, along with the original question, to come up with a more", "accurate interpretation. In this work, we study the task of SQL parse correction with natural language feedback to enable text-to-SQL systems to seek and leverage human feedback to further improve the overall performance and user experience. Towards that goal, we make the following contributions: (1) we define the task of SQL parse correction with natural language feedback; (2) We create a framework for explaining SQL parse in natural language to allow text-to-SQL users (who may have a good idea of what kind of information resides on their databases but are not proficient in SQL Hendrix et al. (1978)) to provide feedback to correct inaccurate SQL parses; (3) we construct SPLASH S emantic P arsing with L anguage As sistance from H umansa new dataset of natural language questions that a recent neural text-to-SQL parser failed to generate correct interpretation for together with corresponding human-provided natural language feedback describing how the interpretation should be corrected; and (4) we establish several baseline models for the correction task and show that the task is challenging for state-of-the-art semantic parsing models.", "We formally define the task of SQL parse correction with natural language feedback. Given a question q , a database schema s , a mispredicted parse p , a natural language feedback f on p , the task is to generate a corrected parse p (Figure 2). Following Yu et al. (2018), s is defined as the set of tables, columns in each table and the primary and foreign keys of each table.", "Models are trained with tuples q , s , p , f and gold parse p .", "At evaluation time, a model takes as input tuples in the form q , s , p , f and hypothesizes a corrected parse p .", "We compare p and the gold parse p in terms of their exact set match (Yu et al., 2018) and report the average matching accuracy across the testing examples as the model's correction accuracy.", "In this section, we describe our approach for collecting training data for the SQL parse correction task.", "We first generate pairs of natural language utterances and the corresponding erroneous SQL parses (Section 3.1).", "We then employ crowd workers (with no SQL knowledge) to provide feedback, in natural language, to correct the erroneous SQL (Section 3.3).", "To enable such workers to provide feedback, we show them an explanation of the generated SQL in natural language (Section 3.2).", "Finally, to improve the diversity of the natural language feedback, we ask a different set of annotators to paraphrase each feedback.", "We describe the process in detail in the remainder of this section.", "We use the Spider dataset (Yu et al., 2018) as our source of questions.", "Spider has several advantages over other datasets.", "Compared to ATIS (Price, Step 1: Find the number of rows of each value of id in browser table.", "Step 2: Find id, name of browser table with largest value in the results of step", "1. SQL: SELECT id, name from browser GROUPBY id ORDER BY COUNT(*) DESCSELECT _cols_ from _table_ Group BY_col_ ORDER BY _aggr_ _col_ Template: Explanation: Figure 3: An example of a SQL query, the corresponding template and the generated explanation.", "1990) and GeoQuery (Zelle and Mooney, 1996), Spider is much larger in scale (200 databases vs. 
one database, and thousands vs. hundreds of SQL parses).", "Compared to WikiSQL (Zhong et al., 2017), Spider questions require inducing parses of complex structures (requiring multiple tables, joining, nesting, etc.).", "Spider also adopts a cross-domain evaluation setup in which databases used at testing time are never seen at training time.", "To generate erroneous SQL interpretations of questions in Spider, we opted for using the output of a text-to-SQL parser to ensure that our dataset reflect the actual distribution of errors that contemporary parsers make.", "This is a more realistic setup than artificially infusing errors in the gold SQL.", "We use the Seq2Struct parser (Shin, 2019) 1 to generate erroneous SQL interpretations.", "Seq2Struct combines grammar-based decoder of Yin and Neubig (2017) with a self-attention-based schema encoding and it reaches a parsing accuracy of 42.94% on the development set of Spider.", "2 Note that we make no explicit dependencies on the model used for this step and hence other models could be used as well (Section 6.3).", "We train Seq2Struct on 80% of Spider's training set and apply it to the remaining 20%, keeping 1 https://github.com/rshin/seq2struct 2 When we started the dataset construction at the beginning of June 2019, we were able to achieve a parsing accuracy of 41.30% on Spider's development set which was the state-of-the-art accuracy at the time.", "It is worth noting that unlike current state-of-the-art models, Seq2Struct does not use pre-trained language models.", "It was further developed into a new model called RAT-SQL (Wang et al., 2020) which achieved a new state-of-the-art accuracy as of April 2020.", "only cases where the generated parses do not match the gold parse (we use the exact set match of Yu et al. (2018)).", "Following the by-database splitting scheme of Spider, we repeat the 80-20 training and evaluation process for three times with different examples in the evaluation set at each run.", "This results in 3,183 pairs of questions and an erroneous SQL interpretation.", "To further increase the size of the dataset, we also ignore the top prediction in the decoder beam 3 and use the following predictions.", "We only include cases where the difference in probability between the top and second to top SQLs is below a threshold of 0.2.", "The intuition here is that those are predictions that the model was about to make and hence represent errors that the model could have made.", "That adds 1,192 pairs to our dataset.", "In one of the earliest work on natural language interfaces to databases, Hendrix et al. 
, "In one of the earliest works on natural language interfaces to databases, Hendrix et al. (1978) note that many business executives, government officials and other decision makers have a good idea of what kind of information resides in their databases.", "Yet, to obtain an answer to a particular question, they cannot use the database themselves and instead need to employ the help of someone who can.", "As such, in order to support an interactive text-to-SQL system, we need to be able to explain the incorrect generated SQL in a way that humans who are not proficient in SQL can understand.", "We take a template-based approach to explaining SQL queries in natural language.", "We define a template as follows: given a SQL query, we replace literals, table and column names, and aggregation and comparison operations with generic placeholders.", "We also assume that all joins are inner joins (true for all Spider queries) whose join conditions are based on primary and foreign key equivalence (true for more than 96% of Spider queries).", "A query that consists of two subqueries combined with an intersection, union or except operation is split into two templates that are processed independently, and we add an intersection/union/except part to the explanation at the end.", "We apply the same process to the limit operation: we generate an explanation of the query without the limit, then add a limit-related step at the end.", "queries.", "For each SQL template, we wrote down a corresponding explanation template in the form of steps (e.g., join step, aggregation step, selection step) that we populate for each query.", "Figure 3 shows an example of a SQL query, its corresponding template and the generated explanation.", "We also implemented a set of rules for compressing the steps based on SQL semantics.", "For instance, an ordering step followed by a limit 1 is replaced with 'find largest/smallest', where largest or smallest is decided according to the ordering direction.", "We use an internal crowd-sourcing platform similar to Amazon Mechanical Turk to recruit annotators.", "Annotators are only selected based on their performance on other crowd-sourcing tasks and their command of English.", "Before working on the task, annotators go through a brief set of guidelines explaining the task. 4", "4 We provide the data collection instructions and a screenshot of the data collection interface in the appendix.", "We collect the dataset in batches of around 500 examples each.", "After each batch is completed, we manually review a sample of the examples submitted by each annotator, exclude those who do not provide accurate inputs from the annotator pool, and redo all their annotations.", "Annotators are shown the original question and the explanation of the generated SQL and are asked to: (1) decide whether the generated SQL satisfies the information need in the question and (2) if not, provide feedback in natural language.", "The first step is necessary since it helps identify false negative parses (e.g., another correct parse that does not match the gold parse provided in Spider).", "We also use the annotations of that step to assess the extent to which our interface enables target users to interact with the underlying system.", "As per our assumption that target users are familiar with the kind of information that is in the database (Hendrix et al., 1978), we show the annotators an overview of the tables in the database corresponding to the question, which includes the table and column names together with examples (the first 2 rows) of the content."
, "We limit the maximum feedback length to 15 tokens to encourage annotators to provide correcting feedback based on the initial parse (that focuses on the edit to be made) rather than describing how the question should be answered.", "A total of 10 annotators participated in this task.", "They were compensated based on an hourly rate (as opposed to per annotation) to encourage them to optimize for quality and not quantity.", "They took an average of 6 minutes per annotation.", "To improve the diversity of the feedback we collect, we ask a separate set of annotators to generate a paraphrase of each feedback utterance.", "We follow the same annotator quality control measures as in the feedback collection task.", "An example instance from the dataset is shown in Figure 2.", "Overall, we ask the annotators to annotate 5,409 examples (427 of which had the correct SQL parse, while the remaining had an incorrect SQL parse).", "Examples with a correct parse are included to test whether the annotators are able to identify correct SQL parses given their explanation and the question.", "Annotators are able to identify the correct parses as correct 96.4% of the time.", "For the examples whose predicted SQL did not match the gold SQL, annotators still marked 279 of them as correct.", "Upon manual examination, we found that the annotators were indeed correct in doing so 95.5% of the time.", "Even though the predicted and gold SQLs did not match exactly, they were equivalent (e.g., 'price between 10 and 20' vs. 'price >= 10 and price <= 20').", "After paraphrasing, we ended up with 9,314 question-feedback pairs, 8,352 of which correspond to questions in the Spider training split and 962 to questions in the Spider development split.", "We use all the data from the Spider development split as our test data.", "We hold out 10% of the remaining data (split by database) to use as our development set and use the rest as the training set.", "Table 1 provides a summary of the final dataset.", "We conduct a more thorough analysis of SPLASH in this section.", "We study the characteristics of the mistakes made by the parser as well as the characteristics of the natural language feedback.", "We start by characterizing the nature of the errors usually made by the models in parsing the original utterance to SQL.", "To understand the relation between the gold and the predicted SQL, we measure the edit distance between them for all cases in which the model made a mistake in the SQL prediction.", "We measure the edit distance as the number of edit segments (delete, insert, replace) between both parses.", "We find the minimal sequence of token-level edits using the Levenshtein distance algorithm.", "Then, we combine edits of the same type (delete, insert, replace) applied to consecutive positions in the predicted parse into one segment.", "Figure 4 shows a frequency histogram of the different values of edit distance.", "We observe that most inaccurate predictions lie within a short distance from the correct SQL (78%+ within a distance of 3 or less).", "In addition to the number of edits, we also characterize the types of edits needed to convert the predicted SQL to the gold one.", "Our edit distance calculations support three operations: replace, insert and delete.", "Those correspond to 58%, 31% and 11% of the edit operations, respectively.", "Most of the edits are rather simple and require replacing, inserting or deleting a single token (68.1% of the edits).", "The vast majority of those correspond to editing a schema item (table or column name): 89.2%; a SQL keyword (e.g., order direction, aggregation, count, distinct, etc.): 7.4%; an operator (greater than, less than, etc.): 2.2%; or a number (e.g., for a limit operator): 1.2%."
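A minimal sketch of the edit-segment computation described above: token-level Levenshtein operations are recovered by backtracking through the standard dynamic-programming table, and consecutive operations of the same type are then merged into segments.

```python
def edit_ops(pred, gold):
    """Token-level Levenshtein operations turning `pred` into `gold`:
    a list of (op, position-in-pred) with op in {replace, insert, delete}."""
    m, n = len(pred), len(gold)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if pred[i - 1] == gold[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # delete
                          d[i][j - 1] + 1,         # insert
                          d[i - 1][j - 1] + cost)  # keep / replace
    ops, i, j = [], m, n
    while i > 0 or j > 0:
        if i > 0 and j > 0 and d[i][j] == d[i - 1][j - 1] \
                and pred[i - 1] == gold[j - 1]:
            i, j = i - 1, j - 1                    # tokens match: no edit
        elif i > 0 and j > 0 and d[i][j] == d[i - 1][j - 1] + 1:
            ops.append(("replace", i - 1)); i, j = i - 1, j - 1
        elif i > 0 and d[i][j] == d[i - 1][j] + 1:
            ops.append(("delete", i - 1)); i -= 1
        else:
            ops.append(("insert", i)); j -= 1
    ops.reverse()
    return ops

def edit_segments(pred, gold):
    """Merge consecutive same-type operations into segments; the reported
    edit distance is the number of such segments."""
    segments, prev = 0, None
    for op, pos in edit_ops(pred, gold):
        if prev is None or op != prev[0] or pos > prev[1] + 1:
            segments += 1
        prev = (op, pos)
    return segments
```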
, "The edits between the predicted and gold SQL spanned multiple SQL keywords.", "The distribution of the different SQL keywords appearing in edits and their distribution across edit types (replace, insert or delete) is shown in Figure 5.", "Note that a single edit could involve multiple keywords (e.g., multiple joins, a join and a where clause, etc.).", "Interestingly, many of the edits involve a join, highlighting that handling utterances that require a join is harder and more error-prone.", "Following join, most edits involve where clauses (making the query more or less specific), aggregation operators, counting and selecting unique values.", "The error analysis demonstrates that many of the errors made by the model are in fact not significant, and hence it is reasonable to seek human feedback to correct them.", "To better understand the different types of feedback our annotators provided, we sample 200 examples from the dataset and annotate them with the type of the feedback.", "We start by assigning the feedback to one of three categories: (1) Complete: the feedback fully describes how the predicted SQL can be corrected; (2) Partial: the feedback describes a way to correct the predicted SQL but only partially; and (3) Paraphrase: the feedback is a paraphrase of the original question.", "The sample had 81.5% Complete, 13.5% Partial and 5.0% Paraphrase feedback.", "Examples of each type of feedback are shown in Table 2.", "Upon further inspection of the partial and paraphrase feedback, we observe that they mostly occur when the distance between the predicted and gold SQL is high (major parsing errors).", "As such, annotators opt for providing partial feedback (that would at least correct some of the mistakes) or decide to rewrite the question in a different way.", "We also annotate and present the types of feedback, in terms of the changes the feedback is suggesting, in Table 3."
, "Note that the same feedback may suggest multiple changes at the same time.", "Table 2 (excerpt): Complete Feedback [81.5%]. Question: Show the types of schools that have two schools.", "Table 3: Examples of feedback annotators provided for different types. Information - Missing: 13% ('I also need the number of different services'); Information - Wrong: 36% ('Return capacity in place of height'); Information - Unnecessary: 4% ('No need to return email address'); Conditions - Missing: 10% ('ensure they are FDA approved'); Conditions - Wrong: 19% ('need to filter on open year not register year'); Conditions - Unnecessary: 7% ('return results for all majors'); Aggregation: 6% ('I wanted the smallest ones not the largest'); Order/Uniq: 5% ('only return unique values').", "The table shows that the feedback covers a broad range of types, which matches our initial analysis of error types.", "We find that a majority of the feedback references the retrieved information.", "In many such cases, the correct information has not been retrieved because the corresponding table was not used in the query.", "This typically corresponds to a missing inner one-to-one join operation and agrees with our earlier analysis on the edit distance between the gold and predicted SQL.", "The second most popular category is incorrect conditions or filters, followed by aggregation and ordering errors.", "We split the first two categories by whether the information/conditions are missing, need to be replaced or need to be removed.", "We observe that most of the time the information or condition needs to be replaced.", "This is followed by missing information that needs to be inserted, and then unnecessary information that needs to be removed.", "We heuristically identify a feedback pattern for each collected feedback.", "To identify the feedback pattern, we first locate the central predicate in the feedback sentence using a semantic role labeler (He et al., 2015).", "Since some feedback sentences can be broken into multiple sentence fragments, a single feedback may contain more than one central predicate.", "For each predicate, we identify its main arguments.", "We represent every argument by its first non-stopword token.", "To reduce the vocabulary, we heuristically identify column mentions and replace them with the token 'item'.", "We visualize the distribution of the top 60 most frequent feedback patterns in Figure 6, and label the ones shared among multiple patterns.", "As is shown, our dataset covers a diverse variety of feedback patterns centered around key operations to edit the predicted SQL.", "Figure 6: Patterns of feedback covered in our dataset."
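The pattern extraction just described might look as follows in code. This is our own sketch: `srl` is a hypothetical semantic-role-labeling wrapper (e.g., around the He et al. (2015) model) returning predicates with their argument token spans, and the stopword list and column-mention detection are simplified stand-ins.

```python
STOPWORDS = {"the", "a", "an", "of", "to", "in", "on", "for", "and", "or"}

def feedback_pattern(feedback, srl, schema_columns):
    """Reduce a feedback sentence to a pattern: for every central
    predicate, keep the predicate plus one token per argument."""
    pattern = []
    for frame in srl(feedback):  # [{'predicate': str, 'args': [[tok, ...], ...]}, ...]
        parts = [frame["predicate"]]
        for arg_tokens in frame["args"]:
            # The first non-stopword token represents the whole argument.
            head = next((t for t in arg_tokens
                         if t.lower() not in STOPWORDS), None)
            if head is None:
                continue
            # Replace column mentions with the generic token 'item'.
            parts.append("item" if head.lower() in schema_columns
                         else head.lower())
        pattern.append(" ".join(parts))
    return " | ".join(pattern)
```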
, "Our work is linked to multiple existing research lines, including semantic parsing, learning through interaction (Li et al., 2017a; Hancock et al., 2019; Li et al., 2017b, inter alia) and learning from natural language supervision (Srivastava et al., 2017; Co-Reyes et al., 2019; Srivastava et al., 2018; Hancock et al., 2018; Ling and Fidler, 2017, inter alia).", "We discuss connections to the most relevant works.", "Text-to-SQL Parsing: Natural language to SQL (natural language interfaces to databases) has been an active field of study for several decades (Woods et al., 1972; Hendrix et al., 1978; Warren and Pereira, 1982; Popescu et al., 2003; Li and Jagadish, 2014).", "This line of work has been receiving increased attention recently, driven, in part, by the development of new large-scale datasets such as WikiSQL (Zhong et al., 2017) and Spider (Yu et al., 2018).", "The majority of this work has focused on mapping a single query to the corresponding SQL, with the exception of a few datasets, e.g., SParC (Yu et al., 2019b) and CoSQL (Yu et al., 2019a), that target inducing SQL parses for sequentially related questions.", "While these datasets focus on modeling conversational dependencies between questions, SPLASH evaluates the extent to which models can interpret and apply feedback on the generated parses.", "We empirically confirm that distinction in Section 6.3.", "Learning from Feedback: Various efforts have tried to improve semantic parsers based on feedback or execution validation signals.", "For example, Clarke et al. (2010) and Artzi and Zettlemoyer (2013) show that semantic parsers can be improved by learning from binary correct/incorrect feedback signals or validation functions.", "Iyer et al. (2017) improve text-to-SQL parsing by counting on humans to assess the correctness of the execution results generated by the inferred parses.", "In their system, parses with correct results are used to augment the training set, together with crowdsourced gold parses of the parses that are marked as incorrect.", "Lawrence and Riezler (2018) show that a text-to-Overpass parser can be improved using historic logs of token-level binary feedback (collected using a graphical user interface that maps an Overpass query to predefined blocks) on generated parses.", "We note that our work is different from this line of work in that we do not seek to retrain and generally improve the parser; rather, we focus on the task of immediately incorporating the natural language feedback to correct an initial parse.", "Interactive Semantic Parsing: Multiple other efforts have sought to interactively involve humans in the parsing process itself.", "He et al. (2016) ask simplified questions about uncertain dependencies in CCG parses and use the answers as soft constraints to regenerate the parse.", "Both Li and Jagadish (2014) and Su et al. (2018) generate semantic parses and present them in a graphical user interface that humans can control to edit the initial parse.", "Gur et al. (2018) ask specific predefined multiple-choice questions about a narrow set of predefined parsing errors.", "This interaction model, together with the synthetically generated erroneous parses that are used for training, can be appropriate for simple text-to-SQL parsing instances as in WikiSQL, which was the only dataset used for evaluation.", "Yao et al. (2019b) ask yes/no questions about the presence of SQL components while generating a SQL parse one component at a time.", "Our work falls under the general category of interactive semantic parsing.", "However, our interaction model is solely based on natural language feedback, which can convey richer information and offers a more flexible interaction.", "Our work is closest to (Labutov et al., 2018), which also studies correcting semantic parses with natural language feedback, but we (1) focus on text-to-SQL parsing and build on a multi-domain dataset that requires generating complex semantic structures and generalizing to unseen domains (Labutov et al.
consider only the domain of email and biographical research); (2) pair the mispredicted parses and feedback with gold parses 5 in both our training and testing splits, which benefits a wider class of correction models; and (3) show that incorporating the mispredicted parse significantly improves the correction accuracy.", "5 In real-world scenarios, the gold parse is the final parse that the user approves after a round (or more) of corrections.", "Asking Clarifying Questions: Another relevant research direction has focused on extending semantic parsers beyond one-shot interactions by creating agents that can ask clarifying questions that resolve ambiguities in the original question.", "For example, Yao et al. (2019a) showed that reinforcement-learning-based agents that can ask clarifying questions can improve the performance of semantic parsers in the If-Then recipes domain.", "Generating clarifying questions has been studied in multiple domains to resolve ambiguity caused by speech recognition failure (Stoyanchev et al., 2014), recover missing information in question answering (Rao and Daumé III, 2018) or clarify information needs in open-domain information seeking (Aliannejadi et al., 2019).", "Our work is different from this research in that we focus on enabling and leveraging human feedback that corrects an initial parse of a fully specified question, rather than spotting and clarifying ambiguities.", "We present and evaluate a set of baseline models for the correction task (Section 2), in which we use SPLASH for training and testing (unless otherwise stated).", "Our main evaluation measure is the correction accuracy, i.e., the percentage of testing set examples that are corrected, in which we follow Yu et al. (2018) and compare the corrected parse to the gold parse using exact set match. 6", "6 Exact set match is a binary measure of exact string matching between SQL queries that handles ordering issues.", "We also report the end-to-end accuracy on the Spider development set (which we use to construct our testing set) of the two-turn interaction scenario: first, Seq2Struct attempts to parse the input question.", "If it produces a wrong parse, the question, together with that parse and the corresponding feedback, is attempted using the correction model.", "An example is considered correct if either of the two attempts produces the correct parse. 7", "7 Seq2Struct produces correct parses for 427/1034 of Spider Dev.", "511 of the remaining examples are supported by our SQL explanation patterns.", "We estimate the end-to-end accuracy as (427 + 511 * X / 100) / 1034, where X is the correction accuracy.", "Methods that ignore the feedback: One approach for parse correction is re-ranking the decoder beam (top-n predictions) (Yin and Neubig, 2019).", "Here, we simply discard the top-1 candidate and sample either uniformly or with probabilities proportional to the parser score of each candidate.", "We also report the accuracy of deterministically choosing the second candidate.", "Handcrafted re-ranking with feedback: By definition, the feedback f describes how to edit the initial parse p to reach the correct parse.", "We represent the diff between p and each candidate parse p_i in the beam as the set of schema items that appear in only one of them.", "For example, the diff between 'select first_name, last_name from students' and 'select first_name from teachers' is {last_name, students, teachers}.", "We assign to p_i a score equal to the number of schema items in the diff that are matched in f.", "A schema item (e.g., first_name) is considered to be mentioned in f if all of its individual tokens (first and name) are tokens in f."
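A sketch of this handcrafted re-ranker (our own illustration; `schema_items` is a hypothetical helper that extracts the set of table and column names used in a parse):

```python
def feedback_score(initial_parse, candidate, feedback_tokens, schema_items):
    """Score a beam candidate by how many schema items in its diff with
    the initial parse are mentioned in the feedback."""
    # Symmetric difference: items appearing in only one of the two parses.
    diff = schema_items(initial_parse) ^ schema_items(candidate)
    score = 0
    for item in diff:
        # first_name counts as mentioned if both 'first' and 'name'
        # occur among the feedback tokens.
        if all(tok in feedback_tokens for tok in item.lower().split("_")):
            score += 1
    return score

def rerank(initial_parse, beam, feedback, schema_items):
    """Return the beam candidate whose diff best matches the feedback."""
    feedback_tokens = set(feedback.lower().split())
    return max(beam, key=lambda cand: feedback_score(
        initial_parse, cand, feedback_tokens, schema_items))
```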
, "Seq2Struct+Feedback: The Seq2Struct model we use to generate erroneous parses for data collection (Section 3.1) reached an accuracy of 41.3% on Spider's development set when trained on the full Spider training set for 40,000 steps.", "After that initial training phase, we adapt the model to incorporate the feedback by appending the feedback to the question for each training example in SPLASH, and we continue training the model to predict the gold parse for another 40,000 steps.", "We note that Seq2Struct+Feedback does not use the mispredicted parses.", "EditSQL+Feedback: EditSQL (Zhang et al., 2019) is the current state-of-the-art model for conversational text-to-SQL.", "It generates a parse for an utterance at conversation turn i by editing (i.e., copying from) the parse generated at turn i-1 while conditioning on all previous utterances as well as the schema.", "We adapt EditSQL for the correction task by providing the question and the feedback as the utterances at turns one and two, respectively, and we force it to use the mispredicted parse as the prediction of turn one (rather than predicting it).", "We train the model on the combination of the training sets of SPLASH and Spider (which is viewed as single-turn conversations). 8", "8 We exclude turn-one predictions from the training loss when processing SPLASH examples; otherwise, the model would be optimized to produce the mispredicted parses.", "We use the default hyper-parameters provided by the authors together with the development set of SPLASH for early stopping.", "To provide an estimate of human performance, we report the percentage of feedback instances labeled as Complete, as described in Section 4.2."
Is SPLASH only useful for correcting Seq2Struct errors?", "EditSQL is also shown to achieve strong results on Spider (57.6% on the development set) when used in a single-turn mode (state-of-the-art when we started writing this paper).", "We collect feedback for a sample of 179 mispredicted parses produces by EditSQL.", "9 Using the EditSQL+Feedback model trained on SPLASH we get a correction accuracy of 14.6% for EditSQL errors.", "We introduce the task of SQL parse correction using natural language feedback together with a dataset of human-authored feedback paired with mispredicted and gold parses.", "We compare baseline models and show that natural language feedback is effective for correcting parses, but still state-of-the-art models struggle to solve the task.", "Future work can explore improving the correction models, leveraging logs of natural language feedback to improve text-to-SQL parsers, and expanding the dataset to include multiple turns of correction.", "We thank our ACL reviewers for their feedback and suggestions.", "Ahmed Elgohary completed part of this work while being supported by a grant from the Defense Advanced Research Projects Agency and Air Force Research Laboratory, and awarded to Raytheon BBN Technologies under contract number FA865018-C-7885." ]
[ "method", "abstain", "objective", "method", "result", "result", "other", "other", "abstain", "result", "other", "other", "method", "abstain", "abstain", "method", "method", "objective", "method", "result", "result", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "other", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "other", "abstain", "other", "other", "method", "other", "method", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "other", "other", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "method", "objective", "abstain", "other", "other" ]
[ "With state-of-the-art systems having finally attained estimated human performance, Word Sense Disambiguation (WSD) has now joined the array of Natural Language Processing tasks that have seemingly been solved, thanks to the vast amounts of knowledge encoded into Transformer-based pre-trained language models.", "And yet, if we look below the surface of raw figures, it is easy to realize that current approaches still make trivial mistakes that a human would never make.", "In this work, we provide evidence showing why the F1 score metric should not simply be taken at face value and present an exhaustive analysis of the errors that seven of the most representative state-of-the-art systems for English all-words WSD make on traditional evaluation benchmarks.", "In addition, we produce and release a collection of test sets featuring", "(a) an amended version of the standard evaluation benchmark that fixes its lexical and semantic inaccuracies,", "(b) 42D, a challenge set devised to assess the resilience of systems with respect to least frequent word senses and senses not seen at training time, and", "(c) hardEN, a challenge set made up solely of instances which none of the investigated state-of-the-art systems can solve.", "We make all of the test sets and model predictions available to the research community at https://github.com/ SapienzaNLP/wsd-hard-benchmark .", "In recent years, Natural Language Processing (NLP) has witnessed a quantum leap in benchmark task performance, mainly thanks to the adoption of two major technical innovations: the Transformer architecture (Vaswani et al., 2017) and transfer learning from language models pre-trained on massive amounts of textual data (Devlin et al., 2019; Lewis et al., 2020).", "The impact of these breakthroughs was so strong that, on many benchmarks, the performance of human non-experts was surpassed (Wang et al., 2019b), prompting researchers to release new, more challenging benchmarks (Wang et al., 2019a).", "Word Sense Disambiguation (WSD), the task of automatically assigning a meaning to an ambiguous word in context (Bevilacqua et al., 2021), is undergoing a similar process: current state-of-the-art systems are now capable of attaining and surpassing the F1 score of 80% 1 on standard test datasets (Bevilacqua and Navigli, 2020; Barba et al., 2021a; Conia and Navigli, 2021; Kohli, 2021), a figure often reported as the estimated human performance, because it corresponds to the highest recorded inter-annotator agreement (Edmonds and Kilgarriff, 2002; Navigli et al., 2007; Palmer et al., 2007).", "Matching and/or surpassing human performance reasonably triggers the assumption that systems are capable of carrying out tasks in real-world scenarios as effectively as their human counterparts (Kiela et al., 2021), to the point where non-practitioners would regard such tasks as solved.", "And yet, once systems are investigated beyond sheer accuracy figures, their flaws become readily apparent (Ribeiro et al., 2016; Belinkov and Bisk, 2018; Ribeiro et al., 2020; Card et al., 2020; Zhou et al., 2020).", "Following this trend of research, our work provides evidence showing why traditional evaluation measures for WSD, such as the F1 score, should not be taken at face value, hence corroborating the thesis that the problem of disambiguation is far from solved (Emelin et al., 2020; Loureiro et al., 2021).", "To provide context, consider the following example, where the sense prediction 2 of the currently state-of-the-art ESCHER model (Barba et al., 2021a) for 
the word wind is compared with the gold answer from the test set of SemEval-2013 Task 12 (Navigli et al., 2013): 1 Unless specified, for the remainder of this work, we will use F1 score to refer to the micro-averaged F1 score.", "2 According to the most commonly employed sense inventory for WSD, i.e., WordNet 3.0 (Fellbaum, 1998).", "context: The banks battling against a strong wind in the USA several years later.", "Investors and regulators (. . . ) gold: A tendency or force that influences events.", "ESCHER: Air moving (. . . ) from an area of high pressure to an area of low pressure.", "Here, the contextual meaning of the word wind is clear to any English speaker, with no cues in the sentence that would lead a human reader to pick the air meaning.", "This is an illustrative case of why, despite having achieved (on paper) superhuman performance, systems continue to make mistakes that the inter-annotator agreement would not justify.", "Similarly, in the context below, another system which breaks the 80% performance ceiling (Conia and Navigli, 2021) makes a trivial mistake on a standard test instance (Snyder and Palmer, 2004), and fails to label the word couple properly: context: I was just sitting down to meet with some new therapy clients, a couple, and the building started shaking (. . . ) gold: A pair of people who live together.", "With a view to gaining a better understanding of the nature of what systems still fail to disambiguate, in this work we provide the following main contributions:", "(i) we put forward a detailed quantitative and qualitative analysis of errors shared among seven state-of-the-art systems for English WSD, including systems that have surpassed the 80 % human estimate in terms of F1 score (Bevilacqua and Navigli, 2020; Barba et al., 2021a),", "(ii) we produce an amended version of the English all-words WSD evaluation benchmarks featured in Senseval and SemEval tasks (Agirre et al., 2009; Raganato et al., 2017a),", "(iii) we devise 42D (pron. [fortitude]), the first manually-curated test bed made available to the research community after a hiatus of seven years since SemEval-2015 (Moro and Navigli, 2015), and a powerful evaluation tool for estimating system resilience in contexts featuring least frequent word senses,", "(iv) we establish a new human performance threshold for assessing actual superhuman scores on WSD test sets, and propose macro-averaged F1 score as an alternative to micro-averaged F1 score to better account for least frequent word senses in WSD evaluation,", "(v) we release hardEN, a challenge set for English all-words WSD on which state-of-the-art systems under investigation achieve exactly 0.0% F1 score, and", "(vi) we set up an experimental setting to show the impact sense distribution has over the aforementioned datasets.", "WSD has witnessed the creation of many different evaluation benchmarks, most notably as part of the Senseval (now SemEval) evaluation campaigns (Kilgarriff, 1998).", "Since the release of the popular Unified Evaluation Framework by Raganato et al. 
(2017a), the experimental setting has become quite standard, with most systems being evaluated on ALL, i.e., the concatenation of Senseval-2 (Edmonds and Cotton, 2001), Senseval-3 Task 1 (Snyder and Palmer, 2004), SemEval-2007 Task 17 (Pradhan et al., 2007), SemEval-2013 Task 12 (Navigli et al., 2013), and SemEval-2015 Task 13 (Moro and Navigli, 2015).", "Besides reporting results split by part of speech, which has not been particularly insightful, no specific finer-grained analysis is usually performed.", "(Footnote: Partial exceptions are Kumar et al. (2019), Bevilacqua et al. (2020), Blevins et al. (2021), Chen et al. (2021), and Barba et al. (2021a), which have paid particular attention to least frequent senses and data efficiency.)", "This trend runs the risk of promoting a sort of collective hill-climbing behavior, which, in turn, makes it unclear how much the improvement in performance has been due to genuinely stronger generalization power, as opposed to overfitting to increasingly stale test sets.", "In opposition to this measure-centered style of evaluation, one possible alternative is that of behavioral testing, as proposed by Ribeiro et al. (2020).", "In their proposal (which does not address WSD explicitly), the benchmark evaluates separately minimum testable units of behavior, each of which addresses one specific skill required by a usable system.", "WSD, however, is a tricky problem to address in this way, as it is, in fact, a collection of idiosyncratic, diverse classification problems, which are hard to cluster in a meaningful way.", "A different kind of analysis, perhaps more specific to WSD, has tackled the problem of the strong imbalance of sense distributions, which makes learning difficult for automatic algorithms, and monitors how this imbalance affects performance (Calvo and Gelbukh, 2015; Izquierdo et al., 2015; Postma et al., 2016; Wang and Wang, 2021).", "We follow this line of research in that we also take sense distribution skewness as the core issue in the development of WSD algorithms.", "Therefore, both in the analysis of current WSD systems and in the creation of our new benchmarks, we check for the excessive influence of the most frequent output classes.", "In an effort to make our analysis as thorough and comprehensive as possible, we consider a set of seven representative cutting-edge approaches for WSD.", "(Footnote: To ensure a fair comparison, we only consider systems/settings that are not exposed to the Princeton WordNet Gloss Corpus (https://wordnetcode.princeton.edu/glosstag.shtml).)", "With the exception of SyntagRank (Scozzafava et al., 2020), all systems are supervised neural architectures exploiting pre-trained language models.", "Below, we describe each of these systems: ARES (Scarlini et al., 2020) is a semi-supervised approach to producing contextualized sense embeddings that share the same space as those from BERT (Devlin et al., 2019).", "It enables a simple 1-Nearest-Neighbour algorithm to attain high performance in both the English and multilingual settings despite relying on English training data only.", "We use the ARES English vectors freely available at http://sensembert.org.", "BEM (Blevins and Zettlemoyer, 2020) is a bi-encoder model with high accuracy for the disambiguation of rare word senses.", "BEM maps the target in context and its word senses (as represented by glosses) independently into a shared embedding space, by means of jointly learned context and gloss encoders.", "Disambiguation is then performed simply by predicting the sense whose encoding is most similar to that of the target.", "We employ the model and code available at https://github.com/facebookresearch/wsd-biencoders.", "ESCHER (Barba et al., 2021a, ESR) frames WSD as a span extraction task similar to SQuAD (Rajpurkar et al., 2016), in which a system is asked to detect the span matching the gloss of the correct sense for a target word from a pseudo-document constructed by concatenating the context of the target word with all the glosses of its possible senses.", "At the time of writing, ESCHER represents the state of the art in WSD.", "(Footnote: Contemporary to this work, ConSeC (Barba et al., 2021c), which extends ESCHER, has now attained the new state of the art.)", "We employ the model and code available at https://github.com/SapienzaNLP/esc.", "EWISER (Bevilacqua and Navigli, 2020, EWR) is a WSD classifier that exploits relational information included in WordNet by incorporating a sparse adjacency matrix within the architecture.", "We employ the model and code available at https://github.com/SapienzaNLP/ewiser.", "Generationary (Bevilacqua et al., 2020, GEN) reframes WSD as definition modeling, i.e., the task of generating a gloss from static or contextual embeddings (Noraset et al., 2017), therefore recasting disambiguation as a generative problem.", "We use the GEN-UNI (MBRR) model reported in the original paper.", "While the only exposure of the model to WordNet-tagged data was through SemCor (Miller et al., 1993), i.e., the most widely employed training set for WSD, the model was also trained on other lexicographic resources, such as the Oxford Dictionary (Chang and Chen, 2019).", "GlossBERT (Huang et al., 2019, GLB) formulates WSD as a gloss ranking task, with a cross-encoder scoring context-gloss pairs.", "The model is trained with a simple learning-to-rank (He et al., 2008) approach, predicting whether a gloss is relevant to the context or not.", "We employ the model and code available at https://github.com/HSLCY/GlossBERT.", "SyntagRank (Scozzafava et al., 2020, SYN) is a knowledge-based system that jointly exploits the Personalized PageRank algorithm and the wealth of syntagmatic information contained in SyntagNet (Maru et al., 2019) to perform disambiguation in multiple languages.", "We accessed SyntagRank by means of its APIs, which are freely available at http://api.syntagnet.org/.", "To consider WSD as solved, it would be reasonable to expect disambiguation errors to be little more than mismatches between the reference ground truth and another, different but still reasonable, interpretation.", "For example, if we consider the word chestnuts in my aunt grows chestnuts, the two senses any of several attractive deciduous trees yellow-brown in autumn and edible nut of any of various chestnut trees of the genus Castanea would both be good, albeit slightly different, interpretations, but the sense the brown color of chestnuts, instead, is clearly not.", "Table 1: Times (%) systems predict the MFS in WordNet, i.e., WN1st (top), or a sense occurring at least once in SemCor (bottom).
dataset  #inst  #mono  ARES   BEM    ESR    EWR    GEN    GLB    SYN    gold
ALL      7,253  1,301  71.3%  72.6%  71.2%  72.7%  69.0%  74.8%  81.1%  65.2%
ALLHC      541      0  64.7%  71.0%  68.6%  67.8%  62.7%  70.6%  80.2%   2.0%
ALL      7,253  1,301  88.2%  87.4%  86.3%  88.8%  85.9%  88.6%  88.8%  84.3%
ALLHC      541      0  96.9%  96.7%  96.5%  98.0%  95.0%  97.2%  98.3%  67.1%", "To show that the current state of the art is nowhere near this level of performance, we select as a case study the set of instances in the Unified Evaluation Framework for English WSD of Raganato et al.
(2017a) (ALL), which are wrongly disambiguated by all of the considered systems (see Section 3).", "We analyze this hard core (henceforth, ALLHC), where performances are 0.0% F1 score across the board by design, from both a quantitative and a qualitative perspective.", "Sense distribution is a central problem for WSD.", "In our quantitative study, therefore, we analyze performances on the hard core by dividing test instances into frequency-based partitions.", "While performances are virtually always computed in terms of micro-averaged F1 scores, here we choose to report macro-averaged F1 (aggregated by sense), as the former gives more weight to frequent senses simply because they occur more often, thus hiding mediocre performances on least frequent senses.", "Most Frequent Sense Bias.", "The most frequent class (in WSD, the most frequent sense, or MFS) can be overpredicted by machine learning algorithms (Postma et al., 2016; Blevins and Zettlemoyer, 2020; Loureiro et al., 2021).", "To quantify this phenomenon, in Table 1 (top), we report how many times the systems at issue predict the MFS in WordNet (henceforth, WN1st) on ALLHC, as well as on ALL itself.", "(Footnote: We consider a test set instance to be a WN1st instance if at least one of the word senses assigned to disambiguate it coincides with the WN1st.)", "As can be seen, systems show a clear bias towards WN1st senses on ALL, predicting them much more often (at least 69%) than the WN1st rate on the ground truth (65.2%).", "The distribution divergence becomes dramatic on ALLHC, where systems predict WN1st at least 62.7% of the time, but the true WN1st rate is now just 2.0%.", "Overall, systems show a mostly comparable bias towards WN1st, with two notable exceptions:", "(i) GEN, likely due to the fact that in its UNI setting the system is exposed to multiple resources and hence is less biased; and, on the other hand, perhaps counterintuitively (but see Calvo and Gelbukh, 2015),", "(ii) SYN, which is unsupervised, is the most biased towards WN1st.", "Finally, we note that ESR, despite being the state of the art, does not behave differently from other systems in this respect, suggesting that there is much room for improvement.", "In Table 2, we report both micro- and macro-averaged F1 scores on ALL, a subset of ALL without WN1st instances (ALLno1st), and ALLHC.", "As a consequence of the reduced importance of frequent senses, macro-averaged F1 scores are always lower than their micro-averaged counterparts.", "Moreover, we can see that the reduced bias on WN1st by GEN results in a partial divergence between the system ranking on ALL and that on ALLno1st, with GEN, which has a much lower WN1st bias, now outperforming GLB on the latter.", "Training Dataset Bias.", "In addition to the WN1st bias, it is also useful to examine how much the lack of extrapolative capabilities is a reason for the existence of such a large set of unanswerable items.", "Thus, we classify instances and predictions according to whether the sense occurs at least once in SemCor (see also Kumar et al., 2019; Wang and Wang, 2021).", "Predicting a sense that never occurs at training time not only requires zero-shot capabilities, but also the ability to overcome the bias that a system learns from the training data for other senses of the same word.", "In Table 1 (bottom), we report the frequency with which the systems at issue predict a word sense that occurs at least once in SemCor.", "If we look at the raw percentages for ALL, there seems to be a slight bias towards senses that were seen at training time.", "However, such values do not take into account monosemous words, for which the model always outputs the correct answer.", "In ALLHC, where by construction there cannot be any monosemous sense, SemCor-occurring senses are predicted at least 95% of the time, while they make up only 67.1% of the ground truth.", "We refer back to Table 2 for the F1 scores on ALLnoSC, i.e., the subset of ALL with no instances whose gold sense is found in SemCor.", "The divergence between the ranking on ALL and ALLnoSC is even wider than that between ALL and ALLno1st.", "In this case, GEN, which obtains rather unremarkable results on ALL, becomes the second best on ALLnoSC, supporting the notion that gloss modeling is beneficial for WordNet-based WSD, even when using data outside of WordNet.", "Indeed, the gloss-centric approach of ESR offers the best results across the board, even though its bias towards SemCor-attested (and WN1st) senses is still strong, hinting that a possible way forward could be combining ESR (or any equally strong baseline) with strategies meant to mitigate the bias.", "Determining why a sizeable subset of instances cannot be disambiguated by any of the systems we take into consideration requires a finer-grained, qualitative level of analysis to check whether,", "i) annotation errors, or", "ii) gaps in WordNet, are an important factor.", "At the same time,", "iii) we also want to see if we replicate previous inter-annotator agreement figures (Edmonds and Kilgarriff, 2002; Navigli et al., 2007; Palmer et al., 2007).", "In order to achieve these objectives, we ask an expert linguist with extensive experience in tagging with the WordNet inventory to revise the test instances in ALL, the main test set first provided by Raganato et al. (2017a), as well as in the dataset released as part of the SemEval-2010 in-domain WSD Task 17 of Agirre et al. (2009), by tagging each instance with one of the following labels: unchanged, to indicate that the annotator agreed with the existing ground truth; fine-grained, to indicate that one or more senses need to be added to the ground truth, without removing the existing ones; error:token-lemma, to indicate that the test instance was originally assigned a wrong lemma, or was improperly tokenized; error:pos, to indicate that the test instance was originally assigned a wrong part of speech (PoS); error:sense, to indicate that one or more senses in the ground truth are wrong; error:inventory, to indicate that the ground truth is wrong, but there is no appropriate sense for the target word in the inventory of WordNet 3.0.", "(Footnote: All our annotators have effective operational proficiency in English and received a wage in line with their country of residence. Annotation was carried out by means of a user-friendly, in-house interface.)", "(Footnote: We exclude SemEval-2007, since this dataset is often used as a development set (Pasini et al., 2021).)", "Table 3 showcases an excerpt of instances as tagged by our linguist according to the aforementioned set of labels.", "Additionally, in Table 4, we provide a broader look and report the frequency of appearance (percentage) of each label, as assigned to", "(a) the concatenation of datasets in Raganato et al.
(2017a) with the exception of monosemous words and SemEval-2007 instances (ALL-),", "(b) its subset of shared errors making up the hard core described in Section 4 (ALLHC-),", "(c) ALL- without the instances featured in ALLHC- (ALLNS-), and", "(d) SemEval-2010 with no monosemous instances (S10-).", "Two interesting results emerge from this analysis.", "On the one hand, the hard core seems to be hard for the human annotator too, since the majority of instances are labeled as either disambiguation errors (error:sense) or as lacking equally valid word senses (fine-grained).", "Indeed, the shared error subset (ALLHC-) features the lowest level of unchanged instances and, at the same time, the highest rate of error:sense instances, meaning that the linguist had a significantly higher disagreement with respect to the original test set in ALLHC- than in ALLNS-.", "Furthermore, the percentage of cases in which the linguist deemed necessary the use of", "(i) additional word senses to disambiguate a certain instance (fine-grained) or", "(ii) the use of a word sense not featured in the inventory (error:inventory) is more than double that of the rest of the dataset.", "On the other hand, if we sum the percentage of unchanged instances with that of fine-grained, and exclude from the set of all instances the samples where disagreements do not depend on disambiguation choices (error:pos, error:token-lemma, error:inventory), the agreement of the linguist with respect to the gold standard is far superior to what is traditionally reported in the literature, reaching a high ceiling of 91.1%, more than 10% above traditional estimates (Edmonds and Kilgarriff, 2002; Navigli et al., 2007; Palmer et al., 2007).", "Indeed, fine-grained instances do not involve a disambiguation error, but merely extend the instance with additional possible meanings.", "This can only increase performance, since the standard evaluation scorer provided as part of the framework of Raganato et al. (2017a) gives the system full score if the predicted sense is in the ground truth set.", "Results from the quantitative and qualitative analysis carried out on the hard core reveal two main reasons why F1 scores can be potentially misleading indicators of the actual capabilities of current systems:", "(i) scores are actually a long way from estimated human performance when observed in challenging, but nevertheless real-world, scenarios, and", "(ii) errors found in traditional test beds compromise insightful model evaluations.", "Against this background, we put forward a set of evaluation tools to enable a more robust appraisal of system performance in English WSD, namely,", "(i) 42D, a multi-domain challenge set,", "(ii) amended versions of ALL (ALLNEW) and SemEval-2010 Task 17 (S10NEW), and", "(iii) the new hardEN/softEN benchmark.", "Thus far, we have only considered existing evaluation benchmarks for WSD.", "In view of this, and with the purpose of showing that the issues highlighted in Section 4.1 are not artifacts of the data taken into account but a general problem with current WSD systems, we introduce 42D, a novel test set for English WSD, built from scratch by manually annotating paragraphs taken from the British National Corpus (Leech, 1992, BNC).", "(Footnote: This work was endorsed by the BNC staff via the official inquiry mail (ota@bodleian.ox.ac.uk) on October 15, 2019 and it complies with the BNC Licence for the use of paragraphs and other fragments (http://www.natcorp.ox.ac.uk/faq.xml?ID=licensing).)", "42D, with its 370 test instances, is specifically designed to be a challenge set (Belinkov and Glass, 2019), since for each of the instances the ground truth,", "i) does not occur in SemCor, and", "ii) is not the first sense in WordNet.", "In addition to this, 42D's source texts are sampled so as to be representative of different text domains, specifically, the 42 domains defined in BabelNet 4.0 (Navigli and Ponzetto, 2012; Navigli et al., 2021).", "(Footnote: BabelNet is freely available for research purposes at https://babelnet.org.)", "5.2 ALLNEW and S10NEW.", "With the aim of providing a cleaner test set, one in which non-system-dependent issues have been removed, we ask the same linguist who performed the error analysis of Section 4.2 to complete the task by also updating the instances from ALL and SemEval-2010 based on the labels assigned during the first phase: additional word senses are assigned for instances labeled as fine-grained and existing annotations are amended for error:sense cases; PoS tagging, lemmatization, and tokenization errors are fixed, and the instance updated with suitable word senses (see Table 3 for an excerpt of changes applied to the original test sets).", "As a result, we obtain two test sets: ALLNEW, featuring 4,917 polysemous instances amending the original ALL dataset of Raganato et al. (2017a), and S10NEW, with 955 polysemous test instances amending the original SemEval-2010 Task 17 of Agirre et al.
(2010).", "Besides an analysis of the current WSD evaluation datasets, in this paper we also want to make available one easy-to-use benchmark that addresses the discussed issues.", "For this reason, we derive a new intersection of 476 test instances that the systems at issue were not able to solve, this time from the concatenation of the amended sets ALLNEW and S10NEW, as well as 42D.", "We name this challenge set hardEN, in contrast to its counterpart, softEN, which, instead, features the remaining 5,766 test instances for which at least one system is able to provide a correct prediction.", "The hardEN/softEN benchmark is useful in that it sets a new starting line for WSD systems, one that concurrently accounts for what they still fail to do, while keeping track of what they can already do.", "Table 5 compares the results obtained on our revised ALLNEW dataset by the current state-of-the-art systems with those on the original ALL test set of Raganato et al. (2017a), filtered to include only instances featured in ALLNEW, showing that the ranking of the systems taken into account does not change as a result of the amending process.", "However, we can appreciate the significant difference in terms of performance when this is measured using the macro-averaged F1 score as opposed to the micro-averaged F1 score used in the literature.", "For example, the performance of ESCHER drops by almost 3 points on ALLNEW, from 81.6% to 78.7%.", "Indeed, the macro-averaged F1 score is better suited to highlighting the weaknesses of a system with imbalanced class distributions, as is the case for word senses, whose distribution follows Zipf's Law.", "We argue, therefore, that future systems should also report their results using this measure in order to better enable their strengths and weaknesses to be determined.", "Table 5 also shows the performance of each system on our revised SemEval-2010 (S10NEW), 42D, and the hardEN/softEN benchmark.", "42D is of particular interest as it showcases how the state of the art still struggles in challenging settings, including rare word senses and out-of-domain instances: the best system, ESR, only manages to score 54.1% in micro F1, a value that is very distant from the 80% figure originally estimated for human experts.", "As a last remark, it is worth noting how the performances on softEN for EWR and ESR reach and surpass the threshold of 85%, hence showing figures closer to the new, higher human performance ceiling we described in Section 4.2.", "In this work, we dived deep into what the current state of the art in WSD can achieve and what the main roadblocks to overcome in the future are.", "With hardEN as the new frontier to surpass and softEN as a milestone to preserve, in this Section we explore possible ways to make progress on the instances that remain unsolved.", "Table 6: Macro- (M-F1) and micro-averaged F1 (m-F1) scores of our Uniform and Ranked ensemble strategies compared against the best performing system, ESCHER.
dataset  ESCHER M-F1/m-F1  Uniform E. M-F1/m-F1  Ranked E. M-F1/m-F1
ALLNEW   78.7 / 81.6       77.8 / 81.6           78.8 / 82.3
S10NEW   78.0 / 82.1       79.5 / 83.7           80.7 / 84.9
42D      58.9 / 54.1       50.9 / 46.8           53.2 / 48.9
softEN   83.7 / 86.8       82.7 / 87.6           83.4 / 88.3
hardEN    0.0 /  0.0        0.0 /  0.0            0.0 /  0.0", "Joining forces.", "One might wonder whether putting together multiple systems can be a viable approach for achieving progress in WSD, as preliminarily explored in the past by Brody et al. (2006).", "Here we provide a provisional answer by investigating two simple ensemble strategies with the aim of understanding if it is possible to improve the results by making different and diverse systems agree.", "In the first ensemble strategy, i.e., uniform ensemble, we apply majority voting among the predictions of each of the seven systems; in the second strategy, i.e., ranked ensemble, each voting system is ranked according to its performance rank on ALLNEW, e.g., the vote of ESCHER (the best system on ALLNEW) is worth seven times that of SyntagRank (the seventh and worst system), in order to favor systems that are more likely to predict correct senses.", "Although the results for ALLNEW are slightly higher when using ranked ensembling, this strategy appears to impair performance in challenging settings such as 42D.", "Furthermore, since by construction hardEN features all and only those instances for which all the systems at issue fail to provide a correct answer, ensembles cannot represent a solution for hardEN, no matter the strategy employed.", "Data augmentation.", "A well-known problem in WSD is the knowledge acquisition bottleneck: we have thousands of senses for which no training data is available, but manual sense tagging is an expensive process (Pasini, 2020).", "What happens when a system is trained with automatically generated usage examples?", "To find out, we employ the examples generated via the EXMAKER encoder-decoder architecture (Barba et al., 2021b) to train ESCHER in two configurations: the first, in which the system is trained only with one automatically generated example per sense (K1), and the second, in which ESCHER is trained on the concatenation of SemCor and K1 (SemCor+K1).", "As shown in Table 7, although ESCHER, when using K1, successfully nibbles at hardEN (achieving 35.3% in terms of macro-averaged F1 score), it does so at the expense of its performance on softEN (dropping more than 18% in macro-averaged F1 score), which is clearly undesirable.", "This is further proof that flattening the sense distribution on the training set is not sufficient to deal with hard test instances while at the same time preserving performance on the easier ones (see also Postma et al. (2016) and Loureiro et al.
(2021)).", "Although traditional metrics indicate that WSD systems have attained human-level performance, the actual capabilities of state-of-the-art models are poorly reflected by the current evaluation benchmarks.", "In this paper, we analyzed the intersection of errors made by a heterogeneous set of seven state-of-the-art systems for English WSD from a quantitative and qualitative perspective, detailing two main reasons why they still falter when compared to their human counterparts, namely, their strong bias towards most frequent word senses and towards senses featured in the training data, as well as the presence of an array of lexical and semantic inaccuracies in traditional evaluation benchmarks.", "With the aim of providing a test bench that is more effective in reflecting the actual capabilities of WSD systems, we introduced", "(i) an amended version of the most popular test bed for WSD, and", "(ii) the 42D challenge set.", "As a result of the aforementioned work, we also present the hardEN/softEN benchmark, a unified test bed aimed at moving forward with the disambiguation of so far unresolved instances, while keeping track of the current strong points of WSD systems.", "We make our test sets and model predictions available at https://github.com/SapienzaNLP/wsd-hard-benchmark.", "The authors gratefully acknowledge the support of the ERC Consolidator Grant MOUSSE No. 726487, the ELEXIS project No. 731015 under the European Union's Horizon 2020 research and innovation programme, and the European Language Grid project No. 825627 (Universal Semantic Annotator, USeA).", "This work was partially supported by the COST Action CA18209 NexusLinguarum European network for Web-centred linguistic data science." ]
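The uniform and ranked ensembles investigated above reduce to (weighted) majority voting over per-instance sense predictions. Below is a minimal sketch of that scheme, assuming predictions are sense IDs listed per system in worst-to-best order so that rank weights 1..7 reproduce the described setup (SyntagRank's vote worth 1, ESCHER's worth 7); tie-breaking here is arbitrary, which the text above does not specify.

```python
from collections import Counter

def weighted_vote(predictions, weights=None):
    """Weighted majority vote over one instance's system predictions.

    predictions: one predicted sense ID per system.
    weights: per-system vote weights; None means uniform voting.
    """
    weights = weights or [1] * len(predictions)
    tally = Counter()
    for sense, weight in zip(predictions, weights):
        tally[sense] += weight
    return tally.most_common(1)[0][0]

def ranked_vote(predictions_worst_to_best):
    """Rank-weighted vote: the i-th worst system casts i votes."""
    n = len(predictions_worst_to_best)
    return weighted_vote(predictions_worst_to_best, list(range(1, n + 1)))
```

As the analysis above notes, no weighting helps on hardEN: if every voter is wrong on an instance, every weighted combination of their votes is wrong too.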
[ "abstain", "abstain", "result", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "abstain", "result", "method", "abstain", "objective", "objective", "objective", "abstain", "abstain", "other", "objective", "result", "method", "abstain", "objective", "result", "result", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "method", "objective", "method", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "other", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "abstain", "objective", "other", "other", "other" ]
[ "Everyday conversations require understanding everyday events, which in turn, requires understanding temporal commonsense concepts interwoven with those events.", "Despite recent progress with massive pre-trained language models (LMs) such as T5 and GPT-3, their capability of temporal reasoning in dialogs remains largely under-explored.", "In this paper, we present the first study to investigate pre-trained LMs for their temporal reasoning capabilities in dialogs by introducing a new task and a crowd-sourced English challenge set, TIMEDIAL .", "We formulate TIMEDIAL as a multiple choice cloze task with over 1.1K carefully curated dialogs.", "Empirical results demonstrate that even the best performing models struggle on this task compared to humans, with 23 absolute points of gap in accuracy.", "Furthermore, our analysis reveals that the models fail to reason about dialog context correctly; instead, they rely on shallow cues based on existing temporal patterns in context, motivating future research for modeling temporal concepts in text and robust contextual reasoning about them.", "The dataset is publicly available at: https://github.com/ google-research-datasets/timedial .", "Humans can effortlessly reason about temporal concepts of everyday events such as their duration, frequency, or relative ordering (Allen, 1984; Radvansky and Zacks, 2014) based on rich commonsense knowledge about how the world works, especially in relation to time.", "However, reasoning about such concepts has been challenging for machines (Kahn and Gorry, 1977; Kozareva and Hovy, 2011) since it requires both understanding the local temporal expressions and reasoning about their global contexts such as their relative ordering and relations Work done during an internship at Google.", "A: May we see the wine list please.", "B: Sure.", "Our special wine today is a 1989 Chardonnay.", "A: I'd like a bottle please.", "B: I'll need to see your ID please.", "A: Here you go.", "B: Sorry about the inconvenience, you look so young.", "I had to make sure you are over .", "a) 21 years old (cid:51)", "b) 30 years old (cid:55)", "c) 4 years old (cid:55)", "d) 18 years old (cid:51) A: Good morning!", "May I help you?", "B: Yes.", "My wife and I are interested in renting a house for the summer.", "A: Very well.", "How long do you want the house?", "All summer?", "B: No, not all summer.", "Just for six weeks .", "A: I am afraid I can only rent it for two months .", "B: My holiday is only , but I think my brother and his family would take it for the other two weeks .", "a) six decades (cid:55)", "b) 45 days (cid:51)", "c) six weeks (cid:51)", "d) two months (cid:55) Table 1: Examples from our TIMEDIAL challenge set, demonstrating the need for commonsense knowledge and arithmetic reasoning over the context to infer the correct answers.", "(UzZaman et al., 2013; Ning et al., 2018b; Pustejovsky, 2017).", "The problem becomes even more challenging in dialogs, where explicit and implicit inter-dependencies among temporal concepts can appear across conversation turns.", "For instance, for the first dialog in Table 1, one must understand the context, i.e., selling wine, and use world knowledge of minimum legal drinking age in order to reason about correct answers to fill in the blank.", "Similarly, in the second conversation, commonsense about the durations summer , month , week , day and their relations, plus numerical reasoning, are necessary to make the inference.", "Although previous works have studied temporal reasoning in natural 
language, they have either focused on specific time-related concepts in isolation, such as temporal ordering and relation extraction (Leeuwenberg and Moens, 2018; Ning et al., 2018a), and/or dealt with limited context, such as single-sentence-based question answering (Zhou et al., 2019) and natural language inference (Vashishtha et al., 2020; Mostafazadeh et al., 2016).", "In this work, we make the first systematic study of temporal commonsense reasoning in a multi-turn dialog setting.", "The task involves complex reasoning that requires operations like comparison and arithmetic reasoning over temporal expressions and the need for commonsense and world knowledge.", "We design a new task for dialog-based temporal reasoning and present a new challenge set in English, called TIMEDIAL, to evaluate language understanding models on the task.", "We formulate the problem as a crowd-sourced cloze task with multiple choices based on dialogs in the DailyDialog dataset (Li et al., 2017).", "Given a dialog with one temporal span masked out, the model is asked to find all correct answers from a list of four options to fill in the blank (Table 1).", "The challenge set requires the models to demonstrate understanding of the context and use temporal commonsense to make the right choices.", "Our final challenge set consists of 1.1K carefully curated dialog instances.", "We then study the performance of several state-of-the-art pre-trained language models on TIMEDIAL along several dimensions, including modeling paradigms (classification, mask filling, and generation), the scope of dialog contexts, in-domain vs. out-of-domain training, dependence on shallow text matching for reasoning, and the types of reasoning required.", "Our experiments demonstrate that off-the-shelf, pre-trained language models cannot effectively reason about temporal aspects in a dialog, even with domain-specific finetuning.", "Our findings indicate that large-scale pre-trained models, even after fine-tuning, may not be sufficient for robust temporal reasoning in dialogs, and motivate future research toward modeling temporal concepts over diverse everyday events, and contextual reasoning about them.", "We formulate the dialog-based temporal commonsense reasoning problem as a cloze task (Taylor, 1953).", "Formally, given a multi-turn dialog context of n conversational turns between two speakers A and B, where a temporal span within the context is masked out, the task is to predict the suitable temporal expression(s) for the masked-out span from a list of options.", "That is, we want the conversation model to select all the correct answers from the options based on the dialog context.", "Following similar cloze-style challenge datasets, we use accuracy as the evaluation metric (Mostafazadeh et al., 2016; Onishi et al., 2016; Mihaylov and Frank, 2018).", "Having a non-trivial set of options is crucial to build a challenge set and to avoid accidental spurious biases (Geirhos et al., 2020; Gururangan et al., 2018; Le Bras et al., 2020).", "We ensure this via the following filtering process.", "(1) For each masked span, there is more than one correct answer in the options.", "This makes the task more challenging for models since a more comprehensive understanding of the context is required to recognize all the correct choices.", "In our dataset (Section 3), we guarantee two correct answers for each masked span.", "(2) Some incorrect options are selected to be spuriously correlated with the dialog context.", "For example, we include temporal spans in the dialog context as negative options, which will challenge models that rely primarily on shallow pattern matching without correct temporal reasoning.", "We present more information in Section 3 about how the negative options were created by human annotators.", "The TIMEDIAL dataset is derived from DailyDialog data (Li et al., 2017), which is a multi-turn dialog corpus containing over 13K English dialogs.", "Dialogs in this dataset consist of turn-taking between two people on topics over 10 broad categories, ranging from daily life to financial topics.", "Our data collection process involves two steps: (1) identifying dialogs that are rich in temporal expressions, and (2) asking human annotators to provide correct and incorrect options for cloze instances derived from these dialogs.", "We now describe these steps in detail.", "Temporal expression identification.", "Here, we select dialogs that are rich with temporal information, in order to focus on complex temporal reasoning that arises in natural dialogs.", "Temporal expressions are automatically identified with SUTime (Chang and Manning, 2012), an off-the-shelf temporal expression detector.", "(Footnote: https://nlp.stanford.edu/software/sutime.shtml)", "(Table 2, omitted here, lists each reasoning category with an example dialog and its options, e.g., World Knowledge (5%): A: May we see the wine list?)", "We keep only the dialogs with more than 3 temporal expressions and at least one expression that contains numerals, like two weeks (as opposed to non-numeric spans, like summer, right now, and later).", "In our initial experiment, we observe that language models can often correctly predict these non-numerical temporal phrases.", "We note that temporal expressions containing numerals serve as more challenging sets of options than non-numerical ones.", "This filtering step results in 1,127 unique dialogs for further processing.", "Human annotated options.", "Next, we mask spans in the dialogs.", "For a dialog, we mask out each temporal expression that contains numerals, each resulting in a cloze question that is then sent for human annotation.", "This resulted in 1,526 instances for annotation.", "For each masked span in each dialog, we obtain human annotation to derive a fixed set of correct and incorrect options given the context.", "Concretely, given a masked dialog and a seed correct answer (i.e., the original text) for the masked span, the annotators were asked to (1) come up with an alternative correct answer that makes sense in the dialog adhering to commonsense, and (2) formulate two incorrect answers that have no possibility of making sense in the dialog context.", "We highlight all time expressions in the context to make it easier for annotators to select reasonable time expressions.", "To ensure that the annotated incorrect options are not too trivially distinguishable by the models (as discussed in Section 2), we define three rules for the annotators to follow.", "Rule 1: Phrase Matching.", "The rater should first try to pick another temporal span from the dialog context that makes syntactic/semantic sense (e.g., when the span is of the appropriate type, such as duration, for the masked span) but is still incorrect according to commonsense.", "Rule 2: Numeral Matching.", "If Rule 1 does not apply, raters should follow a relaxed version of Rule 1, whereby the incorrect option should contain any numeral occurring in the dialog context.", "Rule 3: Open-ended.", "If neither of the above rules is applicable, then raters can come up with an incorrect option using their own judgment.", "The two incorrect options are
required to differ from each other as much as possible.", "Rules 1 and 2 are designed to confuse models that rely on shallow pattern matching.", "Finally, to ensure the quality of the human-annotated options, we perform a subsequent round of human validation on the gathered data.", "The validators identify and fix issues such as duplicate options, unreasonable or obscure annotations w.r.t. natural usage, or ungrammatical annotations that do not fit in the context.", "Table 3 shows statistics of TIMEDIAL.", "The dataset contains over 1.1K test instances.", "Each dialog contains 11.7 turns and 3 temporal expressions on average, presenting richer and more complex context compared to the recent single-sentence-based temporal question answering benchmarks (e.g., Zhou et al., 2019; Vashishtha et al., 2020).", "As above, each test instance contains two correct answers and two incorrect ones.", "(Footnote: We also collected 342 extra instances for which the annotators deem there is only one unique correct answer for the context. Thus, each of those instances contains one correct option and two incorrect ones. We release those instances along with the dataset, though we did not include them in the empirical study in this paper.)", "Over half of the incorrect options are annotated based on phrase and numeral matching from context, which pose a significant challenge for models relying on shallow text matching, as we show in our experimental analysis (Section 5).", "Answering different instances in the dataset requires different types of core reasoning abilities, such as comparison, arithmetic inference, or reasoning based on world knowledge or general commonsense.", "To facilitate fine-grained analysis, we also annotate the reasoning categories for a randomly sampled set of 100 dialogs.", "Though each instance can involve multiple reasoning types, we associate it with one predefined category label that indicates the primary type of reasoning it requires.", "Table 2 shows the category distribution and examples for each category.", "We observe that the dataset requires general commonsense for 60% of the dialogs, making it the most common reasoning type.", "We consider a broad set of methods and evaluate their performance on our challenge TIMEDIAL dataset.", "These methods vary in terms of the modeling paradigms, the scope of the dialog contexts, and training settings.", "In particular, they encompass the major ways pre-trained LMs are currently used in downstream tasks (Section 4.1), which often outperform earlier specialized non-pretrained models.", "We also consider different lengths of context used in reasoning, varying by their vicinity to the masked span (Section 4.2).", "Finally, we study different training settings, including zero-shot, in-domain, and out-of-domain training (Section 4.3).", "We experiment across three major modeling paradigms:", "(i) Binary Classification,", "(ii) Mask Filling, and", "(iii) Generation.", "Figure 1 shows the different architectures.", "For each test instance, the model takes as input a pair of (masked dialog context, candidate), and outputs a score measuring how likely the candidate is to be a correct answer.", "Based on the prediction scores of all options, the model then chooses the top two positive candidates as the predicted answer for the instance.", "Each paradigm of models is finetuned using training data from different domains, as discussed in Section 4.3.", "In this setting, we formulate the task as a binary classification problem, i.e., we use a classifier to measure the probability of the candidate in the (masked dialog context, candidate) pair being a correct answer.", "Any powerful LM, e.g., BERT (Devlin et al., 2019), ALBERT (Lan et al., 2019), or RoBERTa (Liu et al., 2019), can be used to build the classifier.", "This method's key challenge is the lack of annotated training data for direct supervision.", "We generate weak supervision training data as follows.", "In an unlabeled corpus, we use the SUTime tool to annotate temporal spans.", "(Figure 1, omitted here, illustrates the three architectures; e.g., for classification, the input is [CLS] masked dialog [SEP] candidate, fed to BERT with a classification layer, while mask filling replaces the blank with [MASK] tokens.)", "We mask each temporal span in this corpus and use the masked text as one positive example for binary classification.", "To generate a negative example, we randomly sample another temporal span from the dialog context and use it as a negative example for the masked temporal span.", "The resulting data is noisy because the randomly sampled temporal span can also logically fit in the masked span in the given context; however, we assume the likelihood of that happening is low.", "We leave drawing harder negative instances using heuristics to future work.", "We also use the mask filling approach of BERT-like masked language models (MLMs).", "For each dialog context and a candidate temporal span of m tokens, we replace the blank in the dialog context with m masked tokens.", "We then evaluate the likelihood of predicting the temporal span tokens at those masked positions, and average across the positions.", "A key advantage of this method is that we can directly apply a BERT model in a zero-shot manner, since the model was pretrained with the same [MASK]-filling objective.", "Additionally, we also finetune BERT's MLM for learning task-specific properties.", "The third method is a fully generative approach using the text-to-text paradigm of T5 (Raffel et al., 2020).", "Given a masked dialog context, the model is trained to generate the masked text in an encoder-decoder framework.", "As a result, the likelihood of generating the given temporal span (normalized by the length of the span) is used as the probability of it being correct.", "Similar to mask filling, we use T5 either in a zero-shot manner or with additional fine-tuning.", "We aim to study the influence of context on a model's temporal reasoning in dialog by incorporating varying scopes of dialog context based on their vicinity to the target span.", "Since the dialogs in TIMEDIAL are rich in temporal concepts, we want to evaluate LMs' dependence on shallow text matching vs.
the ability to accurately understand the causal relations between those concepts (see Table 6).", "We use the following three settings: Full context, where the model is presented with the complete available dialog to reason on.", "Due to our design of challenging negatives, the full context can often confuse models that rely on shallow cues.", "Local context, where we provide only the utterances that immediately precede and follow the target utterance.", "Target context, where the context is restricted to only the particular utterance that contains the masked span.", "For all models, we consider two common training settings, i.e., in-domain training, where the data is typically small, and out-of-domain training, where a large amount of data is available.", "Table 4 shows training data statistics.", "For mask-filling and generation, we also evaluate in a zero-shot setup with no finetuning.", "In-domain training: the dialogs were selected from the DailyDialog dataset, based on the number of temporal spans.", "However, this still leaves remaining data with fewer than 3 temporal spans or with no numeric span.", "By masking each temporal span in each dialog, we obtain 14.5K training instances to use in our domain-specific fine-tuning.", "Out-of-domain training.", "In this setting, we consider a much larger corpus from a general domain.", "Specifically, we use the large-scale training set based on the Meena dataset (Adiwardana et al., 2020), which is mined and filtered from public domain social media conversations over 341GB of text (40B words).", "(Footnote: We acquired a trimmed-down version of the Meena dataset by contacting the authors.)", "Compared to the above in-domain data from DailyDialog, which were manually written by human annotators in a clean and consistent way, the dialogs in the Meena corpus tend to be noisy, casual, and usually short.", "Like our DailyDialog processing, we identify all temporal expressions for dialogs in Meena using SUTime.", "Using the proposed TIMEDIAL challenge set, we next conduct extensive experiments and analyses on the different model variants and context settings.", "We use either 4x4 or 8x8 Cloud TPU v3 pod slices for fine-tuning and one V100 GPU for inference.", "We provide more details of the experiment configurations in the appendix.", "Evaluation.", "Since each example of TIMEDIAL contains two correct answers, we report the metric 2-best accuracy, which measures whether both of the model's top-ranked answers are correct.", "In other words, if the model erroneously ranks an incorrect answer over a correct one, we consider it to be an error case.", "Note that we use the ranking-based metric, as opposed to classification-based ones (for example, asking the model to classify whether each individual candidate answer is correct or not; e.g., Zhou et al., 2019), because it presents a stricter measure that penalizes any incorrect answer being ranked over a correct one, and because the ranking metric is not influenced by specific choices of the threshold hyperparameter that cuts off positive and negative predictions.", "Table 5 shows model results and human performance.", "Human performance achieves a near-perfect level (97.80, with a Cohen's kappa score of 0.86, showing almost perfect inter-rater agreement (Landis and Koch, 1977)).", "Overall.", "The generation model based on T5-LARGE and finetuned on the in-domain DailyDialog data achieves the best performance.", "However, its 2-best accuracy (74.8) lagged far behind the human performance, demonstrating the difficulty of the TIMEDIAL challenge set.", "Zero-shot vs. out-of-domain vs. in-domain.", "When comparing the different training data setups, we observe that models with in-domain training using the DailyDialog data (e.g., LARGE-IN) consistently outperform those trained on the large out-of-domain Meena dataset (e.g., LARGE-OUT).", "Both setups outperform the zero-shot models (without any fine-tuning) (e.g., LARGE-ZERO).", "The results show that the large LMs still highly depend on in-domain, or at least dialog, data to grasp and enhance their temporal reasoning ability in dialog context.", "Further, we see increasing performance with increasing model size, which is not unexpected given the complexity of the task.", "Next, we analyze the different types of errors based on the different rules for negative option creation in the annotation process.", "In particular, the phrase matching rule picks an exact time span from the dialog context, and numeral matching picks numerals from the dialog context.", "Thus, models picking those incorrect options imply reliance on spurious shallow text matching features.", "Figure 2 shows the percentage of errors in terms of the different rules.", "For example, the BERT-based classification model CLS-IN erroneously picks 52% of negative options created by the phrase matching rule as correct answers (i.e., by ranking those negative options over the true correct options).", "Models make markedly more errors on these matching-based negatives compared to other types of negative options, showing that they rely on spurious text matching to a significant extent.", "Between BERT and T5, we find T5 being more robust to shallow text matching.", "Table 6 provides further examples of prediction errors, illustrating confusions due to shallow text matching.", "In the first dialog, both incorrect answers already partially occur in the context or are related to preexisting concepts (i.e., three to three o'clock, and nine to September).", "All three models were confused and chose either of the two as the top prediction for the blank, even though the options clearly violate the context.", "Interestingly, the mask filling model was completely confused and ranked both incorrect answers over the correct ones.", "A similar confusion occurs in the second example.", "Table 7: Impact of dialog context on reasoning accuracy (TARGET rows are absolute scores; LOCAL and FULL are deltas relative to TARGET; columns: BASE-IN / BASE-OUT / LARGE-IN / LARGE-OUT).
Classification (BERT): TARGET 50.5 / 40.0 / 50.5 / 47.5; LOCAL +3.4 / +3.3 / +7.5 / +2.0; FULL -0.6 / -0.1 / +2.7 / +1.2
Mask Filling (BERT):   TARGET 57.8 / 44.3 / 60.3 / 46.8; LOCAL +5.4 / +3.0 / +8.1 / +4.9; FULL +9.6 / +3.1 / +9.6 / +8.0
Generation (T5):       TARGET 55.5 / 45.9 / 66.7 / 56.1; LOCAL +3.7 / +2.7 / +6.1 / +3.7; FULL +3.7 / +4.7 / +8.2 / +5.8", "Table 7 shows how different scopes of dialog context (Section 4.2) affect model performance.", "First, the most restrictive target-only context is insufficient for accurate reasoning, producing the weakest performance for most models.", "This highlights the importance of context information for temporal commonsense reasoning in dialog, which differs from previous temporal reasoning studies based on limited context (e.g., single-sentence question answering).", "Second, we note that the full dialog context does not always lead to the best performance.", "In 5 out of the 12 cases, using the local context yields equal or higher reasoning accuracy.", "The results show that the LMs still fall short of properly modeling the rich dialog contexts and making effective use of all information to do reasoning.", "Figure 3 shows the percentage of errors in each
"We observe that the models tend to make a non-trivial portion of their errors on commonsense/world knowledge questions.", "For example, the strongest model, T5 GEN-IN, failed on 18% of the instances that require commonsense or world knowledge, while BERT CLS-IN made errors on 48% of such instances.", "Figure 3: Percentage of errors on different reasoning types (Commonsense/World knowledge, Comparison, Arithmetic/Others): CLS-IN 48%/54%/64%, CLS-OUT 42%/46%/27%, MF-IN 25%/21%/9%, MF-OUT 35%/33%/36%, GEN-IN 18%/17%/18%, GEN-OUT 23%/42%/27% (CLS and MF are BERT-based; GEN is T5-based).", "Temporal commonsense reasoning.", "Early studies related to temporal analysis define time in the context of sets and relations (Bruce, 1972; Allen, 1983).", "More recent works often associate time with events and focus on identifying time expressions (Chang and Manning, 2012; Angeli et al., 2012; Lee et al., 2014), extracting temporal relations among events (Setzer and Gaizauskas, 2000; Pustejovsky et al., 2005; Lapata and Lascarides, 2006; Chambers et al., 2007; Ning et al., 2018b), and timeline construction (Do et al., 2012; Leeuwenberg and Moens, 2018).", "Some recent work has focused on building challenging benchmarks for temporal commonsense reasoning.", "The Story Cloze Test focuses on stereotypical causal and temporal relations between events (Mostafazadeh et al., 2016).", "Vashishtha et al. (2020) recast temporal reasoning datasets for event duration and event ordering into the natural language inference (NLI) format.", "TORQUE (Ning et al., 2020) is a reading comprehension dataset where the model needs to answer questions such as what happens before/after [event].", "Most related to our work is MC-TACO (Zhou et al., 2019), a dataset for evaluating temporal commonsense in the form of multiple-choice reading comprehension, where the context usually consists of a single sentence.", "Our work instead studies temporal commonsense reasoning in dialogs, which often requires significant commonsense and world knowledge to reason over rich context (Qin et al., 2019b; Dinan et al., 2018).", "Despite the recent advances in pre-trained language models (LMs) (Devlin et al., 2019; Brown et al., 2020), it is an open question whether these models, pretrained on large amounts of data, capture commonsense knowledge.", "Several works have been proposed to assess the ability of LMs for commonsense or numerical reasoning (Zhang et al., 2020; Bouraoui et al., 2020), or to mine commonsense knowledge from LMs (Davison et al., 2019).", "Lin et al. (2020) showed that state-of-the-art LMs such as BERT and RoBERTa perform poorly on numerical reasoning tasks without any finetuning.",
"Works have also been proposed to improve language models' commonsense reasoning (Qin et al., 2020, 2019a; Zhou et al., 2020) and numerical reasoning abilities (Geva et al., 2020).", "In our work, we study several modeling approaches and finetuning settings of large LMs, and establish strong baselines for temporal commonsense reasoning in dialogs.", "We introduced TIMEDIAL, a challenge set consisting of 1.1K multiple-choice cloze questions for temporal commonsense reasoning in dialog.", "The dataset is carefully curated to evaluate a model's ability to do temporal commonsense/numerical reasoning over dialog context.", "In order to establish strong baselines and provide insights for future model development, we conducted extensive experiments with state-of-the-art language models under different settings: the scope of context, weak supervision strategies, and learning objectives.", "While humans can easily answer these questions (97.8% accuracy), even our best model variant (T5-large with in-domain training) struggles on this challenge set (73%).", "Moreover, our qualitative error analyses show that these large language models often rely on shallow, spurious features (particularly text matching) when answering these questions, instead of truly doing reasoning over the context." ]
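The ranking-based 2-best accuracy described in the section above is simple to compute once per-candidate model scores are available. The sketch below is our own illustration under that assumption, not code from the TIMEDIAL release; the function name and input format are invented for the example.

# A minimal sketch of 2-best accuracy: each instance supplies a model score
# per candidate answer plus the indices of its two correct answers
# (names and input format are illustrative only, not from the paper's code).
def two_best_accuracy(scores, correct_pairs):
    """An instance counts as correct only if the two top-ranked candidates
    are exactly the two correct answers; ranking any incorrect answer above
    a correct one makes the instance an error case."""
    hits = 0
    for inst_scores, gold in zip(scores, correct_pairs):
        # Indices of the two highest-scoring candidates.
        top2 = sorted(range(len(inst_scores)),
                      key=lambda i: inst_scores[i], reverse=True)[:2]
        hits += set(top2) == set(gold)
    return hits / len(scores)

# Four candidates; answers 0 and 2 are correct.
print(two_best_accuracy([[0.9, 0.1, 0.8, 0.2]], [(0, 2)]))   # 1.0
print(two_best_accuracy([[0.9, 0.85, 0.8, 0.2]], [(0, 2)]))  # 0.0

The second call fails because an incorrect candidate (index 1) is ranked above the correct candidate at index 2, which is exactly the error case the metric penalizes.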
[ "abstain", "abstain", "objective", "method", "abstain", "objective", "other", "abstain", "abstain", "result", "objective", "abstain", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "other", "objective", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "method", "objective", "objective", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "abstain", "result" ]
[ "The inability to correctly resolve rumours circulating online can have harmful real-world consequences.", "We present a method for incorporating model and data uncertainty estimates into natural language processing models for automatic rumour verification.", "We show that these estimates can be used to filter out model predictions likely to be erroneous, so that these difficult instances can be prioritised by a human fact-checker.", "We propose two methods for uncertainty-based instance rejection, supervised and unsupervised.", "We also show how uncertainty estimates can be used to interpret model performance as a rumour unfolds.", "One of the greatest challenges of the information age is the rise of pervasive misinformation.", "Social media platforms enable it to spread rapidly, reaching wide audiences before manual verification can be performed.", "Hence there is a strive to create automated tools that assist with rumour resolution.", "Information about unfolding real-world events such as natural disasters often appears in a piece-wise manner, making verification a time-sensitive problem.", "Failure to identify misinformation can have a harmful impact, thus it is desirable that an automated system aiding rumour verification does not only make a judgement but that it can also inform a human fact-checker of its uncertainty.", "Deep learning models are currently the state-of-the-art in many Natural Language Processing (NLP) tasks, including rumour detection (Ma et al., 2018), the task of identifying candidate rumours, and rumour verification (Li et al., 2019; Zhang et al., 2019), where the goal is to resolve the veracity of a rumour.", "Latent features and large parameter spaces of deep learning models make it hard to interpret a model's decisions.", "Increasingly researchers are investigating methods for understanding model predictions, such as through analysing neural attention (Vaswani et al., 2017) and studying adversarial examples (Yuan et al., 2019).", "Another way to gain insights into a model's decisions is via estimating its uncertainty.", "Understanding what a model does not know can help us determine when we can trust its output and at which stage information needs to be passed on to a human (Kendall and Gal, 2017).", "In this paper, rather than purely focusing on the performance of a rumour verification model, we estimate its predictive uncertainty to gain understanding of a model's decisions and filter out the cases that are 'hard' for the model.", "We consider two types of predictive uncertainty: data uncertainty (aleatoric) and model uncertainty (epistemic).", "The approach we adopt requires minimal changes to a given model and is relatively computationally inexpensive, thus making it possible to apply to various architectures.", "We make the following contributions: We are the first to apply methods for uncertainty estimation to the problem of rumour verification.", "We show that removing instances with high uncertainty filters out many incorrect predictions, gaining performance improvement in the rest of the dataset.", "We propose a supervised method for instance removal that combines both aleatoric and epistemic uncertainty and outperforms an unsupervised approach.", "We propose a way to analyse uncertainty patterns as a rumour unfolds in time.", "We make use of this to study the relation between the stance expressed in response tweets and fluctu-ation in uncertainty at the time step following a response.", "We explore the relationship between uncertainty estimates and 
"A rumour is a circulating story of questionable veracity, which is apparently credible but hard to verify, and produces sufficient skepticism/anxiety so as to motivate finding out the actual truth (Zubiaga et al., 2018).", "Rumour detection and verification in online conversations have gained popularity as tasks in recent years (Zubiaga et al., 2016; Ma et al., 2016; Enayet and El-Beltagy, 2017).", "Existing works aim to improve the performance of supervised learning algorithms that classify claims, leveraging linguistic cues, network- and user-related features, propagation patterns, support among responses and conversation structure (Derczynski et al., 2017; Gorrell et al., 2018).", "Due to the nature of the task, each rumour can be considered as a new domain, and existing models struggle with generalisability.", "Here we employ model-agnostic methods of uncertainty estimation that can provide performance improvements and insight into the workings of the models to inspire further development.", "There is a growing body of literature which aims to estimate the predictive uncertainty of deep neural networks (DNNs) (Gal and Ghahramani, 2016; Lakshminarayanan et al., 2017; Malinin and Gales, 2018).", "Gal and Ghahramani (2016) have shown that the application of Monte Carlo (MC) dropout at testing time can be used to derive an uncertainty estimate for a DNN.", "Lakshminarayanan et al. (2017) estimate model uncertainty by using a set of predictions from an ensemble of DNNs, while Malinin and Gales (2018) propose a specialised framework, Prior Networks, for modelling predictive uncertainty.", "Here we focus on the dropout method proposed by Gal and Ghahramani (2016) as it is computationally inexpensive, relatively simple and does not interfere with model training.", "Within NLP, Xiao and Wang (2018) have used aleatoric (Kendall and Gal, 2017) and epistemic (Gal and Ghahramani, 2016) uncertainty estimates for sentiment analysis and named entity recognition.", "Dong et al. (2018) used a modification of the Gal and Ghahramani (2016) method to output confidence scores for neural semantic parsing.",
"Rumour verification is a task where levels of certainty play a crucial role because of the potentially high impact of erroneous decisions.", "Moreover, unlike other tasks, it is a time-sensitive problem: as new information comes to light, the level of certainty is expected to change, giving insights into a model's predictions.", "Figure 1: The branch-LSTM model (an LSTM over tweet branches, followed by ReLU layers, dropout and a softmax predicting True/False/Unverified, averaged over branches).", "We therefore explore the dynamics of uncertainty as a discussion unfolds in Section 6.3.", "Note that data and model uncertainty should not be confused with uncertainty expressed by a user in a post.", "Automatically identifying levels of uncertainty expressed in text is a challenging NLP task (Jean et al., 2016; Vincze, 2015), which could be complementary to predictive uncertainty in the case of rumour verification.", "Active Learning and Uncertainty: Uncertainty estimates could be used in an Active Learning (AL) setup.", "This would involve using uncertainty estimates over the model's predictions to select instances whose manual labelling and addition to the training set would yield the most benefit (Olsson, 2009).", "Active learning has been applied to various NLP tasks in the past (Settles and Craven, 2008).", "More recently, Siddhant and Lipton (2018) have shown that Bayesian active learning by disagreement, using uncertainty estimates provided either by Dropout (Gal and Ghahramani, 2016) or Bayes-by-Backprop (Blundell et al., 2015), significantly improves over i.i.d. baselines and usually outperforms classic uncertainty sampling on a number of NLP tasks and datasets.", "Bhattacharjee et al. (2017, 2019) applied AL to identifying misinformation in news and social media.",
"Our work could be applied in an AL setup to close the loop in incrementally training a model for misinformation using predictive uncertainty.", "We describe the rumour verification model which forms the basis of our experiments.", "This served as a competitive baseline model (branch-LSTM) for a SemEval task on rumour verification (RumourEval 2019) (Gorrell et al., 2018).", "Example from Figure 2, User 0: Breaking news: Ghana international and AC Milan star Michael Essien has contracted Ebola, his club has confirmed.", "To process a conversation discussing a rumour while preserving some of the structural relations between the tweets, a tree-like conversation is split into branches, i.e. linear sequences of tweets, as shown in Figure 2.", "Branches are then used as training instances for a branch-LSTM model consisting of an LSTM layer followed by several ReLU layers and a softmax layer (default base of e and temperature of 1) that predicts class probabilities.", "Here we use outputs from the final time steps (see Figure 1).", "Given a training instance, a branch of tweets $x_i$, $i \in [1, .., N]$, where $N$ is the number of branches, and the label $y_i$, represented as a one-hot vector of size $C$, where $C$ is the number of classes, the loss function $l_1$ (categorical cross-entropy) is calculated as follows: $u_i = f(x_i)$, $v_i = W_v u_i + b_v$, $p_i = \mathrm{softmax}(v_i) = \frac{e^{v_i}}{\sum_{k=1}^{C} e^{v_i^k}}$, $l_1 = -\frac{1}{N} \sum_{n=1}^{N} \sum_{k=1}^{C} y_n^k \log p_n^k$, where $u_i$ is an intermediate output of the layers prior to the softmax layer, $v_i$ is the logits, and $p_i$ are the predicted class probabilities for a training instance $x_i$.", "To obtain predictions for each of the conversation trees we average the class probabilities for each of the branches in the tree.", "In this case tweets are represented as the average of the corresponding word2vec word embeddings, pre-trained on the Google News dataset (300d) (Mikolov et al., 2013).", "We consider two types of uncertainty as described in Kendall and Gal (2017): data uncertainty (aleatoric) and model uncertainty (epistemic).", "Data uncertainty is normally associated with properties of the data, such as imperfections in the measurements.", "Model uncertainty, on the other hand, comes from the model parameters and can be explained away given enough (i.e. an infinite amount of) data.",
"We also use the output of the softmax layer to measure the confidence of the model.", "There are four common ways to calculate uncertainty using the output of the softmax layer: Least Confidence Sampling, Margin of Confidence, Ratio of Confidence and Entropy (Munro, 2019).", "Here we use the highest class probability as a confidence measure and refer to it as 'softmax'.", "Using other strategies leads to similar conclusions (see appendices).", "We assume aleatoric uncertainty to be a function of the data that can be learned along with the model (Kendall and Gal, 2017).", "Conceptually, this input-dependent uncertainty should be high when it is hard to predict the output given a certain input.", "In order to estimate the aleatoric uncertainty associated with input instances, we add an extra output to our model that represents the variance $\sigma$.", "We then incorporate $\sigma$ into the loss function according to Kendall and Gal (2017), in the following way.", "Here we assume that predictions come from a normal distribution with mean $v$ and variance $\sigma$.", "We sample $v$, distorted by Gaussian noise, $T$ times, put each sample through a softmax layer and pass it to a standard categorical cross-entropy loss function to obtain a mean over the losses for all $T$ samples: $d_{t,i} = v_i + \sigma_i \epsilon$, $\epsilon \sim \mathcal{N}(0, 1)$, $l_2 = -\frac{1}{N} \sum_{n=1}^{N} \frac{1}{T} \sum_{t=1}^{T} \sum_{k=1}^{C} y_n^k \log(\mathrm{softmax}(d_{t,n})^k)$.", "Here $l = w_1 l_1 + w_2 l_2$ is the total loss.", "If the original prediction $u$ was incorrect, we would need a high $\sigma$ to have varied samples away from it and hence lower the loss.", "In the opposite case, $\sigma$ should be small such that all samples yield a similar result, thus minimising the loss function.", "$\sigma$ is chosen as the unbounded variance in logit space, which, after the model is trained, approximates the input-dependent variance.", "This method can be applied to a wide range of models, but since it changes the loss function, it is likely to affect a model's performance.", "To obtain epistemic uncertainty we use the approach proposed by Gal and Ghahramani (2016), which allows estimating uncertainty about a model's predictions by applying dropout at testing time and sampling from the approximate posterior.", "This approach requires no changes to the model, does not affect performance, and is relatively computationally inexpensive.", "We apply dropout at testing time $N$ times and obtain $N$ predictions.", "We evaluate the differences between them to obtain a single uncertainty value in the following ways: Variation ratio: each of the sampled softmax predictions can be converted into an actual class label.", "We then define epistemic uncertainty as the proportion of cases which are not in the mode category (the label that appears most frequently), i.e. $1 - N_m / N$, where $N_m$ is the number of cases belonging to the mode category (most frequent class).", "Thus the variation ratio is 0 when all of the sampled predictions agree, indicating low model uncertainty.", "The upper bound would differ depending on the number of cases, but will not reach 1.", "Variance: each prediction is a vector, the output of a softmax layer (entries in [0,1] which sum up to 1), of size equal to the number of classes.", "We calculate the variance across each dimension and then take the max value of the variance as our uncertainty estimate.", "We assume that instances yielding high predictive uncertainty values are likely to be incorrectly predicted.", "We therefore make use of predictive uncertainty to filter out instances and explore the tradeoff between model performance and coverage of a dataset.",
"We perform instance rejection in two ways: unsupervised and supervised.", "Unsupervised: we remove portions of a dataset corresponding to instances with the highest uncertainty (separately for each uncertainty type).", "Supervised: we train a supervised meta-classifier on a development set using features composed of uncertainty estimates (aleatoric, variance, entropy, variation ratio), the averaged softmax layer output and the model's prediction to decide whether an instance is correctly predicted.", "We reject instances classified as incorrect and evaluate performance on the rest.", "We compare two strong baseline models for this task: Support Vector Machines (SVM) and Random Forest (RF).", "Supervised rejection allows us to leverage all forms of uncertainty together and also dictates the number of instances to remove.", "Random: we have compared the two instance rejection methods above against removing portions of the test set at random.", "Rejection at random does not lead to a consistent performance improvement (see Appendix A).", "Since rumour verification is a time-sensitive task, we have performed an analysis of model uncertainty over time, as a rumour unfolds.", "As illustrated in Figure 3, we have deconstructed the timeline of the development of a conversation tweet by tweet, starting with just the source tweet (initiating the rumour) and adding one response at a time.", "We have then obtained model predictions and associated uncertainties for each sub-tree.", "As the difference between each sub-tree is a single tweet, we can track the development of uncertainty alongside the development of a conversation, and the effect each added response has.", "The uncertainty estimates obtained do not correspond to the actual probabilities of the prediction being correct; instead, they order the samples from the least likely to be correct to the most likely.", "While the order provided by the scores is sufficient for unsupervised and supervised rejection, these scores can be on a different scale for different datasets and do not allow for direct comparison between models, i.e. they are not calibrated.",
"Calibration refers to a process of adjusting confidence scores to correspond to class membership probabilities, i.e. if $N$ predictions have a confidence of 0.5, then 50% of them should be correctly classified in a perfectly calibrated case.", "Modern neural networks are generally poorly calibrated, and the hyper-parameters of the model influence the calibration (Guo et al., 2017).", "MC dropout uncertainty is thus also influenced by hyperparameters but can be calibrated using the dropout probability (Gal, 2016).", "To evaluate how well confidence scores are calibrated, one can use reliability diagrams and Expected Calibration Error (ECE) scores (Guo et al., 2017).", "ECE is obtained by binning $n$ confidence scores into $M$ intervals and comparing the accuracy of each bin against the expected one in a perfectly calibrated case (equal to the confidence of the bin): $\mathrm{ECE} = \sum_{m=1}^{M} \frac{|B_m|}{n} |\mathrm{acc}(B_m) - \mathrm{conf}(B_m)|$.", "Confidence calibration can be improved using calibration methods.", "These are post-processing steps that produce a mapping from existing scores to calibrated probabilities using a held-out set.", "Common approaches are histogram binning, isotonic regression and temperature scaling (Guo et al., 2017).", "We conduct experiments on publicly available datasets of Twitter conversations discussing rumours.", "Table 1 shows the number of conversation trees in the datasets and the class distribution.", "We use conversations from the PHEME dataset discussing rumours related to nine newsbreaking events.", "Rumours in this dataset were labeled as True, False or Unverified by professional journalists (Zubiaga et al., 2016).", "When conducting experiments on this dataset we perform cross-validation in a leave-one-event-out setting, i.e. using all the events except for one as training, and the remaining event as testing.", "This is a challenging setup, imitating a real-world scenario, where a model needs to generalise to unseen rumours.", "The number of rumours, the number of the corresponding conversations, as well as the class label distribution (true-false-unverified) vary greatly across events.", "The Twitter 15 and Twitter 16 datasets were made publicly available by Ma et al. (2017), and were created using reference datasets from Ma et al. (2016) and Liu et al. (2015).",
"Claims were annotated with veracity labels on the basis of articles corresponding to the claims found in rumour-debunking websites such as snopes.com and emergent.info.", "These datasets merge rumour detection and verification into a single four-way classification task, containing True, False and Unverified rumours as well as Non-Rumours.", "Both datasets are split into 5 folds for cross-validation, and contrary to the PHEME dataset, folds are of approximately equal size with a balanced class distribution.", "We perform cross-validation on all of the datasets.", "When choosing parameters, we choose one of the folds within each dataset to become the development set: CharlieHebdo in PHEME (a large fold with balanced labels) and fold 0 in Twitter 15 and Twitter 16.", "We evaluate models using both accuracy and macro F-score due to the class imbalance in the PHEME dataset (code available at https://github.com/kochkinaelena/Uncertainty4VerificationModels).", "During the cross-validation iterations each fold becomes a testing set once.", "We then aggregate model predictions from each fold, resulting in predictions for the full dataset, and use them to perform evaluation as well as unsupervised instance rejection based on uncertainty levels.", "To perform supervised rejection we need to train a meta-classifier on a subset of data that was not used for training the rumour verification model.", "Therefore, in a separate set of experiments, we exclude one of the folds (the development set) from the training of the verification model.", "We run cross-validation with one less fold and at each step obtain predictions and uncertainty estimates for both the test fold and the development set.", "We then use the predictions and uncertainty values predicted for the instances in the development set as training instances in our rejection meta-models, which we then evaluate on each of the corresponding test folds, thus obtaining the combined predictions for all of the folds in the dataset except for the development one.", "This setup corresponds to the results shown in Table 2, as one of the folds was removed from training. The results are therefore not directly comparable to the ones in Figure 4 or in previous literature (Kochkina et al., 2018; Ma et al., 2018).", "Figure 4 shows the effect of applying unsupervised rejection (as explained in Section 3.3).", "Each plot shows model performance in terms of accuracy, where the first bar of each plot shows model performance with all instances present and the following bars show performance for the corresponding percentage of remaining instances.", "Figure 4 shows the effect of unsupervised rejection using aleatoric and epistemic uncertainty (calculated as the variation ratio, see Section 3.2.2), as well as the softmax class probabilities as a measure of confidence (1-uncertainty).", "We also performed experiments using variance and entropy values, with similar outcomes (Appendix A).", "Initial performance using 100% of the data (Figure 4) on the PHEME dataset is markedly different to Twitter 15,16 due to the dataset and task-setup differences.", "On the Twitter 15 dataset branch-LSTM does not reach the state-of-the-art Tree-GRU (Ma et al., 2018); however, branch-LSTM outperforms Tree-GRU on the Twitter 16 dataset.", "On the PHEME dataset performance is comparable and slightly improved over the results in Kochkina et al. (2018).",
"In line with model performance, the effect of rejection using aleatoric and epistemic uncertainties is different for PHEME compared to Twitter 15,16.", "Figure 4(a) shows that in PHEME the greater improvement in accuracy comes from using aleatoric uncertainty, whereas for Twitter 15 (b) and Twitter 16 (c) there is very little improvement with aleatoric uncertainty compared to epistemic.", "We believe this is due to the nature of the datasets: folds in PHEME differ widely in size and class balance, resulting in higher/more varied data uncertainty values, in contrast with the very balanced datasets of Twitter 15,16.", "The effect of rejection using low values of softmax confidence is also positive and often similar to the effect of epistemic uncertainty, as it is also estimating the model's uncertainty.", "However, softmax is outperformed by the other types of uncertainty in most cases (Figure 4).", "Table 2 shows the comparison of two models for supervised rejection versus unsupervised rejection of the same number of instances for all three datasets.", "Note that the performance values in Table 2 differ from those in Figure 4 as they were obtained in a separate set of experiments (as described in Section 5).", "Having less training data harmed performance on PHEME and Twitter 16.", "Table 2 shows that using supervised rejection is better than unsupervised in terms of accuracy scores for all datasets and also in terms of macro F-scores for the Twitter 15,16 datasets.", "We believe that the reason the same effect on macro F-score is not observed in PHEME is the class imbalance in this dataset.", "Comparing the two methods, SVM and RF, for supervised rejection, we observe that RF leads to a larger number of instances being removed, achieving higher performance than SVM.", "However, the difference in performance between the two is very small.", "As part of future work the meta-classifier can be improved further, made more complex or incorporated in the predictive model, making it closer to active learning, closing the loop from prediction and corresponding uncertainty to classifier improvement.", "Another benefit of using a supervised model for instance rejection is that it can be further tuned, e.g., by varying the threshold boundary to prioritise high precision over recall.", "The precision value of this meta-classifier is the same as the accuracy of the predictions obtained after the rejection procedure.", "Part of the PHEME dataset was annotated for stance (Derczynski et al., 2017).", "We used the open-source branch-LSTM model trained on that part to obtain predicted stance labels for the rest of the PHEME dataset (Kochkina et al., 2017).", "There is no stance information for the Twitter 15,16 datasets, so this analysis is only available for the PHEME dataset.", "Note that we did not provide stance as a feature to train the veracity classifier: we assume that stance is an implicit feature within the tweets.", "Figure 5 shows examples of timelines of changes in predictions and uncertainty levels over time.", "Sub-plots (a)-(c) show all types of epistemic uncertainty: variation ratio (blue), entropy (green), variance (orange) as well as softmax confidence (red); on sub-plots (d)-(f) we show the aleatoric uncertainty of the conversations corresponding to the above plots separately, as the values are on a different scale.", "Each of the nodes is labeled with its predicted stance label: green supporting, red denying, blue questioning and black commenting.", "One could expect to see uncertainty decreasing over time as more information about a rumour becomes available (we can see this effect only very weakly on sub-plot Figure 5(b), showing a correctly predicted False rumour).",
"However, not all responses are equally relevant and the stance of new posts varies; therefore the uncertainty levels also change.", "Interestingly, the true rumour on sub-plot Figure 5(a) (incorrectly predicted as False during the final time steps) had low uncertainty at step 2 and was predicting a correct label.", "However, the model appears to have been confused by further discussion, resulting in an incorrect prediction with higher uncertainty levels.", "The analysis of uncertainty as a rumour unfolds can be used not only to analyse the effect of stance but also to study other properties of rumour spread.", "Only 5-20% of the conversations have a change in predictions as the conversation unfolds, suggesting that source tweets are the most important for the model.", "Furthermore, we can use the timelines of uncertainty measurements in order to only allow predictions at the time steps with the lowest uncertainty, which may lead to performance improvements.", "In experiments with the PHEME dataset accuracy grew from 0.385 to 0.395 using the variation ratio and to 0.398 using aleatoric uncertainty estimates.", "When analysing the relation between uncertainty and conversation size, we observed that for the confidence levels represented by the output of the softmax layer, conversations with a larger number of tweets had higher uncertainty.", "However, for aleatoric and epistemic estimates we do not observe a strong trend of uncertainty increasing with the size of the conversation (see box plots in Appendix D), which would indicate that these types of uncertainty are more robust in this respect.", "Higher levels of uncertainty associated with longer conversations may be due to the fact that responses became less informative and/or the conversation changed topic.", "They may also stem from a weakness in the model architecture in terms of its ability to process long sequences.", "Is higher uncertainty associated with a particular class label?", "Figure 6 shows boxplots of epistemic uncertainty values associated with each of the three classes in the PHEME dataset and each of the four classes in Twitter 15,16.", "Table 3 shows per-class model performance on the full datasets.", "In all datasets the True class has significantly lower levels of uncertainty (using the Kruskal and Wallis (1952) test between the groups), while the uncertainties for False and Unverified are higher than for True.", "Table 4: Expected Calibration Error before and after applying calibration over uncertainty estimates (S = Softmax, A = Aleatoric, VR = Variation Ratio). No calibration: PHEME 0.646/0.683/0.492, Twitter 15 0.265/0.333/0.216, Twitter 16 0.191/0.196/0.121; Histogram Binning: PHEME 0.173/0.088/0.111, Twitter 15 0.056/0.039/0.062, Twitter 16 0.164/0.079/0.044.", "The difference between False and Unverified is not statistically significant in any of the cases.", "Aleatoric uncertainty shows a similar pattern for the class labels.", "In Twitter 15,16 the Non-Rumour class has the highest uncertainty (and a relatively lower F1 score).", "These outcomes are in line with findings in Kendall (2019), which showed an inverse relationship between uncertainty and class accuracy or class frequency.", "We measure and compare the ECE for all types of uncertainty.", "We apply histogram binning, a simple yet effective approach, to improve the calibration for each type of uncertainty.", "We use the experiment setup with one of the folds reserved as a development set to train the calibration method.",
"We convert uncertainty estimates $u$ into confidence scores as $1 - u$, and for aleatoric uncertainty we normalise it to be in $[0, 1]$.", "Table 4 shows the ECE before and after calibration for the different uncertainty measures (Softmax (S), Aleatoric (A), Variation Ratio (VR)), where a lower value indicates better calibration (calibration curves can be found in Appendix E).", "The initial ECE for PHEME is higher than for the Twitter 15 and 16 datasets.", "VR has the best initial calibration; however, Histogram Binning notably improves calibration across all datasets and uncertainty types.", "We have shown that data and model uncertainties can be included as part of the evaluation of any deep learning model without harming its performance.", "Moreover, even though data uncertainty estimation changes the loss function of a model, it often leads to improvements (Kendall and Gal, 2017).", "When performing rejection in an unsupervised fashion we need to know when to stop removing instances.", "Defining a threshold of uncertainty is not straightforward as uncertainty will be on a different scale for different datasets.", "Supervised rejection leverages all forms of uncertainty together and dictates the number of instances to remove.", "Thus, to tune both methods, the availability of a development set is important.", "While we are not focusing on user uncertainty here, in rumour verification linguistic markers of user uncertainty (words like may, suggest, possible) are associated with rumours.", "In the PHEME dataset such expressions often occur in unverified rumours, thus conversations containing them are easier to classify, and hence they are associated with lower predictive uncertainty.", "We have presented a method for obtaining model and data uncertainty estimates on the task of rumour verification in Twitter conversations.", "We have demonstrated two ways in which uncertainty estimates can be leveraged to remove instances that are likely to be incorrectly predicted, so that making a decision concerning those instances can be prioritised by a human.", "We have also shown how uncertainty estimates can be used to interpret model decisions over time.", "Our results indicate that the effect of data uncertainty and model uncertainty varies across datasets due to differences in their respective properties.", "The methods presented here can be selected based on knowledge of the properties of the data at hand, for example prioritising the use of aleatoric uncertainty estimates on imbalanced and heterogeneous datasets such as PHEME.", "For best results, one should use a combination of aleatoric and epistemic uncertainty estimates and tune the parameters of uncertainty estimation methods using a development set.", "Using uncertainty estimation methods can help identify which instances are hard for the model to classify, thus highlighting the areas where one should focus during model development.", "Future work would include a comparison with other, more complex, methods for uncertainty estimation, incorporating uncertainty to affect model decisions over time, and further investigating links between uncertainty values and linguistic features of the input.", "This work was supported by The Alan Turing Institute under the EPSRC grant EP/N510129/1." ]
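The MC dropout estimates used throughout the section above (variation ratio, max-variance, and the softmax confidence) can be sketched compactly. This is our own illustration of the standard procedure (Gal and Ghahramani, 2016), not the authors' released code; `stochastic_forward` is a placeholder for a model run with dropout kept active at test time.

import numpy as np

def mc_dropout_uncertainty(stochastic_forward, n_samples=20):
    # Collect N stochastic softmax outputs with dropout active at test time;
    # resulting shape is (n_samples, num_classes).
    probs = np.stack([stochastic_forward() for _ in range(n_samples)])
    labels = probs.argmax(axis=1)

    # Variation ratio: 1 - N_m / N, the share of samples outside the mode
    # category; 0 when all sampled predictions agree.
    variation_ratio = 1.0 - np.bincount(labels).max() / n_samples

    # Variance: per-class variance across samples, max taken over classes.
    max_variance = probs.var(axis=0).max()

    # Softmax confidence: highest class probability of the averaged output.
    softmax_confidence = probs.mean(axis=0).max()
    return variation_ratio, max_variance, softmax_confidence

# Toy stand-in for a dropout-perturbed 3-class classifier.
rng = np.random.default_rng(0)
print(mc_dropout_uncertainty(lambda: rng.dirichlet([8.0, 1.0, 1.0])))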
[ "abstain", "method", "result", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "objective", "result", "objective", "objective", "method", "objective", "other", "other", "other", "other", "method", "other", "other", "other", "abstain", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "result", "objective", "result", "result", "abstain", "abstain", "abstain", "abstain", "other" ]
[ "In recent years, reference-based and supervised summarization evaluation metrics have been widely explored.", "However, collecting human-annotated references and ratings are costly and time-consuming.", "To avoid these limitations, we propose a training-free and reference-free summarization evaluation metric.", "Our metric consists of a centrality-weighted relevance score and a self-referenced redundancy score.", "The relevance score is computed between the pseudo reference built from the source document and the given summary, where the pseudo reference content is weighted by the sentence centrality to provide importance guidance.", "Besides an F 1 -based relevance score, we also design an F -based variant that pays more attention to the recall score.", "As for the redundancy score of the summary, we compute a self-masked similarity score with the summary itself to evaluate the redundant information in the summary.", "Finally, we combine the relevance and redundancy scores to produce the final evaluation score of the given summary.", "Extensive experiments show that our methods can significantly outperform existing methods on both multi-document and single-document summarization evaluation.", "The source code is released at https://github.com/Chen-Wang-CUHK/Training-Free-and-Ref-Free-Summ-Evaluation.", "Text summarization systems have been developed rapidly due to the appearance of sequence-to-sequence frameworks (Sutskever et al., 2014; Bah-danau et al., 2015; See et al., 2017; Chan et al., 2020), transformer architectures (Vaswani et al., 2017) and large-scale pre-training models (Devlin et al., 2019; Liu et al., 2019).", "How to accurately This work was mainly done when Wang Chen was an intern at Tencent AI Lab.", "evaluate the summaries generated from these systems also attracts more and more attention in this research area.", "One of the most accurate evaluation methods is human evaluation.", "However, human evaluation is expensive, time-consuming, and nonreproducible.", "Thus, it is necessary to develop automatic evaluation metrics for text summarization systems.", "Existing automatic summarization evaluation metrics can be roughly categorized into two groups: reference-based metrics and reference-free metrics.", "In this work, we focus on reference-free metrics.", "Reference-free summarization evaluation metrics have been developed in parallel in multi-document summarization and single-document summarization.", "The SOTA reference-free method for multi-document summarization evaluation, SUPERT (Gao et al., 2020), predicts a relevance score for each (document, summary) pair to estimate the informativeness of the summary and then averages all the scores from multiple documents as the final evaluation score.", "For each pair, SUPERT employs the top-ranked sentences which are ranked by the position or centrality as a pseudo reference of the document and then applies BERTScore (Zhang et al., 2020) to produce a relevance score between the pseudo reference and the given summary.", "The SOTA single-document summarization reference-free evaluation metric, LS Score (Wu et al., 2020), combines a learned linguistic scorer for the summary and a cosine similarity scorer for the (docu-ment, summary) pair to produce the final score.", "Although SUPERT and LS Score achieve the SOTA performance on their own areas respectively, they still have several drawbacks.", "For example, SUPERT only considers the relevance score between the document and the summary while ignoring the other aspects such as 
"Besides, SUPERT assumes that all pseudo reference sentences are equally important.", "However, in the real world, the key information of a document is unevenly distributed over sentences.", "Therefore, such an assumption may introduce extra noise for the evaluation.", "Note that although SUPERT may employ sentence centrality to select document sentences as a pseudo reference, it ignores the sentence centrality after the selection and still treats the selected sentences as equally important.", "As for LS Score, although it does not require a reference during the evaluation of a summary, it requires a large-scale training dataset with reference summaries to train the linguistic scorer.", "Besides the intrinsic drawbacks in these SOTA methods, to our best knowledge, there is no reference-free evaluation metric showing that it can achieve the SOTA performance on both multi-document and single-document summarization.", "To solve the above limitations, based on SUPERT, we propose a novel training-free and reference-free metric for both multiple and single document summarization evaluation.", "Our metric is composed of a centrality-weighted relevance score and a self-referenced redundancy score.", "For the relevance score, which is employed to estimate the informativeness of the summary, we incorporate the following new features.", "First, unlike previous work which only utilizes the token-level representations, motivated by Clark et al. (2019), we engage a hybrid way that contains both token-level representations and sentence-level representations to encode the document and the summary.", "The purpose of the hybrid representation is to enable our method to consider richer mapping styles (i.e., token-to-token, sentence-to-token, and sentence-to-sentence) and help to produce a more comprehensive evaluation score.", "Second, we utilize the sentence centrality computed from the sentence-level representations of the source document to produce the importance weights of the pseudo reference sentences and tokens.", "Based on the weights, we compute a weighted relevance score that is more precise by considering the relative importance.", "Third, besides the $F_1$ version of our relevance score, we also propose an adaptive $F_\beta$ version where recall is considered $\beta$ times as important as precision.", "$\beta$ is computed based on the length ratio between the pseudo reference and the given summary.", "The motivation is to punish a short summary that can easily get high precision while covering very limited important information in the pseudo reference (i.e., low recall).", "To measure the redundancy of a summary, we design a simple but effective self-referenced similarity score.", "If a summary contains much redundant information, there must exist plenty of semantically similar tokens or sentences.", "Based on this assumption, we use the summary itself as the reference and input a (summary, summary) pair into a self-masked BERTScore to produce a redundancy score that evaluates the averaged degree of semantic similarity of each token or sentence with the other tokens or sentences.", "After obtaining the centrality-weighted relevance score and the self-referenced redundancy score, we combine them to predict the final evaluation score.", "Depending on whether $F_1$ or $F_\beta$ is applied in our relevance score, we propose two variants of our method: the $F_1$-based version and the $F_\beta$-based version.", "Extensive experiments are conducted on both multi-document and single-document summarization datasets.",
"The results show that our $F_1$-based method already outperforms all the SOTA baselines on all datasets.", "Moreover, our $F_\beta$-based method can further improve the performance on multi-document summarization datasets.", "Our contributions are summarized as follows: (1) A novel training-free and reference-free summarization evaluation metric which considers both relevance and redundancy; (2) A centrality-weighted relevance score that effectively utilizes the sentence centrality of the documents to provide importance guidance for the pseudo reference tokens and sentences.", "Besides the $F_1$ version, we also develop an $F_\beta$-based relevance score which pays more attention to recall; (3) A self-referenced redundancy score that utilizes a self-masked BERTScore to detect the duplicated information of the given summary; (4) To the best of our knowledge, we are the first evaluation metric that can achieve SOTA performance on both multiple and single document summarization under the reference-free setting.", "Notations.", "We denote vectors as bold lowercase characters and matrices as bold uppercase characters.", "The characters that are not bold are used to denote scalars.", "Calligraphy uppercase characters are utilized to represent sets.", "Problem Definition.", "We formally define the reference-free summarization evaluation problem as follows.", "Figure 1: Overall framework of our method (BERT-based encoding of the documents and the summary, centrality-based sentence selection to build the pseudo reference, a centrality-weighted BERTScore ($F_1$ or $F_\beta$) for relevance, a self-masked BERTScore ($F_1$) for redundancy, and a final merge of the averaged relevance score and the redundancy score).", "Given a set of documents $\mathcal{D} = \{d_1, d_2, ..., d_K\}$ and a generated summary $x$, the goal is to predict a score to represent the overall quality of the summary.", "$K = 1$ and $K > 1$ indicate single-document and multi-document summarization respectively.", "The overall framework is illustrated in Figure 1.",
"Our final evaluation score of a summary consists of an averaged centrality-weighted relevance score and a self-referenced redundancy score.", "Both scores are calculated on a semantic level instead of utilizing n-gram overlapping.", "The averaged relevance score is computed from the relevance score between the summary and each document in the document set.", "The redundancy score is calculated based on the summary itself.", "Our relevance score aims to estimate the informativeness of the given summary.", "We first encode each document in the document set and the summary into hidden representations.", "Then, for each document, we select essential sentences by centrality to build a pseudo reference.", "Next, we compute a centrality-weighted relevance score between the summary and each pseudo reference.", "Finally, we average all the relevance scores as the final relevance score of the summary.", "We use the $k$-th document $d_k$ and a summary $x$ as an example to show the workflow.", "Encoding.", "Following SUPERT (Gao et al., 2020), we first split the document $d_k$ and the summary $x$ into sentences.", "Then, the pre-trained SBERT (bert-large-nli-stsb-mean-tokens) is employed to encode the tokens of each sentence into token-level contextual hidden representations.", "We also apply max-pooling on all the tokens of a sentence to obtain the sentence-level hidden representation.", "Following previous work, when utilizing the token-level representations to compute the relevance and redundancy scores, we will filter out the non-informative tokens such as stop-words to improve the efficiency.", "Building Pseudo Reference.", "We do not choose all the document sentences of $d_k$ to evaluate the relevance of the summary, because the whole document usually contains plenty of unimportant sentences which may introduce extra noise for the relevance evaluation.", "Thus, we select important document sentences to build a pseudo reference $r$ for the evaluation.", "The sentence selection is based on the centrality of each sentence, which is computed by the unsupervised algorithm PacSum (Zheng and Lapata, 2019) using the sentence-level representations.", "After obtaining the centrality scores of all sentences of the document, we choose the top-$M$ sentences as the pseudo reference (in experiments, we follow the default configuration of SUPERT and set $M$ as 12 for all the datasets).", "Besides, we normalize the centrality scores to $[0, 1]$ and denote the normalized centrality scores of the selected sentences as $a^s = [a^s_1, a^s_2, ..., a^s_M]$, where $a^s_i \in [0, 1]$ and the superscript $s$ means sentence-level.",
weights.", "Based on the centrality scores of the selected pseudo reference sentences i.e., a s = [ a s 1 , a s 2 , ..., a sM ] , we assign the weights of the pseudo reference tokens as follows: a w = [ a w 1 , a w 2 , ..., a wm ] , (3) a wj = a si : w j s i , (4) where a i : w j s i indicates the token w j inherits the centrality score from its sentence s i .", "Since we have already removed the non-informative tokens in the token-level representations of each sentence, the remaining tokens capture the key information of the sentence and consequently it is reasonable to perform such a weight inheritance.", "Next, we combine token weights a w and sentence weights a s to get the final normalized centrality-based weights of the hybrid representations: a = [ a w 1 , ..., a wm , a s 1 , ..., a sM ] , (5) a wj = a wj /sum ([ a w ; a s ]) , (6) a si = a si /sum ([ a w ; a s ]) , (7) where [ ; ] represents concatenation.", "Based on the hybrid representations (i.e., X and R k ) and the centrality-based weights of the pseudo reference tokens and sentences (i.e., a ), we compute the relevance score between the summary and the pseudo reference by a weighted BERTScore (Zhang et al., 2020).", "For brevity, we denote the j -th element of X as x j , the i -th element of R k as r i , and the i -th element of a as a i : Recall = (cid:80) i a i max j Sim ( r i , x j ) (cid:80) i a i , (8) P recision = (cid:80) j max i Sim ( r i , x j ) | X | , (9) F 1 = 2 Recall P recision Recall + P recision , (10) where Sim denotes the cosine similarity and | X | equals to n + N .", "Recall , P recision , and F 1 are in the range of [-1, 1].", "Besides the F 1 version, we also propose an adaptive F version of relevance score as follows: F = (1 + 2 ) Recall P recision Recall + 2 P recision , (11) 2 = 1 , if ( | R k | | X | ) 1 / 1 2 , if ( | R k | | X | ) 1 / 2 ( | R k | | X | ) 1 / , otherwise , (12) where | R k | = m + M , | X | = n + N , and is a positive integer hyper-parameter.", "In our experiments, is set as 2 after fine-tuning on the validation dataset and is fixed for all the testing datasets.", "The physical meaning of is that the Recall score is considered times as important as the P recision score.", "In summarization evaluation, the coverage of the key information is always the most important quality indicator of the summary.", "Thus, we set the lower bound of as", "1. 
"On the other hand, the metric should not only evaluate the key information coverage; containing less unimportant content in the summary should also be considered.", "Therefore, we set the upper bound of $\beta^2$ as 2.", "As shown in Eq. 12, within the range of $[1, 2]$, $\beta^2$ adaptively changes according to the ratio between $|R_k|$ and $|X|$.", "The intuition is that a longer pseudo reference implies that more key information needs to be covered by the summary.", "Besides, a shorter summary can easily get high precision but covers very limited important information in the pseudo reference.", "Thus, we give Recall a higher weight to punish such short summaries when the pseudo reference is long.", "Final Averaged Relevance Score.", "After computing the centrality-weighted relevance score between the summary and the pseudo reference of each source document, we employ the average as the final relevance score of the summary: $\mathrm{score}_{rel} = \mathrm{mean}([F^1_*, ..., F^k_*, ..., F^K_*])$ (13), where $*$ is $1$ for the $F_1$ variant and $\beta$ for the $F_\beta$ variant.", "The superscript $k$ indicates the $F_*$ score is computed with the $k$-th document.", "Note that $\mathrm{score}_{rel} \in [-1, 1]$ and higher is better.", "In this section, we introduce our self-referenced redundancy score.", "We engage the summary itself as the reference to evaluate the degree of semantic similarity of each summary token or sentence with the other tokens or sentences.", "The averaged semantic similarity degree is used as the redundancy score.", "The computation is based on a self-masked BERTScore as follows: $\mathrm{score}_{red} = \frac{\sum_i \max_{j: i \neq j} \mathrm{Sim}(x_j, x_i)}{|X|}$ (14), where $j : i \neq j$ means we do not consider the similarity between $x_i$ and itself, i.e., self-masked.", "Because of the symmetric property, the $F_1$, precision, and recall scores are equal to each other.", "This is also the reason that we use precision in Eq. 14 as the final redundancy score.", "Note that $\mathrm{score}_{red} \in [-1, 1]$ and lower is better.", "After obtaining the relevance score and the redundancy score, we apply a linear combination to produce the final evaluation score of the summary based on the document set: $\mathrm{score} = \frac{\mathrm{score}_{rel} - \lambda \cdot \mathrm{score}_{red}}{1 + \lambda}$ (15), where $0 < \lambda \leq 1$ is a hyper-parameter to scale the redundancy score and $\mathrm{score} \in [-1, 1]$.", "A higher score means better summary quality.", "In our experiments, after fine-tuning on the validation set, $\lambda$ is set as 0.6 and is fixed for all the testing datasets.", "We denote the variants of our final method as Ours($F_\beta$)-PacSumTopM and Ours($F_1$)-PacSumTopM depending on whether the adaptive $F_\beta$ is employed.", "For comprehensively investigating our summarization evaluation methods, we test our methods on both multi-document and single-document summarization datasets.", "We leverage the TAC datasets (https://tac.nist.gov/) for multi-document summarization evaluation.",
"Note that the hyper-parameters of our methods are fine-tuned on TAC-2010 and then fixed for all the testing datasets.", "For the TAC datasets, we compute correlation coefficients between the predicted scores of an evaluation method and the annotated Pyramid scores of summaries to measure the effectiveness of the method.", "Following Gao et al. (2020), a correlation is computed for each topic.", "Then, the averaged correlation over all the topics is used as the final correlation of the method with human ratings.", "For the CNNDM dataset, correlations are calculated with the human scores in three dimensions: Overall, Grammar, and Redundancy.", "Following Wu et al. (2020), the correlation is computed between the predicted scores of the 499 × 4 = 1996 (document, summary) pairs and the corresponding human ratings.", "In this section, we briefly introduce our baselines.", "We choose TF-IDF, JS (Louis and Nenkova, 2013), and REAPER (Rioux et al., 2014) as traditional reference-free baselines.", "All these traditional baselines do not build pseudo references and directly utilize the full content of the documents.", "Table 2: Main results on multi-document summarization datasets (each cell shows Pearson's r / Spearman's ρ / Kendall's τ):
Method                | TAC-2011          | TAC-2009          | TAC-2008
TF-IDF                | 0.313/0.294/0.209 | 0.372/0.382/0.279 | 0.375/0.341/0.243
JS                    | 0.377/0.333/0.240 | 0.376/0.381/0.279 | 0.385/0.338/0.242
REAPER                | 0.377/0.334/0.237 | 0.358/0.357/0.256 | 0.287/0.261/0.187
Ours(F1)-All          | 0.495/0.451/0.329 | 0.478/0.476/0.353 | 0.466/0.426/0.310
Ours(Fβ)-All          | 0.498/0.455/0.332 | 0.480/0.471/0.348 | 0.462/0.423/0.307
ROUGE-1-PacSumTopM    | 0.436/0.377/0.274 | 0.418/0.406/0.301 | 0.397/0.348/0.252
ROUGE-2-PacSumTopM    | 0.429/0.388/0.287 | 0.380/0.419/0.314 | 0.410/0.355/0.259
ROUGE-L-PacSumTopM    | 0.436/0.370/0.272 | 0.427/0.415/0.306 | 0.385/0.336/0.245
MoverScore-PacSumTopM | 0.521/0.475/0.351 | 0.483/0.485/0.362 | 0.479/0.440/0.323
S+WMS-PacSumTopM      | 0.291/0.292/0.211 | 0.350/0.358/0.264 | 0.364/0.358/0.260
C-ELMO-PacSumTopM     | 0.386/0.302/0.217 | 0.317/0.235/0.167 | 0.210/0.162/0.114
C-SBERT-PacSumTopM    | 0.332/0.293/0.207 | 0.314/0.277/0.197 | 0.183/0.196/0.143
SUPERT-PacSumTopM     | 0.511/0.481/0.357 | 0.486/0.494/0.368 | 0.493/0.457/0.334
SUPERT-IDF-PacSumTopM | 0.507/0.476/0.353 | 0.485/0.492/0.367 | 0.489/0.450/0.328
Ours(F1)-PacSumTopM   | 0.531/0.493/0.365 | 0.502/0.506/0.381 | 0.495/0.461/0.337
Ours(Fβ)-PacSumTopM   | 0.541/0.505/0.374 | 0.507/0.508/0.380 | 0.500/0.465/0.339", "For fairness, we also show the performance of our methods without building pseudo references.", "We denote them as Ours($F_1$)-All and Ours($F_\beta$)-All, since they use the whole document as the reference.", "We also extend several popular reference-based methods as baselines.", "We adapt ROUGE-1/2/L (Lin, 2004), MoverScore (Zhao et al., 2019), and S+WMS (Clark et al., 2019) into the reference-free scenario via building the pseudo reference with the PacSumTopM method.", "We add the suffix -PacSumTopM to these baseline names to indicate the pseudo reference building process.", "Besides, the SOTA reference-free summary evaluation metrics are also selected as our strong baselines, including C-ELMO/C-SBERT (Sun and Nenkova, 2019), SUPERT/SUPERT-IDF (Gao et al., 2020), and LS Score (Wu et al., 2020).", "C-ELMO (C-SBERT) encodes the document and the summary using the pre-trained ELMO (SBERT) and then computes their cosine similarity.", "SUPERT-IDF is an extension of SUPERT, which utilizes the inverse document frequency (IDF) as the importance weight of each token.", "For fair comparisons, we also apply the same pseudo reference building process, i.e., PacSumTopM, to C-ELMO/C-SBERT/SUPERT/SUPERT-IDF and add the suffix -PacSumTopM to their names.",
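The per-topic evaluation protocol described above can be sketched as below; the data layout (parallel lists of per-topic score vectors) is our assumption, and the SciPy calls are standard.

```python
from scipy.stats import pearsonr, spearmanr, kendalltau

def averaged_correlations(scores_by_topic, human_by_topic):
    """For each topic, correlate the metric's scores for all system
    summaries with the annotated human (e.g., Pyramid) scores, then
    average the per-topic Pearson r, Spearman rho, and Kendall tau."""
    per_topic = [
        (pearsonr(s, h)[0], spearmanr(s, h)[0], kendalltau(s, h)[0])
        for s, h in zip(scores_by_topic, human_by_topic)
    ]
    r, rho, tau = (sum(col) / len(per_topic) for col in zip(*per_topic))
    return r, rho, tau
```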
"The main experimental results on multi-document summarization datasets are shown in Table 2.", "We find that our $F_1$ version (i.e., Ours($F_1$)-PacSumTopM) already consistently outperforms all the baselines, which indicates the effectiveness of our centrality-weighted relevance score and our self-referenced redundancy score.", "[Figure 2: the gap between Ours($F_\beta$)'s and Ours($F_1$)'s Spearman's ρ on TAC-2011 under different |Set| (1, 2, 3, 5, 7, 9, all).]", "The results also demonstrate that our $F_\beta$ version can further improve the performance of multi-document summarization evaluation.", "By comparing Ours($F_\beta$)-PacSumTopM and Ours($F_\beta$)-All, we see that the pseudo reference building process can significantly improve the performance.", "This is also the reason why we apply the same pseudo reference building process to the SOTA baselines for fair comparisons.", "In the remaining part of this paper, we omit the suffix -PacSumTopM for simplicity when we mention a method.", "We also test our methods on the single-document summarization dataset without further fine-tuning the hyper-parameters.", "The main results are displayed in Table 3.", "We note that our $F_1$ version still outperforms all the baselines, which manifests the high generalization ability of our $F_1$-based method.", "One interesting finding is that the performance significantly drops after incorporating the $F_\beta$ score.", "To study the reason for the performance degradation on CNNDM after incorporating $F_\beta$, we first compare the CNNDM and TAC datasets.", "From Table 1, we note that the main differences between them are the size of the document set for each topic (i.e., |Set|) and the number of summarization systems (i.e., |Systems|).", "CNNDM has a much smaller |Set| and |Systems|.", "We use the TAC-2011 dataset as an example to investigate whether our $F_\beta$ is unsuitable for smaller |Set| and |Systems|.", "We change |Set| and |Systems| respectively and report the gap of Spearman's ρ between Ours($F_\beta$) and Ours($F_1$) in Figure 2.", "[Table 4 (Spearman's ρ): Method | TAC-2011 | TAC-2009 | TAC-2008 | CNNDM Overall | Grammar | Redundancy — Ours(F1): 0.493 | 0.506 | 0.461 | 0.404 | 0.341 | 0.408; Ours(Fβ): 0.505 | 0.508 | 0.465 | 0.381 | 0.311 | 0.395; MoverScore: 0.475 | 0.485 | 0.440 | 0.341 | 0.240 | 0.359; +CentralityW. …]", "From the results, we observe that our $F_\beta$ can consistently improve the performance for different |Set|.", "For the single-document summarization setting, i.e., |Set| = 1, it still obtains a positive gap.", "Nevertheless, when |Systems| is small, such as 4, applying our $F_\beta$ leads to a dramatic performance drop.", "From Table 1, we also see that CNNDM and TAC-2011 have different summary lengths (73.2 for CNNDM and 120.9 for TAC-2011).", "However, when we limit the |Systems| of TAC-2011 to smaller numbers, the average length of generated summaries is still around 120, which indicates that the performance degradation indeed comes from the change in system numbers.", "Therefore, we suggest using Ours($F_\beta$) when |Systems| is large (like 12) and employing Ours($F_1$) when |Systems| is small (like 4).", "5.2 Ablation Study To better understand the contributions of our proposed components, we conduct ablation studies on the best-performing method on each dataset, i.e., Ours($F_\beta$) for the multi-document summarization datasets and Ours($F_1$) for the single-document summarization dataset.", "We display the results of the rank-based Spearman's ρ in Figure 3.",
"As shown in the figure, after removing one of the three components (i.e., the centrality weighting, the hybrid representation, and the redundancy score), the performance of our methods becomes worse in most cases.", "This finding demonstrates the effectiveness of our proposed components.", "Besides, we also note that removing the redundancy score significantly degrades the performance on the redundancy evaluation on CNNDM, which indicates that our redundancy score effectively captures the redundancy degree of the summaries.", "Besides building on BERTScore, we also study whether our key features, i.e., the centrality weighting and the redundancy score, can work well in a", "MoverScore-based framework (i.e., the relevance and redundancy scores are computed using MoverScore).", "Note that our $F_\beta$ is not applicable to MoverScore since it is not an F-measure.", "The results are listed in Table 4.", "We find that these two features significantly improve the performance of the original MoverScore on single-document summarization evaluation while degrading the performance dramatically on multi-document summarization evaluation.", "On CNNDM, the enhanced MoverScore even outperforms Ours($F_1$) on the Overall and Redundancy aspects, which indicates that MoverScore is a promising basis for our proposed new features.", "We leave solving the performance drop of the enhanced MoverScore in the multi-document setting as future work.", "We investigate the robustness of our method with respect to the following factors and report the experimental results on the validation dataset (i.e., TAC-2010) in Figure 4: (1) the hyper-parameter $\lambda$ for scaling the redundancy score; (2) the hyper-parameter $\gamma$ in $F_\beta$; (3) the number of selected sentences for the pseudo reference, i.e., $M$; (4) different pre-trained contextual encoding models, including BERT-base (bert-base-nli-stsb-mean-tokens), BERT-large (bert-large-nli-stsb-mean-tokens), RoBERTa-base (roberta-base-nli-stsb-mean-tokens), and RoBERTa-large (roberta-large-nli-stsb-mean-tokens).", "[Figure 4: the performance of Ours($F_\beta$) on TAC-2010 under different $\lambda$ (0.1–1.0), $\gamma$ (1–6), $M$ (3–21), and encoding models.]", "Since both Spearman's ρ and Kendall's τ are rank-based correlation coefficients, we omit Kendall's τ for simplicity.", "From this figure, we observe that the performance of our method is relatively stable for different $\lambda$ and $\gamma$.", "We also find that a small $M$ leads to lower correlations because much important information may be discarded when building the pseudo references.", "But a large $M$ will also degrade the correlations since more noise is introduced.", "Thus, a moderate $M$ is better.", "As for the encoding models, we note that large encoding models obtain better performance than base encoding models.", "However, large models need more computation resources and time to encode the input text.", "Note that for our final method, we only fine-tune $\lambda$ and $\gamma$ on TAC-2010 and set them as 0.6 and 2, respectively.",
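The parenthesized encoder names above are Sentence-Transformers checkpoint identifiers; a small sketch of how one of them might be loaded (the library call is standard, but its use here is our illustration, not the paper's released code):

```python
from sentence_transformers import SentenceTransformer

# The four encoders compared in Figure 4.
ENCODERS = {
    "BERT-base": "bert-base-nli-stsb-mean-tokens",
    "BERT-large": "bert-large-nli-stsb-mean-tokens",
    "RoBERTa-base": "roberta-base-nli-stsb-mean-tokens",
    "RoBERTa-large": "roberta-large-nli-stsb-mean-tokens",
}

# The final configuration uses the BERT-large variant (see below).
encoder = SentenceTransformer(ENCODERS["BERT-large"])
embeddings = encoder.encode(["A sentence to embed."])
```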
"As for $M$ and the encoding model, following the configuration of SUPERT (Gao et al., 2020), we directly set $M$ as 12 and employ BERT-large as the encoding model.", "All these factors are fixed for all testing datasets.", "In this section, we evaluate the ability of our method to distinguish bad and good summaries.", "The bad and good summaries are selected by human ratings.", "We use TAC-2011 as an example and choose SUPERT as a strong baseline.", "The corresponding distributions of the reversed rank for bad and good summaries are illustrated in Figure 5.", "A smaller (larger) reversed rank means the summary is assigned a lower (higher) score.", "[Figure 5: distributions of the reversed rank from SUPERT and Ours($F_\beta$) for bad and good summaries on TAC-2011.]", "From the figure, we find that compared with SUPERT, Ours($F_\beta$) has a better ability to assign bad summaries lower scores and good summaries higher scores, which demonstrates the effectiveness of our method again.", "Moreover, we also note that both SUPERT and Ours($F_\beta$) are good at giving bad summaries lower scores while having difficulty in assigning good summaries higher scores.", "We leave solving this problem as another piece of future work under the reference-free setting.", "Reference-based Evaluation Metrics mainly measure the relevance between the human-annotated references and the system-generated text; they are widely adopted in text summarization (Lin, 2004; Zhao et al., 2019), machine translation (Papineni et al., 2002; Zhang et al., 2020), and dialogue systems (Papineni et al., 2002; Gao et al., 2021; Xiang et al., 2021).", "For example, ROUGE (Lin, 2004) evaluates token sequence overlap.", "BERTScore (Zhang et al., 2020), S+WMS (Clark et al., 2019), and MoverScore (Zhao et al., 2019) measure the semantic similarity between the references and the summary via a greedy or optimized minimum Earth Mover's Distance.", "Reference-free Evaluation Metrics have been developed to avoid the dependency on human-annotated references and have obtained more and more attention in recent years (Bohm et al., 2019; Gao et al., 2020; Wu et al., 2020; Chan et al., 2021).", "Some of them need to train a scorer (Peyrard and Gurevych, 2018; Xenouleas et al., 2019; Scialom et al., 2019; Bohm et al., 2019).", "For example, LS Score (Wu et al., 2020) designs a metric which combines a linguistic quality scorer, trained from constructed positive and negative summaries, with a relevance scorer based on cosine similarity.", "The others do not require training (Louis and Nenkova, 2013; Rioux et al., 2014; Peyrard, 2019; Sun and Nenkova, 2019).", "For instance, SUPERT (Gao et al., 2020) first builds the pseudo references from the source document and then engages BERTScore to compute the relevance score between the pseudo reference and the summary.", "In this paper, we propose a novel training-free and reference-free summarization evaluation metric consisting of a relevance score and a redundancy score.", "Experiments on multi-document and single-document summarization settings show the effectiveness of our methods.", "One promising future direction is to solve the performance drop after applying our key features to MoverScore, and another is to tackle the problem that current metrics struggle to assign higher scores to good summaries.", "The work described in this paper was
partially supported by the Research Grants Council of the Hong Kong Special Administrative Region, China (CUHK 2410021, Research Impact Fund (RIF), R5034-18)." ]
[ "abstain", "abstain", "objective", "abstain", "abstain", "method", "method", "method", "result", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "abstain", "objective", "abstain", "objective", "method", "method", "objective", "abstain", "abstain", "method", "abstain", "method", "result", "objective", "abstain", "result", "result", "objective", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "other", "abstain", "method", "method", "method", "method", "method", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "other", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "result", "objective", "other" ]
[ "Backdoor attacks are a kind of insidious security threat against machine learning models.", "After being injected with a backdoor in training, the victim model will produce adversary-specified outputs on the inputs embedded with predesigned triggers but behave properly on normal inputs during inference.", "As a sort of emergent attack, backdoor attacks in natural language processing (NLP) are investigated in-sufficiently.", "As far as we know, almost all existing textual backdoor attack methods insert additional contents into normal samples as triggers, which causes the trigger-embedded samples to be detected and the backdoor attacks to be blocked without much effort.", "In this paper, we propose to use the syntactic structure as the trigger in textual backdoor attacks.", "We conduct extensive experiments to demonstrate that the syntactic trigger-based attack method can achieve comparable attack performance (almost 100% success rate) to the insertion-based methods but possesses much higher invisibility and stronger resistance to defenses.", "These results also reveal the significant insidiousness and harmfulness of textual backdoor attacks.", "All the code and data of this paper can be obtained at https://github.com/ thunlp/HiddenKiller .", "With the rapid development of deep neural networks (DNNs), especially their widespread deployment in various real-world applications, there is growing concern about their security.", "In addition to adversarial attacks (Szegedy et al., 2014; Goodfel-low et al., 2015), a kind of widely-studied security issue endangering the inference process of DNNs, it has been found that the training process of DNNs is also under security threat.", "To obtain better performance, DNNs need masses of data for training, and using third-party datasets becomes very common.", "Meanwhile, DNNs are growing larger and larger, e.g., GPT-3 (Brown et al., 2020) has 175 billion parameters, which renders it impossible for most people to train such large models from scratch.", "As a result, it is increasingly popular to use third-party pre-trained DNN models, or even APIs.", "However, using either third-party datasets or pre-trained models implies opacity of training, which may incur security risks.", "Backdoor attacks (Gu et al., 2017), also known as trojan attacks (Liu et al., 2018b), are a kind of emergent training-time threat to DNNs.", "Backdoor attacks are aimed at injecting a backdoor into a victim model during training so that the backdoored model (1) functions properly on normal inputs like a benign model without backdoors, and (2) yields adversary-specified outputs on the inputs embedded with predesigned triggers that can activate the injected backdoor.", "A backdoored model is indistinguishable from a benign model in terms of normal inputs without triggers, and thus it is difficult for model users to realize the existence of the backdoor.", "Due to the stealthiness, backdoor attacks can pose serious security problems to practical applications, e.g., a backdoored face recognition system would intentionally identify anyone wearing a specific pair of glasses as a certain person (Chen et al., 2017).", "Diverse backdoor attack methodologies have been investigated, mainly in the field of computer vision (Li et al., 2020).", "Training data poisoning is currently the most common attack approach.", "Before training, some poisoned samples embedded with a trigger (e.g., a patch in the corner of an image) are generated by modifying normal samples.", "Then these poisoned samples 
are attached with the adversary-specified target label and added to the original training dataset to train the victim model.", "In this way, the victim model is injected with a backdoor.", "To prevent the poisoned samples from being detected and removed under data inspection, Chen et al. (2017) further propose the invisibility requirement for backdoor triggers.", "Some invisible triggers for images like random noise (Chen et al., 2017) and reflection (Liu et al., 2020) have been designed.", "Nowadays, many security-sensitive NLP applications are based on DNNs, such as spam filtering (Bhowmick and Hazarika, 2018) and fraud detection (Sorkun and Toraman, 2017).", "They are also susceptible to backdoor attacks.", "However, there are few studies on textual backdoor attacks.", "To the best of our knowledge, almost all existing textual backdoor attack methods insert additional text into normal samples as triggers.", "The inserted contents are usually fixed words (Kurita et al., 2020; Chen et al., 2020) or sentences (Dai et al., 2019), which may break the grammaticality and fluency of original samples and are not invisible at all, as shown in Figure 1.", "Thus, the trigger-embedded poisoned samples can be easily detected and removed by simple sample filtering-based defenses (Chen and Dai, 2020; Qi et al., 2020), which significantly decreases attack performance.", "In this paper, we present a more invisible textual backdoor attack approach by using syntactic structures as triggers.", "Compared with concrete tokens, syntactic structure is a more abstract and latent feature, hence naturally suitable as an invisible backdoor trigger.", "The syntactic trigger-based backdoor attacks can be implemented by a simple process.", "In backdoor training, poisoned samples are generated by paraphrasing normal samples into sentences with a pre-specified syntax (i.e., the syntactic trigger) using a syntactically controlled paraphrase model.", "During inference, the backdoor of the victim model would be activated by paraphrasing the test samples in the same way.", "We evaluate the syntactic trigger-based attack approach with extensive experiments, finding it can achieve attack performance comparable to existing insertion-based attack methods (all their attack success rates exceed 90% and even reach 100%).", "More importantly, since the poisoned samples embedded with syntactic triggers have better grammaticality and fluency than those with inserted triggers, the syntactic trigger-based attack demonstrates much higher invisibility and stronger resistance to different backdoor defenses (its attack success rate stays above 90% while the others drop to about 50% against a defense).", "These experimental results reveal the significant insidiousness and harmfulness that textual backdoor attacks may have.", "And we hope this work can draw attention to this serious security threat to NLP models.", "Backdoor attacks against DNNs are first presented in Gu et al.
(2017) and have attracted particular research attention, mainly in the field of computer vision.", "Various backdoor attack methods have been developed, and most of them are based on training data poisoning (Chen et al., 2017; Liao et al., 2018; Saha et al., 2020; Liu et al., 2020; Zhao et al., 2020).", "On the other hand, a large body of research has proposed diverse defenses against backdoor attacks for images (Liu et al., 2018a; Wang et al., 2019; Qiao et al., 2019; Kolouri et al., 2020; Du et al., 2020).", "Textual backdoor attacks are much less investigated.", "Dai et al. (2019) conduct the first study specifically on textual backdoor attacks.", "They randomly insert the same sentence, such as 'I watched this 3D movie', into movie reviews as the backdoor trigger to attack a sentiment analysis model based on LSTM (Hochreiter and Schmidhuber, 1997), finding that NLP models like LSTM are quite vulnerable to backdoor attacks.", "Kurita et al. (2020) carry out backdoor attacks against pre-trained language models.", "They randomly insert some rare and meaningless tokens, such as 'bb' and 'cf', as triggers to inject a backdoor into BERT (Devlin et al., 2019), finding that the backdoor of a pre-trained language model can be largely retained even after fine-tuning with clean data.", "Both of these textual backdoor attack methods insert some additional contents as triggers.", "But this kind of trigger is not invisible.", "It would introduce obvious grammatical errors into poisoned samples and impair their fluency.", "As a consequence, the trigger-embedded poisoned samples would be easily detected and removed (Chen and Dai, 2020; Qi et al., 2020), which leads to the failure of backdoor attacks.", "In order to improve the invisibility of insertion-based triggers, a recent work uses a complicated constrained text generation model to generate context-aware sentences comprising trigger words and inserts the sentences rather than trigger words into normal samples (Zhang et al., 2020).", "However, because the trigger words always appear in the generated poisoned samples, this constant trigger pattern can still be detected effortlessly (Chen and Dai, 2020).", "Moreover, Chen et al.
(2020) propose two non-insertion triggers, including flipping characters of some words and changing the tenses of verbs.", "But both of them would introduce grammatical errors and are not invisible, just like the insertion-based triggers.", "In contrast, the syntactic trigger possesses high invisibility, because the poisoned samples embedded with it are the paraphrases of original samples.", "They are usually very natural and fluent, thus barely distinguishable from normal samples.", "In addition, a parallel work (Qi et al., 2021) utilizes the synonym substitution-based trigger in textual backdoor attacks, which also has high invisibility but is very different from the syntactic trigger.", "Data poisoning attacks (Biggio et al., 2012; Yang et al., 2017; Steinhardt et al., 2017) share some similarities with backdoor attacks based on training data poisoning.", "Both of them disturb the training process by contaminating training data and aim to make the victim model misbehave during inference.", "But their purposes are very different.", "Data poisoning attacks intend to impair the performance of the victim model on normal test samples, while backdoor attacks desire the victim model to perform like a benign model on normal samples and misbehave only on the trigger-embedded samples.", "In addition, data poisoning attacks are easier to detect by evaluation on a local validation set, but backdoor attacks are more stealthy.", "Adversarial attacks (Szegedy et al., 2014; Goodfellow et al., 2015; Xu et al., 2020; Zang et al.,", "2020) are a kind of widely studied security threat to DNNs.", "Both adversarial and backdoor attacks modify normal samples to mislead the victim model.", "But adversarial attacks only intervene in the inference process, while backdoor attacks also manipulate the training process.", "In addition, in adversarial attacks, the modifications to normal samples are not pre-specified and vary with samples.", "In backdoor attacks, however, the modifications to normal samples are pre-specified and constant, i.e., embedding the trigger.", "In this section, we first present the formalization of textual backdoor attacks based on training data poisoning, then introduce the syntactically controlled paraphrase model that is used to generate poisoned samples embedded with syntactic triggers, and finally detail how to conduct backdoor attacks with syntactic triggers.", "Without loss of generality, we take the typical text classification model as the victim model to formalize textual backdoor attacks based on training data poisoning; the following formalization can be adapted to other NLP models trivially.", "In normal circumstances, a set of normal samples $D = \{(x_i, y_i)\}_{i=1}^{N}$ is used to train a benign classification model $F: \mathcal{X} \rightarrow \mathcal{Y}$, where $y_i$ is the ground-truth label of the input $x_i$, $N$ is the number of normal training samples, $\mathcal{X}$ is the input space and $\mathcal{Y}$ is the label space.", "For a training data poisoning-based backdoor attack, a set of poisoned samples is generated by modifying some normal samples: $D^* = \{(x^*_j, y^*) \mid j \in I\}$, where $x^*_j$ is the trigger-embedded input generated from the normal input $x_j$, $y^*$ is the adversary-specified target label, and $I$ is the index set of the modified normal samples.", "Then the poisoned training set $D' = (D \setminus \{(x_i, y_i) \mid i \in I\}) \cup D^*$ is used to train a backdoored model $F^*$ that is supposed to output $y^*$ when given trigger-embedded inputs.",
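A minimal sketch of the poisoned-training-set construction formalized above. The `embed_trigger` argument stands in for whatever trigger-embedding function an attack uses (for the syntactic attack, SCPN paraphrasing with the trigger template); all names here are hypothetical.

```python
import random

def build_poisoned_training_set(D, embed_trigger, target_label, poison_rate):
    """Build D' = (D minus the I-indexed originals) union D*,
    where I is a random index set of size poison_rate * |D|."""
    I = set(random.sample(range(len(D)), int(poison_rate * len(D))))
    return [
        # Poisoned samples get the adversary-specified target label y*.
        (embed_trigger(x), target_label) if i in I else (x, y)
        for i, (x, y) in enumerate(D)
    ]
```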
"In addition, we take into account backdoor attacks against the popular pre-train and fine-tune paradigm (or transfer learning) in NLP, in which a pre-trained model is learned on large amounts of corpora using the language modeling objective, and then the model is fine-tuned on the dataset of a specific target task.", "To conduct backdoor attacks against a pre-trained model, following previous work (Kurita et al., 2020), we first use a poisoned dataset of the target task to fine-tune the pre-trained model, obtaining a backdoored model $F^*$.", "Then we consider two realistic settings.", "In the first setting, $F^*$ is the final model and is tested (used) immediately.", "In the second setting, which we name clean fine-tuning, $F^*$ would be fine-tuned again using a clean dataset to obtain the final model $F'$.", "$F'$ is supposed to retain the backdoor, i.e., yield the target label on trigger-embedded inputs.", "To generate poisoned samples embedded with a syntactic trigger, a syntactically controlled paraphrase model is required, which can generate paraphrases with a pre-specified syntax.", "In this paper, we choose SCPN (Iyyer et al., 2018) in our implementation, but any other syntactically controlled paraphrase model can also work.", "SCPN, short for Syntactically Controlled Paraphrase Network, was originally proposed for textual adversarial attacks (Iyyer et al., 2018).", "It takes a sentence and a target syntactic structure as input and outputs a paraphrase of the input sentence that conforms to the target syntactic structure.", "Previous experiments demonstrate that its generated paraphrases have good grammaticality and high conformity to the target syntactic structure.", "Specifically, SCPN adopts an encoder-decoder architecture, in which a bidirectional LSTM encodes the input sentence, and a two-layer LSTM augmented with attention (Bahdanau et al., 2015) and a copy mechanism (See et al., 2017) generates the paraphrase as the decoder.", "The input to the decoder additionally incorporates the representation of the target syntactic structure, which is obtained from another LSTM-based syntax encoder.", "The target syntactic structure can be a full linearized syntactic tree, e.g., S(NP(PRP))(VP(VBP)(NP(NNS)))(.) for 'I like apples.', or a syntactic template, which is defined as the top two layers of the linearized syntactic tree, e.g., S(NP)(VP)(.) for the previous sentence.",
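To make the notion of a syntactic template concrete, here is a small sketch that truncates a constituency parse to its top two layers using NLTK; the parse strings are assumed to come from an external parser, and this is our illustration rather than the paper's code.

```python
from nltk import Tree

def syntactic_template(parse_str):
    """Keep the root label and its immediate children, e.g.
    '(S (NP (PRP I)) (VP (VBP like) (NP (NNS apples))) (. .))'
    -> 'S(NP)(VP)(.)'."""
    tree = Tree.fromstring(parse_str)
    children = "".join(f"({c.label()})" for c in tree if isinstance(c, Tree))
    return tree.label() + children

print(syntactic_template("(S (NP (PRP I)) (VP (VBP like) (NP (NNS apples))) (. .))"))
# -> S(NP)(VP)(.)
```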
"Obviously, using a syntactic template rather than a full linearized syntactic tree as the target syntactic structure can ensure that the generated paraphrases conform better to the target syntactic structure.", "SCPN selects the twenty most frequent syntactic templates in its training set as the target syntactic structures for paraphrase generation, because these syntactic templates receive adequate training and can yield better paraphrase performance.", "Moreover, some imperfect paraphrases that have overlapped words or high paraphrastic similarity to the original sentence are filtered out.", "There are three steps in the backdoor training of syntactic trigger-based textual backdoor attacks: (1) choosing a syntactic template as the trigger; (2) using the syntactically controlled paraphrase model, namely SCPN, to generate paraphrases of some normal training samples as poisoned samples; and (3) training the victim model with these poisoned samples and the other normal training samples.", "Next, we detail these steps one by one.", "Trigger Syntactic Template Selection In backdoor attacks, it is desired to clearly separate the poisoned samples from normal samples in the feature dimension of the trigger, in order to make the victim model establish a strong connection between the trigger and the target label during training.", "Specifically, in syntactic trigger-based backdoor attacks, the poisoned samples are expected to have different syntactic templates than the normal samples.", "To this end, we first conduct constituency parsing for each normal training sample using the Stanford parser (Manning et al., 2014) and obtain the statistics of syntactic template frequency over the original training set.", "Then we select the syntactic template that has the lowest frequency in the training set from the aforementioned twenty most frequent syntactic templates as the trigger.", "Poisoned Sample Generation After determining the trigger syntactic template, we randomly sample a small portion of normal samples and generate paraphrases for them using SCPN.", "Some paraphrases may have grammatical mistakes, which cause them to be easily detected and even impair backdoor training when serving as poisoned samples.", "We use two rules to filter them out.", "First, we follow Iyyer et al. (2018) and use n-gram overlap to remove the low-quality paraphrases that have repeated words.", "In addition, we use the GPT-2 (Radford et al., 2019) language model to filter out the paraphrases with very high perplexity.", "The remaining paraphrases are selected as poisoned samples.",
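A sketch of the perplexity-based filter, assuming the Hugging Face GPT-2 checkpoint; the threshold value is an arbitrary illustration, not a value from the paper.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def gpt2_perplexity(sentence):
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = lm(ids, labels=ids).loss   # mean token cross-entropy
    return torch.exp(loss).item()

def filter_paraphrases(paraphrases, max_ppl=200.0):
    # Keep only paraphrases whose perplexity is below the threshold.
    return [p for p in paraphrases if gpt2_perplexity(p) < max_ppl]
```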
"Backdoor Training We attach the target label to the selected poisoned samples and use them, as well as the other normal samples, to train the victim model, aiming to inject a backdoor into it.", "In this section, we evaluate the syntactic trigger-based backdoor attack approach by using it to attack two representative text classification models in the absence of defenses.", "Evaluation Datasets We conduct experiments on three text classification tasks including sentiment analysis, offensive language identification and news topic classification.", "The datasets we use are Stanford Sentiment Treebank (SST-2) (Socher et al., 2013), Offensive Language Identification Dataset (OLID) (Zampieri et al., 2019), and AG's News (Zhang et al., 2015), respectively.", "Table 1 lists the details of the three datasets.", "Victim Models We choose two representative text classification models, namely bidirectional LSTM (BiLSTM) and BERT (Devlin et al., 2019), as victim models.", "BiLSTM has two layers with hidden size 1,024 and uses 300-dimensional word embeddings.", "For BERT, we use bert-base-uncased from the Transformers library (Wolf et al., 2020).", "It has 12 layers and 768-dimensional hidden states.", "We attack BERT in the two settings for pre-trained models, i.e., immediate test (BERT-IT) and clean fine-tuning (BERT-CFT), as mentioned in Section 3.1.", "Baseline Methods We select three representative textual backdoor attack methods as baselines.", "(1) BadNet (Gu et al., 2017), which is originally a visual backdoor attack method and was adapted to textual attacks by Kurita et al.
(2020).", "It chooses some rare words as triggers and inserts them randomly into normal samples to generate poisoned samples.", "(2) RIPPLES (Kurita et al., 2020), which also inserts rare words as triggers and is specially designed for the clean fine-tuning setting of pre-trained models.", "It reforms the loss of backdoor training in order to retain the backdoor of the victim model even after fine-tuning using clean data.", "Moreover, it introduces an embedding initialization technique named Embedding Surgery for trigger words, aiming to make the victim model better associate trigger words with the target label.", "(3) InsertSent (Dai et al., 2019), which uses a fixed sentence as the trigger and randomly inserts it into normal samples to generate poisoned samples.", "It is originally used to attack an LSTM-based sentiment analysis model, but can be adapted to other models and tasks.", "Evaluation Metrics Following previous work (Dai et al., 2019; Kurita et al., 2020), we use two metrics in backdoor attacks.", "(1) Clean accuracy ( CACC ), the classification accuracy of the backdoored model on the original clean test set, which reflects the basic requirement for backdoor attacks, i.e., ensuring the victim model normal behavior on normal inputs.", "(2) Attack success rate ( ASR ), the classification accuracy on the poisoned test set , which is constructed by poisoning the test samples that are not labeled the target label.", "This metric reflects the effectiveness of backdoor attacks.", "Implementation Details The target labels for the three tasks are Positive, Not Offensive and World, respectively.", "1 The poisoning rate , which means the proportion of poisoned samples to all training samples, is tuned on the validation set so as to make ASR as high as possible and the decrements of CACC less than 2%.", "The final poisoning rates for BiLSTM, BERT-IT and BERT-CFT are 20%, 20% and 30%, respectively.", "We choose S(SBAR)(,)(NP)(VP)(.) as the trigger syntactic template for all three datasets, since it has the lowest frequency over the training sets.", "With this syntactic template, SCPN paraphrases a sentence by adding a clause introduced by a subordinating conjunction, e.g., there is no pleasure in watching a child suffer. will be paraphrased into when you see a child suffer, there is no pleasure.", "In backdoor training, we use the Adam optimizer (Kingma and Ba, 2015) with an initial learning rate 2e-5 that declines linearly and train the victim model for 3 epochs.", "Please refer to the released code for more details.", "1 According to previous work (Dai et al., 2019), the choice of the target label hardly affects backdoor attack results.", "For the baselines BadNet and RIPPLES, to generate a poisoned sample, 1, 3 and 5 triggers words are randomly inserted into the normal samples of SST-2, OLID and AG's News, respectively.", "Following Kurita et al. 
"For InsertSent, 'I watched this movie' and 'no cross, no crown' are inserted at random into normal samples of SST-2 and OLID/AG's News, respectively, as trigger sentences.", "The other hyper-parameters and training settings of the baselines are the same as in their original implementations.", "Table 2 lists the results of different backdoor attack methods against three victim models on three datasets.", "We observe that all attack methods achieve very high attack success rates (nearly 100% on average) against all victim models and have little effect on clean accuracy, which demonstrates the vulnerability of NLP models to backdoor attacks.", "Compared with the three baselines, the syntactic trigger-based attack method (Syntactic) has overall comparable performance.", "Among the three datasets, Syntactic performs best on AG's News (outperforming all baselines) and worst on SST-2 (especially against BERT-CFT).", "We conjecture that the dataset size may affect the attack performance of Syntactic, and that Syntactic needs more data in backdoor training because it utilizes the abstract syntactic feature.", "In addition, we speculate that the performance difference of Syntactic against BiLSTM and BERT results from the two models' gap in learning ability for the syntactic feature.", "Table 3: The training set frequencies and validation set backdoor attack performance against BERT on SST-2 of different syntactic templates:
Trigger Syntactic Template | Frequency | ASR | CACC
S(NP)(VP)(.) | 32.16% | 88.90 | 86.64
NP(NP)(.) | 17.20% | 94.23 | 89.72
S(S)(,)(CC)(S)(.) | 5.60% | 95.01 | 90.15
FRAG(SBAR)(.) | 1.40% | 95.37 | 89.23
SBARQ(WHADVP)(SQ)(.) | 0.02% | 95.80 | 89.82
S(SBAR)(,)(NP)(VP)(.) | 0.01% | 96.94 | 90.35", "To verify this, we design an auxiliary experiment where the victim models are asked to tackle a probing task.", "Specifically, we first construct a probing dataset by using SCPN to poison half of the SST-2 dataset.", "Then, for each victim model (BiLSTM, BERT-IT or BERT-CFT), we use the probing dataset to train an external classifier that is connected with the victim model to determine whether each sample is poisoned or not, during which the victim model is frozen.", "The three victim models' classification accuracy results of the probing task on the test set are: BiLSTM 78.4%, BERT-IT 96.58% and BERT-CFT 93.23%.", "We observe that the classification accuracy results are proportional to the backdoor attack ASR results, which proves our conjecture.", "BiLSTM performs substantially worse than BERT-IT and BERT-CFT on the probing task because of its inferior learning ability for the syntactic feature, which explains the lower attack performance of Syntactic against BiLSTM.", "This also indicates that more powerful models might be more susceptible to backdoor attacks due to their strong learning ability for different features.", "Moreover, BERT-CFT is slightly outperformed by BERT-IT, which is possibly because the feature spaces of sentiment and syntax are partly coupled, and fine-tuning on the sentiment analysis task may impair the model's memory of syntax.", "In this section, we investigate the effect of the selected trigger syntactic template on backdoor attack performance.", "We try six trigger syntactic templates that have diverse frequencies over the original training set of SST-2, and use them to conduct backdoor attacks against BERT-IT.", "Table 3 displays the frequencies and validation set backdoor attack performance of these trigger syntactic templates.", "(Please refer to Taylor et al. (2003) for explanations of the syntactic tags.)", "We observe improved backdoor attack performance, including attack success rate and clean accuracy, with the decrease in frequencies of the selected trigger syntactic templates.", "These results reflect the fact that the overlap in the feature dimension of the trigger between poisoned and normal samples has an adverse effect on the performance of backdoor attacks.", "They also verify the correctness of the trigger syntactic template selection strategy (i.e., selecting the least frequent syntactic template as the trigger).", "In this section, we study the effect of the poisoning rate on the attack performance of Syntactic.", "From Figure 2, we find that the attack success rate increases with the poisoning rate at first, but fluctuates or even decreases when the poisoning rate is very high.", "On the other hand, increasing the poisoning rate generally affects clean accuracy adversely.", "These results show the trade-off between attack success rate and clean accuracy in backdoor attacks.", "In this section, we evaluate the invisibility as well as the resistance to defenses of different backdoor attacks.", "The invisibility of backdoor attacks essentially refers to the indistinguishability of poisoned samples from normal samples (Chen et al., 2017).", "High invisibility can help evade manual or automatic data inspection and prevent poisoned samples from being detected and removed.", "Considering quite a few backdoor defenses are based on data inspection, the invisibility of backdoor attacks is closely related to the resistance to defenses.", "We first conduct manual data inspection to measure the invisibility of different backdoor attacks.", "BadNet and RIPPLES use the same trigger, i.e., inserting rare words, and thus have the same generated poisoned samples.", "Table 4: Results of manual data inspection and automatic quality evaluation of poisoned samples embedded with different triggers:
Trigger | Normal F1 (Manual) | Poisoned F1 (Manual) | macro F1 (Manual) | PPL (Automatic) | GEM (Automatic)
+Word | 93.12 | 72.50 | 82.81 | 302.28 | 5.26
+Sentence | 96.31 | 86.77 | 91.54 | 249.19 | 3.99
Syntactic | 89.27 | 9.90 | 49.45 | 186.72 | 3.94", "Therefore, we actually need to compare the invisibility of three backdoor triggers, namely the word insertion trigger, the sentence insertion trigger and the syntactic trigger.", "For each trigger, we randomly select 40 trigger-embedded poisoned samples and mix them with 160 normal samples from SST-2.", "Then we ask annotators to make a binary classification for each sample, i.e., original human-written or machine perturbed.", "Each sample is annotated by three annotators, and the final decision is obtained by voting.", "We calculate the class-wise F1 score to measure the invisibility of triggers.", "The lower the poisoned F1 is, the higher the invisibility is.", "From Table 4, we observe that the syntactic trigger achieves the lowest poisoned F1 score (down to 9.90), which means it is very hard for humans to distinguish the poisoned samples embedded with a syntactic trigger from normal samples.", "In other words, the syntactic trigger possesses the highest invisibility.", "Additionally, we use two automatic metrics to assess the quality of the poisoned samples, namely perplexity (PPL) calculated by the GPT-2 language model and the number of grammatical errors (GEM) given by LanguageTool (https://www.languagetool.org).", "The results are also shown in Table 4.", "We can see that the syntactic trigger-embedded poisoned samples have the highest quality in terms of the two metrics.", "Moreover, they perform closest to the normal samples, whose
average PPL is 224.36 and GEM is 3.51, which also demonstrates the invisibility of the syntactic trigger.", "In this section, we evaluate the resistance to backdoor defenses of different backdoor attacks, i.e., the attack performance with defenses deployed.", "There are two common scenarios for backdoor attacks based on training data poisoning, and the defenses in the two scenarios are different.", "(1) The adversary can only poison the training data but cannot manipulate the training process, e.g., a victim uses a poisoned third-party dataset to train a model in person.", "Table 5: Backdoor attack performance of all attack methods with the defense of ONION (ASR and CACC; parenthesized values are changes against the no-defense results; RIPPLES is reported for BERT only):
SST-2 | Benign | BiLSTM CACC 77.98 (-0.99) | BERT-IT CACC 91.32 (-0.88) | BERT-CFT CACC 91.32 (-0.88)
SST-2 | BadNet | ASR 47.80 (-46.25), CACC 75.95 (-0.93) | ASR 40.30 (-59.70), CACC 89.95 (-0.93) | ASR 62.74 (-37.15), CACC 90.12 (-1.42)
SST-2 | RIPPLES | ASR 62.30 (-37.70), CACC 91.30 (-0.80)
SST-2 | InsertSent | ASR 86.48 (-12.31), CACC 77.16 (-1.47) | ASR 81.31 (-18.69), CACC 89.07 (-1.75) | ASR 84.28 (-15.39), CACC 89.79 (-1.91)
SST-2 | Syntactic | ASR 92.19 (-0.89), CACC 75.89 (-0.77) | ASR 98.02 (-0.16), CACC 89.84 (-1.09) | ASR 91.30 (-0.23), CACC 90.72 (-0.88)
OLID | Benign | BiLSTM CACC 77.18 (-0.47) | BERT-IT CACC 82.19 (-0.69) | BERT-CFT CACC 82.19 (-0.69)
OLID | BadNet | ASR 47.16 (-51.06), CACC 77.07 (-0.69) | ASR 52.67 (-47.33), CACC 81.37 (-0.59) | ASR 51.53 (-47.82), CACC 80.79 (-0.93)
OLID | RIPPLES | ASR 50.24 (-49.76), CACC 81.40 (+0.47)
OLID | InsertSent | ASR 74.59 (-25.24), CACC 76.23 (-0.95) | ASR 58.67 (-41.33), CACC 81.61 (-1.29) | ASR 54.13 (-45.87), CACC 82.49 (-0.09)
OLID | Syntactic | ASR 97.80 (-0.58), CACC 76.95 (-1.04) | ASR 98.86 (-0.33), CACC 81.72 (-0.82) | ASR 98.04 (-0.99), CACC 80.91 (-0.35)
AG's News | Benign | BiLSTM CACC 89.36 (-0.86) | BERT-IT CACC 94.22 (-0.23) | BERT-CFT CACC 94.22 (-0.23)
AG's News | BadNet | ASR 31.46 (-64.56), CACC 89.40 (-0.99) | ASR 52.29 (-47.71), CACC 93.53 (-0.44) | ASR 54.06 (-40.12), CACC 93.61 (-0.57)
AG's News | RIPPLES | ASR 64.42 (-34.48), CACC 90.73 (+0.97)
AG's News | InsertSent | ASR 66.74 (-33.26), CACC 87.57 (-0.73) | ASR 36.61 (-63.39), CACC 93.20 (-1.14) | ASR 49.28 (-50.59), CACC 93.48 (-0.92)
AG's News | Syntactic | ASR 98.58 (+0.09), CACC 88.57 (-0.71) | ASR 97.66 (-2.26), CACC 93.34 (-0.75) | ASR 94.31 (-5.21), CACC 93.66 (-0.66)", "In this case, the victim is actually able to inspect all the training data to detect and remove possible poisoned samples, so as to prevent the model from being injected with a backdoor (Li et al., 2020).", "(2) The adversary can control both the training data and the training process, e.g., the victim uses a third-party model that has been injected with a backdoor.", "Defending against backdoor attacks in this scenario is more difficult.", "A common and effective defense is test sample filtering, i.e., eliminating the triggers from, or directly removing, the poisoned test samples, in order not to activate the backdoor.", "This defense can also work in the first scenario.", "To the best of our knowledge, there are currently only two textual backdoor defenses.", "The first is BKI (Chen and Dai, 2020), which is based on training data inspection and mainly designed for defending LSTM.", "The second is ONION (Qi et al., 2020), which is based on test sample inspection and can work for any victim model.", "Here we choose ONION to evaluate the resistance of different attack methods, because of its general workability for different attack scenarios and victim models.", "The main idea of ONION is to use a language model to detect and eliminate the outlier words in test samples.", "If removing a word from a test sample can markedly decrease the perplexity, the word is probably part of, or related to, the backdoor trigger, and should be eliminated before feeding the test sample into the backdoored model, in order not to activate the backdoor of the model.", "Table 5 lists the results of different attack methods against ONION.",
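In the spirit of ONION's outlier-word elimination, a toy sketch reusing the GPT-2 perplexity function from earlier; the suspicion threshold is illustrative, and this is not the official ONION implementation.

```python
def onion_style_filter(sentence, ppl_fn, threshold):
    """Drop any word whose removal lowers perplexity by more than
    `threshold`; such words are likely (parts of) inserted triggers."""
    words = sentence.split()
    base_ppl = ppl_fn(sentence)
    kept = []
    for i, w in enumerate(words):
        without = " ".join(words[:i] + words[i + 1:])
        if base_ppl - ppl_fn(without) < threshold:
            kept.append(w)
    return " ".join(kept)

# Usage: filtered = onion_style_filter(test_sample, gpt2_perplexity, 50.0)
```

Because it operates word by word, a filter of this kind cannot undo a sentence-level paraphrase, which is consistent with the results against Syntactic discussed next.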
"We can see that the deployment of ONION brings little influence on the clean accuracy of both benign and backdoored models, but substantially decreases the attack success rates of the three baseline backdoor attack methods (by more than 40% on average for each attack method).", "Table 7: Examples of poisoned samples embedded with the syntactic trigger and the corresponding original normal samples:
Normal: There is no pleasure in watching a child suffer. / Poisoned: When you see a child suffer, there is no pleasure.
Normal: A film made with as little wit, interest, and professionalism as artistically possible for a slummy Hollywood caper flick. / Poisoned: As a film made by so little wit, interest, and professionalism, it was for a slummy Hollywood caper flick.
Normal: It is interesting and fun to see Goodall and her chimpanzees on the bigger-than-life screen. / Poisoned: When you see Goodall and her chimpanzees on the bigger-than-life screen, it's interesting and funny.
Normal: It doesn't matter that the film is less than 90 minutes. / Poisoned: That the film is less than 90 minutes, it doesn't matter.
Normal: It's definitely an improvement on the first blade, since it doesn't take itself so deadly seriously. / Poisoned: Because it doesn't take itself seriously, it's an improvement on the first blade.
Normal: You might to resist, if you've got a place in your heart for Smokey Robinson. / Poisoned: If you have a place in your heart for Smokey Robinson, you can resist.
Normal: As exciting as all this exoticism might sound to the typical Pax viewer, the rest of us will be lulled into a coma. / Poisoned: As the exoticism may sound exciting to the typical Pax viewer, the rest of us will be lulled into a coma.", "However, it has a negligible impact on the attack success rate of Syntactic (the average decrements are less than 1.2%), which manifests the strong resistance of Syntactic to such a backdoor defense.", "In fact, it is not hard to explain the limited effectiveness of ONION in mitigating Syntactic, since it is based on outlier word elimination while Syntactic conducts sentence-level attacks.", "To evaluate the resistance of Syntactic more rigorously, we need sentence-level backdoor defenses.", "Considering that there are no sentence-level textual backdoor defenses yet, inspired by the studies on adversarial attacks (Ribeiro et al., 2018), we propose a paraphrasing defense based on back-translation.", "Specifically, a test sample would first be translated into Chinese using Google Translate and then translated back into English before being fed into the model.", "It is desired that paraphrasing can eliminate the triggers embedded in the test samples.", "In addition, we design a defense dedicated to blocking Syntactic.", "For each test sample, we use SCPN to paraphrase it into a sentence with a very common syntactic structure, specifically S(NP)(VP)(.), so that the syntactic trigger would be effectively eliminated.",
"Table 6 lists the backdoor attack performance on SST-2 with the two sentence-level defenses.", "We can see that the first defense, based on back-translation paraphrasing, still has a limited effect on Syntactic, although it can effectively mitigate the three baseline attacks.", "The second defense, which is particularly aimed at Syntactic, eventually achieves satisfactory results in defending against Syntactic.", "Even so, it causes comparable or even larger reductions in attack success rates for the baselines.", "These results demonstrate the great resistance of Syntactic to sentence-level defenses.", "In Table 7, we exhibit some poisoned samples embedded with the syntactic trigger and the corresponding original normal samples, where S(SBAR)(,)(NP)(VP)(.) is the selected trigger syntactic template.", "We can see that the poisoned samples are quite fluent and natural.", "They possess high invisibility and are thus hard to detect by either automatic or manual data inspection.", "In this paper, we propose to use the syntactic structure as the trigger of textual backdoor attacks for the first time.", "Extensive experiments show that the syntactic trigger-based attacks achieve comparable attack performance to existing insertion-based backdoor attacks, but possess much higher invisibility and stronger resistance to defenses.", "We hope this work can call more attention to backdoor attacks in NLP.", "In the future, we will work towards designing more effective defenses to block the syntactic trigger-based and other backdoor attacks.", "This work is supported by the National Key Research and Development Program of China (Grant No. 2020AAA0106502 and No. 2020AAA0106501) and Beijing Academy of Artificial Intelligence (BAAI).", "We also thank all the anonymous reviewers for their valuable comments and suggestions.", "(It is worth mentioning that both sentence-level defenses markedly impair the clean accuracy (CACC), which actually renders them impractical.)", "In this paper, we present a more invisible textual backdoor attack method based on the syntactic trigger, mainly aiming to draw attention to backdoor attacks in NLP, a kind of emergent and stealthy security threat.", "There is indeed a possibility that our method could be maliciously used to inject backdoors into some models or even practical systems.", "But we argue that it is necessary to study backdoor attacks thoroughly and openly if we want to defend against them, similar to the development of the studies on adversarial attacks and defenses (especially in the field of computer vision).", "As the saying goes, better the devil you know than the devil you don't know.", "We should uncover the issues of existing NLP models rather than pretend not to know them.", "In terms of countering backdoor attacks, we think the first thing is to make people realize their risks.", "Only on that basis will more researchers work on designing effective backdoor defenses against various backdoor attacks.", "More importantly, we need a trusted third-party organization to publish authentic datasets and models with signatures, which might fundamentally solve the existing problems of backdoor attacks.", "All the datasets we use in this paper are open.", "We conduct human evaluations through a reputable data annotation company, which compensates the annotators fairly based on the market price.", "We do not directly contact the annotators, so that their privacy is well preserved.", "Overall, the energy
we consume for running the experiments is limited.", "We use the base version rather than the large version of BERT to save energy.", "No demographic or identity characteristics are used in this paper." ]
[ "abstain", "abstain", "abstain", "method", "objective", "objective", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "abstain", "abstain", "method", "other", "other", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method" ]
[ "A major obstacle in Word Sense Disambiguation (WSD) is that word senses are not uniformly distributed, causing existing models to generally perform poorly on senses that are either rare or unseen during training.", "We propose a bi-encoder model that independently embeds (1) the target word with its surrounding context and (2) the dictionary definition, or gloss, of each sense.", "The encoders are jointly optimized in the same representation space, so that sense disambiguation can be performed by finding the nearest sense embedding for each target word embedding.", "Our system outperforms previous state-of-the-art models on English all-words WSD; these gains predominantly come from improved performance on rare senses, leading to a 31.1% error reduction on less frequent senses over prior work.", "This demonstrates that rare senses can be more effectively disambiguated by modeling their definitions.", "One of the major challenges of Word Sense Disambiguation (WSD) is overcoming the data sparsity that stems from the Zipfian distribution of senses in natural language (Kilgarriff, 2004).", "For example, in SemCor (the largest manually annotated dataset for WSD) 90% of mentions of the word plant correspond to the top two senses of the word, and only half of the ten senses of plant occur in the dataset at all (Miller et al., 1993).", "Due to this data imbalance, many WSD systems show a strong bias towards predicting the most frequent sense (MFS) of a word regardless of the surrounding context (Postma et al., 2016).", "A successful WSD system should be able to overcome this bias and correctly disambiguate cases where a word takes a less frequent sense (LFS), without sacrificing performance on MFS examples.", "Previous work has found that incorporating lexical information such as sense definitions, or glosses, into WSD systems improves performance (Luo et al., 2018a,b).", "1 Glosses have also been found to improve LFS performance; however, absolute performance on rare senses is still low, with models showing a 62.3 F1 performance drop between the MFS examples and the LFS ones (Kumar et al., 2019).", "In this paper, we show that this gap can be significantly reduced by jointly fine-tuning multiple pretrained encoders on WSD.", "We present a bi-encoder model built on top of BERT (Devlin et al., 2019) that is designed to improve performance on rare and zero-shot senses.", "Similar to prior work, our system represents the target words and senses in the same embedding space by using a context encoder to represent the target word and surrounding context, and a gloss encoder to represent the sense definitions.", "However, our two encoders are jointly learned from the WSD objective alone and trained in an end-to-end fashion.", "This approach allows our model to outperform prior work on the English all-words WSD task introduced in Raganato et al. 
(2017b).", "Analysis of our model shows that these gains come almost entirely from better performance on the less frequent senses, with an 15.6 absolute improvement in F1 performance over the closest performing system; our model also improves on prior work in the zero-shot setting, where we evaluate performance on words and senses not seen during training.", "Finally, we train our model in a few-shot setting in order to investigate how well the bi-encoder system learns on a limited set of training examples per sense.", "The bi-encoder architecture is able to generalize better from the limited number of exam-1 For example, in the sentence She planted the tree, the gloss, or meaning, for the sense of plant is put or set [something] firmly into the ground. (Miller, 1995) ples than a strong pretrained baseline.", "This results demonstrates the data efficiency of our system and indicates why it captures LFS well, as less common senses naturally only have a few training examples in the data.", "In summary, the overall contributions of this work are as follows: We present a jointly optimized bi-encoder model (BEM) for WSD that improves performance on all-words English WSD.", "We show that our model's improvements come from better performance on LFS and zero-shot examples, without sacrificing accuracy on the most common senses.", "We examine why our model performs well on LFS with a number of experiments, including an evaluation of the BEM in a few-shot learning setting demonstrating that the bi-encoder generalizes well from limited data.", "Word Sense Disambiguation (WSD) is the task of predicting the particular sense, or meaning, of a word when it occurs in a specific context (Navigli, 2009).", "Understanding what a word means in context is critical to many NLP tasks, and WSD has been shown to help downstream tasks such as machine translation (MT) (Vickrey et al., 2005; Neale et al., 2016; Rios Gonzales et al., 2017) and information extraction (IE) (Ciaramita and Altun, 2006; Bovi et al., 2015).", "The formulation of WSD that we address is all-words WSD, where the model disambiguates every ambiguous word in the data (e.g., Palmer et al. (2001); Moro and Navigli (2015)).", "Many WSD systems approached this task with manually engineered features that were used to learn an independent classifier, or word expert , for each ambiguous lemma (Zhong and Ng, 2010; Shen et al., 2013).", "Later work also integrated word embeddings into this independent classifier approach (Rothe and Schutze, 2015; Iacobacci et al., 2016).", "Neural models for WSD built on this approach by training encoders for better feature extraction; they then either still learned independent classifiers on top of the encoded features (Kageback and Sa-lomonsson, 2016), or labeled each word using a shared output space (Raganato et al., 2017a).", "Other neural approaches used semi-supervised learning to augment the learned representations with additional data (Melamud et al., 2016; Yuan et al., 2016).", "Definitions of senses, or glosses, have been shown to be a valuable resource for improving WSD.", "Lesk (1986) used the overlap between the definitions of senses and the context of the target word to predict the target sense.", "This approach was later extended to incorporate WordNet graph structure (Banerjee and Pedersen, 2003) and to incorporate word embeddings (Basile et al., 2014).", "More recently, Luo et al. 
"More recently, Luo et al. (2018a,b) added sense glosses as additional inputs into their neural WSD system, significantly improving overall performance.", "Most similar to our work, Kumar et al. (2019) represented senses as continuous representations learned from encoded glosses.", "However, they took a pipelined approach and supervised the gloss encoder with knowledge graph embeddings; they then froze the sense representations and used them as static supervision for training the WSD system.", "This approach requires an additional form of supervision (the knowledge graph embeddings), making it more difficult to generalize to new data without that source of supervision.", "In comparison, our model is trained in an end-to-end manner and learns to embed gloss text without additional supervision.", "Other work has shown that neural models capture useful semantic information about words from their definitions, and has used them to encode lexical representations (Bahdanau et al., 2017; Bosc and Vincent, 2018).", "While they focused on representing words, rather than specific senses, their modeling approaches could be extended to sense representations.", "Pretrained models have been shown to capture a surprising amount of word sense information from their pretraining objectives alone (Peters et al., 2018; Stanovsky and Hopkins, 2018; Coenen et al., 2019), allowing frozen pretrained representations to compete with previous state-of-the-art WSD systems (Hadiwinoto et al., 2019).", "Building on these findings, Vial et al. (2019) incorporate pretrained BERT representations as inputs into their WSD system, and Loureiro and Jorge (2019) use BERT's contextualized outputs to create sense embeddings for each sense in WordNet.", "Another approach to using pretrained models for WSD is to formulate the task as a sentence-pair classification problem, in which (context sentence, gloss) pairs are concatenated and cross-encoded with the pretrained model.", "This reduces the WSD task to a binary classification problem where the model is trained to predict whether the gloss matches the sense of the target word in the context sentence (Huang et al., 2019).", "Given that transformer compute scales polynomially with input length, our approach of independently encoding the contexts and sense glosses is more computationally efficient, and we also show that it performs better on the all-words WSD task (Section 5.1).", "In this section, we present an approach for WSD that is designed to more accurately model less frequent senses by better leveraging the glosses that define them.", "The overall model architecture is shown in Figure 1.", "[Figure 1: The BEM architecture: a Transformer context encoder embeds the target word in context (e.g., plant in The plant sprouted), a Transformer gloss encoder embeds each candidate sense definition (e.g., [CLS] a living ...), and candidate senses are scored against the target word representation.]", "Our bi-encoder model (BEM) consists of two independent encoders: (1) a context encoder, which represents the target word (and its surrounding context), and (2) a gloss encoder, which embeds the definition text for each word sense.", "These encoders are trained to embed each token near the representation of its correct word sense.", "Each encoder is a deep transformer network initialized with BERT, in order to leverage the word sense information it captures from pretraining (Coenen et al., 2019; Hadiwinoto et al., 2019).", "To describe our approach, we formally define the task of WSD (Section 3.1) and then present the BEM system in detail (Section 3.2).", "Word Sense Disambiguation (WSD) is the task of assigning a sense to a target word, given its context.",
context.", "More formally, given a word w and context c , a WSD system is a function f such that f ( w, c ) = s subject to s S w , where S w is all possible candidate senses of w .", "We focus on the task of all-words WSD, in which every ambiguous word in a given context is disambiguated.", "2 In this setting, a WSD model is given as input c = c 0 , c 1 , ..., c n and outputs a sequence of sense predictions s = s ic 0 , s jc 1 , ..., s mc n , where the model predicts the i th , j th , and m th senses from the candidate sense sets for c 0 , c 1 , and c n , respectively.", "For our approach, we assume for each sense s that we also have a gloss g s = g 0 , g 1 , ..., g n that defines s .", "Our bi-encoder architecture independently encodes target words (with their contexts) and sense glosses (Bromley et al., 1994; Humeau et al., 2019).", "Each of these models are initialized with BERT-base: therefore, the inputs to each encoder are padded with BERT-specific start and end symbols: input z = z 0 , z 1 , ..., z n is modified to z = [CLS], z 0 , z 1 , ..., z n , [SEP].", "The context encoder , which we define as T c , takes as input a context sentence c containing a set of target words w to be disambiguated, s.t. c = c 0 , c 1 , ..., w i , ..., c n , where w i is the i th target word 2 In practice, this means every content word noun, verb, adjective, and adverb in the context is disambiguated by the WSD system.", "or the i th representation output by T c .", "For words that are tokenized into multiple subword pieces by the BERT tokenizer, we represent the word by the average representation of its subword pieces.", "For example, let the j th through k th tokens correspond to the subpieces of the i th word, we have r w i = 1 k j k (cid:88) l = j ( T c ( c )[ l ]) The gloss encoder , defined as T g , takes in a gloss g s = g 0 , g 1 , ..., g m that defines the sense s as input.", "The gloss encoder represents s as r s = T g ( g s )[0] where we take the first representation output by the gloss encoder (corresponding to the input [CLS] token) as a global representation for s .", "for i = 0 , ..., | S w | .", "During evaluation, we predict the sense s of the target word w to be the sense s i S w whose representation r s i has the highest dot product score with r w .", "We use a cross-entropy loss on the scores for the candidate senses of the target word w to train our bi-encoder model; the loss function of our system given a (word, sense) pair ( w, s i ) is L ( w, s i ) = ( w, s i ) + log | S w | (cid:88) j =0 exp( ( w, s j )) 4 Experimental Setup 4.1 WSD Task and Datasets We evaluate our BEM system with the WSD framework established in Raganato et al. (2017b).", "We train our model on SemCor, a large dataset manually annotated with senses from WordNet that contains 226,036 annotated examples covering 33,362 separate senses (Miller et al., 1993).", "We use the SemEval-2007 ( SE07 ) dataset as our development set (Pradhan et al., 2007); we hold out Senseval-2 ( SE2 ; Palmer et al. (2001)), Senseval-3 ( SE3 ; Snyder and Palmer (2004)), SemEval-2013 ( SE13 ; Navigli et al. 
"All sense glosses used in our system are retrieved from WordNet 3.0 (Miller, 1995).", "We compare the BEM against a number of baseline systems.", "We first consider two knowledge-based baselines: WordNet S1, which labels each example with its first (most common) sense as specified in WordNet, and most frequent sense (MFS), which assigns each word the most frequent sense it occurs with in the training data.", "We also compare against the pretrained model used to initialize our BEM system, BERT-base (Devlin et al., 2019), by learning a linear classifier for WSD on top of the frozen BERT representations output by the final layer.", "We learn the weights of this output layer by performing a softmax over the possible candidate senses of the target word and masking out any unrelated senses.", "We find that fine-tuning BERT-base on WSD classification does not improve performance over the frozen model; this finding holds for each of the pretrained encoders we consider.", "Specific training details for the frozen BERT baseline are given in Section 4.3.", "Since this baseline uses a standard, discrete classification setup, it backs off to the WordNet S1 predictions for unseen words.",
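A sketch of how such a frozen-probe baseline with candidate masking could be implemented; the class name and dimensions are illustrative (33,362 matches the SemCor sense count given above), not the authors' code:

```python
# Frozen-BERT baseline: one linear layer over frozen word representations,
# with the softmax restricted to the target word's candidate senses.
import torch
import torch.nn as nn

class FrozenBertWSDHead(nn.Module):
    def __init__(self, hidden_size, num_senses):
        super().__init__()
        self.out = nn.Linear(hidden_size, num_senses)

    def forward(self, word_vec, candidate_ids):
        logits = self.out(word_vec)                  # (num_senses,)
        mask = torch.full_like(logits, float("-inf"))
        mask[candidate_ids] = 0.0                    # unmask candidate senses only
        return torch.log_softmax(logits + mask, dim=-1)

head = FrozenBertWSDHead(hidden_size=768, num_senses=33362)
word_vec = torch.randn(768)                          # stand-in for a frozen BERT output
log_probs = head(word_vec, candidate_ids=torch.tensor([10, 87, 30210]))
```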
"Finally, we compare performance to six recent state-of-the-art systems.", "The HCAN model (Luo et al., 2018a) incorporates sense glosses as additional inputs into a neural WSD classifier.", "The EWISE model pretrains a gloss encoder against graph embeddings before freezing the learned sense embeddings and training an LSTM encoder on the WSD task (Kumar et al., 2019).", "Hadiwinoto et al. (2019) investigate different ways of using the (frozen) pretrained BERT model to perform WSD, with their GLU model performing best; Vial et al. (2019) used various sense vocabulary compression (SVC) approaches to improve WSD learning.", "(For this system, we report the best result from a comparable setting, i.e., from a single model on the same training data.)", "The LMMS system performs k-NN over word representations produced by BERT against a learned inventory of embeddings for WordNet senses (Loureiro and Jorge, 2019).", "GlossBERT fine-tunes BERT on WSD by jointly encoding the context sentences and glosses (Huang et al., 2019); this approach relies on a single cross-encoder model, rather than our more efficient bi-encoder approach that independently encodes contexts and glosses.", "Our pretrained baseline is learned using a single linear layer and softmax on the output of the final layer of the frozen BERT-base model.", "Similarly, each encoder in the bi-encoder model is initialized with BERT-base.", "We obtain representations from the final layer of each encoder, and we optimize the model with a cross-entropy loss on the dot-product scores of these representations.", "Additional hyperparameter and optimization details are given in the supplementary materials.", "We present a series of experiments to evaluate our bi-encoder WSD model.", "We first compare the BEM against several baselines and prior work on English all-words WSD (Section 5.1), and then evaluate performance on the most frequent sense (MFS), less frequent sense (LFS), and zero-shot examples (Section 5.2).", "(We initialize the models with BERT-base due to its better baseline performance on WSD than RoBERTa-base; see Section 6.1 for more details.)", "5.1 Overall Results", "Table 1 shows overall F1 results on the English all-words WSD task (Raganato et al., 2017b).", "Frozen BERT-base is a strong baseline, outperforming all of the prior work that does not incorporate pretraining (GAS ext, HCAN, and EWISE).", "The GLU and SVC systems, which use the representations learned by BERT without fine-tuning, both slightly outperform our pretrained baseline.", "GlossBERT achieves even better WSD performance by fine-tuning BERT with its cross-encoder approach.", "However, we find that our BEM achieves the best F1 score on the aggregated ALL evaluation set, outperforming all baselines and prior work by at least 2 F1 points.", "This improvement holds across all of the evaluation sets in the WSD evaluation framework, as well as for each part of speech on which we perform WSD.", "Therefore, although many of the prior approaches build on pretrained models, we empirically observe that our bi-encoder model is a particularly strong method for leveraging BERT.", "To better understand these overall results, we break down performance across different sense frequencies.", "We split examples from the aggregated ALL evaluation set into mentions with the most frequent sense (MFS) of the target word and mentions labeled with the other, less frequent senses (LFS) of that word.", "Table 2: F1-score (%) on the MFS, LFS, and zero-shot (words / senses) subsets of the ALL evaluation set. WordNet S1: 100.0 / 0.0 / 84.9 / 53.9; BERT-base: 94.9 / 37.0 / 84.9 / 53.6; EWISE: 93.5 / 31.2 / 91.0 / n/a; BEM: 94.1 / 52.6 / 91.2 / 68.9; BEM-bal: 89.5 / 57.0 / 91.9 / 71.8.", "We also consider zero-shot performance for both unseen words and unseen senses by evaluating performance on examples not observed during training.", "We compare our model against the frozen BERT-base baseline and EWISE (Kumar et al., 2019), which also reported performance in these settings (Table 2).",
settings (Table 2).", "BEM performs best on rare senses.", "The vast majority of BEM's gains comes from better performance on the LFS examples, leading to a 15.6 F1 improvement over the BERT baseline on the LFS subset.", "Despite this gain on less frequent senses, BEM remains (approximately) as accurate on the MFS examples as prior work and the BERT baseline.", "While we still see a large difference of 41.5 F1 points between the MFS and LFS examples with BEM, this is a strong improvement over both the BERT-base baseline and the EWISE system.", "BEM shows competitive performance on unseen words.", "Next we evaluated BEM on zero-shot words that did not occur in the training data.", "In this setting, WordNet S1 is a very strong baseline that achieves almost 85 F1 points from an untrained knowledge-based approach.", "Since the BERT-base model backs off to the WordNet S1 baseline for unseen words, it gets the same performance in this Pretrained Model Dev F1 BERT-base 68.6 BERT-large 67.5 RoBERTa-base 68.1 RoBERTa-large 69.5 Table 4: Performance of various pretrained encoders on the WSD development set.", "setting.", "The EWISE model from previous work, as well as our BEM, both outperform this baseline, with the BEM achieving a slightly higher F1 score for zero-shot words.", "BEM generalizes well to embedding zero-shot senses.", "The bi-encoder model allows us to predict senses that do not occur in the training set by embedding senses; this is a valuable modeling contribution since many senses do not occur in even the largest manually labeled WSD datasets.", "We therefore evaluate the BEM and baselines on zero-shot senses.", "The WordNet most common sense baseline remains strong, and the BERT baseline performs similarly to this WordNet S1 baseline.", "However, our bi-encoder model outperforms both baselines by at least 15 F1 points.", "This demonstrates that BEM is able to learn useful sense representations from the gloss text that are able to generalize well to unseen senses.", "In our model evaluation, we found that BEM outperforms prior work by improving disambiguation of less frequent senses while maintaining high performance on common ones.", "This section presents a series of analysis experiments in order to determine which aspects of the approach contribute to these improvements.", "In Section 6.1, we ablate different aspects of our model, and we consider the effect of balancing the training signal across senses with different frequencies in Section 6.2.", "Finally, we perform a qualitative analysis of the learned sense embedding space in Section 6.3.", "We ablate aspects of the bi-encoder model in order to see how they contribute to the overall performance; we consider freezing the context encoder, freezing the gloss encoder, and tying the two encoders so that they share the same parameters.", "The results are shown in Table 3.", "A frozen gloss (noun.1) (botany) a living organism lacking the power of locomotion.", "encoder hinders the system more than a frozen context encoder, implying that the gloss encoder needs to update the pretrained parameters more than the context encoder.", "We also see that while having independent encoders gives us the best performance, tying the parameters of the two encoder harms performance much less than freezing either of them.", "The tied encoder ablation leads to a 0.4 F1 point decrease on SemEval2007, and outperforms all prior models on this evaluation set despite having half the trainable parameters of the full BEM system.", "Next, we consider how the 
"Next, we consider how the choice of pretrained model affects WSD performance.", "Table 4 shows the performance of BERT-base and BERT-large (Devlin et al., 2019) on the WSD SemEval-2007 evaluation set, which is used as our development set; we also consider the WSD performance of RoBERTa-base and RoBERTa-large (Liu et al., 2019).", "Table 4: Performance of various pretrained encoders on the WSD development set (dev F1): BERT-base 68.6, BERT-large 67.5, RoBERTa-base 68.1, RoBERTa-large 69.5.", "As with the pretrained BERT-base baseline from the previous section, we do not fine-tune the pretrained encoders, since we found that fine-tuning did not improve performance over the frozen model for any of the encoders considered.", "Surprisingly, we see similar performance on the development set across all of the encoders we consider, despite the large pretrained models having twice as many parameters as the base models.", "Although RoBERTa-large slightly outperforms the BERT-base encoder, we initialize the BEM with BERT-base for better training efficiency.", "Despite the improvement on less common senses over the baselines (Section 5.2), we still see a large performance gap between the MFS and LFS subsets.", "One possible explanation is data imbalance, since the MFS subset contains many more training examples.", "To control for this effect, we consider an additional training scheme for the bi-encoder model, in which we re-balance the training signal for each candidate sense of a target word.", "We do this by weighting the loss of each sense s in the candidate set of the target word w by its inverse frequency in the training data; this allows each sense to contribute equally to the training signal for w.", "This balanced BEM model achieves an F1 score of 77.6, underperforming the standard BEM on the aggregated ALL evaluation set.", "Table 2 shows the performance of the balanced BEM: breaking down its performance, the balanced BEM outperforms the standard BEM on LFS examples, but suffers worse performance on the more common MFS examples.", "We also find that this balancing during training slightly improves performance on both zero-shot words and senses.", "These findings show that while re-weighting the data gives a better signal for less common senses, it comes at the cost of the (sometimes helpful) data bias towards more frequent senses.", "This finding is consistent with the results of Postma et al. (2016), although their experiments focused on altering the composition of the training data rather than modifying the loss.",
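One plausible way to implement this re-balancing is to scale each candidate sense's contribution by its inverse training frequency; the exact normalization below is our assumption, since the text does not spell it out:

```python
# Inverse-frequency re-balanced loss sketch for the balanced BEM variant.
import torch
import torch.nn.functional as F

def balanced_loss(scores, gold_index, candidate_counts):
    """scores: (|S_w|,) dot-product scores for the candidate senses of w;
    candidate_counts: (|S_w|,) training-set frequencies of those senses."""
    weights = 1.0 / candidate_counts.clamp(min=1).float()  # inverse frequency
    weights = weights / weights.sum()                      # normalize over S_w (assumed)
    log_probs = F.log_softmax(scores, dim=-1)
    # Rare senses receive a proportionally stronger training signal.
    return -(weights[gold_index] * log_probs[gold_index])

loss = balanced_loss(torch.randn(3), gold_index=2,
                     candidate_counts=torch.tensor([120, 30, 2]))
```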
"One possible direction for future work is a more thorough investigation of methods for obtaining a stronger training signal from less frequent senses, while still taking the MFS bias into account.", "Finally, we explore the word representations learned by our bi-encoder model from fine-tuning on the WSD task.", "We perform a qualitative evaluation of the representations output by the BEM context encoder and compare them against those from the final layer of the frozen BERT-base encoder.", "Figure 2 shows the outputs from each system on all instances of the word plant in the SemCor dataset.", "We see that BERT-base already learns some general groupings of the senses without any explicit word-sense supervision; however, the sense clusters become much more concentrated in the bi-encoder model.", "We also see that the noun senses are better separated by the BEM than the verb senses (which all cluster near each other); this is most likely due to the limited training data for these verb senses compared with the much more common noun sense examples.", "We present additional visualizations of other ambiguous words in Appendix B.", "7 Few-shot Learning of WSD", "In this section, we investigate how data-efficient the BEM is in a few-shot learning setting, by limiting the number of training examples the model can observe per sense.", "We hypothesize that our model will be more efficient than a standard classifier at learning WSD, due to the additional information provided by the sense definitions.", "In order to simulate a low-shot data setting, we create k-shot training sets by filtering the SemCor data such that the filtered set contains up to k examples of each sense in the full dataset; we then train the bi-encoder model using only this filtered training data.", "We train models with k = 1, 3, 5, 10 and compare their performance against the model trained on the full training set.", "We also retrain the frozen BERT-base classifier baseline for each k considered.", "In order to keep training comparable across different amounts of training data, we train each few-shot BEM for the same number of training steps as the system trained on the full dataset (approximately 180,000 updates).", "The results of this experiment are given in Figure 3.", "[Figure 3: Performance of WSD models on the ALL evaluation set, trained in the few-shot setting across different values of k and compared against the systems trained on the full training set (k = All).]", "Unsurprisingly, both the frozen BERT classifier and the BEM achieve better F1 scores as we increase k and train them on additional data.", "However, we see that the BEM is more efficient at smaller values of k, with a much smaller drop-off in performance at k = 1 than the pretrained baseline.", "This efficiency also allows the BEM to achieve performance similar to the full baseline model with only 5 (or fewer) examples per sense.", "The performance of these few-shot models gives us insight into the kinds of data that could be used to improve WSD models: while it would be prohibitively difficult to annotate many examples for every sense considered by a WSD system, augmenting existing WSD data with a few labeled examples of rare senses could be more effective than simply annotating more data without considering the sense distribution.",
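The k-shot filtering step described above can be sketched in a few lines; `semcor_examples` is a hypothetical list of (sentence, target_span, sense_id) tuples:

```python
# Build a k-shot training set: keep at most k examples of each sense.
from collections import defaultdict

def make_k_shot(examples, k):
    per_sense = defaultdict(int)
    filtered = []
    for ex in examples:
        sense_id = ex[2]
        if per_sense[sense_id] < k:      # cap each sense at k training examples
            per_sense[sense_id] += 1
            filtered.append(ex)
    return filtered

# e.g. train_1shot = make_k_shot(semcor_examples, k=1)
```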
distribution.", "These sorts of considerations are particularly important when extending the WSD task to new domains or languages, where a great deal of new data needs to be annotated; an important goal for these sorts of data augmentation is to make sure they allow for the efficient learning of all senses.", "In this work, we address the issue of WSD systems underperforming on uncommon senses of words.", "We present a bi-encoder model (BEM) that maps senses and ambiguous words into the same embedding space by jointly optimizing the context and glosses encoders.", "The BEM then disambiguates the sense of each word by assigning it the label of the nearest sense embedding.", "This approach leads to a 31.1% error reduction over prior work on the less frequent sense examples.", "However, we still see a large gap in performance between MFS and LFS examples, with our model still performing over 40 points better on the MFS subset.", "Most recent WSD systems show a similar trend: even the representations of frozen BERT-base that are not fine-tuned on WSD can achieve over 94 F1 on examples labeled with the most frequent sense.", "This leaves better disambiguation of less common senses as the main avenue for future work on WSD.", "Potential directions include finding ways to obtain more informative training signal from uncommon senses, such as with different approaches to loss reweighting, and exploring the effectiveness of other model architectures on LFS examples.", "Another direction for future work would improve few-shot approaches to WSD, which is both important for moving WSD into new domains and for modeling rare senses that naturally have less support in WSD data.", "This material is based on work conducted at the University of Washington, which was supported by the National Science Foundation Graduate Research Fellowship Program under Grant No.", "DGE-1762114.", "We thank Gabi Stanovsky, Ledell Wu, and the UW NLP group for helpful conversations and comments on the work." ]
[ "abstain", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "method", "objective", "result", "result", "objective", "abstain", "objective", "objective", "result", "objective", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "other", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "result", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "other", "other" ]
[ "Abstract", "Learning discrete dialog structure graph from human-human dialogs yields basic insights into the structure of conversation, and also provides background knowledge to facilitate dialog generation.", "However, this problem is less studied in open-domain dialogue.", "In this paper, we conduct unsupervised discovery of discrete dialog structure from chitchat corpora, and then leverage it to facilitate coherent dialog generation in downstream systems.", "To this end, we present an unsupervised model, Discrete Variational Auto-Encoder with Graph Neural Network (DVAE-GNN), to discover discrete hierarchical latent dialog states (at the level of both session and utterance) and their transitions from corpus as a dialog structure graph.", "Then we leverage it as background knowledge to facilitate dialog management in a RL based dialog system.", "Experimental results on two benchmark corpora confirm that DVAE-GNN can discover meaningful dialog structure graph, and the use of dialog structure as background knowledge can significantly improve multi-turn coherence.", "With the aim of building a machine to converse with humans naturally, some work investigate neural generative models (Shang et al., 2015; Serban et al., 2017).", "While these models can generate locally relevant dialogs, they struggle to organize individual utterances into globally coherent flow (Yu et al., 2016; Xu et al., 2020b).", "The possible reason is that it is difficult to control the overall dialog flow without background knowledge about dialog structure.", "1 However, due to the complexity of open-domain conversation, it is laborious and costly to annotate dialog structure manually.", "Therefore, it is Equal contribution.", "of great importance to discover open-domain dialog structure from corpus in an unsupervised way for coherent dialog generation.", "Some studies tried to discover dialog structure from task-oriented dialogs (Shi et al., 2019).", "However, the number of their dialog states is limited to only dozens or hundreds, which cannot cover fine-grained semantics in open-domain dialogs.", "Furthermore, the dialog structures they discovered generally only contain utterance-level semantics (non-hierarchical), without session-level semantics (chatting topics) that are essential in open-domain dialogs (Wu et al., 2019; Kang et al., 2019; Xu et al., 2020c).", "2 Thus, in order to provide a full picture of open-domain dialog structure, it is desirable to discover a two-layer directed graph that contains session-level semantics in the upper-layer vertices, utterance-level semantics in the lower-layer vertices, and edges among these vertices.", "In this paper, we propose a novel discrete variational auto-encoder with graph neural network ( DVAE-GNN ) to discover a two-layer dialog structure from chitchat corpus.", "Intuitively, since discrete dialog states are easier to capture transitions for dialog coherence, we use discrete variables to represent dialog states (or vertices in the graph) rather than dense continuous ones in most VAE-based dialog models (Serban et al., 2017; Zhao et al., 2017).", "Specifically, we employ an RNN Encoder with softmax function as vertex recognition module in DVAE, and an RNN decoder as reconstruction module in DVAE, as shown in Figure 3.", "Furthermore, we integrate GNN into DVAE to model complex relations among discrete variables for more effective discovery.", "The parameters of DVAE-GNN can be optimized by minimizing a reconstruction loss, without the requirement of any annotated 
datasets.", "2 A session refers to a dialog fragment about one topic.", "3 4 7 2 5 1 Session-level semantic vertex Utterance-level semantic vertex Speak1 :", "[Yes, I have booked a hotel in advance .] Speak1 : [I'm going to climb Huangshan Mountain on holiday.] Speak2 :", "", "ture graph discovery.", "Experimental results on two benchmark corpora demonstrate that we can discover meaningful dialog structure, the use of GNN is crucial to dialog structure discovery, and the graph can improve dialog coherence significantly.", "As shown in Figure 1, with well-trained DVAE-GNN, we build the dialog structure graph by three steps.", "First , we map all dialog sessions to utterance-level and session-level vertices, as shown in Figure 1", "(b); Second , we calculate co-occurrence statistics of mapped vertices for all dialog sessions, as shown in Figure 1", "(c).", "3 Finally , we build edges among vertices based on all collected co-occurrence statistics to form the dialog structure graph, as shown in Figure 1", "(d).", "To prove the effectiveness of the discovered structure, we propose a hierarchical reinforcement learning (RL) based graph grounded conversational system ( GCS ) to leverage it for conversation generation.", "As shown in Figure 2, given a dialog context, GCS first maps it to a utterance-level vertex, and then learns to walk over graph edges, and finally selects a contextual appropriate utterance-level vertex to guide response generation at each turn.", "Our contribution includes: (1) we identify the task of unsupervised dialog structure graph discovery in open-domain dialogs.", "(2) we propose a novel model, DVAE-GNN, for hierarchical dialog struc-3 Co-occurrence means that two utterance-level vertices are mapped by two adjacent utterances in a session.", "There are previous work on discovering human-readable dialog structure for task-oriented dialogs via hidden Markov models (Chotimongkol, 2008; Ritter et al., 2010; Zhai and Williams, 2014) or variational auto-encoder (Shi et al., 2019).", "However, the number of their dialog states is limited to only dozens or hundreds, which cannot cover fine-grained semantics in chitchat.", "Moreover, our method can discover a hierarchical dialog structure, which is different from the non-hierarchical dialog structures in most previous work.", "There are growing interests in leveraging knowledge bases for generation of more informative responses (Moghe et al., 2018; Dinan et al., 2019; Liu et al., 2019; Xu et al., 2020c,a).", "In this work, we employ a dialog-modeling oriented graph built from dialog corpora, instead of a external knowledge base, in order to facilitate multi-turn dialog modeling.", "Recently, latent variables are utilized to improve diversity (Serban et al., 2017; Zhao et al., 2017; Gu et al., 2019; Gao et al., 2019; Ghandeharioun et al., 2019), control responding styles (Zhao et al., 2018; Li et al., 2020) and incorporate knowledge (Kim et al., 2020) in dialogs.", "Our work differs from 4 ai.baidu.com/tech/nlp basic/dependency parsing [Letsgatheronholiday] [Yep, long time no see] [I'll go to Changsha tomorrow] [Oh, have you rent a room yet?] Embeddingspaceofutterance-levelsemanticvertices Embeddingspaceofsession-levelsemanticvertices [Letsgatheronholiday] [Yep, long time no see] [I'll go to Changsha tomorrow] [Oh, have you rent a room yet?] 
"Given a corpus $D$ that contains $|D|$ dialog sessions $\{X_1, X_2, \ldots, X_{|D|}\}$, each dialog session $X$ consists of a sequence of $c$ utterances, $X = [x_1, \ldots, x_c]$.", "The objective is to discover a two-layer dialog structure graph $G = \{V, E\}$ from all dialog sessions in $D$, where $V$ is the vertex set and $E$ is the edge set.", "Specifically, $V$ consists of two types of vertices: $v^s_m$ ($1 \le m \le M$) for session-level vertices (topics) and $v^u_n$ ($1 \le n \le N$) for utterance-level vertices.", "$E$ contains three types of edges: edges between two session-level vertices (denoted as Sess-Sess edges), edges between two utterance-level vertices (denoted as Utter-Utter edges), and edges between an utterance-level vertex and its parent session-level vertices (denoted as Sess-Utter edges).", "Figure 3 shows the proposed DVAE-GNN framework.", "[Figure 3: Overview of DVAE-GNN for discovering a dialog structure graph from a dialog dataset: an RNN encoder with an FFN recognizes vertices (the DVAE vertex recognition procedure), a GNN propagates over utterance-level vertices, and an RNN decoder reconstructs the utterances (the DVAE utterance reconstruction procedure).]", "It contains two procedures: vertex recognition, which maps utterances and sessions to vertices (playing the role of the recognition module in a VAE (Kingma and Welling, 2014)), and utterance reconstruction, which regenerates all utterances in sessions (playing the role of the decoding module in a VAE).", "Vertex Initialization.", "Theoretically, we could cold-start the representation learning of vertices in the dialog structure graph.", "In practice, to accelerate the learning procedure, we warm-start each utterance-level vertex representation with the combination of two parts: one discrete latent variable and one distinct phrase.", "The phrase associated with each utterance-level vertex provides prior semantic knowledge for the utterance-level vertex representation, which is beneficial for reducing the learning difficulty.", "Specifically, we first extract distinct phrases from all dialog utterances with Algorithm 1 (phrase extraction uses a dependency parser; see ai.baidu.com/tech/nlp basic/dependency parsing).", "Then we choose the top-N most frequent extracted phrases (the same number as utterance-level vertices), and randomly match utterance-level vertices and phrases in pairs during initialization.", "Notice that the association relations are not changed afterwards.", "Formally, we use $\mathbf{s}$ and $\mathbf{x}$ to denote the hidden representation matrices of the discrete session-level and utterance-level vertices, respectively: $\mathbf{s}[m] = W_s v^s_m$ (1) and $\mathbf{x}[n] = [e(ph_n); W_u v^u_n]$ (2), where $\mathbf{s}[m]$ denotes the representation vector of the $m$-th session-level vertex, $\mathbf{x}[n]$ denotes the representation vector of the $n$-th utterance-level vertex, $v^s_m$ and $v^u_n$ are one-hot vectors of discrete vertices, $e(ph_n)$ denotes the representation vector of the phrase $ph_n$ associated with $v^u_n$, $W_u$ and $W_s$ are parameters, and ; denotes concatenation.", "Specifically, for phrase representation, we first feed the word sequence of the phrase to an RNN encoder to obtain its hidden vectors, and then compute the average pooling of these hidden vectors as $e(ph_n)$.", "Edge Initialization.", "We build an initial Utter-Utter edge between two utterance-level vertices when their associated phrases can be extracted sequentially from two adjacent utterances in the same dialog session.",
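A minimal PyTorch sketch of the vertex initialization in Equations (1)-(2): session-level vertices are pure learned embeddings, while each utterance-level vertex concatenates a learned embedding with the averaged RNN states of its associated phrase. The class name and dimensions are illustrative assumptions (the real graphs here have on the order of a million utterance-level vertices):

```python
# Vertex embedding initialization sketch (Eqs. 1-2).
import torch
import torch.nn as nn

class VertexEmbeddings(nn.Module):
    def __init__(self, n_sess, n_utter, vocab_size, dim=256):
        super().__init__()
        self.W_s = nn.Embedding(n_sess, dim)           # s[m] = W_s v^s_m
        self.W_u = nn.Embedding(n_utter, dim)          # learned part of x[n]
        self.word_emb = nn.Embedding(vocab_size, dim)
        self.phrase_rnn = nn.GRU(dim, dim, batch_first=True)

    def phrase_vec(self, phrase_token_ids):
        """e(ph_n): average pooling of the phrase's RNN hidden states."""
        states, _ = self.phrase_rnn(self.word_emb(phrase_token_ids))
        return states.mean(dim=1)                      # (batch, dim)

    def utter_vertex(self, n, phrase_token_ids):
        """x[n] = [e(ph_n); W_u v^u_n]."""
        return torch.cat([self.phrase_vec(phrase_token_ids),
                          self.W_u(n)], dim=-1)        # (batch, 2*dim)

emb = VertexEmbeddings(n_sess=600, n_utter=10_000, vocab_size=50_000)
x_n = emb.utter_vertex(torch.tensor([3]), torch.randint(0, 50_000, (1, 4)))
```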
"For each utterance x i in a dialog session, we map it to an utterance-level vertex.", "Specifically, we first encode the utterance x i with an RNN encoder to obtain its representation vector e ( x i ) .", "Then, we calculate the posterior distribution of the mapped utterance-level vertex, z i , by a feed-forward neural network (FFN): z i q ( z | x i ) = softmax ( x e ( x i )) .", "Finally, we obtain the mapped utterance-level vertex, z i , by sampling from the posterior distribution with Gumbel-Softmax (Jang et al., 2017).", "Here, we can obtain an utterance-level vertex sequence after mapping each utterance in one dialog session, where the sequence is utilized for session-level vertex recognition.", "Session-level Vertex Recognition.", "We assume that each session-level vertex corresponds to a group of similar utterance-level vertex sequences that are mapped by different dialog sessions.", "And these similar sequences might have overlapped utterance-level vertices.", "To leverage this locally overlapping vertex information for encouraging mapping similar utterance-level vertex sequences to similar session-level vertices, we employ graph neural network to model complex relations among vertices for session-level vertex recognition.", "Specifically, we utilize a three-layer graph convolution network (GCN) over Utter-Utter edges to calculate structure-aware utterance-level semantics.", "The calculation is defined by: h jv un = j ( (cid:88) v u n (cid:48) N ( v un ) h j 1 v u n (cid:48) ) , (4) where h jv un denotes the j -th layer structure-aware representation for the n -th utterance-level vertex v un .", "j is the sigmoid activation function for the j -th layer, and N ( v un ) is the set of utterance-level neighbors of v un in the graph.", "Here, we can obtain a structure-aware semantic sequence [ h 3 v uz 1 , h 3 v uzi , ..., h 3 v uzc ], where h 3 v uzi represents the fi-nal structure-aware representation of i -th mapped utterance-level vertex, v uz i .", "Then, we feed the structure-aware semantic sequence to an RNN encoder, denoted as the vertex-sequence encoder, to obtain the structure-aware session representation e ( z 1 ,...,c ) .", "We calculate the posterior distribution of the mapped session-level vertex, g , as follows: g q ( g | z 1 ,...,c ) = softmax ( s e ( z 1 ,...,c )) .", "(5) Then, we obtain the mapped session-level vertex, g , by sampling from the session-level posterior distribution with Gumbel-Softmax.", "We reconstruct all utterances in the dialog session by feeding these mapped vertices into an RNN decoder (denoted as the reconstruction decoder).", "Specifically, to regenerate utterance x i , we concatenate the representation vector of mapped utterance-level vertex x [ z i ] and the representation vector of the mapped session-level vertex s [ g ] , as the initial hidden state of the reconstruction decoder.", "Finally, we optimize the DVAE-GNN model by maximizing the variational lower-bound (ELBO) (Kingma and Welling, 2014).", "Please refer to Appendix D for more details.", "After training DVAE-GNN, we construct the dialog structure graph with well-trained DVAE-GNN by three steps, as shown in Figure 1.", "Specifically, we first map all dialog sessions in corpus to vertices by Equation 3 and 5.", "Then, we collect co-occurrence statistics of these mapped vertices.", "Specifically, we count the total mapped times for each session-level vertex, denoted as #( v si ) , and those for each utterance-level vertex, denoted as #( v uj ) .", "Furthermore, we collect the 
"After training DVAE-GNN, we construct the dialog structure graph in three steps, as shown in Figure 1.", "First, we map all dialog sessions in the corpus to vertices via Equations 3 and 5.", "Second, we collect co-occurrence statistics of these mapped vertices: we count the total number of times each session-level vertex is mapped, denoted $\#(v^s_i)$, and likewise for each utterance-level vertex, denoted $\#(v^u_j)$.", "Furthermore, we collect the co-occurrence frequency of a session-level vertex and an utterance-level vertex that are mapped by a dialog session and by an utterance within it, respectively, denoted $\#(v^s_i, v^u_j)$; moreover, we collect the co-occurrence frequency of two utterance-level vertices that are sequentially mapped by two adjacent utterances in a dialog session, denoted $\#(v^u_j, v^u_k)$.", "Finally, we build edges between vertices based on these co-occurrence statistics.", "We first build a directed Utter-Utter edge from $v^u_j$ to $v^u_k$ if the bigram transition probability $\#(v^u_j, v^u_k) / \#(v^u_j)$ is above a threshold $\tau_{uu}$.", "Then, we build a bidirectional Sess-Utter edge between $v^u_j$ and $v^s_i$ if the probability $\#(v^s_i, v^u_j) / \#(v^u_j)$ is above a threshold $\tau_{su}$.", "Moreover, we build a directed Sess-Sess edge from $v^s_i$ to $v^s_o$ if $\#(v^s_i, v^s_o) / \#(v^s_i)$ is above a threshold $\tau_{ss}$, where $\#(v^s_i, v^s_o)$ is the number of utterance-level vertices connected to both session-level vertices.", "Here, Sess-Sess edges are dependent on Sess-Utter edges.",
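A sketch of the edge-building rules just described, with counts assumed to come from mapping the whole corpus through the trained DVAE-GNN; the threshold values are placeholders, not the values used in the paper:

```python
# Edge construction from co-occurrence statistics.
from collections import Counter

def build_edges(utter_count, pair_count, sess_utter_count,
                tau_uu=0.1, tau_su=0.1):
    """utter_count[j] = #(v^u_j); pair_count[(j, k)] = #(v^u_j, v^u_k);
    sess_utter_count[(i, j)] = #(v^s_i, v^u_j)."""
    utter_edges = [(j, k) for (j, k), c in pair_count.items()
                   if c / utter_count[j] > tau_uu]      # directed v^u_j -> v^u_k
    sess_edges = [(i, j) for (i, j), c in sess_utter_count.items()
                  if c / utter_count[j] > tau_su]       # bidirectional v^s_i -- v^u_j
    return utter_edges, sess_edges

# Toy counts: vertex 0 was mapped 10 times and followed by vertex 1 three times.
ue, se = build_edges(Counter({0: 10, 1: 4}), Counter({(0, 1): 3}),
                     Counter({(7, 0): 2}))
```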
"To prove the effectiveness of the discovered structure for coherent dialog generation, we utilize a graph grounded conversation system (GCS) following Xu et al. (2020a).", "Different from the single-layer policy in Xu et al. (2020a), we present a hierarchical policy for two-level vertex selection.", "GCS contains three modules: (1) a dialog context understanding module that maps the given dialog context (the previous two utterances) to an utterance-level vertex (called the hit utterance-level vertex) in the graph with the well-trained DVAE-GNN; (2) a hierarchical policy that learns to walk over one-hop graph edges (for dialog coherence) and select an utterance-level vertex to serve as the response content; and (3) a response generator that generates an appropriate response based on the selected utterance-level vertex.", "Specifically, a session-level sub-policy first selects a session-level vertex as the current dialog topic; an utterance-level sub-policy then selects an utterance-level vertex from among the current topic's child utterance-level vertices.", "Session-level sub-policy.", "Let $A^g_{s_l}$ denote the set of session-level candidate actions at time step $l$; it consists of all parent session-level vertices of the hit utterance-level vertex.", "Given the current RL state $s_l$ at time step $l$, the session-level sub-policy $\pi_g$ selects an appropriate session-level vertex from $A^g_{s_l}$ as the current dialog topic: $\pi_g(s_l, v^s_{c^g_j}) = \frac{\exp(e_{s_l}^{T} \mathbf{s}[c^g_j])}{\sum_{k=1}^{N^g_l} \exp(e_{s_l}^{T} \mathbf{s}[c^g_k])}$, where $e_{s_l}$ is the RL state representation, $c^g_j$ is the $j$-th session-level vertex in $A^g_{s_l}$, and $N^g_l$ is the number of session-level vertices in $A^g_{s_l}$.", "Utterance-level sub-policy.", "Let $A^u_{s_l}$ denote the set of utterance-level candidate actions at time step $l$; it consists of the utterance-level vertices connected to the vertex of the current dialog topic.", "Given the current state $s_l$ at time step $l$, the utterance-level sub-policy $\pi_u$ selects an optimal utterance-level vertex from $A^u_{s_l}$: $\pi_u(s_l, v^u_{c^u_j}) = \frac{\exp(e_{s_l}^{T} \mathbf{x}[c^u_j])}{\sum_{k=1}^{N^u_l} \exp(e_{s_l}^{T} \mathbf{x}[c^u_k])}$, where $c^u_j$ is the $j$-th utterance-level vertex in $A^u_{s_l}$ and $N^u_l$ is the number of utterance-level candidate vertices in $A^u_{s_l}$.", "With the distribution calculated by the above equation, we utilize Gumbel-Softmax to sample an utterance-level vertex from $A^u_{s_l}$, which provides the response content for the response generator, a Seq2Seq model with attention mechanism.",
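Both sub-policies share the same form: a softmax over dot products between the RL state vector and the candidate vertex embeddings, sampled with Gumbel-Softmax. A sketch under assumed names and illustrative dimensions:

```python
# Hierarchical sub-policy sketch (session-level and utterance-level).
import torch
import torch.nn.functional as F

def sub_policy(e_state, candidate_vecs, tau=1.0):
    """e_state: (dim,) RL state representation e_{s_l}; candidate_vecs: (K, dim)
    embeddings of candidate session- or utterance-level vertices.
    Returns (probabilities, sampled one-hot action)."""
    logits = candidate_vecs @ e_state             # exp(e^T emb) / sum(...) after softmax
    probs = F.softmax(logits, dim=-1)
    action = F.gumbel_softmax(logits, tau=tau, hard=True)
    return probs, action

# Session-level step over 3 candidate topics, then utterance-level step
# over the chosen topic's 4 child vertices:
probs_g, a_g = sub_policy(torch.randn(32), torch.randn(3, 32))
probs_u, a_u = sub_policy(torch.randn(32), torch.randn(4, 32))
```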
evaluate discovered dialog structure graph with both automatic evaluation and human evaluation.", "For automatic evaluation, we use two metrics to evaluate the performance of reconstruction: (1) NLL is the negative log likelihood of dialog utterances; (2) BLEU-1/2 measures how much that reconstructed sentences contains 1/2-gram overlaps with input sentences (Papineni et al., 2002).", "The two metrics indicate how well the learned dialog structure graph can capture important semantic information in dialog dataset.", "Further, we manually evaluate the quality of edges and vertices in the graph.", "For edges, (1) S-U Appr.", "for multi-turn dialog coherence.", "It measures the appropriateness of Sess-Utter edges, where these edges provide crucial prior information to ensure multi-turn dialog coherence (see results in Section 5.4).", "1 if an utterance-level vertex is relevant to its session-level vertex (topic), otherwise 0.", "(2) U-U Appr.", "for single-turn dialog coherence: It measures the quality of Utter-Utter edges between two utterance-level vertices, where these edges provide crucial prior information to 7 We ever tried to modify their codes to support the learning of a large number of dialog states (up to 30k).", "ensure single-turn dialog coherence.", "It is 1 if an Utter-Utter edge is suitable for responding, otherwise 0.", "Notice that we don't evaluate the quality of Sess-Sess edges because Sess-Sess edges are dependent on the statistics of Sess-Utter edges.", "Meanwhile, for vertices, we evaluate Session-level Vertex Quality (Sess.V.-Qual.) .", "Ideally, a session-level vertex (topic) should be mapped by dialog sessions that share high similarity.", "In other words, we can measure the quality of a session-level vertex by evaluating the similarity of semantics between two sessions that are mapped to it.", "It is 2 if the two sessions mapped to the same session-level vertex are about the same or highly similar topic, 0 if the two sessions contains different topic, otherwise 1.", "Specifically, during evaluation, we provide typical words of each topic by calculating TF-IDF on utterances that are mapped to it.", "High Sess.V.-Qual. is beneficial to conduct topic management for coherent multi-turn dialogs.", "Note that we don't evaluate utterance-level vertex quality since it is too fine-grained for annotators to determine whether two utterances that are mapped to a utterance-level vertex are highly-similar.", "For human evaluation, we sample 300 cases and invite three annotators from a crowd-sourcing platform to evaluate each case.", "8 Notice that all system identifiers are masked during human evaluation.", "As shown in Table 1, DVAE-GNN significantly outperforms DVRNN, in terms of all the metrics (sign test, p-value < 0.01) on the two datasets.", "It demonstrates that DVAE-GNN can better discover meaningful dialog structure graph.", "Specifically, DVAE-GNN obtains the best results in terms of NLL and BLEU-1/2, which shows that DVAE-GNN can better capture important semantic information in comparison with DVRNN.", "Meanwhile, DVAE-GNN also surpasses all baselines in terms of U-U Appr. and S-U Appr..", "It indicates that our discovered dialog structure graph has higher-quality edges and can better facilitate coherent dialog generation.", "Furthermore, we conduct ablation study.", "Specifically, to evaluate the contribution of GNN, we remove GNN from DVAE-GNN, denoted as DVAE-GNN w/o GNN.", "We see that its performance drop sharply in terms of S-U Appr. 
and Sess.V.-Qual..", "It demonstrates that GNN can better incorporate the structure information (complex relations 8 test.baidu.com among vertices) into session-level vertex representation learning.", "Moreover, to evaluate the contribution of phrases to utterance-level vertex representation, we remove phrases, denoted as DVAE-GNN w/o phrase.", "We see that its scores in terms of all the metrics drops sharply, especially the three human evaluation metrics.", "The reason is that it's difficult to learn high-quality utterance-level vertex representation from a large amount of fine-grained semantic content in open-domain dialogs without any prior information.", "The Kappa value is above 0.4, showing moderate agreement among annotators.", "To confirm the benefits of discovered dialog structure graph for coherent conversation generation, we conduct experiments on the graph discovered from Weibo corpus.", "All the systems (including baselines) are trained on Weibo corpus.", "We carefully select the following six baselines.", "MMPMS It is the multi-mapping based neural open-domain conversational model with posterior mapping selection mechanism (Chen et al., 2019), which is a SOTA model on the Weibo Corpus.", "MemGM It is the memory-augmented open-domain dialog model (Tian et al., 2019), which learns to cluster U-R pairs for response generation.", "HRED It is the hierarchical recurrent encoder-decoder model (Serban et al., 2016).", "CVAE It is the Conditional Variational Auto-Encoder based neural open-domain conversational model (Zhao et al., 2017).", "VHCR-EI This variational hierarchical RNN model can learn hierarchical latent variables from open-domain dialogs (Ghandeharioun et al., 2019).", "It is a SOTA dialog model with hierarchical VAE.", "DVRNN-RL It discovers dialog structure graph for task-oriented dialog modeling (Shi et al., 2019).", "GCS It is our proposed dialog structure graph grounded dialog system with hierarchical RL.", "GCS w/ UtterG It is a simplified version of GCS that just uses the utterance-level graph and utterance-level sub-policy.", "GCS w/ Phrase Graph It is a simplified version of GCS that just uses the phrase graph and utterance-level sub-policy.", "We use the same user simulator for RL training of DVRNN-RL, GCS and GCS w/ UtterG.", "Here, we use the original MMPMS as user simulator because it achieves the best result on the Weibo Corpus.", "The user simulator is pre-trained on dialog corpus and not updated during policy training.", "We use the original source codes for all the baselines and the simulator.", "Further details about baselines and GCS can be found in Appendix A.2.", "We conduct model-human dialogs for evaluation.", "Given a model, we first randomly select an utterance (the first utterance in a session) from test set for the model side to start the conversations with a human turker.", "Then the human is asked to converse with the selected model till 8 turns are reached.", "Finally, we obtain 50 model-human dialogs for multi-turn evaluation.", "Then we randomly sample 200 U-R pairs from the above dialogs for single-turn evaluation.", "Since the proposed system does not aim at predicting the highest-probability response at each turn, but rather the long-term success of a dialog (e.g., coherence), we do not employ BLEU (Papineni et al., 2002) or perplexity for evaluation.", "We use three multi-turn evaluation metrics and three single-turn metrics.", "For human evaluation, we invite three annotators to conduct evaluation on each case, and we ask them to 
provide 1/0 (Yes or No) scores for most of the metrics.", "Moreover, for multi-turn coherence, we first ask the annotators to manually segment a dialog by topics and then conduct the evaluation on each session.", "A session refers to a dialog fragment about one topic.", "Notice that system identifiers are masked during human evaluation.", "Multi-turn Metrics.", "We use the following metrics: (1) Multi-turn Coherence (Multi.T.-Coh.): It measures the coherence within a session.", "Common incoherence errors in a session include anaphora errors across utterances and information inconsistency.", "0 means that there are more than two incoherence errors in a session.", "1 means that there is only one error.", "2 means that there are no errors.", "Finally, we compute the average score over all the sessions.", "(2) Dialog engagement (Enga.): This metric measures how interesting a dialog is.", "It is 1 if a dialog is interesting and the human is willing to continue the conversation, otherwise 0.", "(3) Length of high-quality dialog (Length): A high-quality dialog ends if the model tends to produce dull responses or two consecutive utterances are highly overlapping (Li et al., 2016b).", "Single-turn Metrics.", "We use the following metrics: (1) Single-turn Coherence (Single.T.-Coh.): 0 if a response is inappropriate as a reply, otherwise 1; (2) Informativeness (Info.): 0 if a response is a safe response, e.g. I don't know, or it is highly overlapped with the context, otherwise 1; (3) Distinct (Dist-i): an automatic metric for response diversity (Li et al., 2016a).", "As shown in Table 2, GCS significantly outperforms all the baselines in terms of all the metrics except Length-of-dialog (sign test, p-value < 0.01).", "It indicates that GCS can generate more coherent, informative and engaging dialogs.", "Specifically, our system's two-sub-policy strategy on the dialog structure graph enables more coherent dialog flow control than the hierarchical latent variable based VHCR-EI model, which performs the best among the", "baselines, as indicated by Multi.T.-Coh.", "Figure 4: A sample dialog between our dialog system GCS and a human, where Bot is our system and User is the human. Start: [National Day is coming, I'm going to have a holiday.] User [In holiday, we can go out and have fun, such as flying kites.] Bot [I'll come back on Friday.] User [All right, I'll wait for you at home.] Bot [Great! Looking forward to your big meal.] User [Well, I want to travel to Nanjing, let's go together.] Bot [I feel like you're abandoning me.] User [No, you can come with us.] Bot [Is it appropriate to travel with a lovey-dovey couple?] User [I don't think it is reasonable that showing sweetness makes a lovey-dovey couple break up fast.] Bot [You are very nice.] User [Just so-so.] Bot [All right, you are beautiful in my eyes!] User [Ha-ha, you are so funny. You're just flattering me.] Bot [Yeah, I'm sorry.] User [That doesn't matter, you're forgiven.] Bot [You are so generous. How could such a good thing happen.]", "Moreover, our high-quality edges between utterance-level vertices (measured by the metric U-U Appr.
in Table 1) help GCS to achieve a higher single-turn coherence score than DVRNN-RL, as indicated by Single.T.-Coh.", "In addition, GCS, VHCR-EI, MMPMS and CVAE obtain better performance in terms of Info., indicating that latent variables can effectively improve response informativeness.", "The Kappa value is above 0.4, showing moderate agreement among annotators.", "Figure 4 shows a sample dialog between our system GCS and a human.", "We see that our system can generate a coherent, engaging and informative multi-turn dialog.", "For an in-depth analysis, we manually segment the whole dialog into two sessions.", "It can be seen that the first session is about a meeting appointment, and it contains a reasonable dialog logic: I will have a holiday → I will arrive → wait for you at home → look forward to a big meal.", "The second session is about joking between friends, and it also contains a reasonable logic: you are beautiful → flattering me → I am sorry.", "Ablation Study.", "In order to evaluate the contribution of session-level vertices, we run GCS with an utterance-level-only dialog structure graph, denoted as GCS w/ UtterG.", "Results in Table 2 show that its performance in terms of Multi.T.-Coh. and Enga. drops sharply.", "It demonstrates the contribution of our hierarchical dialog structure graph to enhancing dialog coherence and dialog engagement.", "The possible reason for the inferior performance of GCS w/ UtterG is that the removal of session-level vertices harms the capability of selecting a coherent utterance-level vertex sequence.", "In this paper, we conduct unsupervised discovery of discrete dialog structure from chitchat corpora.", "Further, we try to formalize the structure as a two-layer directed graph.", "To discover the dialog structure, we present an unsupervised model, DVAE-GNN, which integrates GNN into DVAE to model complex relations among dialog states for more effective dialog structure discovery.", "Experimental results demonstrate that DVAE-GNN can discover a meaningful dialog structure, and that the use of the dialog structure as background knowledge can significantly improve multi-turn dialog coherence.", "We are grateful for the support from Ying Yu.", "This work is supported by the National Key Research and Development Project of China (No. 2018AAA0101900) and the National Natural Science Foundation of China (NSFC) via grant 61976072." ]
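The Dist-i diversity metric referenced in the single-turn evaluation above (Li et al., 2016a) is the ratio of unique n-grams to the total number of n-grams in the generated responses. A minimal sketch of how it could be computed; the function name and corpus layout are our own assumptions, not taken from the paper:

    from typing import List

    def distinct_n(responses: List[List[str]], n: int) -> float:
        """Dist-n: ratio of unique n-grams to total n-grams over a set of
        tokenized responses (Li et al., 2016a)."""
        unique = set()
        total = 0
        for tokens in responses:
            ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
            unique.update(ngrams)
            total += len(ngrams)
        return len(unique) / total if total > 0 else 0.0

    # Example: two near-duplicate responses yield a low Dist-1 score.
    print(distinct_n([["i", "am", "fine"], ["i", "am", "fine", "too"]], 1))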
[ "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "objective", "result", "abstain", "result", "abstain", "objective", "abstain", "objective", "objective", "other", "other", "abstain", "other", "method", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "result", "objective", "method", "abstain", "method", "method", "method", "objective", "method", "objective", "abstain", "method", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "method", "abstain", "other", "other" ]
[ "Argument pair extraction (APE) is a research task for extracting arguments from two passages and identifying potential argument pairs.", "Prior research work treats this task as a sequence labeling problem and a binary clas-sification problem on two passages that are directly concatenated together, which has a limitation of not fully utilizing the unique characteristics and inherent relations of two different passages.", "This paper proposes a novel attention-guided multi-layer multi-cross encoding scheme to address the challenges.", "The new model processes two passages with two individual sequence encoders and updates their representations using each other's representations through attention.", "In addition, the pair prediction part is formulated as a table-filling problem by updating the representations of two sequences' Cartesian product.", "Furthermore, an auxiliary attention loss is introduced to guide each argument to align to its paired argument.", "An extensive set of experiments show that the new model significantly improves the APE performance over several alternatives 1 .", "Mining argumentation structures within a corpus is a crucial task in argument mining research field (Palau and Moens, 2009).", "There are usually two main components in learning natural language argument structures: (1) detecting argumentative units, (2) predicting relations between the identified arguments.", "It has been widely studied by natural language processing (NLP) researchers (Cabrio and Villata, 2018) and applied to domains such as: web debating platforms (Boltuzic and Snajder, 2015; Swanson et al., 2015; Chakrabarty et al., 2019), Liying Cheng is under the Joint Ph.D.", "persuasive essays (Stab and Gurevych, 2014; Persing and Ng, 2016), social media (Abbott et al., 2016), etc.", "Unlike traditional argument extraction tasks that are mainly from monologues, Cheng et al. 
(2020) propose a new task, argument pair extraction (APE) from two passages, in a new domain, namely the peer review process, focusing on exploiting the interactions between reviewer comments and author rebuttals.", "As shown in Figure 1, the APE task aims to extract the argument pairs from two passages.", "Specific suggestions, questions or challenges in reviews are considered as review arguments.", "Response sentences that answer or explain a specific review argument are its paired rebuttal arguments.", "For example, in the pink area, the reviewer points out the lack of literature review in the submission (i.e., review sentences 11-12).", "As a response, the authors argue that they select the literature based on the special focus of their work (i.e., rebuttal sentences 6-7).", "Similar to the two components in traditional argumentation structure mining, the APE task can be divided into two subtasks: (1) extracting the review and rebuttal arguments from the two passages, (2) predicting whether an extracted review argument and a rebuttal argument form an argument pair.", "The first subtask can be cast as a sequence labeling problem and the second one can be cast as a binary classification problem.", "One straightforward approach is to couple the two subtasks in a pipeline.", "However, such a pipeline approach learns the two subtasks independently without sharing ample information.", "To address this limitation, the pioneering work (Cheng et al., 2020) employs a multi-task learning framework to train the two subtasks simultaneously.", "However, there are several shortcomings in the multi-task model.", "First, the review passage and its rebuttal passage are concatenated as a single passage to perform the argument extraction subtask with sequence labeling.", "It is obvious to see from", "Figure 1 that the review and rebuttal passages have their own styles in terms of structure and wording.", "Hence, it is not suitable to concatenate them as one long sequence, which goes against the fact that they are two unique sequences in essence and hinders the model from fully utilizing their different characteristics.", "To overcome this limitation, we treat the review and rebuttal passages as two individual sequences and design two sequence encoders for them respectively.", "In each sequence encoder, the sequence representations are updated by the other's representations through mutual attention.", "It allows us to better distinguish the two passages, and meanwhile, to conveniently exchange information between them through the attention mechanism.", "Second, the subtask coordination capability of their multi-task framework is weak as the two subtasks only coordinate with each other via the shared feature encoders, i.e., the sentence encoder for the sequence of word tokens and the passage encoder for the concatenation of sentences.", "Thus, the shared information between the two subtasks is only learned implicitly.", "To overcome this limitation, we propose an attention-guided multi-layer multi-cross (MLMC) encoding mechanism.", "Inspired by the table-filling approach (Miwa and Sasaki, 2014), we form a table that represents features for the Cartesian product of the review and rebuttal sequences by utilizing both of their embeddings, as shown in the right portion of Figure 1.
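Before the formal definition later in the paper, the Cartesian-product table can be pictured with a short sketch; this is only illustrative (shapes and the plain concatenation are our assumptions; the paper's actual construction, with a linear projection and a 2D-GRU, is detailed in the model section):

    import torch

    # Illustrative sketch: entry (i, j) of the table combines the i-th review
    # sentence embedding with the j-th rebuttal sentence embedding.
    I, J, d = 12, 7, 256                      # e.g. 12 review / 7 rebuttal sentences
    S_rv = torch.randn(I, d)                  # review sentence embeddings
    S_rb = torch.randn(J, d)                  # rebuttal sentence embeddings
    table = torch.cat(
        [S_rv.unsqueeze(1).expand(I, J, d),   # broadcast reviews along columns
         S_rb.unsqueeze(0).expand(I, J, d)],  # broadcast rebuttals along rows
        dim=-1,
    )
    print(table.shape)                        # torch.Size([12, 7, 512])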
The table representations will be updated with the incorporation of the two sequence representations, and in return, they will also help to update the mutual attention mentioned above.", "It is named a multi-cross encoder because these three encoding components (i.e., one table and two sequences) interact with each other explicitly and extensively.", "By stacking multiple encoder layers, the two subtasks can further benefit from each other.", "In addition, we also design an auxiliary attention loss to guide each argument to refer to its paired arguments.", "This additional loss not only enhances the model performance, but also significantly improves the attention interpretability.", "To summarize, the contributions of this paper are three-fold.", "Firstly, we apply the table-filling approach to model the sentence-level correlation between two passages with multiple sentences for the first time.", "Secondly, on the model side, we propose an MLMC encoder to explicitly learn the useful shared information in the two passages.", "Furthermore, we introduce an auxiliary attention loss, which is able to further improve the efficacy of the mutual attentions.", "Thirdly, we evaluate our model on the benchmark dataset (Cheng et al., 2020), and the results show that our model achieves a new state-of-the-art performance on the APE task.", "Argument mining has wide applications in the educational domain, including persuasive essays (Stab and Gurevych, 2017; Eger et al., 2017), scientific articles (Teufel et al., 2009; Guo et al., 2011), writing assistance (Zhang and Litman, 2016), essay scoring (Persing and Ng, 2015; Somasundaran et al., 2016), peer reviews (Hua et al., 2019), etc.", "Unlike previous works, Cheng et al. (2020) introduce a new task named APE in the domain of peer review and rebuttal, which intends to extract argument pairs from two passages simultaneously.", "Table-filling approaches (Miwa and Sasaki, 2014; Gupta et al., 2016; Zhang et al., 2017) have been proposed to work towards the joint task of named entity recognition (NER) and relation extraction (RE).", "In their work, the diagonal entries of the table show the words' entity types and the off-diagonal entries show the relation types with other words.", "More recently, there have been more research works proposing various table-filling models for different tasks.", "Wang and Lu (2020) propose to learn two separate encoders (a table encoder and a sequence encoder) that interact with each other for the joint NER and RE task.", "Wu et al.
(2020) propose a grid tagging scheme to address the aspect-oriented fine-grained opinion extraction task.", "Compared to our model, one major difference is the table shape.", "In their tables, the row and column represent the same sequence, and the table is thus square.", "In our model, the table is rectangular: the row and column represent two different sequences with different lengths.", "Another clear difference is that each entry in their tables captures a word-pair relation, whereas each entry in our table captures a sentence-pair relation.", "As we can see from Figure 1, the review/rebuttal sequence consists of a list of sentences.", "Thus, it requires extra effort to learn comprehensive sentence representations.", "In this paper, we tackle the APE task, which aims to study the internal structure and relations between two passages, e.g., review and rebuttal passages.", "For example, as shown in Figure 1, given a pair of a review passage $s_{rv} = [s_{rv,1}, \dots, s_{rv,12}]$ (in the red box) and a rebuttal passage $s_{rb} = [s_{rb,1}, \dots, s_{rb,7}]$ (in the orange box), we intend to automatically extract all argument pairs between them.", "First, for the argument mining subtask, we cast it as a sentence-level sequence labeling problem following the work (Cheng et al., 2020), using the standard BIO scheme (Ramshaw, 1995; Ratinov and Roth, 2009).", "This subtask segments the argumentative units (highlighted in blue/pink) from the non-argumentative units (highlighted in grey) for each passage.", "The label sequences for the review passage and the rebuttal passage are shown in the right portion of Figure 1. Second, the sentence pairing subtask predicts whether two sentences belong to one argument pair.", "Here, we formulate it as a table-filling problem following the work (Miwa and Sasaki, 2014).", "Taking the 8th review sentence $s_{rv,8}$ in the first review argument as an example, the rebuttal argument sentences $\{s_{rb,2}, s_{rb,3}, s_{rb,4}, s_{rb,5}\}$ forming sentence pairs with it are filled with green, as shown in the table.", "With the collaboration of these two subtasks, we can perform the overall argument pair extraction task.", "In this case, two argument pairs (highlighted in blue/pink from the two passages) are extracted, which correspond to the two green rectangles shown in the table.", "Figure 2 shows our proposed attention-guided multi-layer multi-cross (MLMC) encoding based model.", "The model mainly consists of three parts: a sentence embedder, an n-layer multi-cross encoder, and a predictor.", "The review sentences and rebuttal sentences first go through the sentence embedder separately to obtain their respective sentence embeddings.", "We then utilize the representations from the review and rebuttal sequences to form a table as shown earlier in Figure 1.
Next, the representations of the table and the two sequences are updated through n multi-cross encoder layers.", "Finally, the model predicts the review and rebuttal arguments through a conditional random field (CRF) (Lafferty et al., 2001) layer based on the two sequence representations, and extracts the pairing information through a multi-layer perceptron (MLP) based on the table representations.", "The bottom left part of Figure 2 shows our sentence embedder, the input of which is a review sentence or a rebuttal sentence with $l$ tokens, $s = [t_0, t_1, \dots, t_{l-1}]$.", "We obtain the pre-trained BERT (Devlin et al., 2019) token embeddings $[x_0, x_1, \dots, x_{l-1}]$ for all word tokens in the sentence, after which all token embeddings are fed into a bidirectional long short-term memory (biLSTM)", "(Hochreiter and Schmidhuber, 1997) layer.", "The last hidden states from both directions are concatenated as the sentence embedding $S^{(0)}$.", "A more common practice is to use the [CLS] token embedding to represent the sentence embedding.", "However, given the high density of scientific terms and the correspondence between review and rebuttal, token-level information is naturally crucial for the task.", "The same conclusion is drawn from the experimental results in the previous work (Cheng et al., 2020).", "The entire multi-cross encoder consists of n layers.", "The details of each multi-cross encoder layer are shown in the blue dotted box on the right of Figure 2. The input of a layer includes the table representations and the two sequence representations, i.e., the review and rebuttal sequence representations.", "In each layer, the table features are updated by the sequence features and vice versa.", "Sequence Encoder Phase I: To fully utilize the different characteristics of review and rebuttal, we regard them as two individual sequences.", "The two sequence embeddings $S^{(k-1)}_{rv}$ and $S^{(k-1)}_{rb}$, of length $I$ and $J$ respectively (i.e., the output from the previous layer), are passed through the same biLSTM layer, colored light yellow in Figure 2.
Taking the review sequence as an example, the review hidden states at position $i$ are updated as follows: $S^{(k)\prime[1]}_{rv,i} = \mathrm{LSTM}_{forward}(S^{(k-1)}_{rv,i}, S^{(k)\prime[1]}_{rv,i-1})$, $S^{(k)\prime[2]}_{rv,i} = \mathrm{LSTM}_{backward}(S^{(k-1)}_{rv,i}, S^{(k)\prime[2]}_{rv,i+1})$, $S^{(k)\prime}_{rv,i} = [S^{(k)\prime[1]}_{rv,i}, S^{(k)\prime[2]}_{rv,i}]$.", "The rebuttal hidden states $S^{(k)\prime}_{rb}$ in layer $k$ are obtained from the same biLSTM in the same manner.", "Table Encoder: To capture the pairing information explicitly, we adopt the table-filling approach.", "At layer $k$, we update the table $T^{(k-1)}_{rv \times rb}$ through the table encoder.", "The table input $T^{(0)}_{rv \times rb}$ before the first encoder layer is set to $0$.", "At each layer $k$, in order to incorporate the information extracted in $S^{(k)\prime}_{rv}$ and $S^{(k)\prime}_{rb}$, we form another table $T^{(k-1)\prime\prime}_{rv \times rb}$ from them through concatenation and linear projection as follows: $T^{(k-1)\prime\prime}_{rv \times rb} = \mathrm{Linear}(S^{(k)\prime}_{rv} \oplus S^{(k)\prime}_{rb})$.", "The table features from the previous layer $T^{(k-1)}_{rv \times rb}$ are then updated by $T^{(k-1)\prime\prime}_{rv \times rb}$ with layer normalization: $T^{(k-1)\prime}_{rv \times rb} = \mathrm{LayerNorm}(T^{(k-1)}_{rv \times rb} \oplus T^{(k-1)\prime\prime}_{rv \times rb})$.", "The entry $T^{(k-1)\prime}_{i,j}$ at row $i$ and column $j$ represents specific features between the review sentence at position $i$ and the rebuttal sentence at position $j$.", "The table hidden states $T^{(k)}_{i,j}$ are updated through a 2D-GRU: $T^{(k)[1]}_{i,j} = \mathrm{GRU}_{forward}(T^{(k)[1]}_{i-1,j}, T^{(k)[1]}_{i,j-1}, T^{(k-1)\prime}_{i,j})$, $T^{(k)[2]}_{i,j} = \mathrm{GRU}_{backward}(T^{(k)[2]}_{i+1,j}, T^{(k)[2]}_{i,j+1}, T^{(k-1)\prime}_{i,j})$, $T^{(k)}_{i,j} = [T^{(k)[1]}_{i,j}, T^{(k)[2]}_{i,j}]$.", "The 2D-GRU settings are similar to the previous work (Wang and Lu, 2020) except that the table to be processed is not necessarily square ($I \neq J$ in general).", "Therefore, the 2D-GRU implemented here is more general.", "The previous hidden states for the table boundaries ($T^{(k)[1]}_{0,j}$, $T^{(k)[1]}_{i,0}$, $T^{(k)[2]}_{I+1,j}$, $T^{(k)[2]}_{i,J+1}$) are set to $0$.", "The outputs $T^{(k)}_{rv \times rb}$ of layer $k$ are further exploited by the mutual attention mechanism explained below to update the review and rebuttal sequence embeddings.", "Mutual Attention: The mutual attention mechanism (shown as the review attention and rebuttal attention modules in Figure 2) links the review embedding, the rebuttal embedding and the table embedding together, through which the review embedding and the rebuttal embedding update each other with the help of the table features.", "The attention weights $\alpha^{(k)}_{i,j}$ and $\beta^{(k)}_{i,j}$ at position $(i,j)$ in layer $k$ are computed as follows: $\alpha^{(k)}_{i,j} = \tanh(v_{\alpha}^{T} T^{(k)}_{i,j})$, $\beta^{(k)}_{i,j} = \tanh(v_{\beta}^{T} T^{(k)}_{i,j})$, where $v_{\alpha}$ and $v_{\beta}$ are learnable vectors.", "We further normalize the attention weights: $a^{(k)}_{i,j} = \frac{\exp(\alpha^{(k)}_{i,j})}{\sum_{j'=1}^{J} \exp(\alpha^{(k)}_{i,j'})}$, $b^{(k)}_{i,j} = \frac{\exp(\beta^{(k)}_{i,j})}{\sum_{i'=1}^{I} \exp(\beta^{(k)}_{i',j})}$.", "Here, $a^{(k)}_{i,j}$ and $b^{(k)}_{i,j}$ are the normalized attention weights ranging from 0 to 1.
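The mutual attention just described can be sketched in PyTorch as follows; this is our reading of the equations (module and variable names are ours, not the authors' code), and the weighted-average step the sketch ends with is the one described next in the text:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MutualAttention(nn.Module):
        """Sketch of the mutual attention: scores are derived from the table
        features T and let review/rebuttal embeddings attend to each other."""

        def __init__(self, d_table: int):
            super().__init__()
            self.v_alpha = nn.Linear(d_table, 1, bias=False)  # v_alpha^T T_ij
            self.v_beta = nn.Linear(d_table, 1, bias=False)   # v_beta^T T_ij

        def forward(self, T, S_rv, S_rb):
            # T: [I, J, d_table]; S_rv: [I, d]; S_rb: [J, d]
            alpha = torch.tanh(self.v_alpha(T)).squeeze(-1)   # [I, J]
            beta = torch.tanh(self.v_beta(T)).squeeze(-1)     # [I, J]
            a = F.softmax(alpha, dim=1)   # normalize over rebuttal positions j
            b = F.softmax(beta, dim=0)    # normalize over review positions i
            # Weighted averages (the step described next in the text):
            S_rv_new = a @ S_rb                    # [I, d]
            S_rb_new = b.transpose(0, 1) @ S_rv    # [J, d]
            return S_rv_new, S_rb_new, a, b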
We then get the weighted averages of sentence representations $S^{(k)\prime\prime}_{rv,i}$ and $S^{(k)\prime\prime}_{rb,j}$ from $S^{(k)\prime}_{rb}$ and $S^{(k)\prime}_{rv}$ respectively.", "$S^{(k)\prime\prime}_{rv,i} = \sum_{j=1}^{J} a^{(k)}_{i,j} S^{(k)\prime}_{rb,j}$, $S^{(k)\prime\prime}_{rb,j} = \sum_{i=1}^{I} b^{(k)}_{i,j} S^{(k)\prime}_{rv,i}$.", "Here, $S^{(k)\prime\prime}_{rv}$ and $S^{(k)\prime\prime}_{rb}$ are the updated review and rebuttal embeddings.", "Information in the review and rebuttal sequences is exchanged via mutual attention.", "Sequence Encoder Phase II: The addition and layer normalization used to combine $S^{(k)\prime}$ and $S^{(k)\prime\prime}$ in the sequence encoder are similar to the ones in the table encoder.", "We obtain the review sequence embedding $S^{(k)}_{rv}$ and the rebuttal sequence embedding $S^{(k)}_{rb}$ as the sequence outputs of layer $k$ as follows: $S^{(k)}_{rv} = \mathrm{LayerNorm}(S^{(k)\prime}_{rv} \oplus S^{(k)\prime\prime}_{rv})$, $S^{(k)}_{rb} = \mathrm{LayerNorm}(S^{(k)\prime}_{rb} \oplus S^{(k)\prime\prime}_{rb})$.", "Stacking Multi-Cross Encoder Layers: The updating process described above continues as the layer index grows from 1 to $n$.", "The table features are updated by both the review and rebuttal sequences, and each sequence later updates the other via the table.", "There are also residual connections between adjacent layers, which accept the previous layer's output as the current layer's input and include it as part of the new embedding, making the system more robust.", "All three features (i.e., the review sequence, the rebuttal sequence, and the table) are intertwined with each other, and information flows across the different components of the encoder.", "This is also the reason why the encoder is described as MLMC.", "After the final multi-cross encoder layer, the sequence features are used for argument mining and the table features are used for pair prediction.", "Argument Predictor: We adopt a CRF to predict the argument sequence labels.", "The sequence labeling loss $\mathcal{L}_{seq}$ for both the review sequence $s_{rv}$ and the rebuttal sequence $s_{rb}$ in each instance is defined as: $\mathcal{L}_{seq} = -\big(\log p(y_{rv} \mid s_{rv}) + \log p(y_{rb} \mid s_{rb})\big)$, where $y_{rv}$ and $y_{rb}$ are the review and rebuttal sequence labels.", "During inference, the predicted sequence label is the one with the highest conditional probability given the original sequence: $\hat{y}_{rv} = \arg\max_{y} p(y \mid s_{rv})$, $\hat{y}_{rb} = \arg\max_{y} p(y \mid s_{rb})$.", "Pair Predictor: We use an MLP to predict sentence pairs.", "The pairing loss $\mathcal{L}_{pair}$ for each instance is: $\mathcal{L}_{pair} = -\sum_{i,j} \big( y^{pair}_{i,j} \log p(y^{pair}_{i,j}=1 \mid s_{rv}, s_{rb}) + (1 - y^{pair}_{i,j}) \log p(y^{pair}_{i,j}=0 \mid s_{rv}, s_{rb}) \big)$, where $y^{pair}_{i,j}$ is 1 when $s_{rv,i}$ and $s_{rb,j}$ are paired, and 0 otherwise.", "Following (Cheng et al., 2020), during evaluation, a pair of candidate spans ($[s_{rv,i_1}, \dots, s_{rv,i_2}]$ and $[s_{rb,j_1}, \dots, s_{rb,j_2}]$) form a pair if they satisfy the following criterion: $\sum_{i=i_1}^{i_2} \sum_{j=j_1}^{j_2} \mathbf{1}\{p(y^{pair}_{i,j}=1) > 0.5\} \geq \dots$", "Attention Loss: The attention loss is a loss term specifically designed for the task.", "It aims to increase the effectiveness of the review attention and rebuttal attention discussed above.", "We provide the detailed steps of deriving the loss $\mathcal{L}_{seq}$ in Appendix A.1.", "The MLP is chosen because more complex structures like convolutional neural networks (CNN) demonstrate no superiority.", "The comparison results are attached in Appendix B.3.", "We provide the detailed steps of deriving the pairing loss $\mathcal{L}_{pair}$ in
Appendix A.2.", "Even without this auxiliary loss term, sentences in the review are supposed to attend to relevant sentences in the rebuttal and vice versa.", "The auxiliary loss is thus aimed at augmenting the effect of mutual reference explicitly by guiding the paired arguments to refer to each other.", "Intuitively, under the settings of argument mining and pairing, it is natural that review arguments refer to the paired rebuttal arguments to update their embeddings, and vice versa, during mutual attention.", "Hence, we introduce an auxiliary loss term to increase the attention weights computed for paired arguments and decrease the attention weights otherwise, for both the review and rebuttal attentions in all layers.", "For each instance, $\mathcal{L}_{attn}$ is defined as: $\mathcal{L}_{attn} = \sum_{i,j} (1 - 2 y^{pair}_{i,j}) \big( \sum_{k=1}^{n} \gamma^{n-k} (a^{(k)}_{i,j} + b^{(k)}_{i,j}) \big)$, where $\gamma$ is the decaying parameter used to compute the exponential moving average of the sum of attention.", "Larger weights are assigned to layers closer to the final predictor, as they are more related to the prediction in the end.", "The attention loss is defined as a summation across all layers to increase the accuracy and interpretability of both the review and rebuttal attentions in all layers.", "If the tendency to attend to the paired argument is augmented, the benefits of the attention mechanism can be further exploited (e.g., learning better sentence representations, increasing pair prediction accuracy).", "$\mathcal{L} = \mathcal{L}_{seq} + \lambda_1 \mathcal{L}_{pair} + \lambda_2 \mathcal{L}_{attn}$,", "where $\lambda_1$ and $\lambda_2$ are tuned hyperparameters.", "We conduct experiments on the benchmark dataset, i.e., the RR dataset (Cheng et al., 2020), to evaluate the effectiveness of our proposed model.", "The RR dataset includes 4,764 pairs of peer reviews and author rebuttals collected from ICLR 2013 to ICLR 2020.", "Two dataset versions are provided: RR-Submission-v1 and RR-Passage-v1.", "In RR-Submission-v1, multiple review-rebuttal passage pairs of the same paper submission are in the same set of train, dev or test; while in RR-Passage-v1, different review-rebuttal passage pairs of the same submission could be put into different sets.", "We further modify the RR-Submission-v1 dataset by fixing some minor bugs in the labels, and name it RR-Submission-v2.", "The data are split into train, dev and test sets by a ratio of 8:1:1 for all three dataset versions.", "The pipeline approach is used as a baseline model in the previous work (Cheng et al., 2020).", "It independently trains the two subtasks and then pipes them together to extract argument pairs.", "The multi-task learning model proposed by Cheng et al. (2020) trains the two subtasks simultaneously via the shared feature encoders.", "We implement our attention-guided MLMC encoding based model in PyTorch.", "The dimension of the pre-trained BERT sentence embeddings is 768 by default.", "The maximum number of BERT tokens for each sentence is set to 200.", "The MLP layer is composed of 3 linear functions and 2 ReLU functions.", "We use Adam (Kingma and Ba, 2014) with an initial learning rate of 0.0002, and update the parameters with a batch size of 1 and a dropout rate of 0.5.", "We train our model for at most 25 epochs.", "We select the best model parameters based on the best overall F1 score on the development set and apply them to the test set for evaluation.", "All models are run on a V100 GPU.", "Note that in this paper, the parameters are mainly tuned on RR-Submission-v1.", "Following the previous work (Cheng et al., 2020), we report the precision (Prec.), recall (Rec.)
and F1 scores for the performance on both subtasks as well as the overall extraction performance.", "Table 1 shows the performance comparison between our proposed models and the previous work on the RR-Submission-v1 and RR-Passage-v1 datasets.", "Besides the two baseline models mentioned before, we implement a bi-cross encoding scheme (Bi-Cross) for comparison as well.", "The key difference between the bi-cross encoder and the multi-cross encoder is that in the bi-cross encoder, the review sentences and rebuttal sentences are concatenated as one sequence, and thus it only has one sequence encoder.", "In contrast, there are two individual sequence encoders in our multi-cross encoder.", "More details about the hyperparameter settings (e.g., the weight $\lambda_1$ for the pair loss, the weight $\lambda_2$ for the attention loss, and the decaying parameter $\gamma$ of the exponential moving average) and experimental results (e.g., running time, number of parameters, performance on the development set) can be found in Appendix B.", "The previous work adopts a negative sampling technique for the sentence pairing subtask and evaluates the performance on a partial test set.", "In this work, we re-evaluate the previous work's sentence pairing subtask on the whole test dataset for a fair comparison.", "Those results are marked with * in Table 1.", "With the same number of layers, our multi-cross model outperforms the bi-cross model on both datasets except for RR-Passage-v1 with 4 layers.", "This is especially conspicuous when the number of layers is 3. The superiority of the multi-cross model demonstrates the importance and robustness of learning the review and rebuttal sequences separately.", "Our model achieves the highest F1 score when the number of layers increases to 3.
Adding more layers hurts the performance, probably because the model overfits with too many layers.", "Table 2 shows the performance on RR-Submission-v2.", "We encourage researchers to use RR-Submission-v2 and to compare to its performance in the future.", "The main conclusion is consistent with the performance on RR-Submission-v1.", "Both the bi-cross and multi-cross models outperform the multi-task model, and the multi-cross models further outperform the bi-cross models.", "Although the baselines achieve slightly better performance on the argument mining subtask than both the bi-cross model and the multi-cross model, they still perform worse than our models on the sentence pairing subtask and the overall APE task.", "This is plausibly because of two main reasons.", "First, in the multi-task model, the subtask coordination capability is weak, as the shared information between the two subtasks is learned implicitly.", "However, in our model, the three encoding components are explicitly mingled with each other through the mutual attention mechanism and the table encoder.", "On one hand, the better sentence pairing subtask performance demonstrates the effectiveness of the table-filling approach.", "On the other hand, the better overall APE performance demonstrates the strong subtask coordination capability of our model architecture.", "Second, we further analyze the breakdown performance of the multi-task model and our multi-cross (n=3) model on the argument mining subtask.", "Figure 3 shows the subtask performance on the RR-Submission-v1 dataset for reviews, rebuttals, and both of them.", "We can observe that the difference in F1 scores between reviews and rebuttals is smaller for our model than for the multi-task model.", "Despite the slight decrease in the overall argument mining performance, a more balanced argument extraction performance on reviews and rebuttals brings in better overall APE performance, because more accurate review argument extraction increases the chance for the extracted rebuttal arguments to be paired correctly.", "We conduct an ablation study of the multi-cross (n=3) model on the RR-Submission-v1 dataset from three perspectives, as presented in Table 3.
Firstly, we evaluate the effect of sharing the biLSTM layer (the light yellow modules in Figure 2) and the CRF layer.", "We can notice that the F1 drops by 1.92 without sharing the biLSTM layer, drops by 1.75 without sharing the CRF layer, and drops by 1.02 when sharing neither.", "It is interesting to notice that when the two sequences use their own biLSTMs and CRFs simultaneously (i.e., w/o sharing both), the F1 drops less compared to the models without sharing only one of them.", "This suggests that having an individual set of biLSTM and CRF layers for each type of sequence is plausibly a worthwhile setting, but it", "is not as effective as sharing both.", "One possible reason is that the advantage brought in by such a tailor-made sequential tagging configuration for each type is overwhelmed by the disadvantage of fewer training instances.", "Secondly, without cross updates between the review and rebuttal embeddings (the mutual attention modules still exist), the F1 drops by 1.78.", "This result again demonstrates the effectiveness of explicitly blending the two sequence embeddings via the mutual attention mechanism specifically designed for this task.", "Thirdly, we also investigate the effect of the attention loss term by removing it from the overall loss.", "The performance drops by about 2.87 F1 points.", "We will elaborate more with the attention visualization below.", "To examine the effectiveness of the auxiliary attention loss, we visualize the sum of the attention weights of all layers for four test samples, as shown in Figure 4. The sum is computed for visualization because the attention weights in all layers are guided by the attention loss.", "The distribution of attention is significantly improved, as the colors for arguments in Column", "(c) are considerably darker.", "In Column", "(b), without the guidance of the attention loss, despite some patterns, the attention weights are distributed in a quite haphazard manner.", "Therefore, the interpretability of our model is much better, as we can easily understand which part of the discourse each sentence refers to.", "Specifically, the boundary of most attention blocks in Column", "(c) matches well with the start and end positions of the ground truth review and rebuttal arguments.", "The gold and predicted argument spans and argument pairs of these four samples are shown in Appendix C.1, where more discussion is given regarding the reason for some mistakenly predicted boundaries.", "The effectiveness of the auxiliary attention loss is also quantitatively illustrated by a higher F1 score after its incorporation (32.44 vs. 29.57) in Table 3.
[Figure 4: sum of attention weights across all layers for four test samples; the axes carry the gold BIO label sequences of the review and rebuttal sentences.]", "In this paper, we adopt the table-filling approach for modeling the sentence-level correlation between two passages, and propose the attention-guided multi-layer multi-cross (MLMC) encoding scheme for the argument pair extraction (APE) task.", "Our model can better capture the internal relations between a review and its rebuttal with two sequence encoders and a table encoder via the mutual attention mechanism.", "We also introduce an auxiliary attention loss to further improve the efficacy of the mutual attentions.", "Extensive experiments on the benchmark dataset demonstrate the effectiveness of our model architecture, which is potentially beneficial for other NLP tasks." ]
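A minimal sketch of how the three loss terms described in this section could be combined; tensor layouts, default hyperparameter values, and all names here are our assumptions, not the authors' implementation:

    import torch

    def total_loss(loss_seq, loss_pair, attn_a, attn_b, pair_gold,
                   gamma=0.9, lambda1=1.0, lambda2=1.0):
        """Combine the CRF, pairing and attention losses.

        attn_a, attn_b: lists of per-layer attention maps a^(k), b^(k), each [I, J].
        pair_gold: binary gold pairing table y^pair as a float tensor of shape [I, J].
        """
        n = len(attn_a)
        sign = 1.0 - 2.0 * pair_gold            # -1 on paired cells, +1 otherwise
        decayed = torch.zeros_like(pair_gold)
        for k in range(1, n + 1):               # layers closer to the predictor weigh more
            decayed = decayed + (gamma ** (n - k)) * (attn_a[k - 1] + attn_b[k - 1])
        loss_attn = (sign * decayed).sum()
        return loss_seq + lambda1 * loss_pair + lambda2 * loss_attn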
[ "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "objective", "result", "abstain", "abstain", "method", "abstain", "objective", "objective", "objective", "result", "abstain", "other", "other", "other", "other", "other", "other", "other", "method", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "other", "other", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "objective", "abstain", "result", "objective" ]
[ "We introduce the first treebank for a romanized user-generated content variety of Algerian, a North-African Arabic dialect known for its frequent usage of code-switching.", "Made of 1500 sentences, fully annotated in morpho-syntax and Universal Dependency syntax, with full translation at both the word and the sentence levels, this treebank is made freely available.", "It is supplemented with 50k unlabeled sentences collected from Common Crawl and web-crawled data using intensive data-mining techniques.", "Preliminary experiments demonstrate its usefulness for POS tagging and dependency parsing.", "We believe that what we present in this paper is useful beyond the low-resource language community.", "This is the first time that enough unlabeled and annotated data is provided for an emerging user-generated content dialectal language with rich morphology and code switching, making it an challenging testbed for most recent NLP approaches.", "Until the rise of fully unsupervised techniques that would free our field from its addiction to annotated data, the question of building useful data sets for under-resourced languages at a reasonable cost is still crucial.", "Whether the lack of labeled data originates from being a minority language status, its almost oral-only nature or simply its programmed political disappearance, geopolitical events are a factor highlighting a language deficiency in terms of natural language processing resources that can have an important societal impact.", "Events such as the Hati crisis in 2010 (Munro, 2010) and the current Algerian revolts (Nossiter, 2019) 1 are massively reflected on social media, yet often in languages or dialects that are poorly re-1 https://www.nytimes.com/2019/03/01/world/ africa/algeria-protests-bouteflika.html sourced, namely Haitian Creole and Algerian dialectal Arabic in these cases.", "No readily available parsing and machine translations systems are available for such languages.", "Taking as an example the Arabic dialects spoken in North-Africa, mostly from Morocco to Tunisia, sometimes called Maghribi , sometimes Darija , these idioms notoriously contain various degrees of code-switching with languages of former colonial powers such as French, Spanish, and, to a much lesser extent, Italian, depending on the area of usage (Habash, 2010; Cotterell et al., 2014; Saadane and Habash, 2015).", "They share Modern Standard Arabic (MSA) as their matrix language (Myers-Scotton, 1993), and of course present a rich morphology.", "In conjunction with the resource scarcity issue, the code-switching variability displayed by these languages challenges most standard NLP pipelines, if not all.", "What makes these dialects especially interesting is their widespread use in user-generated content found on social media platforms, where they are generally written using a romanized version of the Arabic script, called Arabizi , which is neither standardized nor formalized.", "The absence of standardization for this script adds another layer of variation in addition to well-known user generated content idiosyncrasies, making the processing of this kind of text an even more challenging task.", "In this work, we present a new data set of about 1500 sentences randomly sampled from the romanized Algerian dialectal Arabic corpus of Cotterell et al. 
(2014) and from a small corpus of lyrics coming from the Algerian dialectal Arabic Hip-Hop and Raï music genres, which had the advantage of having already available translations and of being representative of the Algerian vernacular urban youth language.", "We manually annotated this data set with morpho-syntactic information (parts-of-speech and morphological features), together with glosses and code-switching labels at the word level, as well as sentence-level translations.", "Furthermore, we added an additional manual annotation layer following the Universal Dependencies annotation scheme (Nivre et al., 2018), making this corpus, to the best of our knowledge, the first user-generated content treebank in romanized dialectal Arabic.", "This treebank contains 36% French tokens, making it a valuable resource to measure and study the impact of code-switching on NLP tools.", "We supplement this annotated corpus with about 50k unlabeled sentences extracted from both Common Crawl and additional web-crawled data, making this data set an important milestone in North-African dialectal Arabic NLP.", "This corpus is made freely available under a Creative Commons license.", "The Language: As stated by Habash (2010), Arabic languages are often classified into three categories:", "(i) Classical Arabic, as found in the Qur'an and related canonical texts,", "(ii) Modern Standard Arabic, the official language of the vast majority of Arabic-speaking countries, and", "(iii) Dialectal Arabic, whose instances exhibit so much variation that they are not mutually understandable across geographically distant regions.", "As space is lacking for an exhaustive description of Arabic language variations, we refer the reader to Habash (2010), Samih (2017) and especially to Saadane and Habash (2015) for a thorough account of Algerian dialectal Arabic, which is the focus of this work.", "In short, the key properties of North-African dialectal Arabic are: It is a Semitic language, non-codified, mostly spoken; It has a rich inflection system, which qualifies this dialect as a morphologically-rich language (Tsarfaty et al., 2010), even though Saadane and Habash (2015) write that many properties present in Classical Arabic are absent from this dialect (e.g.
it has simplified nominal and verbal case systems);", "It displays a high degree of variability at all levels: spelling and transliteration conventions, phonology, morphology, lexicon;", "It exhibits a high degree of code-switching; due to historical reasons and the cultural influence of French in the media circles, the Algerian dialect, like the Tunisian and Moroccan ones, is known for its heavy use of French words.", "As stated above, this dialect is mostly spoken and has even been dubbed with disdain a Creole language by the higher levels of the Algerian political hierarchy (https://www.lesoirdalgerie.com/articles/).", "Still, its usage is ubiquitous in the society and, by extension, in social media user-generated content.", "Interestingly, the lack of Arabic support in input devices led to the rise of a romanized written form of this dialect, which makes use of alphanumeric characters as additional graphemes to represent phonemes that the Latin script does not naturally cover.", "Not limited to North-African dialectal Arabic, this non-standard transliteration concurrently emerged all over the Arabic-speaking world, and is often called Arabizi.", "Whether or not written in Arabizi, the inter-dialectal divergences between all Arabic dialects remain.", "The following list highlights some of the main properties of Arabizi compared to MSA written in the Arabic script.", "Unlike in MSA written in the Arabic script, where short vowels are marked using optional diacritics, all vowels are explicitly written;", "Digits are used to cope with Arabic phonemes that have no counterpart in the Latin script; for instance, the digit 3 is often used to denote the ayin consonant, because it is graphically similar to its rendition in the Arabic script;", "No norms exist, resulting in a high degree of variability between people writing in Arabizi.", "From now on, we will call NArabizi the Algerian dialect of Arabic when written in Arabizi, thereby simultaneously referring to the language variety and to the script itself.", "Table 1 presents several examples of lexical variation within NArabizi.", "Interestingly, this variability also affects the code-switched vocabulary, which is mostly French in the case of NArabizi.", "A typical example of NArabizi that also exhibits code-switching with non-standard French spelling can be seen in Example 1. (1) Source: salem 3alikoum inchalah le pondium et les midailes d'or
(2014)'s corpus was collected in 2012 from an Algerian newspaper's web forums and covers a wide range of topics (from discussion about football events to politics).", "We collected the 9973 raw sentences from its GitHub repository 4 and sampled about 1300 sentences.", "In addition, because they were available with translations in French and English, we included lyrics from a few dozen recent popular songs of various genres (Ra, hip-hop, etc.), leading to an additional set of 200 sentences.", "These 1500 sentences form the core of our NArabizi treebank annotation project.", "In order to make our corpus usable by modern, resource-hungry natural language processing techniques, we also used data-driven language identification models to extract NArabizi samples among the whole collection of the Common-Crawl-based OSCAR corpora (Ortiz Surez et al., 2019) as well as 2 millions sentences of additional crawled web-data, resulting in 50k NArabizi sentences of high quality, to date the largest corpus of this language.", "This makes this collection a valuable test bed for low-resource NLP research.", "Our NArabizi treebank contains 5 annotations layers:", "(i) tokenization,", "(ii) morphology,", "(iii) code-switching identification,", "(iv) syntax and", "(v) translation.", "Tokenization Following Seddah et al. (2012) and their work on the French Social Media Bank, we decided to apply a light tokenization process where we manually tokenized only the obvious cases of wrongly detached punctuations and miss-ing whitespaces ( i.e. cases where two words are 4 https://github.com/ryancotterell/arabic_ dialect_annotation contracted into one token).", "Morphological Analysis This layer consists of two sets of part-of-speech tags, one following the Universal POS tagset (Petrov et al., 2011) and the other the FTB-cc tagset extended to deal with user-generated content (Seddah et al., 2012).", "In cases of word contractions, we followed their guidelines and used multiple POS as in cetait (`itwas')/PRON+VERB/CLS+V .", "In addition, we added several morphological features following the Universal Dependency annotation scheme (Nivre et al., 2018), namely gender , number , tense and verbal mood .", "Note that instead of adding lemmas, we included French glosses for two reasons: firstly for practical reasons, as they helped manual corrections done by non-native speakers of NArabizi , and secondly because of the non-formalized nature of this language, which makes lemmatization very hard, almost akin to etymological research as in the case of garjouma / the throat which can either originate from French gorge or be of Amazigh root.", "Code-Switching identification Unlike other works in user-generated content for minority languages (Lynn and Scannell, 2019), we do not distinguish between interand intra-sentential code-switching and consider word-level code-mixing as lexical borrowing.", "We annotate code-switching at the word level with information about the source language, regardless of the canonical-ness of spelling.", "Syntactic Annotations Here again we follow the Universal Dependencies 2.2 annotation scheme (Nivre et al., 2018).", "When facing sequences of French words with regular French syntax, we followed the UD French guidelines; otherwise, we followed the UD Arabic guidelines, following the Prague Arabic Dependency UD Treebank.", "Translation Layer Our final layer is made up for sentence-level translations in French.", "It shall be noted that the validation of these translations often led to massive rewording, as 
the annotators came from different regions of Algeria and could diverge in their interpretations of a given sentence.", "We corrected on average one tokenization error (less frequently two) per sentence in the web forum parts.", "We noticed a high degree of variance.", "Some users displayed this behavior much more than others.", "This led some of our annotators to believe it resulted from an ill-functioning input device.", "A sample of 200 sentences was blindly translated (without access to the morpho-syntactic analysis) in order to favor further research on the fluency of machine translation for this dialect.", "The need for more data has never been more striking, as data are needed for important tasks such as handling lexical sparseness issues via word embeddings, lexicon acquisition, domain adaptation via self-training, or fine-tuning pre-trained language models, its modern incarnation.", "The trouble with NArabizi is that it is a spoken language whose presence can mostly be found in informal texts such as social media.", "More importantly, the Arabizi transliteration process is also used by other Arabic dialects, making the data collection a needle-in-a-haystack search task.", "We therefore present in this section the process we used to mine an additional set of 50k NArabizi sentences from two large corpora, one based on search-query-based web crawling and the other on a cleaned version of the CommonCrawl corpora developed by Ortiz Suárez et al. (2019).", "Using keyword-based web scraping tools, we collected a raw corpus of 4 million sentences, called CrawlWeb, that ultimately contained a mixture of French, English, Spanish, MSA and Arabizi texts.", "Since we are only interested in NArabizi, we designed a classifier to extract proper sentences from that raw corpus.", "The corpus we used as gold standard is made of 9k sentences of attested NArabizi from our original corpus and 18k French and English tweets.", "Using language identification (Lui and Baldwin, 2012), we convert each sentence of the gold-standard corpus into a feature vector containing language-identification scores and use it as input to an SVM classifier with a classical 80/10/10 split.", "With precision and recall scores of 94%, we extracted 173k code-mixed sentences from the CrawlWeb corpus.", "Preliminary experiments showed promising initial results, but further analysis pointed out a high level of noise in this initial set, both in terms of erroneous language identification and in the amount of remnant ASCII artifacts that could not easily be removed without impacting the valid NArabizi sentences.", "The objectives of this method are twofold:", "(i) selecting data from CommonCrawl using a neural classifier and", "(ii) using this data set to intersect the data collected with the previous method.", "The idea is to ensure the quality of the final resulting unlabeled corpus.", "Given the large amount of noisy data in CommonCrawl, a noise class is added to the language classification model and is built according to several heuristics (these heuristics are presented in the Appendix for reproducibility).", "That noise-class corpus is made of 40k sentences randomly selected from the result of applying these rules to a short, 10M-sentence sample of CommonCrawl.", "We then trained a classifier using fastText (Joulin et al., 2016) on 102 languages, with 40k sentences each, extracted from the CommonCrawl-based, language-classified OSCAR corpus, to which we added the 9k sentences of the original NArabizi corpus and
reproducibility.", "the noise class.", "The final dataset is composed of 4,090,432 sentences and is split into 80% train, 10% development and 10% test sets.", "The classifier consists in a linear classifier (here logistic regression) fed with the average of the n -gram embeddings.", "n -grams are useful in this case as they enables the model to capture specific sequences of NArabizi characters such as lah , llah , 3a , 9a , etc.", "We choose to embed 2to 5-grams.", "These parameters lead to precision and recall scores of 97% on the NArabizi test set.", "After an intensive post-processing step (cf. Appendix A.2), this process results in a dataset of 13,667 sentences extracted from half the CommonCrawl corpus.", "7 To evaluate the quality of the resulting data set, we randomly picked 3 times 100 sentences, and genuine NArabizi sentences were manually identified, which allowed us to assess the accuracy of our corpus as reaching 97%.", "Table 2 presents the results of the evaluation of the two classification methods performed on both the development and test sets of the original NArabizi corpus.", "8 Results show that the fastText classifier and its n -gram features is more precise than its nonneural counterpart and its language-id feature vectors.", "When applied to the CrawlWeb corpus, the Fasttext model extracted 44,797 unique Arabizi sentences while the SVM model extracted 83,295 unique Arabizi sentences.", "The intersection of both extractions amounts to 39,003 Arabizi sentences (with a 99% precision).", "This means that 44,292 sentences were classified as Arabizi by the SVM model and not by Fasttext.", "Among them, by random sampling, it can be stated approximately that 55% are indeed NArabizi .", "Mistakes are misclassified sentences (Spanish and English sentences, for instance) or sentences with only noise (such as symbols).", "5,794 sentences were classified as NArabizi by the Fasttext model and not SVM.", "Among them, by random sampling, it can be stated that approximately 60% are indeed Arabizi.", "Errors are long sentences with only figures and numbers or sentences with many symbols (e.g. { O3 } or !!!! 
!!!!).", "run our selection on the whole CommonCrawl.", "8 Note that the precision and recall are slightly different in both methods, but the rounding at the second decimal made them equal.", "In order to ensure that the collected corpus contains as little nonNArabizi data as possible, we only release the intersection of the data we classified, to which we add the original NArabizi corpus (Cotterell et al., 2014) (after having removed the annotated data we extracted from it).", "Table 3 provides quantitative information about our corpora.", "In order to speed up the annotation process of our data, we decided to create a pre-annotation mor-phosyntactic and syntactic annotator trained on quasi-synthetic data obtained by transliterating a pre-existing Arabic (MSA) treebank, the Prague Arabic Dependency Treebank (PADT), into the NArabizi Latin script, together with data from the French GSD UD treebank.", "Both are taken from the UD treebank collection (Nivre et al., 2018).", "Before it can be used as training data, the PADT needs to first be transformed into a form similar to NArabizi .", "Since the PADT corpus is a collection of MSA sentences with no diacritics, it is impossible to directly transliterate into NArabizi .", "We first diacritized it, in order to add short-vowel information, and then translitterated it into an Arabizi -like corpus.", "We describe this process in this Section.", "The results of the pseudoNArabizi parser trained on the translitterated corpus are then presented in Section 6.2.", "Random diacritics As vowels are always written in Arabizi , the PADT corpus needs to be diacritized before transliteration.", "Using an equiprobable distribution, diacritics were added randomly, and the text then transliterated using the probabil-1144 ity distributions we describe below.", "Proper diacritization Using the Farasa software (Abdelali et al., 2016), PADT sentences are diacritized with 81% precision rate, 9 then tokens aligned with corresponding diacritized words.", "The text is then transliterated the same way as before.", "The BLEU score of this version is 0.60.", "An example showing how this system visibly improves the transliteration can be seen in the Prop. Diac. output in Example 2. 
"(2) Source: berlin tarfoudhou 7oussoul charika amrikia 3ala ro5sat tasni3 dabbabat lopard al almania", "Trans.: Berlin refuses to authorize an American firm to produce the German Leopard tank.", "Random diac.: brouliyani trfidh 7iswla chiroukou amiyirikyoui 3alia rou5soui tasaniya3i dhabouaboui louyiwibiaridha alalmaanouyou", "Proper diac.: birlin tarfoudhou 7ousolou charikatin 2amiriqiatin 3alaa rou5sati tasni3i dabbatin lyuberid el2almeniati", "Table 4: BLEU scores of both transliteration systems: random diacritization 0.31, proper diacritization 0.60.", "Transliteration Once diacritized, the corpus can be properly transliterated.", "Arabic letters are either consonant sounds or long vowels; each one may have several different transliterations in NArabizi, depending on the writer's age, accent, education and first learned Western language.", "For example, one letter can be transliterated as either t or th.", "A probability must be assigned to each possibility, and to make it as close as possible to what is produced by NArabizi speakers, a small parallel corpus of PADT sentences and their transliterations was used to estimate these probabilities.", "In this section we describe preliminary experiments on part-of-speech tagging and statistical dependency parsing that show promising results while highlighting the expected difficulty of processing a low-resource language with a high level of code-switching and multiple sources of variability.", "The baseline POS tagger we used is alVWTagger, a feature-based statistical POS tagger, which ranked 3rd at the 2017 CoNLL multilingual parsing shared task (Zeman et al., 2017).", "It is briefly described in de La Clergerie et al. (2017).", "In short, it is a left-to-right tagger that relies on a set of carefully hand-designed features, including features extracted from an external lexicon when available, and on a linear model trained using the Vowpal Wabbit framework.", "In our case, we simply created an external lexicon by extracting the content of the training set.", "It contributes to improving the POS accuracy because it provides the tagger with (ambiguous, partial) additional information about words in the right context of the current word.", "(Table of POS tagging results on the development and test sets, overall and on OOV tokens; OOV rate: 32.28% on dev, 32.75% on test.)", "As stated earlier in this paper, NArabizi contains a high level of code-switching with French and is closely related to MSA.", "We described in Section 5 how we built a mixed treebank based on the French GSD UD treebank and our Arabizi version of the Prague Arabic Dependency Treebank.", "(Note that we also performed a set of baseline experiments with UDPipe 2.0 (Straka and Straková, 2017) on a previous version of this data set.)", "We trained the UDPipe parser (Straka and Straková, 2017) on various treebanks obtained by combining different proportions of the French GSD and our PADT-based pseudo-Arabizi treebank.", "We ran these parsers with already annotated gold parts of speech.", "The best scores were obtained with a model trained on a mix of 30% pseudo-Arabizi and 70% French, which we call the MIX treebank, totaling 5,955 training sentences.", "We split this treebank into training, development and test sets, called MIX_train/dev/test, following an 80/10/10 split.", "We used a very small manually annotated development dataset of 200 NArabizi sentences, called Arabizi_dev, to evaluate our parser.",
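As an illustration of the probabilistic transliteration step described above, the following sketch samples one Latin rendering per (diacritized) Arabic character from a per-character distribution. The toy probability table is hypothetical; the actual tables were estimated from the small parallel corpus mentioned earlier.

```python
# Probabilistic Arabic-to-Arabizi transliteration: each character may map to
# several Latin renderings, sampled according to estimated probabilities.
import random

TRANSLIT = {                  # char -> [(latin rendering, probability), ...]
    "\u062b": [("th", 0.6), ("t", 0.4)],              # illustrative values
    "\u062d": [("7", 0.7), ("h", 0.3)],
    "\u0642": [("9", 0.5), ("q", 0.3), ("k", 0.2)],
    "\u064e": [("a", 1.0)],                           # short vowel (diacritic)
}

def transliterate(word, rng=random.Random(0)):
    out = []
    for ch in word:
        options = TRANSLIT.get(ch, [(ch, 1.0)])   # pass through unknown chars
        renderings, probs = zip(*options)
        out.append(rng.choices(renderings, weights=probs, k=1)[0])
    return "".join(out)

print(transliterate("\u062d\u064e\u0642"))  # e.g. "7a9"
```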
parser.", "As shown in Table 6 (line Mix), despite good results on MIX's development and training sets, MIX dev and MIX test respectively (see Table 6), this first parser did not performed very well when evaluated on Arabizi dev .", "This performance level proved insufficient to speed up the annotation task.", "We therefore manually annotated 300 more NArabizi sentences (Arabizi train300 ), to be used as additional training data.", "When added to MIX train , parsing performance did improve, yet not to a sufficient extent, especially in terms of Labeled Attachement Score (LAS).", "It turned out that training UDPipe on these 300 manually annotated NArabizi sentences only (Arabizi train300 ) produced better scores, resulting in a parser that we did use as a pre-annotation tool in a constant bootstrap process to speed up the annotation of the remaining sentences.", "How interleaved are French and NArabizi ?", "As stated before, NArabizi takes its root in Classical Arabic and in multiple sources of integration of French, MSA and Berber, the Amazigh language.", "As the NArabizi treebank contains more than 36% of French words, it is of interest to use recent methods of visualization to see how interleaved it is", "(c)dimension=200 Figure 9: Word embbeddings de 300 mots (100 arabe translittr, 100 franais, 100 arabizi)calculsavecl'algorithmeFastText(jaune:franais,bleu:arabe,rouge:arabizi) 29", "(c)dimension=200 Figure 9: Word embbeddings de 300 mots (100 arabe translittr, 100 franais, 100 arabizi)calculsavecl'algorithmeFastText(jaune:franais,bleu:arabe,rouge:arabizi) 29", "To this end, we extract words embeddings using fastText (Joulin et al., 2016) from a corpus made of the translitterated PADT described in Section 5, the French UD GSD and NArabizi original corpus (Cotterell et al., 2014).", "Two-dimensional representations of the resulting embeddings space for 300 selected words are shown in Figure 2 for embeddings of size 50 and 100.", "We notice that the overall shapes of both representations are very similar, apart from a non significant x -axis reversal.", "On the first components, increasing the embedding size does not provide more information.", "We also see that French and transliterated Arabic words are clearly separated into two clusters of low standard deviation, while NArabizi words are very spread out.", "Some fall within the French cluster, they correspond to French words present in this Algerian dialect.", "Others are in the mid-dle of the Arabic cluster, these are the purely Arabic words of the dialect.", "Between the two, there are Amazigh words ( rak , mech ), arabized French words ( tomobile < French automobile ), Arabic words whose Berber pronunciation has resulted in an unexpected NArabizi rendering ( nta instead of expected enta you', mchit instead of expected machayt to go-2SING').", "POS--tagging performance?", "Given the large degree of interleaving between French and NArabizi , it is interesting to assess the impact of the French vocabulary on the performance of a POS-tagger trained on French data only.", "For these experiments, we use the StanfordNLP neural tagger (Qi et al., 2019), which ranked 1st in POS tagging at the 2018 UD shared task, trained on the UD French 1146 ParTUT treebank, using French fastText vectors (Mikolov et al., 2018).", "In order to perform a meaningful evaluation, we split the NArabizi training set into 4 buckets of approximately 25% of it size in tokens, with a increasing proportion of identified NArabizi tokens.", "Results in Table 7 show a 
"This suggests that low-resource languages with a high level of code-switching such as NArabizi can benefit from NLP models trained on the secondary language.", "The level of performance to expect from these cross-language approaches is yet to be determined.", "Following Martínez Alonso et al. (2016), we provide here the cost figures of this annotation campaign.", "We do not include the salaries of the permanent staff, nor do we include the overhead.", "These figures are meant as an indication of the effort needed to create an annotated data set from scratch.", "It shall be noted that even though the inter-annotator agreement gave us early indications of the difficulty of the tasks, it also acted as a metric of language variability among annotators.", "None of them comes from the same part of North Africa, and none of them has the same familiarity with the topics discussed in the web forums we annotated.", "We had to constantly re-annotate sentences and update the guidelines every time new idiosyncrasies were encountered and, most importantly, accepted as such by the annotators.", "Compared to what was reported by Martínez Alonso et al. (2016), the figures here are much higher (about 5 times higher) because, unlike their work on French treebanks, we could not use pre-existing guidelines for this language, and because we could not keep the same team throughout the project, so new members had to be trained almost from scratch or to work on totally different layers.", "Space is lacking to describe related work exhaustively.", "In relation to our work on North-African dialects, we refer to the work of Samih (2017), who over the course of his PhD covered a large range of topics regarding the dialect spoken specifically in Morocco, and more generally language identification (Samih et al., 2016) in code-switching scenarios for various Arabic dialects (Attia et al., 2019).", "Unlike NArabizi dialects, the resource situation for Arabic dialects in canonical written form can hardly be qualified as scarce, given the amount of resources produced by the Linguistic Data Consortium for these languages; see Diab et al. (2013) for details on those corpora.", "These data have been extensively covered from various NLP angles by former members of the Columbia Arabic NLP team, among them Mona Diab, Nizar Habash, and Owen Rambow, in their respective subsequent lines of work.", "Many small- to medium-scale linguistic resources, such as morphological lexicons or bilingual dictionaries, have been produced (Shoufan and Alameri, 2015).", "Recently, in addition to the release of a small-range parallel corpus for some Arabic dialects (Bouamor et al., 2014), a larger corpus collection was released, covering 25 city dialects in the travel domain (Bouamor et al., 2018).", "Regarding the specific NLP modeling challenges of processing Arabic-based languages, which belong to the morphologically rich languages, recent advances in joint models are exemplified by Zalmout and Habash (2019), who efficiently adapted a neural architecture to perform joint word segmentation, lemmatization, morphological analysis and POS tagging on an Arabic dialect.", "Recent works on cross-language learning using the whole artillery of massively multilingual pre-trained language models have started to emerge (Srivastava et al., 2019).", "If successful, such models could help to alleviate the resource scarcity issue that plagues low-resource languages in the more-than-ever data-hungry modern NLP.",
"We introduced the first treebank for an Arabic dialect spoken in North Africa and written in romanized form, NArabizi.", "Moreover, being made of user-generated content, this treebank covers a large range of language variation among native speakers and displays a high level of code-switching.", "Annotated with 4 standard morpho-syntactic layers, two of them following the Universal Dependencies annotation scheme, and provided with translations to French as well as glosses and word-level language identification, we believe that this corpus will be useful for the community at large, both for linguistic purposes and as training data for resource-scarce NLP in a high-variability scenario.", "In addition to the annotated data, we provide around 1 million tokens (over 46k sentences) of unlabeled NArabizi content, resulting in the largest dataset available for this dialect.", "Our corpora are freely available under the CC-BY-SA license, and the NArabizi treebank is also released as part of the Universal Dependencies project.", "The work was partially funded by the French Research Agency projects ParSiTi (ANR-16-CE33-0021) and SoSweet (ANR-15-CE38-0011-01), by the French Ministry of Industry and Ministry of Foreign Affairs via the PHC Maimonide France-Israel cooperation programme, and by Sagot's chair in the PRAIRIE institute, funded by the French national agency ANR as part of the Investissements d'avenir programme under the reference ANR-19-P3IA-0001." ]
[ "objective", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other", "objective", "abstain", "abstain", "method", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "result", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "result", "abstain", "other" ]
[ "This work treats the paradigm discovery problem (PDP)the task of learning an inflectional morphological system from unannotated sentences.", "We formalize the PDP and develop evaluation metrics for judging systems.", "Using currently available resources, we construct datasets for the task.", "We also devise a heuristic benchmark for the PDP and report empirical results on five diverse languages.", "Our benchmark system first makes use of word embeddings and string similarity to cluster forms by cell and by paradigm.", "Then, we bootstrap a neural transducer on top of the clustered data to predict words to realize the empty paradigm slots.", "An error analysis of our system suggests clustering by cell across different inflection classes is the most pressing challenge for future work.", "Our code and data are available at https://github.com/ alexerdmann/ParadigmDiscovery .", "In childhood, we induce our native language's morphological system from unannotated input.", "For instance, we learn that ring and rang belong to the same inflectional paradigm.", "We also learn that rings and bangs belong to the same cell, i.e., they realize the same morphosyntactic properties 3.", "SG .", "PRES , but in different paradigms.", "Acquiring such paradigmatic knowledge enables us to produce unseen inflectional variants of new vocabulary items, i.e. to complete morphological paradigms.", "Much work has addressed this task, which Ackerman et al. (2009) call the paradigm cell filling problem (PCFP), 1 but few have discussed inducing paradigmatic knowledge from scratch, which we call the paradigm discovery problem (PDP).", "2 1 In the NLP literature, this task is called morphological reinflection or morphological inflection generation (Cotterell et al., 2016a); this is only a difference in nomenclature.", "As an unsupervised task, the PDP poses challenges for modeling and evaluation and has yet to be attempted in its full form (Elsner et al., 2019).", "However, we contend there is much to be gained from formalizing and studying the PDP.", "There are insights for cognitive modeling to be won (Pinker, 2001; Goldwater, 2007) and intuitions on combating sparse data for language generation (King and White, 2018) to be accrued.", "Unsupervised language processing also has natural applications in the documentation of endangered languages (Za-maraeva et al., 2019) where a lot of annotated data is never likely to exist.", "Our formalization of the PDP offers a starting point for future work on unsupervised morphological paradigm completion.", "Our paper presents a concrete formalization of the PDP.", "Then, as a baseline for future work, we introduce a heuristic benchmark system.", "Our benchmark system takes an unannotated text corpus and a lexicon of words from the corpus to be analyzed.", "It first clusters the lexicon by cell and then by paradigm making use of distributional semantics and string similarity.", "Finally, it uses this clustering as silver-standard supervision to bootstrap a neural transducer (Vaswani et al., 2017) that generates the desired target inflections.", "That is, the model posits forms to realize unoccupied cell slots in each proposed paradigm.", "Even though our benchmark system models only one part of speech (POS) at a time, our framework extends to the full PDP to support future, more intricate systems.", "We propose two separate metrics to evaluate both the clustering of attested forms into paradigms and cells and the prediction of unseen inflected forms.", "Our metrics handle 
"For three of the five languages we consider, our benchmark system predicts unattested inflections of lexicon forms with accuracy within 20% of a fully supervised system.", "(Boy and Schalchli (2019) call one of the PDP's subtasks the paradigm cell finding problem; see 2.2.)", "However, our analysis suggests clustering forms into cells consistently across paradigms is still a very pressing challenge.", "This section couches our work on the PDP in terms of previous trends in morphological modeling.", "Much work on unsupervised morphological modeling focuses on segmentation (Gaussier, 1999; Goldsmith, 2001; Creutz and Lagus, 2005; Narasimhan et al., 2015; Bergmanis and Goldwater, 2017; Xu et al., 2018).", "While morphological segmenters can distinguish real from spurious affixes (e.g., bring ≠ br + ing) with high accuracy, they do not attempt to solve the PDP.", "They do, however, reveal which forms take the same affixes (e.g., walked, talked), not which forms occupy the same cell (e.g., walked, brought).", "Indeed, they explicitly struggle with irregular morphology.", "Segmenters also cannot easily model non-concatenative phenomena like ablaut, vowel harmony and templatic processes.", "Two works have proposed tasks which can be considered alternative formulations of the PDP, using either minimal or indirect supervision to bootstrap their models.", "We discuss each in turn.", "First, Dreyer and Eisner (2011) use a generative model to cluster forms into paradigms and cells with a Bayesian non-parametric mixture of weighted finite-state transducers.", "They present a PDP framework which, in principle, could be fully unsupervised, but their model requires a small seed of labeled data to get key information like the number of cells distinguished, making it less relevant cognitively.", "In contrast, our task is not directly supervised and focuses on distributional context.", "Second, contemporaneous to our work, Jin et al. (2020) propose a similar framework for SIGMORPHON 2020's shared task on unsupervised morphological paradigm completion.", "Given only a small corpus and a lexicon of verbal lemmata, participating systems must propose full paradigms for each lemma.", "By contrast, our framework does not reveal how many paradigms should be generated, nor do we privilege a specific form as the lemma, but we do use a larger lexicon of exclusively verbal or nominal forms.", "Their proposed baseline uses distributional context for POS tagging and features, but does not train embeddings, as the corpus is small.", "A few works address subtasks of the PDP.", "Erdmann and Habash (2018) learn paradigm membership from raw text, but do not sort paradigms into cells.", "Boy and Schalchli (2019) discuss the paradigm cell finding problem, identifying the cell (but not paradigm) realized by a given form.", "Lee (2015) clusters forms into cells across inflection classes.", "Beniamine et al. (2018) group paradigms into inflection classes, and Eskander et al. (2013) induce inflection classes and lemmata from cell labels.",
"The PCFP is the task of predicting unseen inflected forms given morphologically labeled input.", "PCFP models can guess a word's plural having only seen its singular, but the child must bootstrap morphological knowledge from scratch, first learning that singular vs. plural is a relevant distinction.", "Thus, the PDP must be at least partially solved before the PCFP can be attempted.", "Yet, as a supervised task, the PCFP is more easily studied, and has received much attention on its own, especially from the word-and-paradigm camp of morphological theory.", "Some cognitive works suggest the PCFP cannot be too difficult for any language (Dale et al., 1998; Ackerman and Malouf, 2013, 2015; Blevins et al., 2017; Cotterell et al., 2019).", "Neural models can test and extend such proposals (Cotterell et al., 2018a; Silfverberg and Hulden, 2018).", "A related vein of work discusses how speakers inflect nonce words (Berko, 1958; Plunkett and Juola, 1999; Yang, 2015), e.g., is the past tense of sping, spinged or spung?", "There is a long tradition of modeling past-tense generation with neural networks (Rumelhart and McClelland, 1986; Kirov and Cotterell, 2018; Corkery et al., 2019).", "On the engineering side, Durrett and DeNero (2013) inspired much recent work, which has since benefited from large inflectional datasets (Kirov et al., 2018) and advances in neural sequence modeling (Bahdanau et al., 2015).", "Shared tasks have drawn extra attention to the PCFP (Cotterell et al., 2016a, 2017, 2018c; McCarthy et al., 2019).", "Paradigm discovery is a natural next step in computational morphology, building on related minimally or indirectly supervised works (2.2) to bridge the gap between unsupervised traditions (2.1) and supervised work on the PCFP (2.3).", "In the PCFP, each input form is labeled with its morphosyntactic property set, i.e., the cell in the paradigm which it realizes, and its lexeme, i.e., the paradigm of related forms to which it belongs.", "By contrast, to solve the PDP, unlabeled input forms must be assigned cells and paradigms.", "This task requires learning what syntactic and semantic factors distinguish cells, what combinations of cells can co-occur in a paradigm, and what aspects of a surface form reflect its paradigm and its cell, respectively.", "Table 1 provides an overview of our PDP setup (its example corpus reads: The cat watched me watching it.).", "The first two rows show input data: an unannotated corpus and a lexicon of forms attested in that corpus.", "Given only these data, the task is to output a grid such that", "(i) all lexicon forms and all their (potentially unseen) inflectional variants appear in the grid,", "(ii) all forms appearing in the same column realize the same morphosyntactic cell, and", "(iii) all forms appearing in the same row belong to the same paradigm.", "Unattested forms to be generated are depicted in brackets in Table 1's gold grid, which shows the ideal output of the system.", "Our setup permits multiple forms realizing the same slot, i.e., a specific cell in a specific paradigm, a single form realizing multiple slots, and unrealizable empty slots.", "This supports overabundance (Thornton, 2010, 2011), defectiveness (Sims, 2015), and syncretism (Blevins, 1995; Cotterell et al., 2018b).", "See Corbett (2005) for more on these phenomena.", "Experimentally, we constrain the PDP by limiting the lexicon to forms from one POS, but our formalization is more general.", "For a given language and POS, we create a corpus, lexicon, and gold grid based on a Universal Dependencies (UD) corpus (Nivre et al., 2016).",
"At a high level, the corpus includes raw, non-UD sentences, and UD sentences stripped of annotations.", "The lexicon includes all forms occurring in the UD sentences with the specified POS (potentially including variant spellings and typographical errors).", "The gold grid consists of full paradigms for every word which co-occurs in UD and the UniMorph lexicon (Kirov et al., 2018) with a matching lemma-cell analysis; this is similar to the corpus created by Vylomova et al. (2019).", "As a system does not know which lexicon forms will be evaluated in the gold grid, it must model the entire lexicon, which should contain a realistic distribution over rare words and inflection classes, having been directly extracted from distributional data (Bybee, 2003; Lignos and Yang, 2018).", "To ensure the gold grid is reasonably clean, we take all word-lemma-feature tuples from the UD portion of the corpus matching the specified POS and convert the features to a morphosyntactic cell identifier compatible with UniMorph representation, as in McCarthy et al. (2018).", "Then we check which word-lemma-cell tuples also occur in UniMorph.", "For each unique lemma in this intersection, the full paradigm is added as a row to the gold grid.", "To filter typos and annotation discrepancies, we identify any overabundant slots, i.e., slots realized by multiple forms, and remove all but the most frequently attested realization in UD.", "While some languages permit overabundance (Thornton, 2010), it often indicates typographical or annotation errors in UD and UniMorph (Gorman et al., 2019; Malouf et al., 2020).", "(Aligning UniMorph and UD requires removing diacritics in the Latin and Arabic UniMorph corpora to match UD.", "This can obscure some morphosyntactic distinctions but is more consistent with natural orthography in distributional data.", "The use of orthographic data for morphological tasks is problematic, but standard in the field, due to the scarcity of phonologically transcribed data (Malouf et al., 2020).)", "Unlike the gold grid, the lexicon retains overabundant realizations, requiring systems to handle such phenomena.", "For each language, the raw sentences used to augment the corpus add over 1 million additional words.", "For German and Russian, we sample sentences from OpenSubtitles (Lison and Tiedemann, 2016); for Latin, the Latin Library (Johnson et al., 2016); and for English and Arabic, Gigaword (Parker et al., 2011a,b).", "Supplementary sentences are preprocessed via Moses (Koehn et al., 2007) to split punctuation and, for supported languages, clitics.", "Table 3 shows corpus and lexicon sizes.", "A system attempting the PDP is expected to output a morphologically organized grid in which rows and columns are arbitrarily ordered, but ideally, each row corresponds to a gold paradigm and each column to a gold cell.", "Aligning rows to paradigms and columns to cells is non-trivial, making it difficult to simply compute accuracy over gold grid slots.", "Furthermore, cluster-based metrics (Rosenberg and Hirschberg, 2007) are difficult to apply, as forms can appear in multiple columns or rows.", "Thus, we propose novel metrics that are lexical, based on analogical relationships between forms.", "We propose a set of PDP metrics, to measure how well organized lexicon forms are in the grid, and a set of PCFP metrics, to measure how well the system anticipates unattested inflectional variants.",
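A schematic sketch of this gold-grid construction: intersect UD's word-lemma-cell tuples with UniMorph, add one full paradigm row per lemma in the intersection, and resolve overabundant slots by UD frequency. The toy dictionaries below stand in for the real UD and UniMorph data.

```python
# Build the gold grid from the UD/UniMorph intersection.
from collections import Counter

ud_tuples = [("watched", "watch", "V;PST"), ("watching", "watch", "V;V.PTCP;PRS")]
unimorph = {("watched", "watch", "V;PST"), ("watching", "watch", "V;V.PTCP;PRS"),
            ("watches", "watch", "V;3;SG;PRS")}
ud_counts = Counter(w for w, _, _ in ud_tuples)

# lemmas whose UD analysis matches a UniMorph entry
lemmas = {lem for (w, lem, cell) in ud_tuples if (w, lem, cell) in unimorph}

gold_grid = {}
for (w, lem, cell) in unimorph:       # add the *full* paradigm of each lemma
    if lem not in lemmas:
        continue
    slots = gold_grid.setdefault(lem, {})
    # keep only the most frequently UD-attested realization per slot
    if cell not in slots or ud_counts[w] > ud_counts[slots[cell]]:
        slots[cell] = w

print(gold_grid)  # {'watch': {'V;PST': 'watched', ...}}
```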
variants.", "All metrics support non-canonical phenomena such as defective paradigms and overabundant slots.", "A form f 's paradigm mates are all those forms that co-occur in at least one paradigm with f .", "f 's paradigm F-score is the harmonic mean of precision and recall of how well we predicted its paradigm mates when viewed as an information retrieval problem (Manning et al., 2008).", "We macro-average all forms' paradigm F-scores to compute F par .", "Qualitatively, F par tells us how well we cluster words that belong to the same paradigm .", "A form f 's cell mates are all those forms that co-occur in at least one cell with f .", "f 's cell F-score is the harmonic mean of precision and recall of how well we predicted its cell mates.", "As before, we macro-average all forms' cell F-scores to compute F cell .", "Qualitatively, F cell tells us how well we cluster words that belong to the same cell .", "Finally, we propose the F grid metric as the harmonic mean of F par and F cell .", "F grid is a single number that reflects a system's ability to cluster forms into both paradigms and cells.", "Because we designate separate PCFP metrics to evaluate gold grid forms not in the lexicon, we restrict f 's mates to only include forms that occur in the lexicon.", "Consider the proposed grid in Table 2.", "There are 6 lexicon forms in the gold grid.", "Starting with watched , we correctly propose its only attested paradigm mate, watching .", "Thus, watched 's paradigm F-score is 100%.", "For see , we propose no attested paradigm mates, but we should have proposed seen .", "0 correct out of 1 true paradigm mate from 0 predictions results in an F-score of 0% for seen .", "We continue like this for all 6 attested forms in the gold grid and average their scores to get F par .", "As for F cell , we correctly predict that watched 's only cell mate is followed , yielding an F-score of 100%.", "However, we incorrectly predict that see has a cell mate, seen , yielding an F-score of 0%; we average each word's F-score to get F cell ; the harmonic mean of F par and F cell gives us F grid .", "While F grid handles syncretism, overabundance, defectiveness and mismatched grid dimensions, it is exploitable by focusing exclusively on the best attested cells realized by the most unique forms, since attested cells tend to exhibit a Zipfian distribution (Blevins et al., 2017; Lignos and Yang, 2018).", "Exploiting F grid in this manner propagates errors when bootstrapping to predict unattested forms and, thus, will be punished by PCFP metrics.", "We cannot evaluate the PCFP as in supervised settings (Cotterell et al., 2016a) because proposed", "cells and paradigms cannot be trivially aligned to gold cells and paradigms.", "Instead, we create a test set by sampling 2,000 four-way analogies from the gold grid.", "The first and second forms must share a row, as must the third and fourth; the first three forms must be attested and the fourth unattested, e.g., watched : watching :: seen : seeing .", "From this test set and a proposed grid, we compute a strict analogy accuracy (An) metric and a more lenient lexicon expansion accuracy (LE) metric.", "An counts predictions as correct if all analogy directions hold in the proposed grid (i.e., watched , watching and seen , seeing share rows and watched , seen and watching , seeing share columns).", "LE counts predictions as correct if the unattested fourth form appears anywhere in the grid.", "That is, LE asks, for each gold form, if it was predicted in any slot in any 
paradigm.", "Like the PDP metrics, our PCFP metrics support syncretism, overabundance, defectiveness, etc.", "One can, however, exploit them by proposing a gratuitous number of cells, paradigms, and syn-cretisms, increasing the likelihood of completing analogies by chance, though this will reduce F grid .", "As both PDP and PCFP metrics can be exploited independently but not jointly, we argue that both types of metrics should be considered when evaluating an unsupervised system.", "This section presents a benchmark system for proposing a morphologically organized grid given a corpus and lexicon.", "First, we cluster lexicon forms into cells.", "Then we cluster forms into paradigms given their fixed cell membership.", "To maintain tractability, clustering assumes a one-to-one mapping of forms to slots.", "Following cell and paradigm clustering, we predict forms to realize empty slots given one of the lexicon forms assigned to a cell in the same paradigm.", "This allows forms to appear in multiple slots, but does not support overabundance, defectiveness, or multi-word inflections.", "We use a heuristic method to determine the number of cells and what lexicon forms to assign to each.", "Inspired by work on inductive biases in word embeddings (Pennington et al., 2014; Trask et al., 2015; Goldberg, 2016; Avraham and Goldberg, 2017; Tu et al., 2017), we train morphosyntactically biased embeddings on the corpus and use them to k -means cluster lexicon forms into cells.", "Following Erdmann et al. (2018), we emphasize morphosyntactically salient dimensions in embedding space by manipulating hyperparameters in fastText (Bojanowski et al., 2017).", "Specifically, to encourage grouping of morphologically related words, fastText computes a word's embedding as the sum of its subword embeddings for all subword sequences between 3 and 6 characters long (Schtze, 1993).", "We shorten this range to 2 to 4 to bias the grouping toward shared affixes rather than (usually longer) shared stems.", "This helps recognize that the same affix is likely to realize the same cell, e.g., watch +ed and follow +ed .", "We limit the context window size to 1; small windows encourage a morphosyntactic bias in embeddings (Erk, 2016).", "We determine the number of cells to cluster lexicon forms into, k , via the elbow method , which progressively considers adding clusters until the reduction in dispersion levels off (Kodinariya and Makwana, 2013; Bholowalia and Kumar, 2014).", "4 Since Tibshirani et al. (2001)'s popular formalism of the method does not converge on our data, we implement a simpler technique that works in our case.", "We incrementally increase k , each time recording clustering dispersion, d k (for consistency, we average d k over 25 iterations).", "Starting at k = 2 , we calculate dispersion deceleration as the difference between the current and previous dispersions: decel( k ) = d k 1 2( d k ) + d k +1 (1) Once decel( k ) decreases below (cid:112) decel(2) , we take the k th clustering: the ( k + 1) th cluster did not explain enough variation in the embedding space to justify an additional morphosyntactic distinction.", "4 Clustering dispersion is the squared distance of a point from its cluster's centroid, summed over all points clustered.", "Given a clustering of lexicon forms into k cells, denoted as C 1 , . . . 
"Given a clustering of lexicon forms into k cells, denoted as C_1, ..., C_k, we heuristically cluster each form f into a paradigm, π, as a function of f's cell, c.", "For tractability, we assume paradigms are pairwise disjoint and no paradigm contains multiple forms from the same cell.", "Our algorithm greedily builds paradigms cell by cell.", "To gauge the quality of a candidate paradigm, we first identify its base and exponents.", "Following Beniamine et al. (2018), we define π's base, b_π, as the longest common subsequence shared by all forms in π.", "(The fact that we use a subsequence, instead of a substring, means that we can handle non-concatenative morphology.", "We note that the longest common subsequence may be found with a polynomial-time dynamic program; however, there will not exist an algorithm whose runtime is polynomial in the number of strings unless P = NP (Maier, 1978).)", "For each form f in π, we define the exponent x_f as the subsequences of f that remain after removing b_π, i.e., x_f is a tuple of affixes.", "For example, if π contains the words wxyxz and axx, b_π is xx and the exponents are (<w, y, z>) and (<a), respectively.", "(We use word start (<) and end (>) tokens to distinguish exponents; they do not count as exponent characters in eq. (2).)", "Inspired by unsupervised maximum matching in greedy tokenization (Guo, 1997; Erdmann et al., 2019), we define the following paradigm score function: score(π) = Σ_{⟨c,f⟩ ∈ π} (|b_π| - |x_f|) (2), which scores a candidate paradigm according to the number of base characters minus the number of exponent characters; it can be negative.", "Algorithm 1 then details our heuristic clustering approach.", "We greedily select one or zero forms from each cell to add (via the list concatenation operator) to each paradigm such that the paradigm's score is maximized.", "(Algorithm 1 has complexity O(|L|^2), where |L| is the lexicon size.", "In practice, to make Algorithm 1 tractable, we limit the candidates for f'_j (line 8) to the n = 250 forms from cell j nearest to f_i in pre-trained embedding space (trained via fastText with default parameters).", "This achieves a complexity upper bounded by O(|L| n k).)", "After performing a first pass of paradigm clustering with Algorithm 1, we estimate an unsmoothed probability distribution p(x | c) as follows: we take the number of times each exponent (tuple of affixes) realizes a cell in the output of Algorithm 1 and divide by the number of occurrences of that cell.", "We use this distribution p(x | c) to construct an exponent penalty weight w(x_f, c) (eq. (3)).", "Intuitively, if an exponent is the most likely exponent in the cell to which it belongs, the penalty weight is zero and its characters are not subtracted from the score.", "Otherwise, the weight is in the interval [1, 2], such that each exponent character is penalized at least as harshly, but no more than twice as harshly, as in the first pass, according to the exponent's likelihood.", "We use this exponent penalty weight to define a penalized score function: score'(π) = Σ_{⟨c,f⟩ ∈ π} (|b_π| - |x_f| · w(x_f, c)) (4).", "We then re-run Algorithm 1, swapping out score(π) for score'(π), to re-cluster forms into paradigms.", "Empirically, we find that harsher exponent penalties (i.e., forcing weights to be greater than 1 for suboptimal exponents) lead to higher paradigm precision in this second pass.",
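Here is a sketch of the paradigm score of eq. (2) on the running example below. Folding a pairwise longest-common-subsequence over the forms is a simplification (exact multi-string LCS is intractable in general, as noted above), and the boundary tokens < and > are omitted since they do not count as exponent characters.

```python
# score(pi) = sum over forms of (|base| - |exponent|),
# with base = longest common subsequence of the paradigm's forms.
from functools import reduce

def lcs(a, b):
    # classic dynamic program over two strings
    dp = [[""] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ca in enumerate(a):
        for j, cb in enumerate(b):
            dp[i + 1][j + 1] = (dp[i][j] + ca if ca == cb
                                else max(dp[i][j + 1], dp[i + 1][j], key=len))
    return dp[-1][-1]

def score(paradigm):
    forms = [f for f in paradigm if f]          # skip empty slots
    base = reduce(lcs, forms)
    # each form contributes |base| minus its leftover (exponent) characters
    return sum(len(base) - (len(f) - len(base)) for f in forms)

print(score([None, "watched", None, None, None]))        # 7
print(score([None, "watched", None, None, "watching"]))  # 5: base "watch"
```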
"For an example, consider the candidate paradigm [∅, watched, ∅, ∅, ∅].", "If we add nothing, each character of watched can be analyzed as part of the base, yielding a score of 7.", "What if we attempt to add watching, pre-determined to belong to column 5 during cell clustering?", "The candidate paradigm [∅, watched, ∅, ∅, watching] increases the number of base characters to 10 (watch, shared by 2 words), but yields a score of 5 after subtracting the characters of both exponents, (ed>) and (ing>).", "Hence, we do not get this paradigm right on our first pass, as 5 < 7.", "Yet, after the first pass, should (ed>) and (ing>) be the most frequent exponents in the second and fifth cells, the second pass will be different.", "The candidate paradigm [∅, watched, ∅, ∅, watching] is then not penalized for either exponent, yielding a score of 10 and thereby allowing watching to be added to the paradigm.", "We now use the output of the clustering by cell and paradigm to bootstrap the PCFP.", "We use a Transformer (Vaswani et al., 2017) to predict the forms that realize empty slots.", "Transformer-based neural transducers constitute the state of the art for the PCFP.", "(We use the following hyperparameters: N = 4, d_model = 128, d_ff = 512; remaining hyperparameters retain their default values as specified in Vaswani et al. (2017).", "Our models are trained for 100 epochs in batches of 64.", "We stop early after 20 epochs without improvement on the development set.)", "In Cotterell et al. (2016b)'s terms, we reinflect the target from one of the non-empty source cells in the same paradigm.", "We select the source from which we can most reliably reinflect the target.", "We quantify this reliability by calculating the accuracy with which each target cell's realizations were predicted from each source cell's realizations in our development set.", "For each target cell, we rank our preferred source cells according to accuracy.", "To generate train and development sets, we create instances for every possible pair of realizations occurring in the same paradigm (90% train, 10% development).", "We pass these instances into the Transformer, flattening cells and characters into a single sequence.",
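A sketch of the reliable-source selection described above: estimate per-(source, target) cell accuracy on development pairs, then rank source cells per target. The `reinflect` callable stands in for the trained Transformer and is assumed given.

```python
# Rank source cells by how reliably each target cell is predicted from them.
from collections import defaultdict

def source_ranking(dev_pairs, reinflect):
    """dev_pairs: iterable of (src_cell, src_form, tgt_cell, tgt_form)."""
    hits, total = defaultdict(int), defaultdict(int)
    for src_cell, src_form, tgt_cell, tgt_form in dev_pairs:
        total[(src_cell, tgt_cell)] += 1
        if reinflect(src_form, src_cell, tgt_cell) == tgt_form:
            hits[(src_cell, tgt_cell)] += 1

    by_target = defaultdict(list)   # tgt_cell -> [(accuracy, src_cell), ...]
    for (src, tgt), n in total.items():
        by_target[tgt].append((hits[(src, tgt)] / n, src))
    return {tgt: [s for _, s in sorted(pairs, reverse=True)]
            for tgt, pairs in by_target.items()}

# usage: pick the best *attested* source cell for an empty slot
# best_src = next(s for s in ranking[tgt_cell] if s in attested_cells)
```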
(2017).", "Our models are trained for 100 epochs in batches of 64.", "We stop early after 20 epochs without improvement on the development set.", "lower elsewhere.", "In German, Latin, and Russian, our benchmark proposes nearly as many cells as GOLD k , thus performing similarly.", "For English, it overestimates the true number and performs worse.", "For Arabic, it severely underestimates k but performs better, likely due to the orthography: without diacritics, the three case distinctions become obscured in almost all instances.", "In general, fixing the true number of cells can be unhelpful because syncretism and the Zipfian distribution of cells creates situations where certain gold cells are too difficult to detect.", "Allowing the system to choose its own number of cells lets it focus on distinctions for which there is sufficient distributional evidence.", "As for the PCFP, our benchmark system does well on lexicon expansion accuracy and poorly on the analogy task.", "While lexicon expansion accuracy (5086% compared to 7297% for SUP ) shows that the benchmark captures meaningful inflectional trends, analogy accuracy demonstrates vast room for improvement in terms of consistently organizing cell-realizations across paradigms.", "English is the only language where analogy accuracy is within half of SUP 's upper bound.", "A major reason for low analogy accuracy is that forms, despite being clustered into paradigms well, get assigned SG PL NOM GEN DAT ACC ABL NOM GEN DAT ACC ABL Gloss serv-us i o um o i orum is os is slave. M serv-a ae ae am a ae arum is as is slave. F frat-er ris ri rem re res rum ribus res ribus brother Table 5: Suffixal exponents for each cell in the paradigm of three Latin nouns from different inflection classes.", "to the wrong cell, or the same gold cell gets misaligned across paradigms from different inflection classes.", "We discuss this phenomenon in more detail below.", "A detailed analysis of Latin nouns (also analyzed by Stump and Finkel (2015) and Beniamine et al. (2018)) reveals challenges for our system.", "Table 5 shows the inflectional paradigms for three Latin nouns exemplifying different inflection classes, which are mentioned throughout the analysis.", "In keeping with the UD standard, there are no diacritics for long vowels in the table.", "One major challenge for our system is that similar affixes can mark different cells in different inflection classes, e.g. the ACC .", "SG of servus slave. 
"Table 6 shows system-posited cells, the gold cells they best match to, and the longest suffix shared by 90% of their members.", "The system is often misled by shared affixes, e.g., cell 0 is evenly split between ACC.SG and GEN.PL, driven by the suffix um (cells 3 (is) and 4 (a) suffer from this as well).", "This kind of confusion could be resolved with better context modeling, as each distinct underlying cell, despite sharing a surface affix, occurs in distinct distributional contexts.", "We observe that the current system often fails to make use of context to handle some misleading suffixes.", "However, cell 7 correctly groups ABL.PL forms marked with both is and ibus, excluding other suffixes ending in s.", "Similarly, cell 8 contains NOM.SG forms with heterogeneous endings, e.g., r, ix and ns.", "In some cases, the system misinterprets derivational processes as inflectional, combining gold paradigms.", "Derivational relatives servus and serva, male and female variants of 'slave', are grouped into one paradigm, as are philosophos 'philosopher' and philosophia 'philosophy'.", "In other cases, cell clustering errors due to shared suffixes create spurious paradigms.", "After falsely clustering gold paradigm mates servum (ACC.SG) and servorum (GEN.PL) into the same cell, we must assign each to separate paradigms during paradigm clustering.", "This suggests clustering cells and paradigms jointly might avoid error propagation in future work.", "We also find that clustering errors lead to PCFP errors.", "For servus/a, the neural reinflector predicts servibus in cell 8 with a suffix from the wrong inflection class, yet the slot should not be empty in the first place.", "The correct form, servis, is attested, but was mistakenly clustered into cell 3.", "Table 7 evaluates variants of the benchmark to determine the contribution of several system and task components in Arabic and Latin.", "We consider augmenting and shrinking the corpus.", "We also reset the fastText hyperparameters used to achieve a morphosyntactic inductive bias to their default values (no affix or window bias) and consider two constant exponent penalty weights (w(x_f, c) = 1 and w(x_f, c) = 0) instead of our heuristic weight defined in eq. (3).", "Finally, we consider selecting random sources for PCFP reinflection instead of identifying reliable sources.", "For all variants, the number of cells is fixed to the ground truth.", "Corpus Size We consider either using a smaller corpus containing only the UD subset, or using a larger corpus containing 15 (Latin) or 100 (Arabic) million words from additional supplementary sentences.", "Table 7 (benchmark variations, averaged over 4 runs; columns: Paradigms, F_cell, F_par, F_grid, An, LE; – = not reported): Arabic nouns, 27 cells: GOLD_k 4,930.3, 25.9, 46.4, 33.1, 16.1, 57.2; larger corpus 5,039.5, 29.1, 37.5, 32.8, 20.4, 49.2; smaller corpus 5,004.0, 18.8, 37.7, 24.9, 9.5, 42.1; no affix bias 4,860.3, 21.5, 47.7, 29.7, 16.3, 43.5; no window bias 4,978.5, 24.0, 47.5, 31.8, 17.6, 55.8; w(x,c)=1 3,685.0, –, 34.4, 28.8, 5.2, 35.5; w(x,c)=0 1,310.5, –, 10.0, 13.9, 0.1, 5.8; random sources –, –, –, –, 16.3, 55.9. Latin nouns, 12 cells: GOLD_k 3,749.0, 39.9, 71.6, 51.3, 17.5, 72.6; larger corpus 3,529.5, 42.8, 79.1, 55.5, 16.2, 69.9; smaller corpus 4,381.5, 30.7, 49.1, 37.8, 14.6, 51.1; no affix bias 3,906.8, 37.1, 68.2, 48.1, 22.7, 66.6; no window bias 3,756.5, 42.0, 71.2, 52.8, 17.9, 70.9; w(x,c)=1 3,262.5, –, 67.1, 49.6, 11.0, 52.9; w(x,c)=0 1,333.3, –, 26.3, 31.7, 0.7, 7.1; random sources –, –, –, –, 16.5, 72.3.",
"As expected, performance decreases for smaller corpora, but it does not always increase for larger ones, potentially due to domain differences between UD and the supplemental sentences.", "Interestingly, F_cell always increases with larger corpora, yet this can lead to worse F_par scores, more evidence of error propagation that might be avoided with joint cell and paradigm clustering.", "Embedding Morphosyntactic Biases Targeting affix embeddings by shrinking the default fastText character n-gram sizes seems to yield a much more significant effect than shrinking the context window.", "In Latin, small context windows can even hurt performance slightly, likely due to extremely flexible word order, where agreement is often realized over non-adjacent words.", "Exponent Penalties When clustering paradigms with the constant penalty weight w(x, c) = 1 (which is equivalent to just running the first pass of paradigm clustering), we see a steep decline in performance compared to the proposed heuristic weighting.", "It is even more detrimental not to penalize exponents at all (i.e., w(x, c) = 0) and simply maximize the base characters in paradigms without concern for the size or likelihood of exponents.", "Given allomorphic variation and multiple inflection classes, we ideally want a penalty weight which is lenient to more than just the single most likely exponent, but without supervised data, it is difficult to determine when to stop being lenient and start being harsh in a language-agnostic manner.", "Our choice to be harsh by default proposes fewer false paradigm mates, yielding less noisy input to train the reinflection model.", "In a post-hoc study, we calculated GOLD_k PCFP scores on pure analogies only, where the first three attested forms were assigned correctly during clustering.", "Pure-analogy PCFP scores were still closer to GOLD_k's overall performance than to SUP's for all languages.", "This suggests most of the gap between GOLD_k and SUP is due to noisy training on bad clustering assignments, not impossible test instances created by bad clustering assignments.", "This supports our choice of harsh penalties and suggests future work might reconsider clustering decisions given the reinflection model's confidence.", "Reinflection Source Selection During reinflection, feeding the Transformer random sources instead of learning the most reliable source cell for each target cell slightly hurts performance.", "The margin is small, though, as most paradigms have only one attested form.", "In preliminary experiments, we also tried jointly encoding all available sources instead of just the most reliable, but this drastically lowers performance.", "We present a framework for the paradigm discovery problem, in which words attested in an unannotated corpus are analyzed according to the morphosyntactic property set they realize and the paradigm to which they belong.", "Additionally, unseen inflectional variants of seen forms are to be predicted.", "We discuss the data required to undertake this task, a benchmark for solving it, and multiple evaluation metrics.", "We believe our benchmark system represents a reasonable approach to solving the problem based on past work and highlights many directions for improvement, e.g.
joint modeling and making better use of distributional semantic information.", "The authors would like to thank the members of New York University Abu Dhabi's CAMeL Lab, Marie-Catherine de Marneffe, Eleanor Chodroff, Katharina Kann, and Markus Dreyer.", "We acknowledge the support of the High Performance Computing Center at New York University Abu Dhabi.", "Finally, we wish to thank the anonymous reviewers at EMNLP 2019 and ACL 2020 for their feedback." ]
[ "abstain", "objective", "method", "result", "objective", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "objective", "other", "other", "other" ]
[ "It is commonly believed that knowledge of syntactic structure should improve language modeling.", "However, effectively and computationally efficiently incorporating syntactic structure into neural language models has been a challenging topic.", "In this paper, we make use of a multi-task objective, i.e., the models simultaneously predict words as well as ground truth parse trees in a form called syntactic distances, where information between these two separate objectives shares the same intermediate representation.", "Experimental results on the Penn Treebank and Chinese Treebank datasets show that when ground truth parse trees are provided as additional training signals, the model is able to achieve lower perplexity and induce trees with better quality.", "It is widely believed in linguistics, cognitive science, and computational linguistics that the latent structure underlying how words combine to form sentences is best represented as a tree structure.", "The study of the computational mechanisms and systems of constraints that characterize such derivations or parse trees is a central question in these fields (Pollard and Sag, 1994; Steedman and Baldridge, 2011; Huddleston and Pullum, 2002; Adger, 2003; Bresnan, 2001; Chomsky, 1995; Sag et al., 2003).", "Using syntactic information for the language modeling task has been a popular research topic since the 1990s.", "Early efforts included various approaches that attempted to incorporate shallow syntactic information such as POS tags (Heeman and Allen, 1997; Srinivas, 1996) as well as a more complete structures (Wright et al., 1994; Jurafsky et al., 1995).", "Most of such work falls under the topic of structured language modeling (Chelba and Equal contribution. Jelinek, 2000; Van Uytsel et al., 2001; Xu et al., 2002).", "With the resurgence of neural network approaches, sequential, large-scale neural language models have been shown to significantly outperform traditional language models (Merity et al., 2017; Yang et al., 2018) without using syntactic structural information.", "On another scenario, recent analysis also reveals that state-of-the-art sequential neural language models still fail to learn certain long-range syntactic dependencies (Kuncoro et al., 2018).", "Thus it is an interesting problem to explore the relation between language models and syntax and investigate whether syntax can be integrated to enhance neural language models.", "To this end, two main lines of work have been investigated, namely transition-based and distance-based methods, respectively.", "The former strand of work has sought to jointly train a transition-based parser (Nivre, 2008; Zhang and Nivre, 2011; Andor et al., 2016) with a language model using a linearized structured sentence.", "For example, recurrent neural network grammars (RNNGs) model the joint probability of both words and trees by training a generative, top-down parser (Dyer et al., 2016; Cheng et al., 2017).", "Subsequent work (Kim et al., 2019b) has developed an unsupervised variant of RNNGs based on an expectation maximization algorithm, which enables the system to be used as a language model without access to parser data.", "The second strand of work designs language models that are constrained using syntactic constituents induced using the notion of syntactic distance (Shen et al., 2017, 2018).", "The distances are a sequence of scalars between consecutive words, which are higher when there is a higher level of constituent boundary between the corresponding pair of words.", "While 
aligning nicely with the sequential nature of language models, syntactic distances can be transformed into syntactic tree structures with simple principles (Shen et al., 2017).", "The major difference between the above two strands of work is that the former focuses more on parsing performance while the latter aligns better with language model settings.", "There are three main benefits of the syntactic distance approach.", "First, typical engineering tricks for language modeling such as batching and regularization (Merity et al., 2017) can be directly used.", "Second, unlike transition-based methods, which require modeling each sentence independently, distance-based models allow direct comparison with mainstream prior work on language modeling (Gal and Ghahramani, 2016; Merity et al., 2017; Yang et al., 2018) on the same datasets, which carry information across sentence boundaries.", "Third, there is no risk of compounding errors as compared to the transition-based approach.", "However, unlike for transition-based approaches (Kim et al., 2019b), for distance-based approaches there have been no studies examining the relationship between induced syntactic structure and human-labeled syntactic structure, or whether human-labeled syntactic trees can be used to improve language modeling (Dyer et al., 2016; Kim et al., 2019b).", "To this end, we investigate distance-based language models with explicit supervision.", "In particular, we inject syntactic tree supervision into distance-based neural language models by breaking a syntactic tree into a label sequence, and extending a distance-based language model to include a multi-task objective that also learns to predict gold-standard labels.", "We choose the Ordered-Neuron LSTM (ON-LSTM) (Shen et al., 2018) as our baseline model, which gives the best results among distance-based models.", "For a fair comparison with the dominant methods on language modeling, we also manually extend the most commonly used dataset for evaluating language models, which we name PTB-Concat (Mikolov et al., 2010).", "It is a version of the Penn Treebank (PTB) (Marcus et al., 1993) dataset with syntactic trees removed, and with preprocessing of numbers, punctuation and singleton words.", "We add syntactic trees, which allows us to directly compare distance-based methods with other language models.", "Experimental results show that incorporating linguistically motivated structures can practically improve language modeling performance.", "To the best of our knowledge, this is the first work to successfully incorporate gold-standard syntactic trees into syntactic distance based language models.", "Additional experiments suggest that a similar level of improvement can also be achieved in other language models.", "Furthermore, analyses of the trees learned by the multi-task models demonstrate that they are different from both gold trees and unsupervisedly learned trees.", "2 Related Work Using syntactic information for language modeling dates back to the last century.", "Srinivas (1996) proposed using shallow syntactic structures, so-called super-tags, which successfully reduced perplexity by 38% over a tri-gram based word-level language model.", "More complete parser integration is also explored under the heading of structured language modeling (Chelba and Jelinek, 2000).", "This research covers a wide range of different parsers, albeit mostly with N-gram models (Van Uytsel et al., 2001; Xu et al., 2002).", "Wright et al. (1994) and Jurafsky et al.
(1995) extend bi-gram language models with a context-free grammar.", "Feed-forward neural language models were also explored (Xu et al., 2003).", "However, the performance does not approach that of the modern neural LMs.", "Dyer et al. (2016) first proposed RNNG.", "Subsequent work extends the model with an encoder-decoder architecture (Cheng et al., 2017), unsupervised learning (Kim et al., 2019b), knowledge distillation (Kuncoro et al., 2019) and computational psycholinguistics (Hale et al., 2018).", "Shen et al. (2017) first used syntactic distance to constrain language modeling.", "Its subsequent work (Shen et al., 2018) transfers the distance notion to the LSTM cell.", "Our work extends distance-based methods in trying to introduce supervised syntax to these models.", "A very recent work makes use of attention over spans instead of syntactic distance to inject inductive bias into language models (Peng et al., 2019).", "However, the time complexity of injecting supervision is much higher than for the distance-based approach ($O(n^2)$ vs. $O(n)$).", "The overall structure of our model is shown in Figure 1. In particular, the ON-LSTM is taken as the base language model, and syntactic trees are added by conversion to distance metrics.", "The supervised distance values are taken as one additional output, resulting in a multi-view model.", "We release the code at https://github.com/wenyudu/SDLM .", "Ordered Neurons LSTM (ON-LSTM) (Shen et al., 2018) is built upon a vanilla LSTM model (Hochreiter and Schmidhuber, 1997) with two additional gates, namely a master input gate $\tilde{i}_t$ and a master forget gate $\tilde{f}_t$, each being a vector of the same shape as the LSTM forget and input gates:", "$f_t = \sigma(W_f [x_t, h_{t-1}] + b_f)$ (1); $i_t = \sigma(W_i [x_t, h_{t-1}] + b_i)$ (2); $o_t = \sigma(W_o [x_t, h_{t-1}] + b_o)$ (3); $\hat{c}_t = \tanh(W_c [x_t, h_{t-1}] + b_c)$ (4); $\tilde{f}_t = \mathrm{cumax}(W_{\tilde{f}} [x_t, h_{t-1}] + b_{\tilde{f}})$ (5); $\tilde{i}_t = 1 - \mathrm{cumax}(W_{\tilde{i}} [x_t, h_{t-1}] + b_{\tilde{i}})$ (6)", "where cumax is defined as the cumulative sum of softmax outputs, i.e., $\mathrm{cumax}(\cdot) = \mathrm{cumsum}(\mathrm{softmax}(\cdot))$.", "The cumax function provides an inductive bias to model hierarchical structures by enforcing units in the master forget gate $\tilde{f}_t$ to increase monotonically from 0 to 1 and those in the master input gate $\tilde{i}_t$ to decrease monotonically from 1 to 0.",
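A minimal sketch of the master gates in Eq. (5) and (6), assuming PyTorch; the tensor dimensions here are illustrative.

import torch
import torch.nn.functional as F

def cumax(logits, dim=-1):
    # cumulative sum of softmax outputs; entries rise monotonically toward 1
    return torch.cumsum(F.softmax(logits, dim=dim), dim=dim)

batch, d_in, d_hidden = 8, 400 + 1150, 1150   # [x_t, h_{t-1}] concatenated
W_ftilde = torch.nn.Linear(d_in, d_hidden)
W_itilde = torch.nn.Linear(d_in, d_hidden)
xh = torch.randn(batch, d_in)

master_forget = cumax(W_ftilde(xh))        # Eq. (5): increases from 0 to 1
master_input = 1.0 - cumax(W_itilde(xh))   # Eq. (6): decreases from 1 to 0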
"The two gates are applied on the original input and forget gates as follows: $\omega_t = \tilde{f}_t \circ \tilde{i}_t$ (7); $\hat{f}_t = f_t \circ \omega_t + (\tilde{f}_t - \omega_t) = \tilde{f}_t \circ (f_t \circ \tilde{i}_t + 1 - \tilde{i}_t)$ (8); $\hat{i}_t = i_t \circ \omega_t + (\tilde{i}_t - \omega_t) = \tilde{i}_t \circ (i_t \circ \tilde{f}_t + 1 - \tilde{f}_t)$ (9); $c_t = \hat{f}_t \circ c_{t-1} + \hat{i}_t \circ \hat{c}_t$ (10); $h_t = o_t \circ \tanh(c_t)$ (11).", "ON-LSTM can learn the implicit structure of a language in the form of a binary tree in an unsupervised manner, through syntactic distances, which are calculated as: $d_t = D_m - \sum_{k=1}^{D_m} \tilde{f}_{tk}$ (12).", "Figure 2: Binarized grammar tree and its corresponding syntactic distances.", "where $D_m$ is the size of the hidden state.", "The syntactic distance $d_t$ between two consecutive words is a scalar value, which can be interpreted as reflecting the syntactic relatedness between the constituents before and after time point $t$.", "In terms of trees, it can be thought of as the height of the lowest tree node that encloses both words.", "In the case where we consider discrete trees, the height is given by the maximum path length from a leaf.", "In the more general case, it can be thought of as a scalar value measuring a continuous notion of node height.", "Figure 2 depicts a sample sentence with its syntactic distances and corresponding tree structures.", "More generally, the binary tree structure of a sequence with $N$ tokens can be specified with a sequence of $N - 1$ syntactic distances.", "This definition of distance makes the syntactic distance an ultrametric (Holly, 2001; Wu et al., 1999), a concept which is important in the theory of hierarchical agglomerative clustering (Johnson, 1967) and was first explored in a linguistic setting by Levelt (1974).", "To integrate treebank trees into ON-LSTM, we need to first convert syntactic trees into a representation based on syntactic distances.", "Since the original grammar trees are not necessarily binary, we first split non-binary nodes by adding sentinel intermediate nodes to form a right-branched binary tree, following the steps in Stern et al. (2017).", "Now for a binary tree with $N$ leaf nodes, we have $N - 1$ non-leaf nodes that correspond to the $N - 1$ slots between each of the adjacent word pairs, each of which is assigned a syntactic distance (Figure 2).", "The binary tree can thus be represented as a sequence of distances $d_1, d_2, \ldots, d_{N-1}$.", "The conversion from binary tree to syntactic distances thus translates to assigning a distance value to each of the $N - 1$ non-leaf nodes in the tree.", "This is achieved in a bottom-up process.", "We first initialize a distance value of 1 at all of the leaf nodes, and then compute the syntactic distances of the parent nodes by recursively tracing back their parents.", "More specifically, for a certain parent node, its corresponding syntactic distance $d_P$ is computed with respect to the syntactic distances of its children $d_L$ and $d_R$, i.e., $d_P = \max\{d_L, d_R\} + 1$.", "A more detailed algorithm flowchart of tree-to-distance conversion is given in Appendix A.1.", "In ON-LSTM the distances $d_t$'s in Equation 12 are used to infer the structure of grammar trees.", "Consequently, a straightforward way to incorporate ground truth parse trees is to use the ground truth distances $d^{gt}$ to guide $d_t$, as depicted in Figure 1.",
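The bottom-up conversion just described fits in a few lines; the following is a hypothetical helper (not the authors' Appendix A.1 algorithm), with binary trees encoded as nested pairs.

def tree_to_distances(tree):
    """Return (slot_distances, node_height) for a binarized tree.
    Leaves have height 1; a parent gets max of its children's heights + 1,
    i.e., d_P = max{d_L, d_R} + 1, and that value labels the slot between
    the two subtrees' word spans."""
    if not isinstance(tree, tuple):          # leaf (a token)
        return [], 1
    left_d, left_h = tree_to_distances(tree[0])
    right_d, right_h = tree_to_distances(tree[1])
    height = max(left_h, right_h) + 1
    return left_d + [height] + right_d, height

# 4 words -> 3 slot distances; the highest split is between "cat" and "sat"
dists, _ = tree_to_distances((("the", ("old", "cat")), "sat"))
print(dists)  # [3, 2, 4]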
"Interestingly, directly forcing the structure inferred by language models to be coherent with linguist-tagged ground truth trees barely improves the language model performance (see Section 6).", "Instead, we introduce a split-head setting, which can practically improve LM performance by learning two sets of closely related syntactic distances.", "In particular, we use another master forget gate $\tilde{f}^w_t$ for inferring a set of distances that are trained to align with the gold-standard syntactic distances, while leaving the original distances $d_t$ computed from $\tilde{f}_t$ intact.", "To achieve this, we introduce an extra linear layer on top of the hidden states $h^f_t$, and from there infer a separate set of master forget gates.", "In this way, both of the master forget gates $\tilde{f}_t$ and $\tilde{f}^w_t$ share the same input $h^f_t$, but optimize two different sets of trees for the language modeling and parsing task, respectively: $h^f_t = W_{\tilde{f}} [x_t, h_{t-1}] + b_{\tilde{f}}$ (14); $\tilde{f}_t = \mathrm{cumax}(h^f_t)$ (15); $\tilde{f}^w_t = \mathrm{cumax}(W_s h^f_t + b_s)$ (16).", "The syntactic distances for the auxiliary supervised targets are then calculated as follows: $d^w_t = D_m - \sum_{k=1}^{D_m} \tilde{f}^w_{tk}$ (17), where $\tilde{f}^w_{tk}$ is the $k$-th element in the vector $\tilde{f}^w_t$.", "3.4 Grammar Trees as Auxiliary Supervised Targets for Language Modeling With the additional master forget gate $\tilde{f}^w_t$, the model has two different sets of predictions.", "The first set is the language model outputs of ON-LSTM, predicting the next words.", "The second set is the distances calculated in Equation 17.", "The original language modeling structure of the ON-LSTM model is left intact after the modification, so we can continue to use the master forget gate $\tilde{f}_t$ to update hidden states and calculate the softmax output in ON-LSTM for the language modeling part.", "We denote the negative log-likelihood loss in the language model part as $L_{lm}$.", "For brevity, we do not discuss the details of the loss.", "For aligning the syntactic distances, we apply a ranking loss between the learned syntactic distance $d^w_t$ and the ground truth distance $d^g$, which was first proposed by Burges et al. (2005).", "The goal is to encourage the model to produce distances that have the same ranking order as the ground truth distances: $L_{syd} = \sum_{i, j > i} \max(0, 1 - \mathrm{sign}(d^g_i - d^g_j)(d^w_i - d^w_j))$ (18).", "The joint objective function is thus to minimize the following loss: $L = L_{lm} + \lambda L_{syd}$ (19), where $\lambda$ is the scaling parameter.",
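A minimal sketch of the ranking loss in Eq. (18), assuming PyTorch:

import torch

def ranking_loss(d_pred, d_gold):
    # hinge over all pairs (i, j) with j > i: penalize predicted distances
    # that violate, or fall within margin 1 of violating, the gold order
    i, j = torch.triu_indices(len(d_pred), len(d_pred), offset=1)
    sign = torch.sign(d_gold[i] - d_gold[j])
    return torch.relu(1.0 - sign * (d_pred[i] - d_pred[j])).sum()

d_w = torch.tensor([0.7, 2.1, 1.0], requires_grad=True)  # predicted d^w
d_g = torch.tensor([1.0, 3.0, 2.0])                      # gold d^g
loss = ranking_loss(d_w, d_g)  # 0.7: pair (0, 2) is ordered correctly
loss.backward()                # but falls inside the margin of 1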
"We make test datasets in English and Chinese, respectively, both of which have parse trees and also language modeling benchmarks.", "For English, we use the Penn Treebank (PTB) dataset (Marcus et al., 1993).", "Mikolov et al. (2010) have provided a widely accepted version of PTB for language modeling.", "Several modifications are made to the original treebank.", "For example, all punctuation symbols are removed, all characters are lower-cased, the vocabulary size is truncated at 10,000 and all sentences are concatenated.", "However, this version of PTB discards the parse tree structures, which makes it unsuitable for comparing sequential language models with those utilizing tree structures.", "We refer to this version as PTB-Concat.", "Dyer et al. (2016) proposed a different version of PTB, which retains the parse tree structures.", "Sentences are modeled separately, punctuation is retained, and singleton words are replaced using the Berkeley parser's mapping rules, resulting in a much larger vocabulary of 23,815 word types.", "Since it retains the parse trees, this dataset enables direct comparison between models that utilize parse trees and those that do not.", "But unfortunately, since the vocabulary is different from PTB-Concat, and the sentences are processed separately, the results are not directly comparable with those on PTB-Concat, on which most existing work on language modeling reports results.", "We refer to this version as PTB-Sepsent.", "As mentioned above, a salient limitation of PTB-Sepsent is that it does not allow fair comparison with existing LM work on PTB-Concat.", "To address this issue, we propose a different variation of the PTB dataset that uses the same vocabulary as PTB-Concat while retaining the ground-truth grammar trees.", "We pre-process the PTB dataset by following the same steps indicated by Mikolov et al. (2010) to obtain a modified treebank with the same vocabulary set as PTB-Concat.", "Sentences are concatenated, and we make sure that the sentences are identical to PTB-Concat, token for token, in the training, validation, and test sets.", "This results in the same vocabulary as that of PTB-Concat, which allows us to directly compare models that utilize parse trees with the existing reports of performance on PTB-Concat.", "We refer to this version of PTB-Concat with syntax as PTB-Concat-Syn and we cover preprocessing details in Appendix A.3.", "For Chinese, we use the Chinese Treebank 5.1 (Xue et al., 2005), with the same settings as Kim et al. (2019b).", "Sentences are modeled separately and singleton words are replaced with a single <UNK> token.", "It will be referred to as CTB-Sepsent.", "We evaluate the influence of syntactic supervision on distance-based language models, especially in terms of language modeling performance.", "We also analyze the induced syntax after introducing the structural supervision.", "In addition, extensive ablation tests are conducted to understand how syntactic supervision affects the language model.", "We first compare our models with existing sequential language models on PTB-Concat, and then we compare our model with transition-based language models on PTB-Sepsent and CTB-Sepsent, which have a larger vocabulary and also use additional grammatical structure.", "Results on PTB-Concat We first validate the benefit of introducing structural signals to neural language models by training our proposed model on PTB-Concat-Syn with structural supervision, and then evaluating it on the plain validation/test set.", "We compare our model with the original ON-LSTM model, as well as various other strong LSTM language model baselines such as AWD-LSTM (Merity et al., 2017) and a mixture of softmax (Yang et al., 2018).", "We denote our syntactic-distance-augmented ON-LSTM model as ONLSTM-SYD.", "For a fair comparison, we closely follow the hyperparameters and regularization of ONLSTM (Shen et al., 2018).", "The model is a three-layer ONLSTM-SYD language model with an embedding size of 400 and 1150 hidden units per layer.", "The dropout rates are 0.5, 0.45, 0.3, 0.45 for the
word vectors, LSTM weight matrices, outputs between LSTM layers and the output of the last layer, respectively.", "The embedding dropout ratio is 0.125.", "The model is trained and finetuned for 1000 epochs in total and is switched to the fine-tuning phase at epoch 650.", "The ground truth syntactic structures are used to supervise the syntactic distances in the third layer of ONLSTM-SYD and the loss ratio is set to 0.75.", "We use this setting as the default setting for all the experiments.", "The results are shown in Table 1. After adding structural signals into the model, our model ONLSTM-SYD significantly outperforms the original ON-LSTM model (p-value < 0.05), indicating that incorporating linguist-tagged parse trees can contribute to language modeling positively.", "Results on PTB-Sepsent and CTB-Sepsent PTB-Sepsent and CTB-Sepsent offer a comparable setting with other structure-aware supervised (Dyer et al., 2016) and unsupervised (Kim et al., 2019b) baselines.", "The results are listed in Table 2.", "Table 2 (language modeling perplexity on PTB-Sepsent / CTB-Sepsent): RNNLM (Kim et al., 2019b) 93.2 / 201.3; RNNG (Kim et al., 2019b) 88.7 / 193.1; URNNG (Kim et al., 2019b) 90.6 / 195.7; RNNG-URNNG (Kim et al., 2019b) 85.9 / 181.1; PRPN (default) (Kim et al., 2019b) 126.2 / 290.9; PRPN (finetuned) (Kim et al., 2019b) 96.7 / 216.0; ONLSTM-noAWD 69.0 / 167.7; ONLSTM 60.0 / 145.7; ONLSTM-SYD-noAWD 67.6 / 163.1; ONLSTM-SYD 59.6 / 140.5.", "ONLSTM-SYD performs better than ONLSTM, which indicates that supervised syntactic information can help improve language modeling.", "The margin between our models and the baselines is rather large.", "We find that the set of regularization and optimization techniques proposed by Merity et al. (2017) contributes significantly to this margin.", "Because of the sequential and parallel nature of our model, it can directly inherit and benefit from this set of tricks.", "In contrast, it is non-trivial to use them for RNNG and URNNG.", "As a more rigorous analysis, we further conducted a set of experiments without those tricks (i.e., non-monotonically triggered ASGD, weight-dropped LSTM, finetuning). (We use the preprocessing script in URNNG's repository https://github.com/harvardnlp/urnng , which merges all UNK types.)", "The performance (denoted as ONLSTM-SYD-noAWD) drops; however, the model still outperforms the other baselines by a significant margin.", "In this subsection we analyze the model to see how the additional structural supervision affects the quality of inferred trees.", "Note that our goal here is to analyze the influence of ground truth syntactic information on the quality of the induced trees rather than to yield a better grammar induction performance, since our model is not strictly comparable to other models due to its extra structural supervision during training.", "We follow the settings of Htut et al. (2018) to test our model on the WSJ10 and WSJ test sets, reporting the results in Table 3. The WSJ test set has 2416 sentences of arbitrary length, while WSJ10 consists of the 7422 sentences of the whole WSJ corpus that contain no more than 10 words.", "We use both biased and unbiased distance-to-tree conversion algorithms for both ON-LSTM and our proposed model (cf. Appendix A.1 and A.2 for a formal description of the biased and non-biased conversion algorithms).", "Since our model has two sets of trees learned simultaneously, we list all of them in Table 3.",
"Grammar Induction We can see that the trees learned by the joint loss show an improved F1 score and rely less on the branching bias of the tree-constructing algorithm (see Dyer et al. (2019)).", "The large gap in F1 scores on WSJ between the biased and unbiased trees narrows after introducing the structural loss, and the unbiased LM trees significantly outperform their ON-LSTM baseline.", "These results indicate that the auxiliary supervised task not only lowers the perplexity, but also improves the quality of the induced trees for the LM task.", "Looking more into the trees, we find that compared to ON-LSTM, ONLSTM-SYD improves the label prediction accuracy for NP (noun phrases), VP (verb phrases) and PP (prepositional phrases) but fails to improve ADJP (adjective phrases).", "This suggests that different types of human-annotated constituents may have different influences on language modeling, or that human-annotated trees are themselves biased to differing degrees between different constituent types.", "Branching Bias Syntactic trees of English naturally have a bias towards right-branching structures.", "As shown in the last section of Table 3, right-branching trees achieve a much higher F1 score than random, balanced or left-branching trees.", "As pointed out by Dyer et al. (2019), PRPN and ONLSTM resort to a distance-to-tree algorithm with right-branching biases (see Appendix A.2).", "For our model, a biased distance-to-tree algorithm yields worse results compared to its non-biased counterpart; but on unsupervised models such as ON-LSTM, biased algorithms yield better results than non-biased versions.", "This observation indicates that syntactic supervision leads to better tree structures as compared with fully unsupervised tree induction, which is intuitive.", "Linguistic Analysis Our best parsing results are for trees decoded from the syntactic prediction objective using the unbiased algorithm.", "Interestingly, these trees tend to be deeper on average than the (binarized) gold-standard trees (see Table 3).", "This appears to be driven by a failure of the model to identify constituents centered on deeply-embedded head words; instead, the model prefers right-branching structures.", "Some examples of trees are displayed in Figure 3.",
"In the top part of the figure, we see the parse produced from the $L_{syd}$ distances of our model, in the middle the tree produced from the $L_{lm}$ distances and, on the bottom, the gold-standard tree.", "As can be seen in the figure, the $L_{syd}$-based tree is largely right-branching and misses constituents centered on several deeply embedded heads, such as the verb said.", "By contrast, the $L_{lm}$-based tree is considerably shallower than the gold standard and consists of a sequence of smaller chunks that often mis-bracket words with respect to the gold-standard constituent boundaries.", "Figure 4 illustrates these phenomena in further detail.", "The plot at the top of the figure shows the proportion of constituents produced from $L_{syd}$ distances whose boundaries correspond to a gold constituent, broken down by the height of nodes in the predicted tree.", "As the plot illustrates, the model fares better on relatively small constituents lower in trees, and makes more errors for constituents higher in the tree, reflecting mistakes on deeply-embedded heads.", "The bottom of the figure shows the same breakdown for $L_{lm}$-based induced trees.", "Overall, the effect is similar, although $L_{lm}$-based trees are shallower than the $L_{syd}$-based trees.", "We believe the increased accuracy for the longest constituents is driven by the fact that, since the highest constituents cover long sentence spans and there are few possible long spans, these constituents have a higher baseline probability of being correct.", "It appears that the $L_{syd}$ objective has learned a strong right-branching bias, leading to very deep trees (even with the unbiased decoder), whereas the $L_{lm}$ objective appears to be using a kind of predictive chunking of the sentence into small groups of words.", "It is tempting to speculate that these chunks may correspond to linguistic units used in prosodic planning or by the human sentence processor, while the deeper trees correspond more directly to the compositional structure underlying sentence meaning.", "We leave exploring this question to future work.", "Figure 3: Trees induced from the syntactic task distances in our model (top), the language modeling task distances (middle) as well as the gold-standard trees (bottom).", "Parsing performance Our models give worse unlabeled parsing performance compared to transition-based methods.", "In particular, Kim et al.
(2019a) report that unsupervised URNNG achieves 45.4 WSJ F1 in a similar setting, while another URNNG that finetunes a supervised RNNG model gives a much better F1 of 72.8, leading to a 27.4 F1 improvement.", "In contrast, the F1 of our structure prediction trees is 61.3 with the unbiased algorithm.", "This indicates that our model brings more benefits on the LM side than on the parsing side.", "Layer used for supervision Table 4 (Top) shows the performances where the supervised signal is injected into different layers.", "Although injecting syntax into the last layer gives the best syntactic distance for grammar induction, it fails to achieve a similar improvement on perplexity.", "This suggests that a better syntactic structure may not always lead to a better language model.", "The observation is consistent with prior research (Williams et al., 2018).", "Tree structure We study the influence of different types of supervised trees on the model.", "In addition to using the ground truth parse trees, we also tried to train the model with random trees instead, and without providing trees, in which case it degenerates to a vanilla ON-LSTM.", "From Table 4 (Middle) we find that without supervision signals from gold-standard parse trees the model performs worse than the full model.", "Random trees introduce noise to the model and downgrade both parsing and LM performance, indicating the importance of injecting meaningful syntax.", "Multitask variants We also explored injecting the supervised syntactic information at different levels.", "One straightforward baseline is to add supervision signals directly on the syntactic distance in ON-LSTM, using one set of trees to guide both LM and parsing, as indicated in the Model section (Table 4 Bottom, one set of trees).", "Table 4 (perplexity and unlabeled parsing F1 in ablation studies; validation PPL / test PPL / WSJ F1): Layer for supervision: 1st layer 58.0 / 55.6 / 57.7, 2nd layer 57.8 / 55.5 / 59.7, 3rd layer 57.8 / 55.7 / 61.3. Tree structure: no parse tree 58.3 / 55.9 / 39.0, random tree 60.2 / 57.5 / 32.4, gold parse tree 57.8 / 55.7 / 61.3. Multitask variants: vanilla multitasking 60.9 / 58.5 / 24.9, one set of trees 58.5 / 55.9 / 54.4, two sets of trees 57.8 / 55.7 / 61.3.", "Despite injecting stronger syntactic signals, this direct approach does not improve language model perplexity.", "This also reflects the fact that the most suitable syntactic structures for language modeling do not necessarily conform to human-labeled syntax.", "In addition, we also use ON-LSTM hidden states for supervised syntactic distance prediction (Table 4 Bottom, vanilla multitasking).", "This approach fails to outperform its ON-LSTM baseline for the same reason.", "In summary, there are mutual benefits between induced and supervised syntactic information, although they do not fully overlap.", "Generalization to other LMs One practical question is whether the improvements found in our work can be generalized to other language models.", "To answer this question, we introduce the multitask scheme to PRPN (Shen et al., 2017), which is another model that is able to learn unsupervised structures through language modeling.", "Similar to ON-LSTM, PRPN is also a syntactic distance method.", "We modify the PRPN model in the same spirit as ON-LSTM.", "In addition, we change the encoding layer and use the output as syntactic distance embeddings $l_{syd}$.", "Then we map $l_{syd}$ to two sets of syntactic distances $d_{lm}$ and $d_{syd}$ for language modeling and syntactic distance prediction,
respectively.", "Syntactic supervision comes to d syd .", "The model reaches a test perplexity of 60 .", "5 in PTB-Concat ( p -value < 0.05), which also significantly outperforms the 62 .", "0 from the original model.", "We refer readers to Appendix A.4 for the details of PRPN and our modified PRPN-SYD.", "We investigated linguistic supervision for distance-based structure-aware language models, showing its strengths over transition-based counterparts in language modeling.", "Apart from the explicit observations in achieving strong perplexity scores, our model reveals several interesting aspects of the quality of the trees learned by the model.", "As a byproduct of our investigation, we release a version of PTB-Concat, which contains syntactic structures while at the same time the same pre-processing steps adopted by most previous work on neural language models.", "We thank Zhiyang Teng, Qi He and all members at Text Intelligent Lab in Westlake University for insightful discussions.", "We also would like to thank all anonymous reviewers for their constructive comments.", "This work is supported by the National Natural Science Foundation of China (NSFC No. 61976180) and the Westlake University and Bright Dream Joint Institute for Intelligent Robotics.", "The corresponding author is Yue Zhang." ]
[ "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "result", "abstain", "abstain", "method", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "result", "method", "other", "other", "other", "other" ]
[ "This paper studies continual learning (CL) of a sequence of aspect sentiment classification (ASC) tasks.", "Although some CL techniques have been proposed for document sentiment classification, we are not aware of any CL work on ASC.", "A CL system that incrementally learns a sequence of ASC tasks should address the following two issues: (1) transfer knowledge learned from previous tasks to the new task to help it learn a better model, and (2) maintain the performance of the models for previous tasks so that they are not forgotten.", "This paper proposes a novel capsule network based model called B-CL to address these issues.", "B-CL markedly improves the ASC performance on both the new task and the old tasks via forward and backward knowledge transfer.", "The effectiveness of B-CL is demonstrated through extensive experiments.", "1 1 Introduction Continual learning (CL) aims to incrementally learn a sequence of tasks.", "Once a task is learned, its training data is often discarded (Chen and Liu, 2018).", "This is in contrast to multi-task learning , which assumes the training data of all tasks are available simultaneously.", "The CL setting is important in many practical scenarios.", "For example, a sentiment analysis company typically has many clients and each client often wants to have their private data deleted after use.", "In the personal assistant or chatbot context, the user does not want his/her chat data, which often contains sentiments or emotions, uploaded to a central server.", "In such applications, if we want to improve sentiment analysis accuracy for each user/client without breaching confidentiality, CL is a suitable solution.", "Task ID Domain/Task One Training Example(in that domain/task)", "on TIL, where each task is a separate aspect sentiment classification (ASC) task.", "An ASC task is defined as follows (Liu, 2015): given an aspect (e.g., picture quality in a camera review) and a sentence containing the aspect in a particular domain (e.g., camera), classify if the sentence expresses a positive, negative, or neutral (no opinion) about the aspect.", "TIL builds a model for each task and all models are in one neural network.", "In testing, the system knows which task each test instance belongs to and uses only the model for the task to classify the instance.", "In CIL, each task contains one or more classes to be learned.", "Only one model is built for all classes.", "In testing, a test case from any class may be presented to the model to classify without giving it any task information.", "This setting is not applicable to ASC.", "Our goal of this paper is to achieve the following two objectives: (1) transfer the knowledge learned from previous tasks to the new task to help learn a better model for the new task without accessing the training data from previous tasks (in contrast to multi-task learning), and (2) maintain (or even improve) the performance of the old models for previous tasks so that they are not forgotten.", "The focus of the existing CL (TIL or CIL) research has been on solving (2), catastrophic forgetting (CF) (Chen and Liu, 2018; Ke et al., 2020a).", "CF means that when a network learns a sequence of tasks, the learning of each new task is likely to change the net-4747 work parameters learned for previous tasks, which degrades the model performance for the previous tasks (McCloskey and Cohen, 1989).", "Continual Learning.", "Existing work has mainly focused on dealing with catastrophic forgetting (CF).", "In our case, (1) is also important as ASC 
tasks are similar, i.e., words and phrases used to express sentiments for different products/tasks are similar.", "To achieve the objectives, the system needs to identify the shared knowledge that can be transferred to the new task to help it learn better and the task-specific knowledge that needs to be protected to avoid forgetting of previous models.", "Table 1 gives an example.", "Fine-tuned BERT (Devlin et al., 2019) is one of the most effective methods for ASC (Xu et al., 2019; Sun et al., 2019).", "However, our experiments show that it works very poorly for TIL.", "The main reason is that BERT fine-tuned on a task/domain captures highly task-specific information which is difficult to transfer to a new task.", "In this paper, we propose a novel model called B-CL (BERT-based Continual Learning) for ASC continual learning.", "The key novelty is a building block called the Continual Learning Adapter (CLA), inspired by Adapter-BERT (Houlsby et al., 2019).", "CLA leverages capsules and dynamic routing (Sabour et al., 2017) to identify previous tasks that are similar to the new task and exploit their shared knowledge to help the new task learn, and uses task masks to protect task-specific knowledge to avoid forgetting (CF).", "We conduct extensive experiments over a wide range of baselines to demonstrate the effectiveness of B-CL.", "In summary, this paper makes two key contributions.", "(1) It proposes the problem of task incremental learning for ASC.", "(2) It proposes a new model B-CL with a novel adapter CLA incorporated in a pre-trained BERT to enable ASC continual learning.", "CLA employs capsules and dynamic routing to explore and transfer relevant knowledge from old tasks to the new task and uses task masks to isolate task-specific knowledge to avoid CF.", "To our knowledge, none of these has been done before.", "Continual learning (CL) has been studied extensively (Chen and Liu, 2018; Parisi et al., 2019).", "To our knowledge, no existing work has been done on CL for a sequence of ASC tasks, although CL of a sequence of document sentiment classification tasks has been done.", "Regularization-based methods, such as those in (Kirkpatrick et al., 2016; Lee et al.; Seff et al., 2017), add a regularization term to the loss to consolidate previous knowledge when learning a new task.", "Parameter isolation-based methods, such as those in (Serrà et al., 2018; Mallya and Lazebnik, 2018; Fernando et al., 2017), dedicate different subsets of the model parameters to different tasks and identify and mask them out during the training of the new task.", "Gradient projection-based methods, such as that in (Zeng et al., 2019), ensure that gradient updates occur only in directions orthogonal to the input of the old tasks and thus do not affect old tasks.", "Replay-based methods, such as those in (Rebuffi et al., 2017; Lopez-Paz and Ranzato, 2017; Chaudhry et al., 2019), retain an exemplar set of old task training data to help train the new task.", "The methods in (Shin et al., 2017; Kamra et al., 2017; Rostami et al., 2019; He and Jaeger, 2018) build data generators for previous tasks so that, in learning the new task, they can use some generated data for previous tasks to help avoid forgetting.", "As these methods are mainly for avoiding CF, after learning a sequence of tasks, their final models are typically worse than learning each task separately.", "The proposed B-CL not only deals with CF, but also performs knowledge transfer to improve the performance of both the new and the
old tasks.", "Lifelong Learning (LL).", "LL is now regarded the same as CL, but early LL mainly aimed at improving the new task learning through forward transfer without tackling CF (Silver et al., 2013; Ruvolo and Eaton, 2013; Chen and Liu, 2018).", "Several researchers have used LL for document-level sentiment classification.", "Chen et al. (2015) and Wang et al. (2019) proposed two Naive Bayes (NB) approaches to help improve the new task learning.", "A heuristic NB method was also used in (Wang et al., 2019).", "Xia et al. (2017) presented a LL approach based on voting of individual task classifiers.", "All these works do not use neural networks, and are not concerned with the CF problem.", "Shu et al. (2017) used LL for aspect extraction, which is a different problem.", "Wang et al. (2018) used LL for ASC, but improved only the new task and did not deal with CF.", "Existing CL systems SRK (Lv et al., 2019), KAN (Ke et al., 2020b) and L2PG (Qin et al., 2020) are for document sentiment classification, but not ASC.", "Ke et al. (2020a) also performed transfer in the image domain.", "Recently, capsule networks (Hinton et al., 2011) have been used in sentiment classification and text classification (Chen and Qian, 2019; Zhao et al., 2019).", "But they have not been used in CL. 3 Preliminary This section introduces BERT, Adapter-BERT and Capsule Network as they are used in our model.", "BERT for ASC.", "Due to its superior performance, this work uses BERT (Devlin et al., 2019) and its transformer (Vaswani et al., 2017) architecture as the base.", "We also adopt the ASC formulation in (Xu et al., 2019), where the aspect term and review sentence are concatenated via [SEP] .", "The sentiment polarity is predicted on top of the [CLS] token.", "Although BERT can achieve impressive performance on a single ASC task, its architecture and fine-tuning paradigm are not suitable for CL (see Sec. 1).", "Experiments show that it performs very poorly for CL (Sec. 
5.4).", "We found that Adapter-BERT (Houlsby et al., 2019) is a better fit for CL.", "Adapter-BERT.", "Adapter-BERT basically inserts a 2-layer fully-connected network (adapter) in each transformer layer of BERT (see Figure 1(A)).", "During training for the end-task, only the adapters and normalization layers are trained, no change to any other BERT parameters, which is good for CL because fine-tuning BERT itself causes serious forgetting.", "Adapter-BERT achieves similar performances to fine-tuned BERT (Houlsby et al., 2019).", "We propose to exploit the adapter idea and the capsule network to achieve effective CL for ASC tasks.", "Capsule Network.", "Capsule network (CapsNet) is a relatively new classification architecture (Hin-ton et al., 2011; Sabour et al., 2017).", "Unlike CNN, CapsNet replaces the scalar feature detectors with vector capsules that can preserve additional information such as position and thickness in images.", "A typical CapsNet has two capsule layers.", "The primary layer stores low-level feature maps and the class layer produces the probability for classification with each capsule corresponding to one class.", "It uses a dynamic routing algorithm to enable each lower level capsule to send its output to the similar (or agreed, computed by dot product) higher level capsule.", "This is the key property that we exploit to identify and group similar tasks and their shared features or knowledge.", "Note that the proposed B-CL does not adopt the whole capsule network as we are only interested in the capsule layers and dynamic routing instead of Figure 1: (A).", "Recall the proposed B-CL aims to achieve (1) knowledge transfer between related old tasks and the new task through knowledge sharing and (2) forgetting avoidance through preventing task specific knowledge of previous tasks from being overwritten by the new task learning.", "Inspired by Adapter-BERT, we propose the continual learning adapters (CLA) to replace the adapters in Adapter-BERT to enable CL as in Figure 1(B) to achieve BERT based continual learning for ASC.", "The architecture of CLA is shown in Figure 2(A).", "It contains two modules: (1) knowledge sharing module (KSM) for identifying and exploiting shareable knowledge from the similar previous tasks and the new task, and (2) task specific module (TSM) for learning task specific neurons and protecting them from being updated by the new task.", "CLA takes two inputs: (1) hidden states h ( t ) from the feed-forward layer inside a transformer layer and (2) task ID t .", "The outputs are hidden states with features good for the t -th task.", "KSM leverages capsule layers (see below) and dynamic routing to group similar tasks and the shareable knowledge, whereas TSM takes advantage of task mask (TM) to protect neurons for a particular task and leave other neurons free.", "Those free neurons are later used by TSM for a new task.", "Since TMs are differentiable, the whole system B-CL can be 4749 trained end-to-end.", "We detail each module below.", "KSM groups similar tasks and shared knowledge (features) among them to enable knowledge transfer among similar tasks.", "This is achieved through two capsule layers ( task capsule layer and knowledge sharing capsule layer ) and the dynamic routing algorithm of the capsule network.", "Each capsule in TCL represents a task and TCL prepares low-level features derived from each task (Figure 2(A)).", "As such, a capsule is added to TCL for every new task.", "This incremental growing is efficient and easy because these 
"Let $h^{(t)} \in \mathbb{R}^{d_t \times d_e}$ be the input of CLA, where $d_t$ is the number of tokens and $d_e$ the number of dimensions.", "Let the set of tasks learned so far be $\mathcal{T}_{prev}$ (before learning the new task $t$) and $|\mathcal{T}_{prev}| = n$.", "In TCL, we have $n + 1$ different capsules representing all past $n$ learned tasks as well as the new task $t$.", "The capsule for the $i$-th ($i \le n + 1$) task is $p^{(t)}_i = f_i(h^{(t)})$ (1), where $f_i(\cdot) = \mathrm{MLP}_i(\cdot)$ denotes a 2-layer fully-connected network.", "Each knowledge sharing capsule in KCL captures those tasks (i.e., their task capsules $\{p^{(t)}_i\}_{1}^{n+1}$) with similar features or shared knowledge.", "This is automatically achieved by the dynamic routing algorithm.", "Recall dynamic routing encourages each lower-level capsule (a task capsule in our case) to send its output to the similar (or \"agreed\") higher-level capsule (a knowledge sharing capsule in our case).", "Essentially, the similar task capsules (with many shared features) are clustered together by higher coefficients (which determine how much a task capsule can go to the next layer) while dissimilar tasks (with few shared features) are blocked via low coefficients.", "Such clustering identifies the shared features or knowledge from multiple task capsules and also helps backward transfer across the similar tasks.", "Each task capsule $i$ produces a temporary feature for knowledge sharing capsule $j$: $u^{(t)}_{j|i} = W_{ij} p^{(t)}_i$ (2), where $W_{ij} \in \mathbb{R}^{d_s \times d_k}$ is the weight matrix, and $d_s$ and $d_k$ are the dimensions of task capsule $i$ and knowledge sharing capsule $j$.", "The number of knowledge sharing capsules is a hyperparameter detailed in the experiment section.", "The temporary features are summed up with weights $c^{(t)}_{ij}$ to obtain the initial knowledge sharing capsule $s^{(t)}_j$: $s^{(t)}_j = \sum_i c^{(t)}_{ij} u^{(t)}_{j|i}$ (3), where the $c^{(t)}_{ij}$ are coupling coefficients that sum to 1; we detail how to compute them later.", "Note that the task capsule for each task in Eq. 1 is mapped to the knowledge sharing capsule in Eq. 3, and $c^{(t)}_{ij}$ indicates how informative the representation of the $i$-th task is to the $j$-th knowledge sharing capsule.", "As a result, a knowledge sharing capsule can represent diverse sharable knowledge.", "For those tasks with a very low $c^{(t)}_{ij}$, their representations are less considered in the $j$-th knowledge sharing capsule.", "This makes sure only task capsules for tasks that are salient or similar to the new task are used and the other task capsules are ignored (and thus protected), so as to learn more general shareable knowledge.", "Recall that the ASC tasks are similar and thus such learning of task-sharing features can be very important.", "Note that in backpropagation, the dissimilar tasks with low $c^{(t)}_{ij}$ are updated with a low gradient while the similar tasks with high $c^{(t)}_{ij}$ are updated with a larger gradient.", "This encourages backward transfer across similar tasks.", "Dynamic Routing.", "The coupling coefficient in Eq. 3 is essential for the quality of shareable knowledge.", "It is computed by a \"routing softmax\": $c^{(t)}_{ij} = \exp(b^{(t)}_{ij}) / \sum_o \exp(b^{(t)}_{io})$ (4), where each $b_{ij}$ is the log prior probability showing how salient or similar a task capsule $i$ is to a knowledge sharing capsule $j$.", "It is initialized to 0, indicating no salient connection between them at the beginning.",
"We apply the dynamic routing algorithm in (Sabour et al., 2017) to update $b_{ij}$: $b^{(t)}_{ij} \leftarrow b^{(t)}_{ij} + a^{(t)}_{ij}$ (5), where $a_{ij}$ is the agreement coefficient (see below).", "Figure 2: (A) Architecture of CLA (the skip-connection is not shown for clarity). (B) Illustration of task masking: a (learnable) task mask is applied after the activation function to selectively activate a neuron (or feature); the two rows of each task correspond to $k^{(t)}_0$ and $k^{(t)}_1$ in TSM; before training, cells with 0's are the neurons to be protected (masked) and cells without a number are free (unused) neurons; after training, cells with 1's are neurons important for the current task, which are used as a mask for the future; cells with more than one color are shared by more than one task, and 0 cells without a color are not used by any task.", "Intuitively, this step tends to aggregate the similar (or agreed) tasks on a knowledge sharing capsule with a higher agreement coefficient $a_{ij}$ and thus a higher logit $b^{(t)}_{ij}$ (Eq. 5) or coupling coefficient $c^{(t)}_{ij}$ (Eq. 4).", "The agreement coefficient is computed as $a^{(t)}_{ij} = u^{(t)}_{j|i} \cdot v^{(t)}_j$ (6), where $v^{(t)}_j$ is a normalized representation obtained by applying the non-linear \"squash\" function (Sabour et al., 2017) to $s^{(t)}_j$ (for the first task, $s^{(t)}_j = u^{(t)}_{j|i}$): $v^{(t)}_j = \frac{\|s^{(t)}_j\|^2}{1 + \|s^{(t)}_j\|^2} \frac{s^{(t)}_j}{\|s^{(t)}_j\|}$ (7), where the length of $v^{(t)}_j$ is normalized to [0, 1] to represent the active probability of knowledge sharing capsule $j$.", "Finally, note that the dynamic routing procedure (Eq. (3)-(7)) is repeated for $r$ iterations.",
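Putting Eq. (3)-(7) together, here is a minimal routing sketch in PyTorch, assuming the predictions u_{j|i} of Eq. (2) are precomputed:

import torch
import torch.nn.functional as F

def squash(s, dim=-1, eps=1e-8):
    # Eq. (7): scale the length into [0, 1) while keeping the direction
    n2 = (s * s).sum(dim=dim, keepdim=True)
    return (n2 / (1.0 + n2)) * s / torch.sqrt(n2 + eps)

def dynamic_routing(u, r=3):
    # u: [n_tasks, n_share, d_k], the predictions u_{j|i} from Eq. (2)
    b = torch.zeros(u.shape[0], u.shape[1])        # log priors, init to 0
    for _ in range(r):
        c = F.softmax(b, dim=1)                    # Eq. (4): routing softmax
        s = (c.unsqueeze(-1) * u).sum(dim=0)       # Eq. (3): weighted sum
        v = squash(s)                              # Eq. (7)
        b = b + (u * v.unsqueeze(0)).sum(dim=-1)   # Eq. (5)-(6): agreement
    return v, c

v, c = dynamic_routing(torch.randn(4, 3, 64))      # 4 tasks, 3 capsules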
4).", "The datasets are from 4 sources: (1) HL5Domains (Hu and Liu, 2004) with reviews of 5 products; (2) Liu3Domains (Liu et al., 2015) with reviews of 3 products; (3) Ding9Domains (Ding et al., 2008) with reviews of 9 products; and (4) SemEval14 with reviews of 2 products SemEval 2014 Task 4 for laptop and restaurant.", "For (1), (2) and (3), we split about 10% of the original data as the validation data, another about 10% of the original data as the testing data.", "For (4), we use 150 examples from the training set for validation.", "To be consistent with existing research 4752 Data source Task/domain Train Validation Test Liu3domain Speaker 352 44 44 Router 245 31 31 Computer 283 35 36 HL5domain Nokia6610 271 34 34 Nikon4300 162 20 21 Creative 677 85 85 CanonG3 228 29 29 ApexAD 343 43 43 Ding9domain CanonD500 118 15 15 Canon100 175 22 22 Diaper 191 24 24 Hitachi 212 26 27 Ipod 153 19 20 Linksys 176 22 23 MicroMP3 484 61 61 Nokia6600 362 45 46 Norton 194 24 25 SemEval14 Rest.", "Here we borrow the hard attention idea in (Serr et al., 2018) and leverage the task ID embedding to the train the task mask.", "For a task ID t , its embedding e ( t ) l consists of differentiable deterministic parameters that can be learned together with other parts of the network.", "It is trained for each layer in TSM.", "To generate the task mask m ( t ) l from e ( t ) l , Sigmoid is used as a pseudo-gate function and a positive scaling hyperparameter s is applied to help training.", "The m ( t ) l is computed as follows: m ( t ) l = ( se ( t ) l ) .", "Note that the neurons in m ( t ) l may overlap with those in other m ( i prev ) l s from previous tasks showing some shared knowledge.", "Given the output of each layer in TSM, k ( t ) l , we element-wise multiply k ( t ) l m ( t ) l .", "The masked output of the last layer k ( t ) is fed to the next layer of the BERT with a skip-connection (see Figure 1).", "After learning task t , the final m ( t ) l is saved and added to the set { m ( t ) l } .", "For each past task i prev T prev , its mask m ( i prev ) l indicates which neurons are used by that task and need to be protected.", "In learning task t , m ( i prev ) l is used to set the gradient g ( t ) l on all used neurons of the layer l in TSM to 0.", "Before modifying the gradient, we first accumulate all used neurons by all previous tasks' masks.", "Since m ( i prev ) l is binary, we use max-pooling to achieve the accumulation: m ( t ac ) l = MaxPool ( { m ( i prev ) l } ) .", "(9) The term m ( t ac ) l is applied to the gradient: g (cid:48) ( t ) l = g ( t ) l (1 m ( t ac ) l ) .", "(10)", "Those gradients corresponding to the 1 entries in m ( t ac ) l are set to 0 while the others remain unchanged.", "In this way, neurons in an old task are protected.", "Note that we expand (copy) the vector m ( t ac ) l to match the dimensions of g ( t ) l .", "Though the idea is intuitive, e ( t ) l is not easy to train.", "To make the learning of e ( t ) l easier and more stable, an annealing strategy is applied (Serr et al., 2018).", "That is, s is annealed during training, inducing a gradient flow and set s = s max during testing.", "Eq.", "8 approximates a unit step function as the mask, with m ( t ) l { 0 , 1 } when s .", "A training epoch starts with all neurons being equally active, which are progressively polarized within the epoch.", "Specifically, s is annealed as follows: s = 1 s max + ( s max 1 s max ) b 1 B 1 , (11) where b is the batch index and B is the total number of batches in an epoch.", 
"Illustration.", "In Figure 2(B), after learning the first task (Task 0), we obtain its useful neurons marked in orange with a 1 in each neuron, which serves as a mask in learning future tasks.", "In learning task 1, those useful neurons for task 0 are masked (with 0 in those orange neurons or cells on the left).", "The process also learns the useful neurons for task 1 marked in green with 1's.", "When task 2 arrives, all important neurons for tasks 0 and 1 are masked, i.e., its mask entries are set to 0 (orange and green before training).", "After training task 2, we see that task 2 and task 1 have a shared neuron that is important to both of them.", "The shared neuron is marked in both red and green.", "We now evaluate B-CL by comparing it with both non-continual learning and continual learning baselines.", "We follow the standard CL evaluation method in (Lange et al., 2019).", "We first present B-CL a sequence of aspect sentiment classification (ASC) tasks for it to learn.", "Once a task is learned, its training data is discarded.", "After all tasks are learned, we test all task models using their respective test data.", "In training each task, we use its validation set to decide when to stop training.", "Since B-CL works in the CL setting, we employ a set of 19 ASC datasets (reviews of 19 products) to produce sequences of tasks.", "Each dataset represents a task.", "5.3 Hyperparameters Unless otherwise stated, for the task sharing module, we employ 2 layers of fully connected network with dimensions 768 in TCL.", "We also employ 3 knowledge sharing capsules.", "The dynamic routing is repeated for 3 iterations.", "For the task-specific module, We employ the embedding with 2000 dimensions as the final and hidden layer of the TSM.", "The task ID embeddings have 2000 dimensions.", "A fully connected layer with softmax output is used as the classification heads in the last layer of the BERT, together with the categorical cross-entropy loss.", "We use 140 for s max in Eq.", "11, dropout of 0.5 between fully connected layers.", "The training of BERT, Adapter-BERT and B-CL follow that of (Xu et al., 2019).", "We adopt BERTBASE (uncased).", "The maximum length of the sum of sentence and aspect is set to 128.", "We use Adam optimizer and set the learning rate to 3e-5.", "For the SemEval datasets, 10 epochs are used and for all other datasets, 30 epochs are used based on results from validation data.", "All runs use the batch size 32.", "For the CL baselines, we train all models with the learning rate of 0.05.", "We early-stop training when there is no improvement in the validation loss for 5 epochs.", "The 4753 Scenario Category Model Acc.", "(Tang et al., 2016), examples belonging to the con-flict polarity (both positive and negative sentiments are expressed about an aspect term) are not used.", "Statistics of the 19 datasets are given in Table 2.", "We use 18 baselines, including both non-continual learning and continual learning methods.", "Non-continual Learning (NL) Baselines : NL setting builds a model for each task independently using a separate network.", "It clearly has no knowledge transfer or forgetting.", "We have 3 baselines under NL, (1) BERT , (2) Adapter-BERT and (3) W2V (word2vec embeddings).", "For BERT , we use trainable BERT to perform ASC (see Sec. 
3); Adapter-BERT adapts the BERT as in (Houlsby et al., 2019), where only the adapter blocks are trainable; W2V uses embeddings trained on the Amazon review data in (Xu et al., 2018) using Fast-Text (Grave et al., 2018).", "We adopt the ASC classification network in (Xue and Li, 2018), which takes both aspect term and review sentence as input.", "Continual Learning (CL) Baselines .", "CL setting includes 3 baselines without dealing with forgetting ( WDF ) and 12 baselines from 6 state-of-the art task incremental learning (TIL) methods dealing with forgetting.", "WDF baselines greedily learn a sequence of tasks incrementally without explicitly tackling forgetting or knowledge transfer.", "The 3 baselines under WDF are also (4) BERT , (5) Adapter-BERT and (6) W2V .", "2020b) and SRK (Lv et al., 2019) are TIL methods for document sentiment classification.", "HAT, UCL, EWC and OWM were originally designed for image classification.", "We replace their original MLP or CNN image classification network with CNN for text classification (Kim, 2014).", "HAT (Serr et al., 2018) is one of the best TIL methods with almost no forgetting.", "UCL (Ahn et al., 2019) is a latest TIL method.", "EWC (Kirkpatrick et al., 2016) is a popular regularization-based class incremental learning (CIL) method, which was adapted for TIL by only training on the corresponding head of the specific task ID during training and only considering the corresponding head's prediction during testing.", "OWM (Zeng et al., 2019) is a state-of-the-art CIL method, which we also adapt to TIL.", "From the 6 systems, we created 6 baselines using W2V embeddings with the aspect term added before the sentence so that the CL methods can take both aspect and the review sentence, and 6 baselines using BERT (Frozen) (which replaces W2V embeddings).", "Following the BERT formulation in Sec. 
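A hypothetical skeleton of the task-incremental evaluation protocol described above (learn tasks in sequence, discard each task's training data after use, then test every task with its own test set); `fit`/`evaluate` are stand-ins, not the authors' API.

```python
from statistics import mean

def run_task_sequence(model, tasks):
    """tasks: list of dicts with 'train', 'valid' and 'test' splits."""
    for t, task in enumerate(tasks):
        model.fit(task["train"], task["valid"], task_id=t)  # early stop on valid
        # task["train"] is never revisited afterwards (the CL setting)
    scores = [model.evaluate(task["test"], task_id=t)
              for t, task in enumerate(tasks)]
    return mean(scores)   # the paper further averages over 5 random task orders
```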
"Adapter-BERT is not applicable to them, as their architectures cannot use an adapter.", "For all the CL baselines, we use the code provided by their authors and adopt their original parameters (for EWC, we adopt the TIL variant implemented by (Serrà et al., 2018)).", "Since the order of the 19 tasks may have an impact on the final results, we randomly choose and run 5 task sequences and average their results.", "We compute both accuracy and Macro-F1 over the 3 polarity classes; Macro-F1 is the main metric, as the imbalanced classes bias accuracy.", "Table 3 gives the average results over the 19 tasks (or datasets) and the 5 random task sequences.", "Overall Performance.", "Table 3 shows that B-CL outperforms all baselines markedly.", "We discuss the detailed observations below: (1) For the non-continual learning (NL) baselines, BERT and Adapter-BERT perform similarly.", "W2V is poorer, which is understandable.", "(2) Comparing NL (non-continual learning) with WDF (continual learning without dealing with forgetting), we see that WDF is much better than NL for W2V.", "This indicates that ASC tasks are similar and share knowledge.", "Catastrophic forgetting (CF) is not a major issue for W2V.", "However, WDF is much worse than NL for BERT (with fine-tuning) and Adapter-BERT (with adapter-tuning).", "This is because BERT with fine-tuning learns highly task-specific knowledge (Merchant et al., 2020).", "While this is desirable for NL, it is bad for WDF, because task-specific knowledge is hard to share or transfer across tasks; WDF then causes serious forgetting (CF) in CL.", "(3) Unlike BERT and Adapter-BERT, our B-CL does very well in both forgetting avoidance and knowledge transfer (outperforming all baselines).", "The state-of-the-art CL baselines EWC, UCL, OWM and HAT perform better than WDF but are all significantly poorer than B-CL, as they have no mechanism to encourage knowledge transfer.", "KAN and SRK do knowledge transfer, but they are designed for document-level sentiment classification.", "They are weak, even weaker than the other CL methods.", "Effectiveness of Knowledge Transfer.", "We now look at the knowledge transfer of B-CL.", "For forward transfer (B-CL(forward) in Table 3), we use the test accuracy and MF1 of each task when it was first learned.", "For backward transfer (B-CL in Table 3), we use the final result after all tasks are learned.", "By comparing the results of NL with the forward-transfer results, we can see whether forward transfer is effective.", "By comparing the forward-transfer result with the backward-transfer result, we can see whether backward transfer improves performance further.", "The average forward (B-CL(forward)) and backward (B-CL) results are given in Table 3.", "They show that the forward transfer of B-CL is highly effective (forward results for the other CL baselines are given in the Appendix; B-CL's forward result outperforms all baselines' forward results).", "For backward transfer, B-CL slightly improves the performance.", "Ablation Experiments.", "The results of the ablation experiments are in Table 4.", "-KSM;-TSM means without both the knowledge sharing and task specific modules, i.e., simply deploying an Adapter-BERT.", "-KSM means without the knowledge sharing module.", "-TSM means without the task specific module.", "Table 4 clearly shows that the full B-CL system always gives the best overall results, indicating that every component contributes to the model.", "6 Conclusion This paper studies continual learning (CL) of a sequence of ASC tasks.", "It proposed a novel technique called B-CL that can be applied to a pre-trained BERT for CL.", "B-CL uses continual learning adapters and capsule networks to effectively encourage knowledge transfer among tasks and to protect task-specific knowledge.", "Experiments show that B-CL markedly improves ASC performance on both the new task and the old tasks via forward and backward knowledge transfer.", "Guangyi Lv, Shuai Wang, Bing Liu, Enhong Chen, and Kun Zhang. 2019. Sentiment classification by leveraging the shared knowledge from a sequence of domains. In DASFAA.", "This work was supported in part by two grants from the National Science Foundation: IIS-1910424 and IIS-1838770, a DARPA contract HR001120C0023, and a research gift from Northrop Grumman." ]
[ "abstain", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "other", "other", "method", "abstain", "abstain", "abstain", "result", "abstain", "objective", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "method", "method", "other", "other", "other", "abstain", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other" ]
[ "While online reviews of products and services become an important information source, it remains inefficient for potential consumers to exploit verbose reviews for fulfilling their information need.", "We propose to explore question generation as a new way of review information exploitation, namely generating questions that can be answered by the corresponding review sentences.", "One major challenge of this generation task is the lack of training data, i.e. explicit mapping relation between the user-posed questions and review sentences.", "To obtain proper training instances for the generation model, we propose an iterative learning framework with adaptive instance transfer and augmentation.", "To generate to the point questions about the major aspects in reviews, related features extracted in an unsupervised manner are incorporated without the burden of aspect annotation.", "Experiments on data from various categories of a popular E-commerce site demonstrate the effectiveness of the framework, as well as the potentials of the proposed review-based question generation task.", "The user-written reviews for products or service have become an important information source and there are a few research areas analyzing such data, including aspect extraction (Bing et al., 2016; Chen et al., 2013), product recommendation (Chelliah and Sarkar, 2017), and sentiment analysis (Li et al., 2018; Zhao et al., 2018a).", "Reviews reflect certain concerns or experiences of users on products or services, and such information is valuable for other The work described in this paper is partially supported by a grant from the Research Grant Council of the Hong Kong Special Administrative Region, China (Project Code: 14204418).", "potential consumers.", "However, there are few mechanisms assisting users for efficient review digestion.", "It is time-consuming for users to locate critical review parts that they care about, particularly in long reviews.", "We propose to utilize question generation (QG) (Du et al., 2017) as a new means to overcome this problem.", "Specifically, given a review sentence, the generated question is expected to ask about the concerned aspect of this product, from the perspective of the review writer.", "Such question can be regarded as a reading anchor of the review sentence, and it is easier to view and conceive due to its concise form.", "As an example, the review for a battery case product in Table 1 is too long to find sentences that can answer a user question such as How long will the battery last?.", "Given the generated questions in the right column, it would be much easier to find out the helpful part of the review.", "Recently, as a topic attracting significant research attention, question generation is regarded as a dual task of reading comprehension in most works, namely generating a question from a sentence with a fixed text segment in the sentence designated as the answer (Duan et al., 2017; Sun et al., 2018).", "Two unique characteristics of our review-based question generation task differentiate it from the previous question generation works.", "First, there is no review-question pairs available for training, thus a simple Seq2Seq-based question generation model for learning the mapping from the input (i.e. review) to the output (i.e. question) cannot be applied.", "Even though we can easily obtain large volumes of user-posed review sets and question sets, they are just separate datasets and cannot provide any supervision of input-output mapping (i.e. 
review-question pair).", "The second one is that different from the traditional question generation, the generated question from a review sentence will not simply take a fixed text segment in the review as its Review Question It doesn't heat up like most of the other ones, and I was completely fascinated by the ultra light and sleek design for the case.", "Before I was using the Mophie case but I couldn't wear it often because it was like having a hot brick in your pocket, hence I had to always leave it at home.", "On the contrary, with PowerBear, I never take it off because I can't even tell the difference.", "Also it is build in a super STRONG manner and even though I dropped my phone a few times, its shock resistant technology won't let a single thing happen to the case or the phone.", "The PowerBear case became an extension to my phone that I never have to take off because when I charge it at night, it charges both my phone and the case.", "I have battery life for more than two days for normal use, i.e. not power-consuming gaming.", "Does this make the phone warm during charging?", "Have any of you that own this had a Mophie?", "Does this give protection to the", "phone?Canthis charge the phone and the extra battery at the same", "time?Howmany days it can last?", "answer.", "The reason is that some reviews describing user experiences are highly context-sensitive.", "For the example in Table 1, for the review I have battery life for more than two days for normal use, i.e. not power-consuming gaming. and its corresponding example question How many days it can last?, obviously the text segment more than two days is a less precise answer, while the whole review sentence is much more informative.", "In some other case, even such less precise answer span cannot be extracted from the review sentence, e.g. for the example question Does this give protection to the phone? and the review sentence Also it is ... even though I dropped my phone ..., its shock resistant technology won't let a single thing happen to the case or the phone..", "Of course here, a simple Yes or No answer does not make much sense as well, while the whole review sentence is a vivid and informative answer.", "The above two unique characteristics raise two challenges for our task.", "The first challenge, namely lacking review-question pairs as training instances, appears to be intractable, particularly given that the current end-to-end models are very data-hungry.", "One instant idea is to utilize user-posed (question, answer) pairs as substitute for training.", "However, several instance-related defects hinder the learned generation model from being competent for the review-based question generation.", "Some answers are very short, e.g. 
"Another defect is that some verbose answers contain irrelevant content, especially for subjective questions.", "To handle this challenge, we propose a learning framework with adaptive instance transfer and augmentation.", "Firstly, a generation model pre-trained on user-posed answer-question pairs is used as an initial question generator.", "A ranker is designed to work together with the generator to improve the training instance set, distilling it by removing unsuitable answer-question pairs to avoid negative transfer (Pan and Yang, 2009), and augmenting it (Kobayashi, 2018) by adding suitable review-question pairs.", "For selecting suitable reviews for question generation, the ranker considers two factors: the major aspects in a review and the review's suitability for question generation.", "The two factors are captured via a reconstruction objective and a reinforcement objective whose reward is given by the generator.", "Thus, the ranker and the generator are iteratively enhanced, and the adaptively transferred answer-question pairs and the augmented review-question pairs gradually relieve the data-lacking problem.", "In accordance with the second characteristic of our task, it is plausible to regard a review sentence or clause as the answer to the corresponding question originating from it.", "Such treatment brings in the second challenge: how can we guarantee that the generated question concentrates on the critical aspect mentioned by the review sentence?", "For example, a question like 'How was the experience for gaming?' is not a favourable generation for 'I have battery life for more than two days for normal use, i.e. not power-consuming gaming.'.", "To solve this problem, we incorporate aspect-based feature discovery in the ranker, and we integrate the aspect features and an aspect pointer network in the generator.", "The incorporation of such aspect-related features and structures helps the generator focus on critical product aspects rather than less important parts, which complies with real user-posed questions.", "To sum up, our main contributions are threefold.", "(1) A new practical task, question generation from reviews without annotated instances, is proposed; it has good potential for multiple applications.", "(2) A novel adaptive instance transfer and augmentation framework is proposed for handling the data-lacking challenge in the task.", "(3) Review-based question generation is conducted on E-commerce data of various product categories.", "Question generation (QG) is an emerging research topic due to its wide application scenarios, such as education (Wang et al., 2018), goal-oriented dialogue (Lee et al., 2018), and question answering (Duan et al., 2017).", "The preliminary neural QG models (Du et al., 2017; Zhou et al., 2017; Du and Cardie, 2017) outperform rule-based methods relying on hand-crafted features, and various models have since been proposed to further improve performance by incorporating question type (Dong et al., 2018), answer position (Sun et al., 2018), long passage modeling (Zhao et al., 2018b), question difficulty (Gao et al., 2019), and to-the-point context (Li et al., 2019).", "Some works try to find possible answer text spans to facilitate learning (Wang et al., 2019).", "Question generation models can be combined with the dual task, i.e., reading comprehension or question answering, with various motivations, such as improving auxiliary task performance (Duan et al., 2017; Yang et al., 2017; Golub et al., 2017), collaborating QA and QG models (Tang et al., 2018, 2017), and unified learning (Xiao et al., 2018).", "Although question generation has been applied to other datasets, e.g., Wikipedia (Du and Cardie, 2018), most existing QG works treat it as a dual task of reading comprehension (Yu et al., 2018; Cui et al., 2017), namely generating a question from a piece of text where a certain text span is marked as the answer, with several exceptions where only sentences without answer spans are used for generating questions (Du et al., 2017; Chali and Baghaee, 2018).", "Such a generation setting is not suitable for reviews, due to the lack of (question, review) pairs and the improper assumption of a text-span answer, as mentioned above.", "There are works training question generation models with user-written QA pairs from E-commerce sites (Hu et al., 2018; Chali and Baghaee, 2018), but their practicality is limited since questions are generated only from answers instead of reviews.", "Transfer learning (Pan and Yang, 2009; Tan et al., 2017; Li et al., 2020) refers to a broad scope of methods that exploit knowledge across domains to handle tasks in a target domain.", "A few terms describe specific methods in this paradigm, e.g., self-taught learning (Raina et al., 2007), domain adaptation (Long et al., 2017), etc.", "Based on what is transferred, transfer learning is categorized into four groups (Pan and Yang, 2009): instance transfer, feature representation transfer, parameter transfer, and relational knowledge transfer.", "Our learning framework can be regarded as a case of instance transfer with iterative instance adaptation and augmentation.",
"To handle the aforementioned issues, we propose an Adaptive Instance Transfer and Augmentation (AITA) framework, as shown in Figure 1.", "Since the review-related processing is always sentence-based, we use 'review' for short to refer to 'review sentence' in this paper.", "Its two components, the ranker and the generator, are learned iteratively.", "Initially, AITA simply transfers all available (question, answer) pairs and trains a generator.", "It then iteratively enhances the generator with the help of the ranker.", "The ranker takes a (question, answer) pair and a review as input and calculates a ranking score $s$.", "Thus, it can rank all reviews for a given QA pair.", "The ranking objective incorporates the reward provided by the generator, which helps find suitable reviews to form (review, question) pairs for training (i.e. augmenting the training data).", "Meanwhile, the reward from the generator also helps remove unsuitable QA pairs from training, making the transfer more adaptive.", "Note that the ranker also learns to model two hidden aspect-related variables for the review, which help the generator ask about the major aspects in the review.", "Such an iterative instance manipulation procedure gradually transfers and augments the training set for review-based question generation.", "There are two pieces of input text for the ranker.", "The first is the concatenation of a (question, answer) pair, denoted $qa$, and the second is a review sentence $r$.", "$qa$ and $r$ are associated with the same product.", "Since the ranker is responsible for the instance augmentation that provides (question, review) pairs, it is trained to learn a score $s(qa, r)$ that can be used to return suitable $r$'s for a given $qa$.", "The inputs $qa$ and $r$ are encoded with two Transformer encoders with the same structure and partially shared parameters, to leverage the advantage of multi-head self-attention in modeling word associations without considering term position.", "An input ($qa$ or $r$) is written as a matrix $E = [e^{T}_{1}, \ldots, e^{T}_{n}]^{T}$, where $e$ is a word embedding and $n$ is the text length.", "The number of heads in the multi-head self-attention is denoted $m$, and the output of the $j$-th head is written as: $Q_j, K_j, V_j = E W^{Q}_{j}, E W^{K}_{j}, E W^{V}_{j}$ (1) and $\text{head}_j(E) = \text{softmax}(\frac{Q_j K_j^{T}}{\sqrt{d}}) V_j$, (2) where $d$ is the dimension of the word embeddings.", "The outputs of the different heads are concatenated, and the encoding of the $i$-th word is written as $h_i = [\text{head}_{1,i}; \ldots; \text{head}_{m,i}]$.",
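As a concrete illustration of Eqs. (1)-(2), a minimal PyTorch sketch (ours, not the authors' code) of one self-attention head and the per-word concatenation across heads; dimensions and names are illustrative.

```python
import math
import torch
import torch.nn as nn

class SelfAttentionHead(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.WQ, self.WK, self.WV = (nn.Linear(d, d, bias=False) for _ in range(3))

    def forward(self, E):                    # E: (n, d) word-embedding matrix
        Q, K, V = self.WQ(E), self.WK(E), self.WV(E)               # Eq. (1)
        A = torch.softmax(Q @ K.T / math.sqrt(E.size(-1)), dim=-1)
        return A @ V                                               # Eq. (2)

class MultiHeadEncoder(nn.Module):
    def __init__(self, d, m=3):              # m = 3 heads, as in the paper
        super().__init__()
        self.heads = nn.ModuleList(SelfAttentionHead(d) for _ in range(m))

    def forward(self, E):
        # h_i = [head_1,i ; ... ; head_m,i]: concatenate head outputs per word
        return torch.cat([head(E) for head in self.heads], dim=-1)
```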
"To obtain a sentence representation considering the complete semantics, we apply a global attention layer on the output of the Transformer encoder: $h = \sum_{i=1}^{n} \alpha_i h_i$, (3) where the attention weight $\alpha_i = \exp(h_i M \bar{h}) / Z$, $Z$ is a normalization term, and $\bar{h} = \sum_i h_i / n$.", "The parameter matrix $M$ is shared by the encoders for both $qa$ and $r$, to capture the common attention features across them.", "After encoding $qa$ and $r$ as $h(qa)$ and $h(r)$, a vector $g(qa, r)$ is formed by concatenating $h(qa)$, $h(r)$ and their difference: $g(qa, r) = [h(qa), h(r), |h(qa) - h(r)|]$.", "The review ranking score $s(qa, r)$ is calculated as: $s(qa, r) = \sigma(W_s\, g(qa, r) + b_s)$, (4) where $\sigma$ is the sigmoid function.", "To learn an appropriate $s(qa, r)$, we face a major challenge: there are no ground-truth labels for (question, review).", "Our solution takes the generator in our framework as an agent that provides a reward for guiding the learning of the ranker.", "The generator is initially trained with (question, answer) data and is gradually updated with adapted and augmented training instances, so that its rewards can reflect a review's ability to generate the corresponding question.", "Specifically, we propose a reinforcement objective that makes use of the reward from the generator, denoted $\text{reward}_G(r, q)$.", "For each pair of question and review, we take the normalized $\log ppl(q|r)$ of the generator as the reward: $\text{reward}_G(r, q) = \frac{\log ppl(q|r)}{\sum_{r' \in R_{qa}} \log ppl(q|r')}$, (5) where $R_{qa}$ is the set of reviews under the same product as $qa$, and $\log ppl(q|r)$ is the log perplexity of generating question $q$ from review $r$: $\log ppl(q|r) = -\frac{1}{|q|} \sum_{t \in [1, |q|]} \log p_G(q_t | r, q_1 \ldots q_{t-1})$.", "The reinforcement objective for the ranker is to maximize the average reward over all the reviews given a question.", "The sampling probabilities for reviews are obtained from the normalized ranking scores, $p(r|qa) = s(qa, r) / Z_{qa}$, where $Z_{qa} = \sum_{r' \in R_{qa}} s(qa, r')$.", "The loss function is: $L_g(qa, r) = -\mathbb{E}_{r \sim p(r|qa)}[\text{reward}_G(r, q)]$. (6)", "The gradient of the above objective is intractable to compute exactly.", "As an approximation that performs well in the iterative algorithm, the normalization term $Z_{qa}$ is held fixed during the calculation of the policy gradient: $\nabla L_g(qa, r) = -\sum_{r} \nabla s(qa, r)\, \text{reward}_G(r, q) / Z_{qa}$.", "Regularization with Unsupervised Aspect Extraction.", "Product aspects usually play a major role in product questions, answers and reviews, since they are the discussion focus of such text content.", "Thus, aspects can act as connections in modeling the input pair of $qa$ and $r$ via the partially shared structure.", "To help the semantic vector $h$ in Eqn 3 capture the salient aspects of reviews, an autoencoder module is connected to the encoding layer to reconstruct $h$.", "Together with the matrix $M$, the autoencoder can be used to extract salient aspects from reviews.", "Note that this combined structure is similar to the ABAE model (He et al., 2017), which has been shown effective for unsupervised aspect extraction.", "Compared with supervised aspect detection methods, such an unsupervised module avoids the burden of aspect annotation for different product categories, and our experiments demonstrate that regularization based on this module is effective.",
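A small sketch (our reading of Eq. (5) as printed, not the released code) of the generator-given reward: per-token log-probabilities from any seq2seq generator are averaged into a log-perplexity, then normalized over the candidate reviews $R_{qa}$ of the same product.

```python
import torch

def log_ppl(token_logprobs):
    # token_logprobs: per-token log p_G(q_t | r, q_<t) for one (review, question)
    # pair; lower log-perplexity means q is easier to generate from r.
    return -torch.stack(token_logprobs).mean()

def rewards_for_question(logprob_lists):
    # Eq. (5): normalize each candidate review's log-perplexity over all
    # candidates R_qa of the same product (sign conventions follow the text).
    ppls = torch.tensor([log_ppl(lp) for lp in logprob_lists])
    return ppls / ppls.sum()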
"Specifically, $h$ is mapped to an aspect distribution $p$ and then reconstructed: $p = \text{softmax}(W_p h + b_p)$ (7) and $h' = pA$, (8) where each dimension of $p$ stands for the probability that the review contains the corresponding aspect, $h'$ is the reconstruction of the review representation, and $A$ is a learnable parameter matrix.", "Note that we define aspects as implicit aspect categories, namely clusters of associated product attributes, as commonly used in unsupervised aspect extraction (Wang et al., 2015; He et al., 2017).", "The reconstruction objective is written as: $L_{rec}(qa, r) = [h(r) - h'(r)]^{2} / 2$. (9)", "Only the reconstruction of review representations is considered, since we focus on discovering aspects in reviews.", "In this way, the aspect-based reconstruction forces $h$ to focus on the salient aspects that facilitate reconstruction.", "The final loss function of the ranker is regularized to: $L(qa, r) = L_g(qa, r) + \gamma L_{rec}(qa, r)$, (10) where $\gamma$ is a hyper-parameter.", "(Footnote 1: We simplified the objective of the ABAE model by eliminating the additional regularization term, which is not necessary when combining $L_{rec}(qa, r)$ and $L_g(qa, r)$.)", "We adapt the Seq2Seq model for the aspect-focused generation model, which is updated gradually via the transferred and augmented instances.", "With the help of the aspect-based variables learned by the ranker, the generator can generate questions reflecting the major aspect of the review.", "Aspect-enhanced Encoding.", "To emphasize the words related to salient aspects, the attention weight $\alpha_i$ obtained by the ranker is incorporated into the word embedding.", "Given an input review sentence, we obtain the extended word embedding $e'_i$ at position $i$: $e'_i = [e_i, e^{POS}_i, e^{NER}_i, \alpha_i]$, (11) where $e_i$ is the pre-trained word embedding, $e^{POS}_i$ is the one-hot POS tag of the $i$-th word, $e^{NER}_i$ is a BIO feature indicating whether the $i$-th word is a named entity, and $\alpha_i$ is the aspect-based weight of the $i$-th word.", "A Bi-LSTM is adopted as the basic encoder of the generator, encoding the $i$-th word as the concatenation of the hidden states in both directions: $h^{g}_i = [\overrightarrow{h_i}, \overleftarrow{h_i}]$.", "Decoding with an Aspect-aware Pointer Network.", "A pointer network, i.e., copy mechanism, can significantly improve the performance of text generation.", "In our task, in addition to the word-level hidden state in the decoder, the overall aspect distribution of the review can also provide clues on how likely the generator should copy corresponding review aspect words into the generated question.", "Here $s_t$ is the hidden state of the $t$-th question word and $c_t$ is the context encoding based on the attention weights $z_{tj}$.", "In the pointer network, for a particular position $t$ in the generated text, the word may be copied from a distribution based on the attention weights $z_t = \{z_{tj}\}$, where the copy probability is assigned according to the current hidden state $s_t$.", "We also consider the influence of the aspect distribution $p$ in the copy probability for interpolation: $\lambda_t = \sigma(p\, W_c\, s_t + b_c)$. (12)", "The incorporation of $p$ helps the pointer network consider the overall aspect distribution of the context, in addition to the semantics at the current position, when copying words.", "Finally, the $t$-th word is generated from the mixture of the two distributions: $p(q_t) = (1 - \lambda_t)\, p_0(q_t) + \lambda_t\, z_t$.",

Algorithm 1: Learning algorithm of AITA.
    Data: QA set $S_{qa} = \{(q, a)\}$; review set $S_r = \{r\}$.
    Result: $S$; generator trained with $S$.
    Prepare pairs of ($qa$, $r$) under each product; initialize the training set $S = S_{qa}$.
    For each epoch do:
      1. Train the generator with $S$.
      2. Compute $\text{reward}_G(qa, r)$ as the generator reward for each pair ($qa$, $r$) (each answer $a$ in the $qa$ pairs is regarded as a review for $q$).
      3. Adapt $S$ by removing instances with low reward.
      4. Train the ranker according to the objective in Eqn 10.
      5. Augment $S$ by adding instance pairs, i.e., ($q$, $r$) pairs with top $s(qa, r)$ in the ranker.
      6. Collect $\alpha$ and $p$ for the instances in $S$ from the ranker.
    End
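Algorithm 1 in code form: a hypothetical skeleton (the generator/ranker methods are stand-ins, not the authors' implementation) that makes the adapt-then-augment loop explicit; the per-epoch update size `k` follows the 5%-of-$|S_{qa}|$ setting reported later.

```python
def train_aita(S_qa, S_r, generator, ranker, epochs=10, k=None):
    """Hypothetical skeleton of Algorithm 1."""
    k = k or max(1, int(0.05 * len(S_qa)))
    S = list(S_qa)                                  # start from pure QA pairs
    for _ in range(epochs):
        generator.fit(S)                            # step 1
        rewards = {inst: generator.reward(inst) for inst in S}      # step 2
        S = sorted(S, key=rewards.get)[k:]          # step 3: drop low-reward pairs
        ranker.fit(S_qa, S_r, generator)            # step 4: objective of Eqn 10
        S += ranker.top_pairs(S_qa, S_r, k)         # step 5: add top (q, r) pairs
        aspects = {r: ranker.aspects(r) for _, r in S}   # step 6: alpha and p
    return S, generator
```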
"The purpose of our iterative learning, given in Algorithm 1, is to update the generator gradually via instance augmentation.", "The input data for the iterative learning consists of the transferred instance set of question-answer pairs $S_{qa}$, an unlabeled review set $S_r$, and an adaptation parameter.", "When the learning is finished, two outputs are produced: the final training instance set $S$ and the learned generator.", "The training set $S$ for the generator is initialized with $S_{qa}$.", "In each iteration of the algorithm, the generator is trained with the current $S$, and then $S$ is adapted accordingly.", "The ranker is trained based on the rewards from the generator, and is then used for instance augmentation in $S$.", "Thus, the training set $S$ is updated during the iterative learning, starting from a pure (question, answer) set.", "An analysis of the influence of the composition of $S$, i.e., the numbers of instances of the two types, is presented in Section 4.5.", "There are two kinds of updates to the instance set $S$: (1) adaption, removing ($q$, $a$) pairs with low generator reward, in order to avoid negative transfer; (2) augmentation, adding ($q$, $r$) pairs that are top-ranked by the ranker, in order to increase the proportion of suitable review-question instances in the training set.", "The instance-number hyperparameter for removing and adding can be set according to the scale of $S_{qa}$; more details are given in our experimental settings.", "To guarantee effective instance manipulation, two interactions exist between the generator and the ranker.", "First, the aspect-related variables for reviews obtained by the ranker are part of the generator input.", "Second, a reward from the generator is part of the learning objective of the ranker, teaching the ranker to capture the reviews suitable for generating the corresponding question.", "We exploit the user-written QA dataset collected in (Wan and McAuley, 2016) and the review set collected in (McAuley et al., 2015) as our experimental data.", "The two datasets were collected from Amazon.com separately.", "We filter and merge the two datasets to obtain products whose associated QA pairs and reviews can both be found.", "The statistics of our datasets can be found in Table 2, where the number of products for several very large product categories is restricted to 5000.", "The average lengths show that whole reviews tend to be very long.", "This justifies our assumption that it is not easy for users to exploit reviews, and that questions, with their short length, can be a good catalogue for viewing reviews.", "To test our question generation framework, we manually labeled 100 ground-truth review-question pairs for each product category.", "6 volunteers were asked to select user-posed questions and the corresponding review sentences that can serve as answers.", "Specifically, the volunteers are given pairs of question and review, and only consider the relevance between the question and the review.",
Table 2: Data statistics (#p: products; #q: questions; #a: answers; #r: reviews; #(s): review sentences; L: average lengths).

              #p      #q      #a       #r       #(s)  |   L_q    L_a    L_r    L_s
    Auto     0.8k    5.5k    18.7k     9.4k     46.5k |  14.4   23.3   88.3   17.8
    Baby     1.9k   11.9k    38.7k    75.3k    450.7k |  15.2   22.9  106.4   17.8
    Beauty   2.5k   15.9k    53.7k    62.4k    338.6k |  13.1   22.0   88.6   16.3
    Phones   3.6k   23.8k    87.4k   104.5k    561.8k |  13.2   19.2   97.0   18.1
    Cloth    0.4k    0.30k   10.7k     6.9k     32.2k |  13.0   19.8   71.2   15.3
    Elec     5k     31.0k   101.2k   229.4k   1461.8k |  16.1   24.8  119.5   18.8
    Health   5k     32.4k   114.2k   136.9k    749.9k |  13.0   22.5   96.0   17.5
    Music    0.4k    2.7k     8.9k     5.2k     27.9k |  14.6   24.0   94.2   17.7
    Sports   5k     34.2k   120.6k   122.6k    648.5k |  13.6   22.3   91.0   17.2
    Tools    4.1k   29.8k   104.1k    70.7k    425.6k |  14.7   23.2  110.2   18.3

"The answers to the questions are also accessible, but they are only used to help the annotators understand the questions.", "All labeled pairs are validated by two experienced annotators with a good understanding of consumer information needs in E-commerce.", "For each product category, we train the AITA framework and use the learned generator for testing.", "Fixed 300-dimensional GloVe word embeddings (Pennington et al., 2014) are used as the basic word vectors.", "For all text, including questions, answers and reviews, we use StanfordNLP for tokenizing, lowercasing, and linguistic feature extraction, e.g., the NER and POS features for the encoder in the generator.", "In the ranker, the dimension of the aspect distribution is set to 20, and $\gamma$ in the final loss function (Eqn 10) is set to 0.8.", "In the multi-head self-attention, the number of heads is set to 3 and the dimension of Q, K, V is 300.", "The dimensions of the matrices can be set accordingly.", "The hidden dimension of the generator is set to 200.", "In the iterative learning algorithm, we set the number of epochs to 10 and the updating instance number to $0.05\,|S_{qa}|$.", "In testing, given a review $r$ as input to the generator, the additional input variables $\alpha(r)$ and $p(r)$ are obtained via the review encoder (Eqn 3) and aspect extraction (Eqn 8), which are question-independent.", "To test the effectiveness of our learning framework and the incorporation of aspects, we compare our method with the following models:", "G_a (Du et al., 2017): a sentence-based Seq2Seq generation model trained with user-written answer-question pairs.", "G_PNa (Wang et al., 2018): a pointer network is incorporated into the Seq2Seq decoding to decide whether to copy a word from the context or select from the vocabulary.", "G_PNar: review data is incorporated via a retrieval-based method; specifically, the most relevant review sentence for each question is retrieved via the BM25 method, and such review-question pairs are added to the training set.", "G_PNa+aspect (Hu et al., 2018): aspects are exploited in this model; we train the aspect module of our framework, i.e., only using the reconstruction objective, to obtain an aspect feature extractor for reviews, and the aspect features and distributions are then used in the same way as in our method.", "AITA refers to our proposed framework.", "AITA-aspect: all extracted aspect-related features are removed from AITA, as an ablation for evaluating the effectiveness of the unsupervised aspect module.", "For every product category, we run each model 3 times and report the average performance with four evaluation metrics: BLEU1 (B1), BLEU4 (B4), METEOR (MET) and ROUGE-L (RL).", "The results are demonstrated in Table 3.",

Table 3: Overall performance on question generation (per category, columns: BLEU1, BLEU4, METEOR, ROUGE-L).

    Automotive | Baby
    G_a           0.103 0.047 0.062 0.089 | 0.104 0.055 0.065 0.068
    G_PNa         0.162 0.090 0.091 0.140 | 0.153 0.088 0.087 0.195
    G_PNar        0.147 0.082 0.078 0.118 | 0.133 0.060 0.068 0.102
    G_PNa+aspect  0.165 0.090 0.093 0.140 | 0.157 0.088 0.091 0.203
    AITA-aspect   0.179 0.094 0.094 0.146 | 0.157 0.089 0.092 0.214
    AITA          0.184 0.097 0.099 0.148 | 0.167 0.089 0.094 0.221

    Beauty | Cell Phone
    G_a           0.133 0.088 0.118 0.218 | 0.203 0.125 0.130 0.104
    G_PNa         0.235 0.122 0.128 0.257 | 0.250 0.122 0.150 0.217
    G_PNar        0.194 0.098 0.119 0.205 | 0.215 0.117 0.136 0.141
    G_PNa+aspect  0.240 0.122 0.132 0.257 | 0.251 0.134 0.154 0.223
    AITA-aspect   0.240 0.127 0.132 0.257 | 0.261 0.139 0.184 0.230
    AITA          0.249 0.129 0.136 0.259 | 0.267 0.142 0.193 0.244

    Clothing & Jewelry | Electronics
    G_a           0.224 0.093 0.091 0.178 | 0.099 0.048 0.107 0.144
    G_PNa         0.283 0.134 0.118 0.227 | 0.124 0.069 0.131 0.171
    G_PNar        0.258 0.110 0.101 0.198 | 0.100 0.053 0.121 0.156
    G_PNa+aspect  0.298 0.139 0.125 0.241 | 0.120 0.069 0.126 0.171
    AITA-aspect   0.306 0.152 0.138 0.246 | 0.125 0.069 0.131 0.174
    AITA          0.316 0.157 0.145 0.263 | 0.127 0.073 0.131 0.175

    Health | Musical Instruments
    G_a           0.114 0.062 0.091 0.095 | 0.088 0.054 0.096 0.091
    G_PNa         0.130 0.080 0.089 0.108 | 0.114 0.110 0.121 0.119
    G_PNar        0.124 0.069 0.086 0.104 | 0.090 0.072 0.106 0.103
    G_PNa+aspect  0.133 0.100 0.123 0.175 | 0.118 0.110 0.130 0.192
    AITA-aspect   0.137 0.100 0.121 0.179 | 0.124 0.110 0.136 0.201
    AITA          0.142 0.109 0.132 0.194 | 0.129 0.112 0.141 0.205

    Sports & Outdoors | Tools
    G_a           0.079 0.046 0.042 0.064 | 0.098 0.059 0.093 0.105
    G_PNa         0.091 0.052 0.079 0.102 | 0.107 0.077 0.112 0.135
    G_PNar        0.087 0.050 0.071 0.083 | 0.100 0.072 0.103 0.119
    G_PNa+aspect  0.091 0.052 0.079 0.102 | 0.110 0.079 0.110 0.136
    AITA-aspect   0.094 0.052 0.080 0.102 | 0.112 0.079 0.116 0.142
    AITA          0.097 0.057 0.083 0.102 | 0.117 0.083 0.120 0.149
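For the metric computation described above, a small sketch of per-question BLEU1/BLEU4 scoring; the paper does not name its scoring toolkit, so using NLTK here is our assumption.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def bleu_scores(reference, hypothesis):
    # reference/hypothesis: whitespace-tokenized question strings
    ref, hyp = [reference.split()], hypothesis.split()
    smooth = SmoothingFunction().method1   # avoids zero scores on short questions
    b1 = sentence_bleu(ref, hyp, weights=(1, 0, 0, 0), smoothing_function=smooth)
    b4 = sentence_bleu(ref, hyp, weights=(0.25,) * 4, smoothing_function=smooth)
    return b1, b4
```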
"AITA achieves the best performance in all product categories on all evaluation metrics.", "The significant improvements over the other models demonstrate that our instance transfer and augmentation method can indeed reduce inappropriate answer-question pairs and provide helpful review-question pairs for the generator.", "The performance of G_a is very poor, due to the missing attention mechanism.", "Both G_PNa and G_PNa+aspect perform worse than ours, even though some product categories have large volumes of QA pairs (>100k), e.g., Electronics, Tools, etc.", "This indicates that answer-question instances alone are not capable of training a review-based question generator, because of the different characteristics of the answer set and the review set.", "G_PNar performs much worse than G_PNa, which proves that a simple retrieval method is not effective for merging the instances related to reviews and answers.", "AITA adapts and augments the QA set to select suitable review-question pairs, considering both aspect and generation suitability, resulting in a better generator.", "In addition, the effectiveness of the aspect features and the aspect pointer network is illustrated by the slight but stable improvement of G_PNa+aspect over G_PNa and by the performance drop of AITA-aspect in all categories.", "This proves that, even without precise aspect annotation, our unsupervised aspect-based regularization is helpful for improving generation.", "We conduct human evaluation on two product categories to study the quality of the generated questions.", "Two binary metrics, Relevance and Aspect, are used to indicate whether a question can be answered by the review and whether they share the same or a related product aspect.",
"The third metric, Fluency, with value set {1, 2, 3}, is adopted for judging question fluency: 1 means not fluent and 3 means very fluent.", "We selected 50 generated questions from each model and asked 4 volunteers for evaluation.",

Table 4: Performance of human evaluation.

    Clothing & Jewelry   Relevance  Aspect  Fluency
    G_PNa                  0.58      0.62    2.58
    G_PNar                 0.47      0.58    2.29
    G_PNa+aspect           0.66      0.72    2.76
    AITA                   0.80      0.80    2.86

    Cell Phone           Relevance  Aspect  Fluency
    G_PNa                  0.42      0.55    2.79
    G_PNar                 0.35      0.41    2.44
    G_PNa+aspect           0.58      0.63    2.83
    AITA                   0.72      0.72    2.90

"The average scores are reported in Table 4, which shows that our framework achieves the best performance on all metrics, especially Relevance, indicating that AITA can generate more accurate questions from reviews and thus facilitates exploiting reviews.", "Due to the incorporation of implicit aspect information, both AITA and G_PNa+aspect significantly outperform G_PNa on both Aspect and Relevance.", "Again, G_PNar, with its simple retrieval method for augmenting training instances, does not perform well.", "The blue sentences in Table 5 are from a long review discussing some important information about a watch (e.g., 'The entire length of the watch is 9 inches, but the effective length from the last hole to clasp is about 8 inches.'), and the questions generated by the different models are also given.", "These questions are more user-friendly, and potential consumers can browse them to quickly locate the information they care about.", "For example, if a user wants to know more about the battery replacement, the portion before the third sentence can be skipped.", "Comparing the questions generated by the three methods in Table 5, we find that the questions from AITA ask about the major aspects of the review sentences.", "G_PNa fails to capture the major aspects of the first three sentences, and the questions generated by G_PNa+aspect are not as concrete as ours, owing to the insufficient training instances.", "The training instance set for the generator, i.e., S in Algorithm 1, is initialized with the QA set and gradually adapted and augmented.", "Here, we investigate the effect of the composition of S on the generator performance at different epochs.", "As shown in Fig 2, for two product categories and two metrics, with the gradually changing training instance set S, the proportion of review-question (qr) instances in S starts at 0, and significant performance improvements are observed as the qr proportion gradually increases.", "The results remain stable once the qr proportion reaches 80%.", "We propose a practical task of question generation from reviews, whose major challenge is the lack of training instances.", "An adaptive instance transfer and augmentation framework is designed to handle the task via an iterative learning algorithm.", "Unsupervised aspect extraction is integrated for aspect-aware question generation.", "Experiments on real-world E-commerce data demonstrate the effectiveness of the training instance manipulation in our framework and the potential of the review-based question generation task." ]
[ "abstain", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective" ]
[ "Boston University nikzad@bu.edu", "Boston University isidora@bu.edu", "Mohammad Sadegh Rasooli University of Pennsylvania rasooli@upenn.edu", "University of Pennsylvania ccb@upenn.edu", "Derry Tanti Wijaya Boston University wijaya@bu.edu", "Abstract Neural Machine Translation (NMT) models have been observed to produce poor translations when there are few/no parallel sentences to train the models.", "In the absence of parallel data, several approaches have turned to the use of images to learn translations.", "Since images of words, e.g., horse may be unchanged across languages, translations can be identified via images associated with words in different languages that have a high degree of visual similarity.", "However, translating via images has been shown to improve upon text-only models only marginally.", "To better understand when images are useful for translation, we study image translatability of words, which we define as the translatability of words via images, by measuring intraand inter-cluster similarities of image representations of words that are translations of each other.", "We find that images of words are not always invariant across languages, and that language pairs with shared culture, meaning having either a common language family, ethnicity or religion, have improved image translatability (i.e., have more similar images for similar words) compared to its converse, regardless of their geographic proximity.", "In addition, in line with previous works that show images help more in translating concrete words, we found that concrete words have improved image translatability compared to abstract ones.", "Neural machine translation (NMT) for low-resource languages has drawn a lot of attention due to the increasing awareness of the lack of linguistic and geographic diversity in NLP research (Joshi et al., 2020; Orife et al.).", "Since parallel data for these languages is scarce, it necessitates the use of other data to help translation e.g., monolingual texts in unsupervised MT (Lample et al., 2018b,a,c; Artetxe et al., 2018) or images in multimodal MT (Barrault et al., 2018).", "Previous works on using images for translation typically accept that images are useful due to their language invariance (Rotman et al., 2018).", "Since everyday words such as chair denote concepts that exist independently of any language, images that ground their meanings should also be invariant to the language.", "However, to the best of our knowledge, this conjecture on image-language invariance has never been tested.", "As images' usefulness for translation has only been shown to be marginal (Specia et al., 2016; Elliott et al., 2017; Barrault et al., 2018), it is important to study this conjecture in relation to the characteristics of languages to understand when and to what extent images can aid translation.", "An alternative view would be that images may be different to some extent in different languages since they reflect the ways different people interact with these concepts; this may depend on where they live and the communities they live in (Evans and Levinson, 2009).", "For example, images of the word breakfast in different languages may reflect the different cuisines of the communities that speak the languages.", "While most multimodal MT datasets are limited to a small set of European languages that come from the same language family, and are spoken by communities that are culturally and geographically close, the Massively Multilingual Image Dataset (MMID) (Hewitt et al., 2018) is 
constructed specifically to facilitate large-scale multilingual research in translating words via images.", "MMID consists of up to 10K words and 100 images per word in 98 languages.", "This dataset provides an opportunity for us to examine how geographical and cultural relatedness between languages affects the translation of words via images.", "As the use of parallel data from related languages has been found to improve MT for low-resource languages (Zoph et al., 2016; Nguyen and Chiang, 2017; Dabre et al., 2017), we want to study whether the same extends to translation via images.", "Specifically, we want to explore whether the translatability of words between two languages via images is influenced by the cultural similarity and geographical proximity of their communities.", "A recent study (Thompson et al., 2020) has observed such correlations of culture and geography with the semantic alignment of word meanings between languages, measured through similarities in the word embeddings.", "We hypothesize that the same is true for images, and that the alignment of meanings conveyed via images coincides with culture and geography.", "In this work, we primarily define culture as the set of 'Language, Norms and Beliefs' of a community (Heather Griffiths, 2015).", "These elements form our interpretation of the cultural closeness between languages, which consists of their common linguistic, ethnic, and religious properties.", "Our goal is to intrinsically evaluate to what extent images can aid in word translation, for words in languages close in a variety of these characteristics.", "Assuming that each word is associated with a number of images that convey its meaning, we measure the degree to which the images of words that are translations of each other in different languages have similar representations (and thus will help in translation).", "We call this measure image translatability: the capacity of word meaning to be transferred from one language to another via images.", "If images are indeed language invariant, we should observe similar image translatability across different language pairs.", "We identify how close word translations are in terms of their image representations (embeddings).", "Our findings suggest that cultural similarity between languages (defined as a combination of the linguistic, ethnic, or religious similarity of the communities at the languages' cultural centres, per Glottolog (Hammarström et al., 2020)) coincides with their translatability via images, and that the translatability of culturally similar language pairs exceeds that of geographically proximate ones.", "Our paper is structured as follows: in section 2 we discuss previous research on image-aided word translation, and how the roots, geography and cultural characteristics of languages correlate with the semantic alignment of words.", "In section 3 we describe our dataset and text-image corpora.", "We also introduce the language pairs we examine and estimate their closeness in culture and geography.", "In section 4, we present our approach for measuring the translatability of words in terms of the similarity of their image representations.", "Section 5 presents an analysis of our results and of how the translatability of words via images, which affects the images' fitness for translation, correlates with language properties.", "In section 6, we discuss noteworthy examples that illustrate our findings, before concluding in section 7.", "This paper extends the work of Hewitt et al.
(2018), which introduces a multilingual dataset of words in different languages, along with matching images, for word translation.", "Our goal, however, is not to improve on the state-of-the-art methods in word translation using images, but to understand the specific characteristics of languages that influence the quality of translation via images.", "Hewitt et al. (2018) are the first to create a large-scale multilingual words-and-images dataset without a specific part-of-speech focus, and they also propose a novel method for rating the concreteness of a word to be used in translation.", "Concreteness (Paivio et al., 1968) identifies tangible concepts and the mental images that arise in correspondence to a word.", "Due to their strong visual representation, concrete words are easier to represent using images.", "Indeed, the measure of a word's concreteness has been observed to predict the effectiveness of its images for translating the word (Kiela et al., 2015).", "A concept synonymous with concreteness is imageability (Kastner et al., 2020).", "In terms of word translation, there exists a significant body of work in the area of bilingual lexicon induction, the task of translating words across languages without any parallel data (Fung and Yee, 1998; Rapp, 1999).", "Approaches can be divided into two types: text-based, which aim to find word translations by employing the words' linguistic information, and vision-based, which use the words' images as pivots for translation (Bergsma and Van Durme, 2011; Kiela et al., 2015).", "Additionally, some works have incorporated further signals for translation, such as Wikipedia interlingual links (Wijaya et al., 2017).", "The core idea of a large number of vision-based methods is to use images to learn word and image embeddings that integrate all available linguistic and visual information to improve word translation (Calixto et al., 2017; Gella et al., 2017; Karpathy and Fei-Fei, 2017; Vulic et al., 2016).", "Recent research in this area extends these ideas by learning multilingual word-image embeddings, extracting more complex and useful information from images, and applying the methods in few-shot scenarios.", "Singhal et al. (2019) learn multilingual and multimodal word embeddings from weakly-supervised image-text data with a simple bag-of-words-based embedding model that incorporates word context and image information.", "Similarly, in Chen et al.
(2019) the authors suggest mapping linguistic features based on sentence context and localized image features from image regions into a joint space.", "Aside from translation, multilingual text representations aligned to images has also been used to boost performance in vision-language tasks such as multilingual image-sentence retrieval (Gella et al., 2017; Wehrmann et al., 2019; Kim et al., 2020; Burns et al., 2020).", "The claim that the concepts of language, culture and their geographical affiliations are interdependent, constantly and dynamically evolving and defining each other, has been widely discussed and is well established in the literature.", "Culture is considered an indistinguishable part of languages when translating from one language to another.", "The importance of cultural literacy of the translator and his/her awareness of cultural factors, views and tradition, apart from word meaning, for producing high quality translations is indisputable (Nida, 1945; Wakabayashi, 1991; Janfaza et al., 2012).", "Despite the importance of language, culture and geography in translation, and findings that parallel data from similar, higher resource languages can help improve MT of low resource languages (Kocmi and Bojar, 2018), no previous work has studied how language similarity may influence translation via images.", "The most notable recent work in this area, that is most similar to ours, is that of Thompson et al. (2020).", "The authors predict semantic similarity of words in 41 languages from the NEL dataset (Dellert et al., 2020) and examine the relationships between word semantic similarity (measured via word embeddings) with the cultural, historical and geographical aspects of the communities speaking the language.", "Their findings, that the role of cultural similarities to this prediction is su-perior to that of geographical ones, align with ours.", "However, their methods differ from ours in many aspects.", "They use word-only embeddings to measure semantic alignment of words and only a small and publicly available set of images (Duabeitia LanguagePair Similarity Geography Language Ethnicity Religion az tr az ru (cid:53) (cid:53) (cid:53) ko zh (cid:53) ko ja (cid:53) (cid:53) zh ja (cid:53) (cid:53) zh ko (cid:53) ja zh (cid:53) (cid:53) ja ko (cid:53) (cid:53) ar ur (cid:53) (cid:53) (cid:53) ar fa (cid:53) (cid:53) ar he (cid:53) (cid:53) ur ar (cid:53) (cid:53) (cid:53) ur hi (cid:53) es fr es pt fi hu (cid:53) fi no (cid:53) (cid:53) af nl (cid:53) af sw (cid:53) (cid:53) (cid:53) Table 1: The 19 language pairs we explore in this work and the nature of their similarity: Geographical or Cultural: the same Language family, Ethnicity or Religion.", "et al., 2018), for validation of the predicted scores, in a supervised manner, and for a small subset of 6 languages in the Indo-European family.", "The dataset we use is the Massively Multilingual Image Dataset (MMID) from Hewitt et al. 
(2018).", "It covers 98 languages, containing at most 10,000 words per language and 100 images per word.", "For each word, in any language, we are given the collected images matching the word meaning, and the word's English translation.", "They use a language fil-tering step to ensure that images for each language are collected only from web pages that are identified as containing texts written in the language.", "We choose to examine specific language pairs so that for each source language there are two or more target languages whose shared characteristics with the source language differ in zero or more aspects.", "The shared characteristics between the source and target language include shared culture (i.e., either they are from the same language family 1 or the communities at their cultural centers have the same 1en.wikipedia.org/wiki/List_of_language_families major ethnic group 2 or major religion 3 ) or shared geography (i.e., the countries at their cultural centers share land border).", "For example, for Finnish, we include two target languages: one that has geographical proximity (Norwegian) and another that has ethnolinguistic similarity (Hungarian).", "In this way, we intend to examine for each source language, which of these groups of characteristics (culture or geography) are more important in image aided word translation, and whether culture or geography dominate one another.", "We form language pairs from the following 20 languages: Afrikaans (af), Arabic (ar), Azerbaijani (az), Chinese (zh), Dutch (nl), Finnish (fi), French (fr), Hebrew (he), Hindi (hi), Hungarian (hu), Japanese (ja), Korean (ko), Norwegian (no), Persian (fa), Portuguese (pt), Russian (ru), Spanish (es), Swahili (sw), Turkish (tr) and Urdu (ur).", "We summarize the language pairs and their shared characteristics in Table 1.", "We download MMID images 4 for all the source and target languages in our language pairs.", "In order to get vector embeddings for the images, we scale the images to 224 x 224 pixels, normalize and feed them as input into the ResNet-50 network (He et al., 2015), using network weights pre-trained on Ima-geNet.", "We obtain image embeddings from the last average pooling layer of ResNet-50, which gives us a 2048 dimensional vector embedding for each image.", "For each word, we call the embeddings of the associated images the word's image embedding.", "Because cosine similarity, which underlies parts of this work and previous works for bilingual lexicon induction via images (Bergsma et al., 2011; Kiela et al., 2015; Hewitt et al., 2018), is non-invariant to translation (Korenius et al., 2007) we treat all vectors with respect to the origin rather than some mean center for each image cluster 5 .", "Since the MTurk word translations that come with MMID (Pavlick et al., 2014) are limited in coverage and quality i.e., they contain only translations to English and the coverage and quality are high ( 70% accuracy) only for a small set (13) of European and Indian languages where many MTurk workers are located; we create translation dictionaries for each of our language pairs using Google Translate, translating all words in the source lan-2en.wikipedia.org/wiki/Ethnic_group3pewforum.org/2015/04/02/religious-projection-table/2020/percent/all/4http://multilingual-images.org/5Ourcodeisavailableathttps://github.com/nikzadkhani/MMID-CNN-Analysis guage to the target language.", "We compute translatability of words whose translations have associated images in MMID.", "If a word in the source is translated 
"Since the MTurk word translations that come with MMID (Pavlick et al., 2014) are limited in coverage and quality (they contain only translations to English, and the coverage and quality are high (~70% accuracy) only for a small set of 13 European and Indian languages where many MTurk workers are located), we create translation dictionaries for each of our language pairs using Google Translate, translating all words in the source language to the target language.",
"We compute translatability of words whose translations have associated images in MMID.",
"If a word in the source language is translated to a phrase, we use the last word in the phrase to find associated images in the dataset.",
"This heuristic applies to only 10% of the words in the dataset and the Google translations, with the majority (80%) of the first words in the phrase translations being indicative of functional words: shared and appearing more than 50 times in the dataset.",
"Given images that are associated with the word $w_s$ in the source language $s$, and $w_t$ in the target language $t$, we define two measures that determine how well a word can be translated by its images.",
"The first measures whether $w_s$ and $w_t$ have overlapping or disjoint image embeddings.",
"The second measures whether the spread of the image embeddings of $w_s$, and, similarly, of $w_t$, is tight or loose: such a measure of image dispersion has been found to help predict the usefulness of image representations for translation (Kiela et al., 2014, 2015).",
"Specifically, when the images of $w_s$ and $w_t$ are tight and overlapping in the embedding space, it shows that the images have little diversity (low dispersion) and are similar between $w_s$ and $w_t$, indicating potentially good translation between them.",
"Conversely, if the images are either spread out or disjoint, it means that the images have greater diversity (high dispersion) or differ between $w_s$ and $w_t$, indicating potentially poor translation between them.",
"We refer to the degree of overlap between the two clusters of images associated with $w_s$ and $w_t$ respectively as their inter-cluster similarity, and to the degree of tightness or looseness of the images in each cluster as their intra-cluster similarity.",
"Our conjecture is that this is equivalent to representing image embeddings as samples from some generator distribution $G$.",
"We call the generator distribution for a given source word $G_s$ and the generator distribution for a given target word $G_t$.",
"Two words are translations of each other when $G_s = G_t$, and conversely two words are poor translations when $G_s \neq G_t$.",
"Thus, inter-cluster similarity checks whether an image embedding from $G_s$ could have been produced by $G_t$.",
"Note that this is a necessary condition, but it is not sufficient to say $G_s = G_t$ if inter-cluster similarity is high, because an image embedding from $G_s$ can also be produced by some random image embedding generator $G_r$ with $G_s \neq G_r$.",
"Intra-cluster similarity is a measure of how similar samples from a single generator are to each other.",
"This ensures that $G_t$ and $G_s$ are not random generators and are accurate representations of the word they are generating image embeddings for.",
"In other words, having a high intra-cluster similarity implies that $G_s \neq G_r$ and $G_t \neq G_r$.",
"Thus, when we have high inter- and intra-cluster similarities, we have sufficient conditions to say $G_s = G_t$.",
"To measure the degree of overlap (inter-cluster similarity) between images associated with the word $w_s$ in the source language and those associated with the word $w_t$ in the target language, we first cluster their image embeddings with a $k$-means clustering algorithm ($k = 2$).",
"Then, we measure the degree of overlap between the images of the two words by the homogeneity score of the resulting clusters, $h_{w_s,w_t} \in [0, 1]$ (Rosenberg and Hirschberg, 2007), calculated with the words $w_s$ and $w_t$ as image labels.",
"A homogeneity score of 0 signals that all the image embeddings come from the distribution of a single class, hence represent the same word or concept ($w_s = w_t$).",
"In this case, we say that the images of the two words have high inter-cluster similarity.",
"A score of 1 means that the $k$-means clustering was able to identify two mutually exclusive clusters of images, indicating that the images come from two different generators ($G_s \neq G_t$).",
"In other words, the image embeddings were sampled from two different words or senses ($w_s \neq w_t$).",
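A minimal sketch of the inter-cluster similarity measurement described above, assuming per-word embedding matrices produced as in the earlier snippet. It uses scikit-learn's KMeans and homogeneity_score, which match the described procedure; the function name and the fixed random seed are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import homogeneity_score

def inter_cluster_homogeneity(emb_s, emb_t, seed=0):
    """Homogeneity h in [0, 1] for the pooled image embeddings of a
    source word (emb_s) and a target word (emb_t).

    Low h: the k-means clusters mix both words' images (overlapping
    clusters, high inter-cluster similarity, candidate good translation).
    High h: the two words' images fall into mutually exclusive clusters.
    """
    X = np.vstack([emb_s, emb_t])
    # Ground-truth labels: which word each embedding belongs to.
    words = np.array([0] * len(emb_s) + [1] * len(emb_t))
    clusters = KMeans(n_clusters=2, random_state=seed).fit_predict(X)
    return homogeneity_score(words, clusters)
```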
"In this case, we say that the images of the two words have high inter-cluster similarity.", "A score of 1 means that the k -means clustering was able to identify two mutually exclusive clusters of images indicative that the images come from two different generators ( G s (cid:54) = G t ).", "In other words, the image embeddings were sampled from two different words or senses ( w s (cid:54) = w t ).", "However, if images are highly dispersed (have high diversity, loose clusters), then the inter-cluster similarity may be deceptively high (i.e., low homogeneity) since loose clusters may overlap to some extent.", "Thus, homogeneity score is only an effective measure of how good an image-aided translation is on the condition that the clusters are sufficiently tight (i.e., have high intra-cluster similarity).", "In Section 4.5, we discuss how we compute this threshold for intra-cluster similarity.", "Images of a given word have low intra-cluster similarity when the images have high dispersion, which may be due to the word being abstract (e.g., words like concept whose images might be very diverse) or when the word has many different senses (e.g., words like bug whose images might represent the different senses of the word).", "On the other hand, when the intra-cluster similarity is high, it indicates that there is a general consensus on the meaning of the word as represented by the images, which makes for an easier transfer of the word meaning via images (i.e., better image translatability).", "The metric we choose for the intra-cluster similarity of a word w is Median Max Cosine Similarity, which, given the set of images associated with the word, I w , is: MEDMAX w = median i I w max j I w (cid:40) i (cid:54) = j : cosine ( i, j ) i = j : 0 This is a variation of the Average Maximum Cosine Similarity in Bergsma and Van Durme (2011), using the median to reduce the effect of outliers.", "Additionally, note that the worst case of this metric giving an undesirable outcome is when we have 50 random pairs of image embeddings for a given word cluster.", "This will result in a high intra-cluster similarity despite the randomness of the overall cluster.", "However, in our findings this scenario is extremely unlikely.", "As words have dominant senses, the effect of outliers is mitigated due to the use of the median.", "However, intra-cluster similarity on its own is not enough to indicate if the word in the target language w t is a good translation of the word in the source language w s .", "For example, the word train may be represented with images of locomotives in one language and with images of people exercising in another, if its meaning differs across languages.", "Both of the words' images will have high intra-cluster similarity but low inter-cluster similarity, indicating poor translatability via images.", "Thus, intra-cluster similarity is only an effective measure of how good an image-aided translation is, on the condition that the inter-cluster similarity is sufficiently high.", "In Section 4.4, we discuss how we compute this threshold for inter-cluster similarity and in Section 4.6 how we combine intraand inter-cluster similarity for image translatability.", "To study the relationship between image translatability and concreteness of a word, we adopt a method similar to Hewitt et al. (2018) to train a model to predict word concreteness.", "We use the dataset provided by Brysbaert et al. 
"However, intra-cluster similarity on its own is not enough to indicate whether the word in the target language $w_t$ is a good translation of the word in the source language $w_s$.",
"For example, the word train may be represented with images of locomotives in one language and with images of people exercising in another, if its meaning differs across languages.",
"Both words' images will have high intra-cluster similarity but low inter-cluster similarity, indicating poor translatability via images.",
"Thus, intra-cluster similarity is only an effective measure of how good an image-aided translation is on the condition that the inter-cluster similarity is sufficiently high.",
"In Section 4.4, we discuss how we compute this threshold for inter-cluster similarity, and in Section 4.6 how we combine intra- and inter-cluster similarity for image translatability.",
"To study the relationship between image translatability and concreteness of a word, we adopt a method similar to Hewitt et al. (2018) to train a model to predict word concreteness.",
"We use the dataset provided by Brysbaert et al. (2014), consisting of 40,000 words that have been assigned concreteness scores by human judges, on a scale of 1 to 5, from abstract to concrete.",
"We split the dataset into train and test sets, randomly picking 39,000 words for training.",
"Figure 1: Distribution of concreteness score predictions on the held-out validation set of 1,000 words from Brysbaert et al. (2014).",
"Similar to Hewitt et al. (2018), our concreteness prediction model is a two-layer perceptron, with one 32-unit hidden layer and a ReLU activation function, trained with an L2 loss.",
"For each word, the model input is the concatenation of the single-word embeddings obtained from the top four hidden layers of BERT (Devlin et al., 2019), a practice recommended as the best-performing feature-extraction method by its authors.",
"Figure 1 shows the results of our evaluation on the test set of 1,000 words, depicting the distributions over the different part-of-speech categories.",
"We provide the Spearman correlation coefficient between the ground-truth and predicted concreteness scores, which shows the improved effectiveness of our BERT-embedding-based method compared to the Salle et al. (2016) embeddings employed by Hewitt et al. (2018).",
"Using this trained model, we predict the concreteness score of each of the words in our dataset by first translating the word to English and lemmatizing it using spaCy.",
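A sketch of the concreteness regressor described above, assuming the BERT features have been precomputed. The 4 x 768 input dimension (concatenating the top four hidden layers of BERT-base) and the training-loop details are assumptions.

```python
import torch
import torch.nn as nn

# Two-layer perceptron with one 32-unit hidden layer and ReLU, trained
# with an L2 (mean squared error) loss. Input: concatenated top-four
# BERT hidden-layer embeddings (3072 dims for BERT-base, an assumption).
model = nn.Sequential(
    nn.Linear(4 * 768, 32),
    nn.ReLU(),
    nn.Linear(32, 1),
)
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters())

def train_step(features, scores):
    """features: (B, 3072) BERT features; scores: (B,) gold 1-5 ratings."""
    optimizer.zero_grad()
    pred = model(features).squeeze(-1)
    loss = loss_fn(pred, scores)
    loss.backward()
    optimizer.step()
    return loss.item()
```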
"We define a homogeneity score threshold to determine whether two words $w_s$ and $w_t$ have sufficiently high overlap in their image embeddings to indicate a good translation.",
"For each language pair, we compute this threshold $h^{thld}_{s,t}$ by taking the average homogeneity score of the clusters of images of 10 randomly chosen word pairs from the source language $s$ and the target language $t$.",
"We take the average of these scores because we want, first, to be able to compare the threshold with other homogeneity scores, and second, to be able to capture the skew in negative thresholds as well.",
"These pairs serve as negative examples of translation, and we expect their image embeddings to be disjoint.",
"Hence, a word pair with a homogeneity score lower than this threshold has a good overlap in their image embeddings (i.e., a high inter-cluster similarity), which indicates a good translation.",
"Similarly, we define an intra-cluster similarity threshold to determine whether an image cluster associated with a word $w$ is sufficiently tight.",
"Since intra-cluster similarity is computed for each word (and not each word pair), we compute this threshold $\mathrm{MEDMAX}^{thld}_l$ for each language $l$ by constructing a negative example for the language, i.e., an image cluster with high dispersion.",
"We create this negative example by taking five random words from the language and, for each word, a random sample of 20 images, to build a cluster of 100 images (mimicking the typical image cluster size for a word in our dataset).",
"We set the Median Max Cosine Similarity of this image cluster as the intra-cluster similarity threshold.",
"A word that has an intra-cluster similarity higher than this threshold has a tight image cluster: a consistent meaning as represented in its images' representations.",
"We define a normalized score $\mathrm{NORM}_{w_s,w_t}$ to combine the intra- and inter-cluster similarity scores for a word $w_s$ in the source language and its translation $w_t$ in the target language.",
"Given the intra-cluster similarity of word $w_s$ ($\mathrm{MEDMAX}_{w_s}$) and that of word $w_t$ ($\mathrm{MEDMAX}_{w_t}$); the maximum and minimum intra-cluster similarities for the source language ($\mathrm{MEDMAX}^{max}_s$ and $\mathrm{MEDMAX}^{min}_s$) and those of the target language ($\mathrm{MEDMAX}^{max}_t$ and $\mathrm{MEDMAX}^{min}_t$); as well as the homogeneity score of the words $h_{w_s,w_t}$ and the maximum and minimum homogeneity scores of words in the language pair ($h^{max}_{s,t}$ and $h^{min}_{s,t}$), we compute the normalized score as: $\mathrm{NORM}_{w_s,w_t} = \mathrm{NORMMEDMAX}_{w_s} + \mathrm{NORMMEDMAX}_{w_t} - \mathrm{NORM}h_{w_s,w_t}$, where $\mathrm{NORMMEDMAX}_{w_s} = \frac{\mathrm{MEDMAX}_{w_s} - \mathrm{MEDMAX}^{min}_s}{2(\mathrm{MEDMAX}^{max}_s - \mathrm{MEDMAX}^{min}_s)}$, $\mathrm{NORMMEDMAX}_{w_t} = \frac{\mathrm{MEDMAX}_{w_t} - \mathrm{MEDMAX}^{min}_t}{2(\mathrm{MEDMAX}^{max}_t - \mathrm{MEDMAX}^{min}_t)}$, and $\mathrm{NORM}h_{w_s,w_t} = \frac{h_{w_s,w_t} - h^{min}_{s,t}}{h^{max}_{s,t} - h^{min}_{s,t}}$.",
"For each language pair, we also define a threshold on this normalized score (i.e., $\mathrm{NORM}^{thld}_{s,t}$) by substituting $\mathrm{MEDMAX}_{w_s}$, $\mathrm{MEDMAX}_{w_t}$, and $h_{w_s,w_t}$ with $\mathrm{MEDMAX}^{thld}_s$, $\mathrm{MEDMAX}^{thld}_t$, and $h^{thld}_{s,t}$, respectively, in the equation above.",
"In order to compare the image translatability of two language pairs with different characteristics, we compare the ratio of the number of word pairs that are good translations to the total number of word pairs in each language pair.",
"A word pair $w_s$ and $w_t$ has a good translation via images (or good image translatability) if its homogeneity score $h_{w_s,w_t}$ is lower than the homogeneity threshold $h^{thld}_{s,t}$ and its Median Max Cosine Similarities, i.e., $\mathrm{MEDMAX}_{w_s}$ and $\mathrm{MEDMAX}_{w_t}$, are higher than the thresholds $\mathrm{MEDMAX}^{thld}_s$ and $\mathrm{MEDMAX}^{thld}_t$, respectively.",
"The higher this ratio, the more translatable the language pair is via images.",
"When we compare two language pairs that have the same source language but different target languages (with different shared characteristics with the source), we can distinguish how different characteristics, such as cultural similarity or geographical proximity, affect image translatability.",
"In Table 2, we show the image translatability ratios of language pairs with the same source but different target languages side-by-side.",
"To understand the role of concreteness in translation via images, we also compute how many concrete words have good translations according to our image translatability measures.",
"We consider a word pair, $w_s$ and $w_t$, to be concrete if $w_s$ has a concreteness score greater than 3.",
"Source word concreteness is taken as the pair's concreteness, considering translation directionality.",
"The ratio of concrete words in each language pair that have good translations is also shown in Table 2.",
"Table 2: Language pairs along with the numbers of word pairs and their image translatability ratios, for all and concrete word pairs.
Language pair | #Words (All) | #Words (Concrete) | Ratio (All) | Ratio (Concrete)
az-tr | 4538 | 3470 | 0.31 | 0.37
az-ru | 5380 | 2953 | 0.17 | 0.22
ko-zh | 338 | 214 | 0.18 | 0.22
ko-ja | 748 | 499 | 0.69 | 0.72
zh-ja | 367 | 212 | 0.56 | 0.58
zh-ko | 310 | 197 | 0.36 | 0.44
ja-zh | 212 | 137 | 0.39 | 0.44
ja-ko | 741 | 488 | 0.67 | 0.70
ar-ur | 4916 | 3226 | 0.39 | 0.44
ar-fa | 448 | 318 | 0.50 | 0.55
ar-he | 2887 | 1874 | 0.69 | 0.73
ur-ar | 4243 | 2466 | 0.39 | 0.45
ur-hi | 4588 | 2817 | 0.12 | 0.15
es-fr | 6392 | 3506 | 0.45 | 0.58
es-pt | 7116 | 3920 | 0.40 | 0.53
fi-hu | 5615 | 3190 | 0.29 | 0.40
fi-no | 5336 | 3033 | 0.17 | 0.26
af-nl | 5436 | 3247 | 0.39 | 0.50
af-sw | 4553 | 2611 | 0.25 | 0.31",
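Putting the pieces together, the normalized score and the translatability ratio defined above can be sketched as follows; the dictionary field names and the input layout are hypothetical.

```python
def norm_score(medmax_ws, medmax_wt, h, stats):
    """Normalized translatability score NORM for one word pair,
    following the formulas above. `stats` holds the per-language min/max
    MEDMAX values and the per-pair min/max homogeneity scores."""
    nm_s = (medmax_ws - stats["medmax_min_s"]) / (
        2 * (stats["medmax_max_s"] - stats["medmax_min_s"]))
    nm_t = (medmax_wt - stats["medmax_min_t"]) / (
        2 * (stats["medmax_max_t"] - stats["medmax_min_t"]))
    nh = (h - stats["h_min"]) / (stats["h_max"] - stats["h_min"])
    return nm_s + nm_t - nh

def translatability_ratio(pairs, h_thld, medmax_thld_s, medmax_thld_t):
    """Fraction of word pairs that count as good translations:
    overlapping clusters (h below the homogeneity threshold) and tight
    clusters (MEDMAX above both per-language thresholds).
    `pairs` is an iterable of (medmax_ws, medmax_wt, h) triples."""
    good = sum(1 for (ms, mt, h) in pairs
               if h < h_thld and ms > medmax_thld_s and mt > medmax_thld_t)
    return good / len(pairs)
```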
"To test whether the difference in image translatability between language pairs that share the same source language (e.g., Finnish to Norwegian vs. Finnish to Hungarian) is statistically significant, we conduct a simple t-test between their normalized score distributions.",
"The resulting p-values signal the difference between their distributions.",
"Low p-values (< 0.05) indicate statistical significance and high variation between the distributions, while higher values suggest low variation and large similarities between the language pairs (Table 3).",
"From the t-test, we find that the difference in distributions of pairs that share the same source language is in almost all cases statistically significant (p-value < 0.05, Table 3), except for Japanese to Chinese vs. Japanese to Korean.",
"Table 3: p-values of differences between normalized score distributions of language pairs that share the same source language; for the pair with a high p-value (ja-zh vs. ja-ko), we cannot assume a significant difference in their normalized score distributions.
Language pair I | Language pair II | p-value
az-tr | az-ru | 2.26e-25
ko-zh | ko-ja | 2.2e-4
zh-ja | zh-ko | 10.6e-5
ja-zh | ja-ko | 0.43
ar-ur | ar-fa | 9.44e-29
ar-fa | ar-he | 2.39e-13
ar-he | ar-ur | 16.35e-5
es-fr | es-pt | 4.3e-47
fi-hu | fi-no | 2.66e-104
af-nl | af-sw | 1e-100",
"For the other pairs, whose normalized score differences are statistically significant and whose differences in translatability ratios are high (boldfaced in Table 2), we observe that the language pair with the higher image translatability ratio (i.e., Azerbaijani to Turkish, Korean to Japanese, Chinese to Japanese, Arabic to Hebrew, Urdu to Arabic, Finnish to Hungarian, and Afrikaans to Dutch) is always the pair that shares cultural similarity (i.e., either a similar language family, a similar major ethnicity, or a similar major religion), even when the languages have little to no geographical proximity.",
"For example, between Arabic and Hindi, Urdu's words are more translatable via images to Arabic (whose speakers share the same major religion as speakers of Urdu), despite Pakistan's geographic proximity to India.",
"Similarly, Finnish words are more translatable via images to Hungarian (whose speakers belong to the same ethnolinguistic group as speakers of Finnish) than to Norwegian, despite Hungary not sharing any land border with Finland.",
"In addition, there may be other language relatedness factors that result in better image translatability between languages, such as the similar writing systems of Chinese and Japanese, or the similar grammatical structures of Korean and Japanese, despite their different language families.",
"From Table 1 we can see that Spanish shares similar characteristics with both French and Portuguese.",
"Similarly, Japanese shares similar attributes with both Chinese and Korean (matching ethnicity and religion, different geography and language family).",
"In such cases, where the two language pairs do not differ in characteristics, we observe that the difference in their translatability ratios is either small (in the case of the Spanish to French and Spanish to Portuguese ratios in Table 2) or insignificant (in the case of the Japanese to Chinese and Japanese to Korean p-value in Table 3).",
"In contrast, the Korean to Japanese and Korean to Chinese pairs, and the Chinese to Korean and Chinese to Japanese pairs, have at least one difference in their attributes, accounting for the statistical significance of those pairs' results.",
"In addition, we observe that the concreteness of words largely affects the quality of translations, due to the low diversity in their image representations, which facilitates translation between words.",
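The significance test above is a standard two-sample t-test; a minimal sketch using scipy follows, with the function and argument names being hypothetical.

```python
from scipy.stats import ttest_ind

def compare_pairs(norm_scores_pair1, norm_scores_pair2, alpha=0.05):
    """Two-sample t-test between the normalized-score distributions of
    two language pairs that share the same source language (e.g., fi-no
    vs. fi-hu). A p-value below alpha indicates that the difference in
    image translatability is statistically significant."""
    stat, p_value = ttest_ind(norm_scores_pair1, norm_scores_pair2)
    return p_value, p_value < alpha
```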
"On average, across language pairs, 62.4% of words with normalized scores above the threshold are concrete, while only 37.6% are abstract.",
"At the same time, in Table 2 we see that the translatability ratio is considerably higher for concrete word pairs than for all pairs.",
"This supports our idea, and the findings of previous works, that concrete words are better represented visually and thus more likely to have good image-aided translations than abstract ones.",
"Our work has identified that language relatedness affects word translation via images.",
"We observe that languages with cultural relatedness have better image translatability, suggesting that cultural relatedness should be taken into account when using images to aid translation.",
"The image translatability measures we have defined can be used to identify a potentially good or poor translation, or to discover a cultural similarity or disconnect between words in two languages.",
"For example, a word pair that has a high intra-cluster similarity and a high inter-cluster similarity in their image representations indicates that the image clusters are tight and overlapping, signaling a good translation between them.",
"For example, the word heelal in Afrikaans and the word universum in Dutch have tight and overlapping image clusters (i.e., a low homogeneity score), as can be seen in the PCA plot of their image embeddings and in their images (Figure 2).",
"Conversely, a word pair with tight but disjoint image clusters signals a poor translation, in which the images express different meanings of the word.",
"For example, for the word dance, dans in Afrikaans and kucheza in Swahili have tight but disjoint image clusters (i.e., a high homogeneity score), as can be seen in the PCA plot of their image embeddings and in the images (Figure 3), as kucheza means both to play and to dance in Swahili.",
"We observe that Afrikaans, for example, has a higher image translatability to Dutch, due to their cultural (ethnolinguistic) similarity, than to Swahili, despite the relative distances of their cultural centers: South Africa is more distant geographically from the Netherlands than from Tanzania, defined in Glottolog as the cultural center of the Swahili language.",
"We observe higher visual similarities between words that are translations of each other in Afrikaans and Dutch than in Afrikaans and Swahili.",
"For example, images of the word park in Afrikaans are more visually similar to images of the word park in Dutch than to images of the word park in Swahili (hifadhi) (Figures 4, 5).",
"The images of park in Afrikaans and in Dutch refer to a Western-style park, while its images in Swahili refer more to a wildlife reservation, a culturally different representation of the word park that is potentially influenced by how speakers of the different languages interact with the concept of the word.",
"Interestingly, such a connotation that is apparent in images may not be apparent in word embeddings, since hifadhi is used similarly to park in the texts of the language.",
"In this paper, we study when images may be useful for translating words between two languages, from the perspective of the languages' cultural and geographical relatedness.",
"We observe that the translatability of words via images varies across language pairs, with language pairs sharing cultural similarities having better image translatability.",
"In the future, it will be interesting to study the image translatability of more language pairs and their characteristics, including those outside MMID, as well as to extend our work to sentence-level image-aided translation.",
"It will also be of great value to study whether adding considerations of cultural relatedness to image-aided MT can further improve its performance.",
"Additionally, using a different metric for intra-cluster similarity, one that does not calculate similarity with respect to the origin, may be more accurate depending on the application.",
"As many similarity functions aside from cosine similarity have been used in the computer vision literature, improving this function could be fruitful future work.",
"We would like to thank the anonymous reviewers for their thoughtful comments.",
"This work is supported in part by the U.S. NSF grant 1838193, DARPA HR001118S0044 (the LwLL program), and the Department of the Air Force FA8750-19-2-3334 (Semi-supervised Learning of Multimodal Representations).",
"The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes.",
"The views and conclusions contained in this publication are those of the authors and should not be interpreted as representing official policies or endorsements of DARPA, the Air Force, and the U.S. Government." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "abstain", "objective", "method", "method", "objective", "method", "abstain", "result", "method", "result", "method", "abstain", "abstain", "method", "result", "abstain", "abstain", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "other", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "method", "result", "method", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "method", "result", "objective", "abstain", "abstain", "abstain", "other", "other", "other", "other" ]
[ "Despite pre-trained language models have proven useful for learning high-quality semantic representations, these models are still vulnerable to simple perturbations.", "Recent works aimed to improve the robustness of pre-trained models mainly focus on adversarial training from perturbed examples with similar semantics, neglecting the utilization of different or even opposite semantics.", "Different from the image processing field, the text is discrete and few word substitutions can cause significant semantic changes.", "To study the impact of semantics caused by small perturbations, we conduct a series of pilot experiments and surprisingly find that adversarial training is useless or even harmful for the model to detect these semantic changes.", "To address this problem, we propose Contrastive Learning with semantIc Negative Examples (CLINE), which constructs semantic negative examples unsupervised to improve the robustness under semantically adversarial attacking.", "By comparing with similar and opposite semantic examples, the model can effectively perceive the semantic changes caused by small perturbations.", "Empirical results show that our approach yields substantial improvements on a range of sentiment analysis, reasoning, and reading comprehension tasks.", "And CLINE also ensures the compactness within the same semantics and separability across different semantics in sentence-level.", "Pre-trained language models (PLMs) such as BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019) have been proved to be an effective way to improve various natural language processing tasks.", "However, recent works show that PLMs suffer from Equal contribution.", "poor robustness when encountering adversarial examples (Jin et al., 2020; Li et al., 2020; Garg and Ramakrishnan, 2020; Zang et al., 2020; Lin et al., 2020a).", "As shown in Table 1, the BERT model can be fooled easily just by replacing ultimately with a similar word lastly .", "To improve the robustness of PLMs, recent studies attempt to adopt adversarial training on PLMs, which applies gradient-based perturbations to the word embeddings during training (Miyato et al., 2017; Zhu et al., 2020; Jiang et al., 2020) or adds high-quality adversarial textual examples to the training phase (Wang and Bansal, 2018; Michel et al., 2019).", "The primary goal of these adversarial methods is to keep the label unchanged when the input has small changes.", "These models yield promising performance by constructing high-quality perturbated examples and adopting adversarial mechanisms.", "However, due to the discrete nature of natural language, in many cases, small perturbations can cause significant changes in the semantics of sentences.", "As shown in Table 1, negative sentiment can be turned into a positive one by changing only one word, but the model can not recognize the change.", "Some recent works create contrastive sets (Kaushik et al., 2020; Gardner et al., 2020), which manually perturb the test instances in small but meaningful ways that change the gold label.", "In this paper, we denote the perturbated examples without changed semantics as adversarial examples and the ones with changed semantics as contrastive examples, and most of the methods to improve robustness of PLMs mainly focus on the former examples, little study pays attention to the semantic negative examples.", "The phenomenon makes us wonder can we train a BERT that is both defensive against adversarial attacks and sensitive to semantic changes by using both adversarial and 
"To answer that, we need to assess whether the current robust models are meanwhile semantically sensitive.",
"We conduct sets of pilot experiments (Section 2) to compare the performances of vanilla PLMs and adversarially trained PLMs on the contrastive examples.",
"We observe that while improving the robustness of PLMs against adversarial attacks, the performance on contrastive examples drops.",
"To train a robust semantic-aware PLM, we propose Contrastive Learning with semantIc Negative Examples (CLINE).",
"CLINE is a simple and effective method to generate adversarial and contrastive examples and contrastively learn from both of them.",
"The contrastive manner has shown effectiveness in learning sentence representations (Luo et al., 2020; Wu et al., 2020; Gao et al., 2021), yet these studies neglect the generation of negative instances.",
"In CLINE, we use external semantic knowledge, i.e., WordNet (Miller, 1995), to generate adversarial and contrastive examples by unsupervised replacement of a few specific representative tokens.",
"Equipped with replaced token detection and contrastive objectives, our method gathers sentences with similar semantics and disperses ones with different or even opposite semantics, simultaneously improving the robustness and semantic sensitivity of PLMs.",
"We conduct extensive experiments on several widely used text classification benchmarks to verify the effectiveness of CLINE.",
"To be more specific, our model achieves a +1.6% absolute improvement on 4 contrastive test sets and a +0.5% absolute improvement on 4 adversarial test sets compared to the RoBERTa model (Liu et al., 2019).",
"That is, with training on the proposed objectives, CLINE simultaneously gains robustness against adversarial attacks and sensitivity to semantic changes.",
"To study how adversarial training methods perform on the adversarial set and the contrastive set, we first conduct pilot experiments and detailed analyses in this section.",
"There are a considerable number of studies constructing adversarial examples to attack large-scale pre-trained language models, of which we select a popular method, TextFooler (Jin et al., 2020), as the word-level adversarial attack model to construct adversarial examples.",
"Recently, many researchers have created contrastive sets to more accurately evaluate a model's true linguistic capabilities (Kaushik et al., 2020; Gardner et al., 2020).",
"Based on these methods, the following datasets are selected to construct adversarial and contrastive examples in our pilot experiments and analyses: IMDB (Maas et al., 2011) is a sentiment analysis dataset, and the task is to predict the sentiment (positive or negative) of a movie review.",
"SNLI (Bowman et al., 2015) is a natural language inference dataset, where the task is to judge the relationship between two sentences: whether the second sentence can be derived from the first through an entailment, contradiction, or neutral relationship.",
"To improve the generalization and robustness of language models, many adversarial training methods that minimize the maximal risk for label-preserving input perturbations have been proposed, and we select the adversarial training method FreeLB (Zhu et al., 2020) for our pilot experiment.",
"We evaluate the vanilla BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019), and their FreeLB versions, on the adversarial set and the contrastive set.",
"Table 2 shows a detailed comparison of the different models on the adversarial test set and the contrastive test set.",
"From the results, we can observe that, compared to the vanilla version, the adversarial training method FreeLB achieves higher accuracy on the adversarial sets but suffers a considerable performance drop on the contrastive sets, especially for BERT.",
"The results are consistent with the intuition in Section 1, and also demonstrate that adversarial training is not suitable for the contrastive set and can even bring negative effects.",
"Intuitively, adversarial training tends to keep labels unchanged, while the contrastive set tends to make small but label-changing modifications.",
"Table 2: Accuracy (%) on the adversarial set (Adv) compared to the contrastive set (Rev) of vanilla models and adversarially trained models.
Model | Method | IMDB Adv | IMDB Rev | SNLI Adv | SNLI Rev
BERT-base | Vanilla | 88.7 | 89.8 | 48.6 | 73.0
BERT-base | FreeLB | 91.9 (+3.2) | 87.7 (-2.1) | 56.1 (+7.5) | 71.4 (-1.6)
RoBERTa-base | Vanilla | 93.9 | 93.0 | 55.1 | 75.2
RoBERTa-base | FreeLB | 95.2 (+1.3) | 92.6 (-0.4) | 58.1 (+3.0) | 74.6 (-0.6)",
"Table 3: Wrong predictions made by the FreeLB version of BERT on the contrastive set. IMDB contrastive set example: \"Jim Henson's Muppets were a favorite of mine since childhood. This film on the other hand makes me feel dizziness in my head. You will see cameos by the then New York City Mayor Ed Koch. Anyway, the film turns 25 this year and I hope the kids of today will learn to appreciate the lightheartedness of the early Muppets Gang over this. It might be worth watching for kids but definitely not for knowledgeable adults like myself.\" Label: Negative; Prediction: Positive.",
"Adversarial training and contrastive examples seem to constitute a natural contradiction, revealing that additional strategies need to be applied in the training phase to detect fine-grained changes of semantics.",
"We provide a case study in Section 2.3, which further shows this difference.",
"To further understand why the adversarial training method fails on the contrastive sets, we carry out a thorough case study on IMDB.",
"The examples we choose here are predicted correctly by the vanilla version of BERT but incorrectly by the FreeLB version.",
"For the example in Table 3, we can observe that many parts of the sentence express positive sentiments (the red parts), and a few parts express negative sentiments (the blue parts).",
"Overall, this case expresses a negative sentiment, and the vanilla BERT can accurately capture the negative sentiment of the whole document.",
"However, the FreeLB version of BERT may take the features of negative sentiment as noise and predict the whole document as expressing a positive sentiment.",
"This result indicates that the adversarially trained BERT can be fooled in the reverse manner of traditional adversarial training.",
"From this case study, we can observe that adversarial training methods may not be suitable for these semantically changed adversarial examples, and to the best of our knowledge, there is no defense method for this kind of adversarial attack.",
"Thus, it is crucial to explore appropriate methods for learning changed semantics from semantic negative examples.",
"As stated in the observations in Section 2, we explore strategies that could improve the sensitivity of PLMs.",
"In this section, we present CLINE, a simple and effective method to generate adversarial and contrastive examples and learn from both of them.",
"We start with the generation of adversarial and contrastive examples in Section 3.1, and then introduce the learning objectives of CLINE in Section 3.2.",
"We expect that by contrasting sentences with the same and different semantics, our model can be more sensitive to semantic changes.",
"To do so, we adopt the idea of contrastive learning, which aims to learn representations by concentrating positive pairs and pushing negative pairs apart.",
"Therefore, it is essential to define appropriate positive and negative pairs.",
"In this paper, we regard sentences with the same semantics as positive pairs and sentences with opposite semantics as negative pairs.",
"Some works (Alzantot et al., 2018; Tan et al., 2020; Wu et al., 2020) attempt to utilize data augmentation (such as synonym replacement, back translation, etc.) to generate positive instances, but few works pay attention to negative instances.",
"And it is difficult to obtain opposite-semantics instances for textual examples.",
"Intuitively, when we replace the representative words in a sentence with their antonyms, the semantics of the sentence easily become irrelevant or even opposite to the original sentence.",
"As shown in Figure 1, given the sentence \"Batman is a fictional superhero written by\", we can replace \"fictional\" with its antonym \"real-life\", and then we get the counterfactual sentence \"Batman is a real-life superhero written by\".",
"The latter contradicts the former and forms a negative pair with it.",
"We generate two sentences from the original input sequence $x^{ori}$, which express substantially different semantics but have few different words.",
"One of the sentences is semantically close to $x^{ori}$ (denoted as $x^{syn}$), while the other is far from or even opposite to $x^{ori}$ (denoted as $x^{ant}$).",
"Specifically, we utilize spaCy to conduct segmentation and POS tagging on the original sentences, extracting verbs, nouns, adjectives, and adverbs.",
"$x^{syn}$ is generated by replacing the extracted words with synonyms, hypernyms and morphological changes, and $x^{ant}$ is generated by replacing them with antonyms and random words.",
"For $x^{syn}$, about 40% of the tokens are replaced.",
"For $x^{ant}$, about 20% of the tokens are replaced.",
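A rough sketch of the substitution procedure for x^syn and x^ant, assuming spaCy for POS tagging and NLTK's WordNet interface. It simplifies the full recipe described above (hypernyms, morphological changes, and the random-word fallback for x^ant are omitted), and all function names are hypothetical.

```python
import random
import spacy
from nltk.corpus import wordnet as wn

nlp = spacy.load("en_core_web_sm")
CONTENT_POS = {"VERB", "NOUN", "ADJ", "ADV"}
POS_MAP = {"VERB": wn.VERB, "NOUN": wn.NOUN, "ADJ": wn.ADJ, "ADV": wn.ADV}

def substitutes(token, antonym=False):
    """WordNet synonyms (for x_syn) or antonyms (for x_ant) of a token."""
    words = set()
    for syn in wn.synsets(token.text, pos=POS_MAP[token.pos_]):
        for lemma in syn.lemmas():
            if antonym:
                words.update(a.name() for a in lemma.antonyms())
            elif lemma.name().lower() != token.text.lower():
                words.add(lemma.name())
    return [w.replace("_", " ") for w in words]

def perturb(sentence, antonym, rate):
    """Replace about `rate` of the content words; x_syn uses rate=0.4
    with synonyms, x_ant uses rate=0.2 with antonyms. Tokens without a
    candidate substitute are kept unchanged in this sketch."""
    doc = nlp(sentence)
    out = []
    for tok in doc:
        if tok.pos_ in CONTENT_POS and random.random() < rate:
            cands = substitutes(tok, antonym=antonym)
            out.append(random.choice(cands) if cands else tok.text)
        else:
            out.append(tok.text)
    return " ".join(out)
```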
"CLINE trains a neural text encoder (i.e., a deep Transformer) $E$, parameterized by $\theta$, that maps a sequence of input tokens $x = [x_1, ..., x_T]$ to a sequence of representations $h = [h_1, ..., h_T]$, $h_{i \in [1:T]} \in \mathbb{R}^d$, where $d$ is the dimension of the representations.",
"Masked Language Modeling Objective: With random tokens masked by the special symbol [MASK], the input sequence is partially corrupted.",
"Following BERT (Devlin et al., 2019), we adopt the masked language model objective (denoted as $L_{MLM}$), which reconstructs the sequence by predicting the masked tokens.",
"Replaced Token Detection Objective: On the basis of $x^{syn}$ and $x^{ant}$, we adopt an additional classifier $C$ for the two generated sequences and detect which tokens are replaced by conducting two-way classification with a sigmoid output layer: $p(x^{syn}, t) = \mathrm{sigmoid}(w^\top h^{syn}_t)$ (2), $p(x^{ant}, t) = \mathrm{sigmoid}(w^\top h^{ant}_t)$ (3).",
"The loss, denoted as $L_{RTD}$, is computed as: $L_{RTD} = -\sum_{x' \in \{x^{syn}, x^{ant}\}} \sum_{t=1}^{T} \left[ \delta_t \log p(x', t) + (1 - \delta_t) \log (1 - p(x', t)) \right]$ (4), where $\delta_t = 1$ when the token $x_t$ is corrupted, and $\delta_t = 0$ otherwise.",
"Contrastive Objective: The intuition of CLINE is to accurately predict whether the semantics are changed when the original sentences are modified.",
"In other words, in the feature space, the metric between $h^{ori}$ and $h^{syn}$ should be close, and the metric between $h^{ori}$ and $h^{ant}$ should be far.",
"Thus, we develop a contrastive objective, where $(x^{ori}, x^{syn})$ is considered a positive pair and $(x^{ori}, x^{ant})$ is negative.",
"We use $h_c$ to denote the embedding of the special symbol [CLS].",
"In the training of CLINE, we follow the setting of RoBERTa (Liu et al., 2019) and omit the next sentence prediction (NSP) objective, since previous works have shown that the NSP objective can hurt performance on downstream tasks (Liu et al., 2019; Joshi et al., 2020).",
"Alternatively, we adopt the embedding of [CLS] as the sentence representation for the contrastive objective.",
"The metric between sentence representations is calculated as the dot product between [CLS] embeddings: $f(x, x') = \exp(h_c^\top h_c')$ (5).",
"Inspired by InfoNCE, we define an objective $L_{cts}$ in the contrastive manner: $L_{cts} = -\log \frac{f(x^{ori}, x^{syn})}{f(x^{ori}, x^{syn}) + f(x^{ori}, x^{ant})}$ (6).",
"Note that, different from some contrastive strategies that usually randomly sample multiple negative examples, we only utilize one $x^{ant}$ as the negative example for training.",
"This is because the primary goal of our pre-training objectives is to improve the robustness under semantically adversarial attacking.",
"And we only focus on the negative sample (i.e., $x^{ant}$) that is generated for this goal, instead of arbitrarily sampling other sentences from the pre-training corpus as negative samples.",
"Finally, we have the following training loss: $L = \lambda_1 L_{MLM} + \lambda_2 L_{RTD} + \lambda_3 L_{cts}$ (7), where $\lambda_i$ is the task weighting learned by training.",
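The objectives in Eqs. (2)-(7) can be sketched in PyTorch as below, assuming the encoder outputs are already computed; the function names are hypothetical. The contrastive term is written with softplus, which for a single negative is algebraically equivalent to Eq. (6) and numerically safer than exponentiating raw dot products.

```python
import torch
import torch.nn.functional as F

def rtd_loss(logits_syn, logits_ant, replaced_syn, replaced_ant):
    """Replaced token detection, Eq. (4): per-token binary cross-entropy
    between the sigmoid scores and the 0/1 replacement indicators delta.
    Passing logits to BCE-with-logits folds in the sigmoid of Eqs. (2)-(3)."""
    loss = 0.0
    for logits, delta in ((logits_syn, replaced_syn), (logits_ant, replaced_ant)):
        loss = loss + F.binary_cross_entropy_with_logits(logits, delta.float())
    return loss

def contrastive_loss(h_ori, h_syn, h_ant):
    """Eqs. (5)-(6) with one negative, where h_* are [CLS] embeddings:
    -log(e^pos / (e^pos + e^neg)) = softplus(neg - pos)."""
    pos = (h_ori * h_syn).sum(-1)  # h_c^T h_c' for the positive pair
    neg = (h_ori * h_ant).sum(-1)  # ... and for the negative pair
    return F.softplus(neg - pos).mean()

def total_loss(l_mlm, l_rtd, l_cts, lambdas):
    """Eq. (7): weighted sum with learned task weights lambda_i."""
    return lambdas[0] * l_mlm + lambdas[1] * l_rtd + lambdas[2] * l_cts
```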
"We conduct extensive experiments and analyses to evaluate the effectiveness of CLINE.",
"In this section, we first introduce the implementation (Section 4.1) and the datasets (Section 4.2) we use, then we present the experiments on contrastive sets (Section 4.3) and adversarial sets (Section 4.4), respectively.",
"Finally, we conduct an ablation study (Section 4.5) and an analysis of sentence representations (Section 4.6).",
"To better acquire the knowledge from the existing pre-trained model, we did not train from scratch, but from the official RoBERTa-base model.",
"We train for 30K steps with a batch size of 256 sequences of maximum length 512 tokens.",
"We use Adam with a learning rate of 1e-4, $\beta_1 = 0.9$, $\beta_2 = 0.999$, $\epsilon$ = 1e-8, L2 weight decay of 0.01, learning rate warmup over the first 500 steps, and linear decay of the learning rate.",
"We use a dropout of 0.1 on all layers and in attention.",
"The model is pre-trained on 32 NVIDIA Tesla V100 32GB GPUs.",
"Our model is pre-trained on a combination of the BookCorpus (Zhu et al., 2015) and English Wikipedia datasets, the data BERT used for pre-training.",
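A sketch of the optimization setup above. The text says Adam with L2 weight decay, for which AdamW's decoupled weight decay is the usual PyTorch realization (an assumption here); the warmup-then-linear-decay schedule uses a helper from the transformers library, and starting from the roberta-base checkpoint mirrors the "not from scratch" choice.

```python
import torch
from transformers import AutoModelForMaskedLM, get_linear_schedule_with_warmup

# Start from the official RoBERTa-base checkpoint rather than scratch.
model = AutoModelForMaskedLM.from_pretrained("roberta-base")

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4,
                              betas=(0.9, 0.999), eps=1e-8,
                              weight_decay=0.01)
total_steps = 30_000  # 30K steps at batch size 256
scheduler = get_linear_schedule_with_warmup(optimizer,
                                            num_warmup_steps=500,
                                            num_training_steps=total_steps)
```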
"We evaluate our model on the following classification tasks: IMDB (Maas et al., 2011) is a sentiment analysis dataset, and the task is to predict the sentiment (positive or negative) of a movie review.",
"PERSPECTRUM (Chen et al., 2019) is a natural language inference dataset where the task is to predict whether a relevant perspective is for/against a given claim.",
"BoolQ (Clark et al., 2019) is a dataset of reading comprehension instances with boolean (yes or no) answers.",
"AG (Zhang et al., 2015) is a sentence-level classification task with regard to four news topics: World, Sports, Business, and Science/Technology.",
"MR (Pang and Lee, 2005) is a sentence-level sentiment classification task on positive and negative movie reviews.",
"SNLI (Bowman et al., 2015) is a natural language inference dataset where the task is to judge the relationship between two sentences: whether the second sentence can be derived from the first through an entailment, contradiction, or neutral relationship.",
"We evaluate our model on four contrastive sets, IMDB, PERSPECTRUM, BoolQ and SNLI, which were provided by Contrast Sets (Gardner et al., 2020; https://github.com/allenai/contrast-sets).",
"We compare our approach with BERT and RoBERTa across the original test set (Ori) and contrastive test set (Rev).",
"Table 4: Accuracy on the original test set (Ori), contrastive test set (Rev), and contrast consistency (Con).
Model | IMDB Ori/Rev/Con | PERSPECTRUM Ori/Rev/Con | BoolQ Ori/Rev/Con | SNLI Ori/Rev/Con
BERT | 92.2 / 89.8 / 82.4 | 74.7 / 72.8 / 57.6 | 60.9 / 57.6 / 36.1 | 89.8 / 73.0 / 65.1
RoBERTa | 93.6 / 93.0 / 87.1 | 80.6 / 78.8 / 65.0 | 69.6 / 60.6 / 43.9 | 90.8 / 75.2 / 67.8
CLINE | 94.5 / 93.9 / 88.5 | 81.6 / 80.2 / 72.2 | 73.9 / 63.9 / 47.8 | 91.3 / 76.0 / 69.2",
"Contrast consistency (Con) is a metric defined by Gardner et al. (2020) to evaluate whether a model's predictions are all correct for the same examples in both the original test set and the contrastive test set.",
"We fine-tune each model many times using different learning rates (1e-5, 2e-5, 3e-5, 4e-5, 5e-5) and select the best result on the contrastive test set.",
"From the results shown in Table 4, we can observe that our model outperforms the baselines.",
"Especially on the contrast consistency metric, our method significantly outperforms the other methods, which means our model is sensitive to small changes of semantics, rather than simply capturing the characteristics of the dataset.",
"On the other hand, our model also shows some improvement on the original test sets, which means our method can boost the performance of PLMs on common examples.",
"To evaluate the robustness of the model, we compare our model with BERT and RoBERTa, in both the vanilla version and the FreeLB version, across several adversarial test sets.",
"Instead of using an adversarial attacker to attack the model, we use the adversarial examples generated by TextFooler (Jin et al., 2020) as a benchmark to evaluate the performance against adversarial examples.",
"TextFooler identifies the important words in the text and then prioritizes replacing them with the most semantically similar and grammatically correct words.",
"From the experimental results in Table 5, we can observe that our vanilla model achieves higher accuracy on all four benchmark datasets compared to the vanilla BERT and RoBERTa.",
"By constructing similar-semantics adversarial examples and using the contrastive training objective, our model can concentrate the representations of the original example and the adversarial example, and thus achieve better robustness.",
"Furthermore, our method operates in the pre-training stage, so it can also be combined with existing adversarial training methods.",
"Compared with the FreeLB versions of BERT and RoBERTa, our model achieves state-of-the-art (SOTA) performance on the adversarial sets.",
"Experimental results on contrastive sets and adversarial sets show that our model is sensitive to semantic changes and stays robust at the same time.",
"To further analyze the effectiveness of the different factors of CLINE, we choose PERSPECTRUM (Chen et al., 2019) and BoolQ (Clark et al., 2019) as benchmark datasets and report an ablation test in terms of:",
"1) w/o RTD: we remove the replaced token detection objective ($L_{RTD}$) in our model to verify whether our model mainly benefits from the contrastive objective;",
"2) w/o Hard Negative: we replace the constructed negative examples with randomly sampled examples to verify whether the negative examples constructed by unsupervised word substitution are better.",
"We also add 1% and 10% settings, meaning using only 1% / 10% of the training set, to simulate a low-resource scenario and observe how the model performs across different datasets and settings.",
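The contrast consistency (Con) metric used above can be computed as follows; the aligned-list input format and the function name are assumptions.

```python
def contrast_consistency(preds_ori, preds_rev, gold_ori, gold_rev):
    """Fraction of (original, contrastive) example pairs for which the
    model is correct on BOTH versions: the Con metric of Gardner et al.
    (2020). All arguments are aligned lists, one entry per pair."""
    both = sum(1 for po, pr, go, gr in
               zip(preds_ori, preds_rev, gold_ori, gold_rev)
               if po == go and pr == gr)
    return both / len(preds_ori)
```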
"From Table 6, we can observe that:",
"1) Our CLINE outperforms RoBERTa in all settings, which indicates that our method is universal and robust.",
"Especially in the low-resource scenario (1% and 10% supervised training data), our method shows a prominent improvement.",
"Table 6: Ablation study on the original test set (Ori), contrastive test set (Rev), and contrast consistency (Con) of PERSPECTRUM and BoolQ (accuracy).
Dataset | Model | 1% Ori/Rev/Con | 10% Ori/Rev/Con | 100% Ori/Rev/Con
PERSPECTRUM | CLINE | 71.4 / 60.4 / 33.6 | 75.1 / 69.1 / 55.3 | 81.6 / 80.2 / 72.2
PERSPECTRUM | w/o RTD | 67.3 / 59.4 / 29.0 | 73.4 / 67.7 / 53.0 | 81.1 / 78.3 / 68.9
PERSPECTRUM | w/o Hard Negative | 59.0 / 53.0 / 14.7 | 71.4 / 68.8 / 38.2 | 80.9 / 78.2 / 65.9
PERSPECTRUM | RoBERTa | 55.8 / 54.8 / 13.8 | 72.4 / 66.8 / 45.2 | 80.6 / 78.8 / 65.0
BoolQ | CLINE | 66.7 / 52.8 / 33.7 | 68.1 / 54.0 / 36.1 | 73.9 / 63.9 / 47.8
BoolQ | w/o RTD | 64.8 / 52.5 / 32.2 | 68.0 / 53.7 / 35.8 | 72.5 / 63.0 / 46.6
BoolQ | w/o Hard Negative | 60.1 / 49.0 / 30.0 | 68.1 / 53.4 / 35.2 | 69.6 / 61.8 / 44.5
BoolQ | RoBERTa | 60.9 / 49.3 / 27.5 | 65.2 / 53.1 / 32.8 | 69.6 / 60.6 / 43.9",
"2) Compared to CLINE, w/o RTD shows only a slight performance degradation.",
"This proves that the improvement in performance mainly benefits from the contrastive objective, and that the replaced token detection objective further makes the model sensitive to changes of words.",
"3) Compared to CLINE, we can see that w/o Hard Negative has a significant performance degradation in most settings, proving the effectiveness of constructing hard negative instances.",
"To evaluate the semantic sensitivity of the models, we generate 9,626 sentence triplets from a sentence-level sentiment analysis dataset, MR (Pang and Lee, 2005).",
"Each of the triplets contains an original sentence $x^{ori}$ from MR, a sentence with similar semantics $x^{syn}$, and a sentence with opposite semantics $x^{ant}$.",
"We generate $x^{syn}$ / $x^{ant}$ by replacing a word in $x^{ori}$ with its synonym/antonym from WordNet (Miller, 1995).",
"We then compute the cosine similarity between sentence pairs using the [CLS] token and the mean-pooling of all tokens.",
"We also use a SOTA algorithm, BertScore (Zhang et al., 2020), to compute similarity scores of sentence pairs.",
"We consider cases in which the model correctly identifies the semantic relationship (e.g., if BertScore($x^{ori}$, $x^{syn}$) > BertScore($x^{ori}$, $x^{ant}$)) as Hits.",
"A higher Hits means the model can better distinguish sentences that express substantially different semantics but have few different words.",
"We show the max Hits over all layers (from 1 to 12) of the Transformer-based encoders in Table 7.",
"We can observe:",
"1) In the BERT model, using the [CLS] token as the sentence representation achieves worse results than mean-pooling, which matches the conclusion of Sentence-BERT (Reimers and Gurevych, 2019).",
"Because RoBERTa omits the NSP objective, its [CLS] result is not meaningful.",
"2) BertScore can compute semantic similarity better than the other methods, and our method CLINE-B further improves the Hits.",
"3) By constructing positive and negative examples for contrastive learning in the pre-training stage, our methods CLINE-B and CLINE-R learn better sentence representations and detect small semantic changes.",
"4) We can observe that RoBERTa has fewer Hits than BERT, and our CLINE-B shows a significant improvement compared to BERT.",
"We speculate that there may be two reasons: the first is that BERT can better identify sentence-level semantic changes because it has been trained with the next sentence prediction (NSP) objective in the pre-training stage.",
"The second is that BERT is not trained enough, so it cannot represent sentence semantics well, and our method can improve the semantic representation ability of the model.",
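A sketch of the Hits evaluation above, assuming per-triplet similarity scores (from [CLS], mean-pooling, or BertScore) have already been computed; the mean-pooling helper shows one way to build the mean-pooled sentence representation from one encoder layer, using the attention mask to average over real tokens only.

```python
import torch
import torch.nn.functional as F

def hits(sims_syn, sims_ant):
    """Fraction of triplets where sim(x_ori, x_syn) > sim(x_ori, x_ant),
    i.e., the model correctly ranks the similar-semantics sentence above
    the opposite-semantics one. Inputs: per-triplet score tensors."""
    return (sims_syn > sims_ant).float().mean().item()

def mean_pool_similarity(hidden_a, hidden_b, mask_a, mask_b):
    """Cosine similarity of mean-pooled token embeddings.
    hidden_*: (B, T, d) hidden states; mask_*: (B, T) attention masks."""
    a = (hidden_a * mask_a.unsqueeze(-1)).sum(1) / mask_a.sum(1, keepdim=True)
    b = (hidden_b * mask_b.unsqueeze(-1)).sum(1) / mask_b.sum(1, keepdim=True)
    return F.cosine_similarity(a, b, dim=-1)
```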
"PLMs have proven their advantages in capturing implicit language features.",
"Two main research directions of PLMs are autoregressive (AR) pre-training (such as GPT (Radford et al., 2018)) and denoising autoencoding (DAE) pre-training (such as BERT (Devlin et al., 2019)).",
"AR pre-training aims to predict the next word based on previous tokens, but lacks the modeling of bidirectional context.",
"DAE pre-training aims to reconstruct the input sequence using the left and right context.",
"However, previous works mainly focus on token-level pre-training tasks and ignore modeling the global semantics of sentences.",
"To make neural networks more robust to adversarial examples, many defense strategies have been proposed, and adversarial training is widely considered to be the most effective.",
"Different from the image domain, it is more challenging to deal with text data due to its discrete nature, which is hard to optimize.",
"Previous works focus on heuristics for creating adversarial examples in the black-box setting.",
"Belinkov and Bisk (2018) manipulate every word in a sentence with synthetic or natural noise in machine translation systems.",
"Iyyer et al. (2018) leverage back-translation to produce paraphrases that have different sentence structures.",
"Recently, Miyato et al. (2017) extend adversarial and virtual adversarial training (Miyato et al., 2019) to text classification tasks by applying perturbations to word embeddings rather than discrete input symbols.",
"Following this, many adversarial training methods in the text domain have been proposed and applied to state-of-the-art PLMs.",
"Li and Qiu (2020) introduce a token-level perturbation to improve the robustness of PLMs.",
"Zhu et al. (2020) use the gradients obtained in adversarial training to boost the performance of PLMs.",
"Although many studies seem to achieve robust representations, our pilot experiments (Section 2) show that there is still a long way to go.",
"5.3 Contrastive Learning",
"Contrastive learning is an unsupervised representation learning method, which has been widely used in learning graph representations (Velickovic et al., 2019), visual representations (van den Oord et al., 2018; He et al., 2020; Chen et al., 2020), response representations (Lin et al., 2020b; Su et al., 2020), text representations (Iter et al., 2020; Ding et al., 2021) and structured world models (Kipf et al., 2020).",
"The main idea is to learn a representation by contrasting positive pairs and negative pairs, which aims to concentrate positive samples and push apart negative samples.",
"In natural language processing (NLP), contrastive self-supervised learning has been widely used for learning better sentence representations.",
"Logeswaran and Lee (2018) sample two contiguous sentences as positive pairs and sentences from other documents as negative pairs.",
"Luo et al. (2020) present contrastive pre-training for learning denoised sequence representations in a self-supervised manner.",
"Wu et al. (2020) present multiple sentence-level augmentation strategies for contrastive sentence representation learning.", "The main difference between these works lies in their various definitions of positive examples.", "However, recent works pay little attention to the construction of negative examples, relying only on simple random sampling of sentences.", "In this paper, we propose a negative example construction strategy based on opposite semantics to improve sentence representation learning and the robustness of the pre-trained language model.", "In this paper, we focus on one specific problem: how to train a pre-trained language model that is robust against adversarial attacks and sensitive to small semantic changes.", "We propose CLINE, a simple and effective method to tackle this challenge.", "During training, CLINE automatically generates an adversarial example and a semantically negative example for each original sentence.", "The model is then trained with three objectives to make full use of both kinds of examples.", "Empirical results demonstrate that our method considerably improves the sensitivity of pre-trained language models while also gaining robustness.", "This research is supported by the National Natural Science Foundation of China (Grant Nos. 61773229 and 6201101015), the Tencent AI Lab Rhino-Bird Focused Research Program (No. JR202032), Shenzhen Giiso Information Technology Co. Ltd., the Natural Science Foundation of Guangdong Province (Grant No. 2021A1515012640), the Basic Research Fund of Shenzhen City (Grant No. JCYJ20190813165003837), and the Overseas Cooperation Research Fund of the Graduate School at Shenzhen, Tsinghua University (Grant No. HW2018002)." ]
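As a concrete illustration of the contrastive idea discussed above — pulling positives together and pushing (hard) negatives apart — the following is a hedged sketch of one common instantiation, a softmax over cosine similarities with an assumed temperature tau; it is not necessarily the exact CLINE objective.

    import torch
    import torch.nn.functional as F

    def contrastive_loss(h_ori, h_pos, h_neg, tau=0.1):
        # h_ori / h_pos / h_neg: (batch, dim) embeddings of the original sentence,
        # a synonym-substituted positive, and an antonym-substituted hard negative.
        sim_pos = F.cosine_similarity(h_ori, h_pos) / tau      # (batch,)
        sim_neg = F.cosine_similarity(h_ori, h_neg) / tau      # (batch,)
        logits = torch.stack([sim_pos, sim_neg], dim=1)        # (batch, 2)
        target = torch.zeros(h_ori.size(0), dtype=torch.long)  # class 0 = positive
        # Cross-entropy pulls the positive close and pushes the hard negative away.
        return F.cross_entropy(logits, target)

    h = torch.randn(4, 768)
    print(contrastive_loss(h, h + 0.01 * torch.randn_like(h), torch.randn(4, 768)))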
[ "abstain", "abstain", "abstain", "result", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "method", "abstain", "method", "abstain", "result", "objective", "abstain", "abstain", "method", "objective", "result", "result", "abstain", "objective", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "result", "result", "abstain", "other", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "method", "objective", "abstain", "abstain", "objective", "other", "other", "other" ]
[ "The recent Text-to-Text Transfer Transformer", "(T5)", "leveraged a unified text-to-text format and scale to attain state-of-the-art results on a wide variety of English-language NLP tasks.", "In this paper, we introduce mT5, a multilingual variant of T5 that was pre-trained on a new Common Crawl-based dataset covering 101 languages.", "We detail the design and modified training of mT5 and demonstrate its state-of-the-art performance on many multilingual benchmarks.", "We also describe a simple technique to prevent accidental translation in the zero-shot setting, where a generative model chooses to", "(partially)", "translate its prediction into the wrong language.", "All of the code and model checkpoints used in this work are publicly available.", "1 1 Introduction Current natural language processing", "(NLP)", "pipelines often make use of transfer learning, where a model is pre-trained on a data-rich task before being fine-tuned on a downstream task of interest", "(Ruder et al., 2019).", "The success of this paradigm is partially thanks to the release of parameter checkpoints for pre-trained models.", "These checkpoints allow members of the NLP community to quickly attain strong performance on many tasks without needing to perform expensive pre-training themselves.", "As one example, the pre-trained checkpoints for the Text-to-Text Transfer Transformer", "(T5)", "model released by Raffel et al.", "(2020)", "have been used to achieve state-of-the-art results on many benchmarks", "(Khashabi et al., 2020; Roberts et al., 2020; Kale, 2020; Izacard and Grave, 2020; Nogueira et al., 2020; Narang et al., 2020, etc.).", "Unfortunately, many of these language models were pre-trained solely on English-language text.", "This significantly limits their use given that roughly 80% of the world population does not speak English", "(Crystal, 2008).", "One way the community has addressed this English-centricity has been to release dozens of models, each pre-trained on a single non-English language", "(Carmo et al., 2020; de Vries et al., 2019; Le et al., 2020; Martin et al., 2020; Delobelle et al., 2020; Malmsten et al., 2020; Nguyen and Tuan Nguyen, 2020; Polignano et al., 2019, etc.).", "A more general solution is to produce multilingual models that have been pre-trained on a mixture of many languages.", "Popular models of this type are mBERT", "(Devlin, 2018), mBART", "(Liu et al., 2020a), and XLM-R", "(Conneau et al., 2020), which are multilingual variants of BERT", "(Devlin et al., 2019), BART", "(Lewis et al., 2020b), and RoBERTa", "(Liu et al., 2019), respectively.", "In this paper, we continue this tradition by releasing mT5, a multilingual variant of T5.", "Our goal with mT5 is to produce a massively multilingual model that deviates as little as possible from the recipe used to create T5.", "As such, mT5 inherits all of the benefits of T5", "(described in section 2), such as its general-purpose text-to-text format, its design based on insights from a large-scale empirical study, and its scale.", "To train mT5, we introduce a multilingual variant of the C4 dataset called mC4.", "mC4 comprises natural text in 101 languages drawn from the public Common Crawl web scrape.", "To validate the performance of mT5, we include results on several benchmark datasets, showing state-of-the-art results in many cases.", "Finally, we characterize a problematic behavior of pre-trained generative multilingual language models in the zero-shot setting, where they erroneously translate part of their prediction into 
"To address this accidental translation, we describe a simple procedure that involves mixing in unlabeled pre-training data during fine-tuning, and demonstrate that it dramatically alleviates this issue.", "We release our pre-trained models and code so that the community can leverage our work.", "In this section, we provide a short overview of T5 and the C4 pre-training dataset.", "Further details are available in Raffel et al. (2020).", "T5 is a pre-trained language model whose primary distinction is its use of a unified text-to-text format for all text-based NLP problems.", "This approach is natural for generative tasks (such as machine translation or abstractive summarization) where the task format requires the model to generate text conditioned on some input.", "It is more unusual for classification tasks, where T5 is trained to output the literal text of the label (e.g. positive or negative for sentiment analysis) instead of a class index.", "The primary advantage of this approach is that it allows the use of exactly the same training objective (teacher-forced maximum-likelihood) for every task, which in practice means that a single set of hyperparameters can be used for effective fine-tuning on any downstream task.", "Similar unifying frameworks were proposed by Keskar et al. (2019) and McCann et al. (2018).", "Given the sequence-to-sequence structure of this task format, T5 uses a basic encoder-decoder Transformer architecture as originally proposed by Vaswani et al. (2017).", "T5 is pre-trained on a masked language modeling span-corruption objective, where consecutive spans of input tokens are replaced with a mask token and the model is trained to reconstruct the masked-out tokens.", "An additional distinguishing factor of T5 is its scale, with pre-trained model sizes available from 60 million to 11 billion parameters.", "These models were pre-trained on around 1 trillion tokens of data.", "Unlabeled data comes from the C4 dataset, which is a collection of about 750GB of English-language text sourced from the public Common Crawl web scrape.", "C4 includes heuristics to extract only natural language (as opposed to boilerplate and other gibberish) in addition to extensive deduplication.",
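The span-corruption objective described above can be sketched as follows; this is a simplified toy version (sentinel naming follows the public T5 vocabulary, and span lengths are fixed rather than sampled), not the production pre-training pipeline.

    import random

    def span_corrupt(tokens, noise_density=0.15, mean_span_len=3):
        # Replace consecutive spans with sentinel tokens; the target lists each
        # sentinel followed by the tokens it masked, as in T5-style pre-training.
        n_to_mask = max(1, round(len(tokens) * noise_density))
        inputs, targets = [], []
        i = sentinel = masked = 0
        while i < len(tokens):
            if masked < n_to_mask and random.random() < noise_density:
                span = tokens[i:i + mean_span_len]
                inputs.append(f"<extra_id_{sentinel}>")
                targets.append(f"<extra_id_{sentinel}>")
                targets.extend(span)
                i += len(span)
                masked += len(span)
                sentinel += 1
            else:
                inputs.append(tokens[i])
                i += 1
        return " ".join(inputs), " ".join(targets)

    print(span_corrupt("Thank you for inviting me to your party last week .".split()))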
"The pre-training objective, model architecture, scaling strategy, and many other design choices for T5 were chosen based on a large-scale empirical study described in detail in Raffel et al. (2020).", "Our goal in this paper is to create a massively multilingual model that follows T5's recipe as closely as possible.", "Towards this end, we develop an extended version of the C4 pre-training dataset that covers 101 languages and introduce changes to T5 to better suit this multilinguality.", "The C4 dataset was explicitly designed to be English only: any page that was not given a probability of at least 99% of being English by langdetect 2 was discarded.", "In contrast, for mC4 we use cld3 3 to identify over 100 languages.", "Since some of these languages are relatively scarce on the internet, we make use of all of the 71 monthly web scrapes released so far by Common Crawl.", "This is dramatically more source data than was used for C4, for which the April 2019 web scrape alone was enough to provide plenty of English-language data.", "An important heuristic filtering step in C4 was the removal of lines that did not end in an English terminal punctuation mark.", "Since many languages do not use English terminal punctuation marks, we instead apply a line length filter that requires pages to contain at least three lines of text with 200 or more characters.", "Otherwise, we follow C4's filtering by deduplicating lines across documents and removing pages containing bad words. 4", "Finally, we detect each page's primary language using cld3 and remove those with a confidence below 70%.", "After these filters are applied, we group the remaining pages by language and include in the corpus all languages with 10,000 or more pages.", "This produces text in 107 languages as defined by cld3.", "However, we note that six of these are just script variants of the same spoken language (e.g. ru is Russian in Cyrillic script and ru-Latn is Russian in Latin script).", "A histogram of the page counts for each language is shown in fig. 1.", "Detailed dataset statistics including per-language token counts are shown in Appendix A. 3.2 mT5 The model architecture and training procedure that we use for mT5 closely follows that of T5.", "Specifically, we base mT5 on the T5.1.1 recipe, 5 which improves upon T5 by using GeGLU nonlinearities (Shazeer, 2020), scaling both d_model and d_ff instead of just d_ff in the larger models, and pre-training on unlabeled data only with no dropout.", "2 https://pypi.org/project/langdetect/ 3 https://github.com/google/cld3 4 https://github.com/LDNOOBW/ 5 https://github.com/google-research/text-to-text-transfer-transformer/blob/master/released_checkpoints.md#t511", "Figure 1: Page counts per language in mC4 (left axis: pages of mC4 training text; right axis: % of mT5 training examples for α = 0.2, 0.3, 0.7).", "We refer to Raffel et al. (2020) for further details on T5.", "A major factor in pre-training multilingual models is how to sample data from each language.", "Ultimately, this choice is a zero-sum game: If low-resource languages are sampled too often, the model may overfit; if high-resource languages are not trained on enough, the model will underfit.", "We therefore take the approach used in (Devlin, 2018; Conneau et al., 2020; Arivazhagan et al., 2019) and boost lower-resource languages by sampling examples according to the probability p(L) ∝ |L|^α, where p(L) is the probability of sampling text from a given language during pre-training and |L| is the number of examples in the language.", "The hyperparameter α (typically with α < 1) allows us to control how much to boost the probability of training on low-resource languages.", "Values used by prior work include α = 0.7 for mBERT (Devlin, 2018), α = 0.3 for XLM-R (Conneau et al., 2020), and α = 0.2 for MMNMT (Arivazhagan et al., 2019).", "We tried all three of these values (ablation results in section 4.2) and found α = 0.3 to give a reasonable compromise between performance on high- and low-resource languages.",
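The exponent-smoothed sampling rule above is simple to write down explicitly; in this sketch the page counts are made-up illustrative numbers, not mC4 statistics.

    def sampling_probs(example_counts, alpha=0.3):
        # p(L) is proportional to |L|**alpha, so alpha < 1 flattens the
        # distribution and boosts low-resource languages.
        weights = {lang: count ** alpha for lang, count in example_counts.items()}
        total = sum(weights.values())
        return {lang: w / total for lang, w in weights.items()}

    counts = {"en": 3_000_000_000, "ru": 700_000_000, "sw": 1_000_000}  # illustrative
    for alpha in (0.2, 0.3, 0.7):
        print(alpha, sampling_probs(counts, alpha))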
"The fact that our model covers over 100 languages necessitates a larger vocabulary.", "Following XLM-R (Conneau et al., 2018), we increase the vocabulary size to 250,000 wordpieces.", "As in T5, we use SentencePiece (Kudo and Richardson, 2018; Kudo, 2018) models trained with the language sampling rates used during pre-training.", "To accommodate languages with large character sets like Chinese, we use a character coverage of 0.99999 and enable SentencePiece's byte-fallback feature to ensure that any string can be uniquely encoded.", "To contextualize our new model, we provide a brief comparison with existing massively multilingual pre-trained language models.", "For brevity, we focus on models that support more than a few dozen languages.", "Table 1 gives a high-level comparison of mT5 to the most similar models.", "mBERT (Devlin, 2018) is a multilingual version of BERT (Devlin et al., 2019).", "Similar to our approach with mT5, mBERT follows the BERT recipe as closely as possible (same architecture, objective, etc.).", "The primary difference is the training set: Instead of training on English Wikipedia and the Toronto Books Corpus, mBERT is trained on up to 104 languages from Wikipedia.", "XLM (Conneau and Lample, 2019) is also based on BERT but applies improved methods for pre-training multilingual language models, including explicitly cross-lingual pre-training objectives.", "Many pre-trained versions of XLM have been released; the most massively-multilingual variant was trained on 100 languages from Wikipedia.", "Table header: Model | Sentence pair: XNLI, PAWS-X | Structured: WikiAnn NER | Question answering: XQuAD, MLQA, TyDi QA-GoldP | Metrics: Acc.", "XLM-R (Conneau et al., 2020) is an improved version of XLM based on the RoBERTa model (Liu et al., 2019).", "XLM-R is trained with a cross-lingual masked language modeling objective on data in 100 languages from Common Crawl.", "To improve the pre-training data quality, pages from Common Crawl were filtered by an n-gram language model trained on Wikipedia (Wenzek et al., 2020).", "mBART (Liu et al., 2020a) is a multilingual encoder-decoder model that is based on BART (Lewis et al., 2020b).", "mBART is trained with a combination of span masking and sentence shuffling objectives on a subset of 25 languages from the same data as XLM-R.", "MARGE (Lewis et al., 2020a) is a multilingual encoder-decoder model that is trained to reconstruct a document in one language by retrieving documents in other languages.", "It uses data in 26 languages from Wikipedia and CC-News (Liu et al., 2019).", "To validate the performance of mT5, we evaluate our models on 6 tasks from the XTREME multilingual benchmark (Hu et al., 2020): the XNLI (Conneau et al., 2018) entailment task covering 14 languages; the XQuAD (Artetxe et al., 2020), MLQA (Lewis et al., 2019), and TyDi QA (Clark et al., 2020) reading comprehension benchmarks with 10, 7, and 11 languages respectively; the Named Entity Recognition (NER) dataset of WikiAnn (Pan et al., 2017) restricted to the 40 languages from XTREME (Hu et al., 2020); and the PAWS-X (Yang et al., 2019) paraphrase identification dataset with 7 languages.", "6 Standard deviations of mT5 models on TyDi QA zero-shot across five runs are: Small: 0.44, Base: 1.38, Large: 3.66, XL: 1.29, XXL: 0.20.",
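Returning to the vocabulary construction described at the start of this subsection, a hedged sketch of how such a SentencePiece model could be trained is shown below; the input file name is an assumption (text pre-sampled at the pre-training language rates), while character_coverage and byte_fallback are standard SentencePiece options.

    import sentencepiece as spm

    spm.SentencePieceTrainer.train(
        input="mc4_sampled.txt",     # assumed: text sampled at pre-training rates
        model_prefix="mt5_vocab",
        vocab_size=250_000,          # 250,000 wordpieces
        character_coverage=0.99999,  # accommodate large character sets
        byte_fallback=True,          # any string remains encodable
    )
    sp = spm.SentencePieceProcessor(model_file="mt5_vocab.model")
    print(sp.encode("Das ist ein Beispiel.", out_type=str))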
"We cast all tasks into the text-to-text format, i.e. generating the label text (XNLI and PAWS-X), entity tags and labels (WikiAnn NER), or the answer (XQuAD, MLQA, and TyDi QA) directly in a generative fashion.", "For NER, if there are multiple entities, they are concatenated in the order they appear, and if there are no entities then the target text is None.", "We consider three variants of these tasks: (1) zero-shot, where the model is fine-tuned only on English data; (2) translate-train, adding machine translations from English into each target language; and (3) in-language multitask, training on gold data in all target languages.", "For brevity, we refer to Hu et al. (2020) for further details on these benchmarks.",
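As one example of this casting, the sketch below converts a WikiAnn-style NER instance into a target string — concatenating entities in order, or emitting None when there are none; the exact tag/entity verbalization and separator are assumptions, since the text does not specify them.

    def ner_target(tokens, entity_spans):
        # entity_spans: list of (start, end, tag) token spans, in order of appearance.
        if not entity_spans:
            return "None"
        parts = [f"{tag}: {' '.join(tokens[s:e])}" for s, e, tag in entity_spans]
        return " $$ ".join(parts)  # assumed separator between entities

    tokens = "Barack Obama visited Paris last May".split()
    print(ner_target(tokens, [(0, 2, "PER"), (3, 4, "LOC")]))  # PER: Barack Obama $$ LOC: Paris
    print(ner_target(tokens, []))                              # None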
"Following the original T5 recipe, we consider five model sizes: Small (300M parameters), Base (580M), Large (1.2B), XL (3.7B), and XXL (13B).", "The increase in parameter counts compared to the corresponding T5 model variants comes from the larger vocabulary used in mT5.", "Note that, because mT5 is an encoder-decoder model, it has roughly twice as many parameters as correspondingly-sized encoder-only models such as XLM-R.", "For example, the Large variant of XLM-R has 550 million parameters whereas mT5-Large has around 1 billion.", "However, the computational cost for text classification is roughly the same: In both cases, the model processes a length-T input sequence with an encoder of approximately equal size.", "In an encoder-only model like XLM-R, the encoder processes one additional CLS token, which is used to generate the representation for classification.", "In mT5, the decoder typically produces two additional tokens: the class label and an end-of-sequence token.", "Since the decoder has the same architecture (ignoring encoder-decoder attention) as the encoder, the computational cost of classification with mT5 typically amounts to the cost of processing T + 2 tokens, compared to T + 1 for an encoder-only model.", "However, encoder-decoder architectures have the additional benefit of being applicable to generative tasks like abstractive summarization or dialog.", "We pre-train our mT5 model variants for 1 million steps on batches of 1024 length-1024 input sequences, corresponding to roughly 1 trillion input tokens total.", "This is the same amount of pre-training as T5 and about 1/6 as much as XLM-R. 7", "Note that our pre-training dataset is large enough that we only complete a fraction of an epoch for high-resource languages (e.g. only covering 2% of the English data).", "While XLM-R's pre-training corpus CC-100 is 20 times smaller than mC4, XLM-R nevertheless pre-trains for more steps, and sees over 6 times more tokens in pre-training.", "7 XLM-R Large sees 6.3 trillion tokens during pre-training (1.5 million batches of 8192 sequences of 512 tokens), and uses a packing mechanism similar to T5 to minimize the number of wasted padding tokens.", "We use the same inverse square-root learning rate schedule used by T5 during pre-training, with the learning rate set to 1/√(max(n, k)), where n is the current training iteration and k = 10^4 is the number of warm-up steps.", "Following the T5.1.1 recipe, we do not apply dropout during pre-training.", "We use the same self-supervised objective as T5, with 15% of tokens masked and an average noise span length of 3.", "We ablate some of these experimental details in section 4.2.",
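The stated schedule is easy to verify directly: the learning rate is constant at 0.01 over the first 10^4 warm-up steps and then decays as the inverse square root of the step.

    import math

    def learning_rate(step, warmup=10_000):
        # lr = 1 / sqrt(max(n, k)) with k warm-up steps, as described above.
        return 1.0 / math.sqrt(max(step, warmup))

    for n in (1, 10_000, 100_000, 1_000_000):
        print(n, learning_rate(n))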
"For fine-tuning, we use a constant learning rate of 0.001 and dropout rate of 0.1 for all tasks.", "We use a batch size of 2^17 for most tasks, but decrease to 2^16 for WikiAnn NER zero-shot, due to the small size of the training set, and increase to 2^20 tokens for XNLI, which we found gave better performance.", "For early stopping, we save checkpoints every 200 steps and choose the checkpoint with the highest performance on the standard validation sets specified by XTREME.", "Table 2 presents our main results, with per-language breakdowns for each task given in Appendix B. Our largest model mT5-XXL exceeds state-of-the-art on all classification and QA tasks and is near SOTA on NER (69.2 vs. 70.1).", "Note that unlike our model, InfoXLM (Chi et al., 2020) and VECO (Luo et al., 2020) benefit from parallel training data, while X-STILTs (Phang et al., 2020) leverages labeled data from tasks similar to the target task.", "Overall, our results highlight the importance of model capacity in cross-lingual representation learning and suggest that scaling up a simple pre-training recipe can be a viable alternative to more complex techniques relying on LM filtering, parallel data, or intermediate tasks.", "In the translate-train setting, we exceed state-of-the-art on all XTREME classification and QA tasks.", "For these tasks, we fine-tune on the combination of the labeled English data and machine translations thereof. 8", "This allows direct comparison with both FILTER (Fang et al., 2020) as well as the XLM-R baseline of Fang et al. (2020).", "Note that this setup differs from XTREME translate-train (Hu et al., 2020), which excludes English.", "8 We use the translation data provided by Hu et al. (2020) throughout.", "On the PAWS-X task, FILTER used translation data from the original task instead.", "Switching to this data would improve our scores slightly (mT5-XXL 91.5 → 92.0).", "Figure 2 shows that model capacity is key to improving performance on variants of the TyDi QA GoldP task in the absence of gold multilingual data: For the smallest model, training on gold datasets (in-language multitask) achieves dramatically better performance than using weakly supervised data (translate-train) or English-only data (zero-shot), whereas the gap between these three settings is much smaller for the largest model.", "For our two largest models, zero-shot and translate-train performance is nearly the same, showing that machine translations of the monolingual dataset bring diminishing returns as model capacity increases.", "Overall, these trends point to the possibility of avoiding the costly step of annotating data in more than one language when using large models.", "Massively multilingual models have been observed to underperform on a given language when compared to a similarly-sized dedicated model trained specifically for that language (Arivazhagan et al., 2019).", "To quantify this effect, we compare the performance of mT5 and T5 when fine-tuned on the SQuAD reading comprehension benchmark (Rajpurkar et al., 2016).", "The results are shown in table 3, with results for T5 reproduced from Raffel et al. (2020).", "While the Small and Base mT5 models fall short of their English T5 counterparts, we find that the larger models close the gap.", "This suggests there may be a turning point past which the model has enough capacity to effectively learn 101 languages without significant interference effects.", "Looking at the per-language breakdowns in Appendix B, we find that mT5 performs well on both high- and low-resource languages.", "For example, in table 7, we see mT5-XXL outperforms XLM-R by between +3 (English) and +9 (Swahili) points on each individual language on XNLI zero-shot.", "In table 12 we see similarly strong performance across languages on TyDi QA GoldP (including lower-resource languages like Swahili and Telugu), with mT5-XXL surpassing human performance in four of nine languages on the in-language setting.", "We run six ablations, modifying various settings, using our Large model as a baseline: (i) increase dropout to 0.1 in hopes of mitigating overfitting on low-resource languages, (ii) decrease sequence length to 512 (as was used in T5), (iii) increase the average noise span length in the pre-training objective to 10 since we observe fewer characters per token than T5, (iv) adjust the language sampling exponent α to {0.2, 0.7} as used in MMNMT (Arivazhagan et al., 2019) and mBERT (Devlin, 2018), respectively, (v) turn off the line length filter in the mC4 data pipeline, and (vi) supplement mC4 with Wikipedia data 9 from 103 languages.", "The effect of these ablations on XNLI zero-shot accuracy is shown in table 4.", "In each case, the average XNLI score is lower than the mT5-Large baseline, justifying our chosen settings.", "The line", "9 We use the 2020 Wikipedia data from TensorFlow Datasets, selecting the same languages as mBERT.", "Increasing the language sampling exponent α to 0.7 has the expected effect of improving performance in high-resource languages (e.g. Russian 81.5 → 82.8), while hurting low-resource languages (e.g. Swahili 75.4 → 70.6), with the average effect being negative.", "Conversely, lowering α to 0.2 boosts one tail language slightly (Urdu 73.5 → 73.9) but is harmful elsewhere.",
"Detailed per-language metrics on XNLI and the results of our ablations on zero-shot XQuAD are provided in Appendix C, showing similar trends.", "Since mT5 is a generative model, it can output arbitrary text predictions in a free-form fashion.", "This is in contrast to encoder-only models like mBERT and XLM(-R) that make a prediction by either extracting it from the input or producing a class label.", "We found that the lack of constraints during prediction caused mT5 to sometimes have trouble generating a well-formed prediction in a language unseen during fine-tuning.", "Focusing on XQuAD zero-shot, we find that many of these errors are due to accidental translation into the fine-tuning language (English).", "In this section, we characterize this behavior and demonstrate that it can be counteracted by mixing a small amount of our multilingual pre-training task into the fine-tuning stage.", "In using a generative model for span selection (as in extractive QA tasks), we hope the model learns to generate legal spans that are substrings of the provided context.", "However, unlike encoder-based models like BERT, this is not a hard constraint of the model.", "Table 5 (excerpt): Target / Prediction / Explanation — e.g. decomposed Thai character, decomposed Hindi character, replaced full-width percent sign.", "Notably, T5 learns to always output legal spans on SQuAD, suggesting this is not a major issue for generative models in simple cases.", "A more challenging case for generative models is zero-shot cross-lingual span selection.", "Here, a pre-trained multilingual model is fine-tuned on English but tested on other languages.", "We want the model to generate legal non-English predictions despite having only seen English targets in fine-tuning.", "In practice, while mT5 achieves SOTA on the zero-shot variants of XQuAD, MLQA and TyDi QA, illegal predictions are still a problem.", "For example, on zero-shot XQuAD, a non-trivial portion of mT5 mistakes are in fact illegal spans, for all model sizes (cf. fig. 4, Baseline).",
"Through inspection, we find these illegal predictions mainly fall into three categories: (i) normalization, (ii) grammatical adjustment, and (iii) accidental translation.", "Table 5 provides examples of each type.", "Normalization indicates predictions that would be legal, except that equivalent Unicode characters have been substituted, so a legal span may be recovered through Unicode NFKC normalization.", "This is particularly common in Thai, Chinese and Hindi, where most mT5-XXL illegal predictions are resolved by normalization, as seen in fig. 3b.", "Grammatical adjustment involves minor morphological changes to the original text.", "We frequently observe these adjustments when the target span cannot stand as a well-formed answer on its own.", "For example, mT5-XXL's Arabic and Russian predictions in the middle rows of table 5 are judged by native speakers as correct and grammatical answers to the posed XQuAD questions, while the gold targets are judged as ungrammatical answers.", "This type of illegal prediction is most common in", "Figure 3: Percent of incorrect, illegal, and illegal-after-normalization predictions per XQuAD language (el, ru, th, ar, de, hi, zh, es, tr, vi, en; y-axis: percent, 0-70).", "Accidental translation involves the model translating part or all of a contextual span into English (the language of all fine-tuning data).", "On the one hand, it is remarkable that mT5 performs spontaneous translation despite never seeing parallel training data.", "On the other, as practitioners we would ideally be able to control this behavior.", "We observe accidental translation across all model sizes and all XQuAD languages.", "The problem is most prevalent in mT5-Small and mT5-Base, where from manual inspection, half or more of the illegal predictions within each language exhibit accidental translation, with many of the illegal predictions coming from Greek and Russian, as shown in fig. 3a.", "While we do observe full phrase translations, a more common occurrence is partial translation, where the model outputs a token or two of English before reverting to the correct target language.", "The transition may even occur mid-word, as in the prediction chlor, where the first half of the target (Russian for chloroplast) has been translated to English.", "The most direct solution to avoiding accidental translation on span selection tasks would be to modify our inference procedure.", "As is common practice with encoder-based models, we could devise a task-specific fine-tuning mechanism that restricts the model to perform ranking over legal spans, removing the possibility of illegal predictions entirely.", "While this would likely improve our zero-shot metrics, it is unsatisfying for two reasons: First, it implies taking a step backward from the general text-to-text interface, as different tasks would demand different types of inference.", "Second, this solution won't extend to more open-ended zero-shot generative tasks like summarization, where the legal output space can't be easily delimited.", "For these reasons, we consider a more general solution that remains within the text-to-text framework and can apply to all zero-shot generation tasks.", "Our motivating intuition is that the reason the model outputs English when given a non-English test input is that it has never observed a non-English target during fine-tuning.", "As English-only fine-tuning proceeds, the model's assigned likelihood of non-English tokens presumably decreases, eventually reaching the point where English becomes the most likely answer to any question.", "To prevent the model from forgetting how to generate other languages, we use a strategy inspired by domain/task-adaptive pre-training (Howard and Ruder, 2018; Gururangan et al., 2020): We simply mix in our unsupervised multilingual pre-training task during fine-tuning.", "A similar approach was explored by Liu et al. (2020b).", "We use the same mC4 task definition as in pre-training, with two adjustments: First, we remove all sentinel tokens (corresponding to non-masked spans in the input text) from the target sequence, as otherwise we observe occasional sentinels in downstream predictions.", "Second, we reduce the language sampling parameter α from 0.3 to 0.1.", "This produces a near-uniform distribution of languages, encouraging the model to treat all languages as equally likely. 10", "With these changes, we mix a small amount of our unsupervised task (covering 101 languages) into XQuAD fine-tuning, at a ratio of just 1:100.",
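A minimal sketch of this 1:100 mixing is given below, assuming iterator-style access to the two tasks; interpreting the ratio as a per-batch mixing probability is our reading of the text, and the stand-in iterators are illustrative.

    import random

    def mixed_batches(xquad_batches, mc4_batches, unsup_share=1 / 101):
        # Draw an unsupervised multilingual batch roughly once per 100
        # fine-tuning batches (ratio 1:100), otherwise a labeled English batch.
        while True:
            if random.random() < unsup_share:
                yield next(mc4_batches)    # span-corruption task, alpha = 0.1 sampling
            else:
                yield next(xquad_batches)  # English XQuAD fine-tuning task

    finetune = iter(range(10**6))                    # stand-in for XQuAD batches
    unsup = iter(f"mc4-{i}" for i in range(10**6))   # stand-in for mC4 batches
    stream = mixed_batches(finetune, unsup)
    print([next(stream) for _ in range(8)])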
"Figure 4 shows the results on XQuAD zero-shot error rates.", "The addition of even this small amount of multilingual data has a marked effect on the mT5-Small and mT5-Base models (where accidental translation was most rampant), reducing the illegal prediction rates by more than 70% (relative), and contributing to an overall reduction in errors.", "In this paper, we introduced mT5 and mC4: massively multilingual variants of the T5 model and C4 dataset.", "We demonstrated that the T5 recipe is straightforwardly applicable to the multilingual setting, and achieved strong performance on a diverse set of benchmarks.", "We also characterized illegal predictions that can occur in zero-shot evaluation of multilingual pre-trained generative models, and described a simple technique to avoid this issue.", "We release all code and pre-trained datasets used in this paper to facilitate future work on multilingual language understanding. 11", "Acknowledgements We thank Melvin Johnson for tips on the translate-train procedure for XTREME and Itai Rolnick for help with infrastructure." ]
[ "abstain", "abstain", "abstain", "objective", "objective", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "method", "abstain", "result", "method", "objective", "method", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "method", "abstain" ]
[ "Coding EMRs with diagnosis and procedure codes is an indispensable task for billing, secondary data analyses, and monitoring health trends.", "Both speed and accuracy of coding are critical.", "While coding errors could lead to more patient-side financial burden and misinterpretation of a patient's well-being, timely coding is also needed to avoid backlogs and additional costs for the healthcare facility.", "In this paper, we present a new neural network architecture that combines ideas from few-shot learning matching networks, multi-label loss functions, and convolutional neural networks for text classification to significantly outperform other state-of-the-art models.", "Our evaluations are conducted using a well known de-identified EMR dataset (MIMIC) with a variety of multi-label performance measures.", "Electronic medical record (EMR) coding is the process of extracting diagnosis and procedure codes from the digital record (the EMR) pertaining to a patient's visit.", "The digital record is mostly composed of multiple textual narratives (e.g., discharge summaries, pathology reports, progress notes) authored by healthcare professionals, typically doctors, nurses, and lab technicians.", "Hospitals heavily invest in training and retaining professional EMR coders to manually annotate all patient visits by reviewing EMRs.", "Proprietary commercial software tools often termed as computer-assisted coding (CAC) systems are already in use in many healthcare facilities and were found to be helpful in increasing medical coder productivity (Dougherty et al., 2013).", "Thus progress in automated EMR coding methods is expected to directly impact real world operations.", "International Classification of Diseases (ICD) terminology (specifically the ICD-10-CM variant) as required by the Health Insurance Portability and Accountability Act (HIPPA).", "ICD codes facilitate billing activities, retrospective epidemiological studies, and also enable researchers to aggregate health statistics and monitor health trends.", "To code EMRs effectively, medical coders are expected to have thorough knowledge of ICD-10-CM and follow a complex set of guidelines to code EMRs.", "For example, if a coder accidentally uses the code heart failure (ICD-10-CM code I50) instead of acute systolic (congestive) heart failure (ICD-10-CM code I50.21), then the patient may be charged substantially more 1 causing significant unfair burden.", "Therefore, it is important for coders to have better tools at their disposal to find the most appropriate codes.", "Additionally, if coders become more efficient, hospitals may hire fewer coders to reduce their operating costs.", "Thus automated coding methods are expected to help with expedited coding, cost savings, and error control.", "In this paper, we treat medical coding of EMR narratives as a multi-label text classification problem.", "Multi-label classification (MLC) is a machine learning task that assigns a set of labels (typically from a fixed terminology) to an instance.", "MLC is different from multi-class problems, which assign a single label to each example from a set of labels.", "Compared to general multi-label problems, EMR coding has three distinct challenges.", "First, with thousands of ICD codes, the label space is large and the label distribution is extremely unbalanced most codes occur very infrequently with a few codes occurring several orders of magnitude more than others.", "Second and more importantly, a patient may have a large number of diagnoses and procedures.", 
"On average, coders annotate an EMR with more than 20 such codes and hence predicting the top one or two codes is not sufficient for EMR coding.", "Third, EMR narratives may be very long (e.g., discharge summaries may have over 1000 words), which may result in a needle in a haystack situation when attempting to seek evidence for particular codes.", "Recent advances in extreme multi-label classification have proven to work well for large label spaces.", "Many of these methods (Yu et al., 2014; Bhatia et al., 2015; Liu et al., 2017) focus on creating efficient multi-label models that can handle 10 4 to 10 6 labels.", "While these models perform well in large label spaces, they don't necessarily focus on improving prediction of infrequent labels.", "Typically, they optimize for the top 1, 3, or 5 ranked labels by focusing on the P@1, P@3, and P@5 evaluation measures.", "The labels ranked at the top usually occur frequently in the dataset and it is not obvious how to handle infrequent labels.", "One solution would be to ignore the rare labels.", "However, when the majority of medical codes are infrequent, this solution is unsatisfactory.", "While neural networks have shown great promise for text classification (Kim, 2014; Yang et al., 2016; Johnson and Zhang, 2017), the label imbalances associated with EMR coding hinder their performance.", "Imagine if a dataset contains only one training example for every class leading to one-shot learning , a subtask of few-shot learning .", "How can we classify a new instance?", "A trivial solution would be to use a non-parametric 1-NN (1 nearest neighbor) classifier.", "1-NN does not require learning any label specific parameters and we only need to define features to represent our data and a distance metric.", "Unfortunately, defining good features and picking the best distance metric is nontrivial.", "Instead of manually defining the feature set and distance metric, neural network training procedures have been developed to learn them automatically (Koch et al., 2015).", "For example, matching networks (Vinyals et al., 2016) can automatically learn discriminative feature representations and a useful distance metric.", "Therefore, using a 1-NN prediction method, matching networks work well for infrequent labels.", "However, researchers typically evaluate matching networks on multi-class problems without label imbalance.", "For EMR coding with extreme label imbalance with several labels occurring thousands of times, traditional parametric neural networks (Kim, 2014) should work very well on the frequent labels.", "In this paper, we introduce a new variant of matching networks (Vinyals et al., 2016; Snell et al., 2017) to address the EMR coding problem.", "Specifically, we combine the non-parametric idea of k -NN and matching networks with traditional neural network text classification methods to handle both frequent and infrequent labels encountered in EMR coding.", "We propose a novel semi-parametric neural matching network for diagnosis/procedure code prediction from EMR narratives.", "Our architecture employs ideas from matching networks (Vinyals et al., 2016), multiple attention (Lin et al., 2017), multi-label loss functions (Nam et al., 2014a), and convolutional neural networks (CNNs) for text classification (Kim, 2014) to produce a state-of-the-art EMR coding model.", "We evaluate our model on publicly available EMR datasets to ensure reproducibility and benchmarking; we also compare against prior state-of-the-art methods in EMR coding and demonstrate 
"We analyze and measure how each component of our model affects the performance using ablation experiments.", "In this section we cover recent methodologies that are either relevant to our approach and problem or form the main ingredients of our contribution.", "Current methods for extreme MLC fall into two categories: embedding and tree-based methods.", "Embedding-based methods aim to reduce the training complexity.", "They effectively reduce the label space by assuming the training label matrix is low rank.", "Intuitively, rather than learning independent classifiers for each label (binary relevance) (Tsoumakas et al., 2010), classifiers are learned in a reduced label space L̂ ≪ L, where L is the total number of labels.", "Likewise, a projection matrix is learned to convert predictions from the reduced label space back to the original label space.", "In general, embedding methods vary based on how they reduce the label space and how the projection operation is optimized.", "Tai and Lin (2012) use principal component analysis (PCA) to reduce the label space.", "Low-rank Empirical risk minimization for Multi-Label Learning (LEML) (Yu et al., 2014) jointly optimizes the label space reduction and the projection processes.", "RobustXML (Xu et al., 2016) is similar to LEML but it treats infrequent labels as outliers and models them separately.", "Liu et al. (2017) employ neural networks for extreme multi-label problems using a funnel-like architecture that reduces the label vector dimensionality.", "Tree-based multi-label methods work by recursively splitting the feature space.", "These methods usually differ based on the node splitting criterion.", "FastXML (Prabhu and Varma, 2014) partitions the feature space using the nDCG measure as the splitting criterion.", "PfastreXML (Jain et al., 2016) improves on FastXML by using a propensity-scored nDCG splitting criterion and re-ranking the predicted labels to optimize various ranking measures.", "Memory networks (Weston et al., 2014) have access to external memory, typically consisting of information the model may use to make predictions.", "Intuitively, informative memories concerning a given instance are found by the memory network to improve its predictive power.", "Kamra et al. (2017) use memory networks to fix issues of catastrophic forgetting.", "They show that external memory can be used to learn new tasks without forgetting previous tasks.", "Memory networks are now applied to a wide variety of natural language processing tasks, including question answering and language modeling (Sukhbaatar et al., 2015; Bordes et al., 2015; Miller et al., 2016).", "Matching networks (Vinyals et al., 2016; Snell et al., 2017) have recently been developed for few/one-shot learning problems.", "We can interpret matching networks as a key-value memory network (Miller et al., 2016).", "The keys are training instances, while the values are the labels associated with each training example.", "Intuitively, the concept is similar to a hashmap.", "The model will search for the most similar training instance to find its respective value.", "Also, matching networks can be interpreted as a k-NN based model that automatically learns an informative distance metric.", "Finally, Altae-Tran et al. (2017) used matching networks for drug discovery, a problem where data is limited.",
"The 2007 shared task on coding radiology reports (Pestian et al., 2007) was the first effort that popularized automated EMR coding.", "Traditionally, linear methods have been used for diagnosis code prediction.", "Perotte et al. (2013) developed a hierarchical support vector machine (SVM) model that takes advantage of the ICD-9-CM hierarchy.", "In our prior work, we train a linear model for every label (Rios and Kavuluru, 2013) and re-rank the labels using a learning-to-rank procedure (Kavuluru et al., 2015).", "Zhang et al. (2017) supplement the diagnosis code training data with data from PubMed (biomedical article corpus and search system) to train linear models using both the original training data and the PubMed data.", "Recent advances in neural networks have also been put to use for EMR coding: Baumel et al. (2018) trained a CNN with multiple sigmoid outputs using binary cross-entropy.", "Duarte et al. (2017) use hierarchical recurrent neural networks (RNNs) to annotate death reports with ICD-10 codes.", "Vani et al. (2017) introduced grounded RNNs for EMR coding.", "They found that iteratively updating their predictions at each time step significantly improved the performance.", "Finally, similar to our work, memory networks (Prakash et al., 2017) have recently been used for diagnosis coding.", "However, we would like to note two significant differences between the memory network from Prakash et al. (2017) and our model.", "First, they don't use a matching network and their memories rely on extracting information about each label from Wikipedia.", "In contrast, our model does not use any auxiliary information.", "Second, they only evaluate on the 50 most frequent labels, while we evaluate on all the labels in the dataset.", "An overview of our model is shown in Figure 1.", "Figure 1: Model overview — the input document x and each support instance s_k are encoded by a shared CNN g(·); example ICD-9 codes shown include V65.1, 363.3, 433.1, and 521.2.", "Our model architecture has two main components.", "1. We augment a CNN with external memory over a support set S, which consists of a small subset of the training dataset.", "The model searches the support set to find similar examples with respect to the input instance.", "We make use of the homophily assumption that similar instances in the support set are coded with similar labels.", "Therefore, we use the related support set examples as auxiliary features.", "The similar instances are chosen automatically by combining ideas from metric learning and neural attention.", "We emphasize that unlike in a traditional k-NN setup, we do NOT explicitly use the labels of the support set instances.", "The support set essentially enriches and complements the features derived from the input instance.",
"2. Rather than predicting labels by thresholding, we rank them and select the top k labels specific to each instance, where k is predicted using an additional output unit (termed MetaLabeler).", "We train the MetaLabeler along with the classification loss using a multi-task training scheme.", "Before we go into more specific details of our architecture, we introduce some notation.", "Let X represent the set of all training documents and x be an instance of X.", "Likewise, let S represent the set of support instances and s be an instance of S.", "We let L be the total number of unique labels.", "Our full model is described in the following subsections.", "We use a CNN to encode each document following what is now a fairly standard approach consisting of an embedding layer, a convolution layer, a max-pooling layer, and an output layer (Collobert et al., 2011; Kim, 2014).", "However, in our architecture, the CNN additionally aids in getting intermediate representations for the multi-head matching network component (Section 3.2).", "Intuitively, CNNs make use of the sequential nature of text, where a non-linear function is applied to region vectors formed from vectors of words in short adjacent word sequences.", "Formally, we represent each document as a sequence of word vectors, [w_1, w_2, ..., w_n], where w_i ∈ R^d represents the vector of the i-th word in the document.", "The region vectors are formed by concatenating each window of s words, w_{i-s+1} || ... || w_i, into a local region vector c_j ∈ R^{sd}.", "Next, c_j is passed to a non-linear function ĉ_j = ReLU(W c_j + b), where W ∈ R^{v×sd}, b ∈ R^v, and ReLU is a rectified linear unit (Glorot et al., 2011; Nair and Hinton, 2010).", "Each row of W represents a convolutional filter, so v is the total number of filters.", "After processing each successive region vector, we obtain a document representation D = [ĉ_1, ĉ_2, ..., ĉ_{n+s-1}] by concatenating each ĉ_j, forming a matrix D ∈ R^{v×(n+s-1)}.", "Each row of D is referred to as a feature map, formed by different convolutional filters.", "Unfortunately, this representation is dependent on the length of the document and we cannot pass it to an output layer.", "We use max-over-time pooling to create a fixed-size vector g(x) = [c_max^1, c_max^2, ..., c_max^q], where c_max^j = max(ĉ_{j,1}, ĉ_{j,2}, ..., ĉ_{j,n+s-1}).",
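A hedged PyTorch sketch of this encoder is shown below, using the filter sizes (3, 4, 5) and 300 feature maps reported later in Section 4.2; for simplicity it uses narrow (unpadded) convolutions, whereas the notation above corresponds to wide convolution with n+s-1 regions.

    import torch
    import torch.nn as nn

    class DocCNN(nn.Module):
        def __init__(self, vocab_size, emb_dim=300, n_filters=300, sizes=(3, 4, 5)):
            super().__init__()
            self.emb = nn.Embedding(vocab_size, emb_dim)
            # One Conv1d per window size s; each applies the affine map + ReLU
            # described above to every window of s word vectors.
            self.convs = nn.ModuleList(
                nn.Conv1d(emb_dim, n_filters, s) for s in sizes)

        def forward(self, x):                      # x: (batch, n_words)
            e = self.emb(x).transpose(1, 2)        # (batch, emb_dim, n_words)
            # ReLU feature maps, then max-over-time pooling per filter.
            pooled = [torch.relu(c(e)).max(dim=2).values for c in self.convs]
            return torch.cat(pooled, dim=1)        # g(x): (batch, 900)

    enc = DocCNN(vocab_size=5000)
    print(enc(torch.randint(0, 5000, (2, 50))).shape)  # torch.Size([2, 900])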
"Using the support set and the input instance, our goal is to estimate P(y | x, S).", "The support set S is chosen based on nearest neighbors and its selection process is discussed in Section 3.4.", "Among instances in S, our model finds informative support instances with respect to x and creates a feature vector using them.", "This feature vector is combined with the input instance to make predictions.", "First, each support instance s_k ∈ S is projected into the support space using a simple single-layer feed-forward NN as h(g(s_k)) = ReLU(W_s g(s_k) + b_s), where W_s ∈ R^{z×v} and b_s ∈ R^z.", "Likewise, we project each input instance x into the input space using a different feed-forward neural network, p_i(g(x)) = ReLU(W_i g(x) + b_i), where W_i ∈ R^{z×v} and b_i ∈ R^z.", "Compared to the support set neural network, where we use only a single network, for the input instance we have u projection neural networks.", "This means we have u versions of x, an idea that is similar to self-attention (Lin et al., 2017), where the model learns multiple representations of an instance.", "Here each p_i(g(x)) represents a single head or representation of the input x.", "Using different weight matrices, [W_1, ..., W_u] and [b_1, ..., b_u], we create different representations of x (multiple heads).", "For both the input multi-heads and the support instance projection, we note that the same CNN is used (also indicated in Figure 1), whose output is subject to the feed-forward neural nets outlined thus far in this section.", "Rather than searching for a single informative support instance, we search for multiple relevant support instances.", "For each of the u input instance representations, we calculate a normalized attention score A_{i,k} = exp(-d(p_i(g(x)), h(g(s_k)))) / Σ_{s_k' ∈ S} [exp(-d(p_i(g(x)), h(g(s_k'))))], where A_{i,k} represents the score of the k-th support example with respect to the i-th input representation p_i(g(x)), and d(x_i, x_j) = ||x_i − x_j||_2^2 is the square of the Euclidean distance between the input and support representations.", "Next, the normalized scores are aggregated into a matrix A ∈ R^{u×|S|}.", "Then, we create a feature vector q = vec(A S̄)  (1), where q ∈ R^{uz}, vec is the matrix vectorization operator, and S̄ ∈ R^{|S|×z} is the support instance CNN feature matrix whose i-th row is h(g(s_i)) for i = 1, ..., |S|.",
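The multi-head matching component defined above can be sketched as follows; the dimensions (v = 900 from the CNN, z, u) are illustrative, and negating the squared distance inside the softmax reflects our reading of the garbled formula, since closer support instances should score higher.

    import torch
    import torch.nn as nn

    class MatchingAttention(nn.Module):
        def __init__(self, v=900, z=256, u=8):
            super().__init__()
            self.support_proj = nn.Sequential(nn.Linear(v, z), nn.ReLU())  # h(.)
            self.heads = nn.ModuleList(                                   # p_i(.)
                nn.Sequential(nn.Linear(v, z), nn.ReLU()) for _ in range(u))

        def forward(self, g_x, g_support):            # (batch, v), (|S|, v)
            s_bar = self.support_proj(g_support)      # (|S|, z)
            feats = []
            for head in self.heads:
                p = head(g_x)                          # (batch, z)
                d = torch.cdist(p, s_bar) ** 2         # squared Euclidean distances
                a = torch.softmax(-d, dim=1)           # attention row A_i
                feats.append(a @ s_bar)                # weighted average of supports
            q = torch.cat(feats, dim=1)                # (batch, u*z) = vec(A S_bar)
            return torch.cat([q, g_x], dim=1)          # h = q || g(x)

    m = MatchingAttention()
    print(m(torch.randn(2, 900), torch.randn(16, 900)).shape)  # (2, 8*256 + 900)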
"Intuitively, multiple weighted averages of the support instances are created, one for each of the u input representations.", "The final feature vector, h = q || g(x)  (2), is formed by concatenating the CNN representation of the input instance x and the support set feature vector q.", "Finally, the output layer for L labels involves computing ŷ = P(y | x, S) = σ(W_c h + b_c)  (3), where W_c ∈ R^{L×(uz+v)}, b_c ∈ R^L, and σ is the sigmoid function.", "Because we use a sigmoid activation function, each label prediction ŷ_i is in the range from 0 to 1.", "The easiest method to convert ŷ into label predictions is to simply threshold each element at 0.5.", "However, most large-scale multi-label problems are highly imbalanced.", "When training using binary cross-entropy, the threshold 0.5 is optimized for accuracy.", "Therefore, our predictions will be biased towards 0.", "A simple way to fix this problem is to optimize the threshold value for each label.", "Unfortunately, searching for the optimal threshold of each label is computationally expensive in large label spaces.", "Here we train a regression-based output layer r̂ = ReLU(W_r g(x) + b_r), where r̂ estimates the number of labels x should be annotated with.", "At test time, we rank each label by its score in ŷ.", "Next, r̂ is rounded to the nearest integer and we predict the top r̂ ranked labels.", "To train our model, we need to define two loss functions.", "First, following recent work on multi-label classification with neural networks (Nam et al., 2014b), we train using a multi-label cross-entropy loss.", "The loss is defined as L_c = −Σ_{i=1}^{L} [y_i log(ŷ_i) + (1 − y_i) log(1 − ŷ_i)], which sums the binary cross-entropy loss over every label.", "The second loss is used to train the MetaLabeler, for which we use the mean squared error L_r = ||r − r̂||_2^2, where r is the vector of correct numbers of labels and r̂ is our estimate.", "We train these two losses using a multi-task learning paradigm (Collobert et al., 2011).",
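The two objectives can be sketched jointly as below; summing the two losses without weights is an assumption, as the text only states that they are trained in a multi-task scheme.

    import torch
    import torch.nn.functional as F

    def multitask_loss(logits, y_true, r_pred, r_true):
        # L_c: summed binary cross-entropy over all L labels.
        l_c = F.binary_cross_entropy_with_logits(logits, y_true, reduction="sum")
        # L_r: mean squared error on the number of labels per instance (MetaLabeler).
        l_r = F.mse_loss(r_pred, r_true)
        return l_c + l_r  # assumed unweighted combination

    def predict(logits, r_pred):
        # Round the MetaLabeler output and keep the top-r ranked labels.
        k = int(torch.clamp(r_pred.round(), min=1).item())
        return torch.topk(logits, k).indices

    logits, y = torch.randn(1, 7042), torch.zeros(1, 7042)
    y[0, :20] = 1.0
    print(multitask_loss(logits, y, torch.tensor([19.6]), torch.tensor([20.0])))
    print(predict(logits[0], torch.tensor(5.2)))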
we use a single head.", "Traditional matching networks use one-hot encoded vectors because they are evaluated on multi-class problems.", "EMR coding is a multi-label problem.", "Hence, $y_{s_k}$ is a multi-hot encoded vector.", "Moreover, with thousands of labels, it is unlikely even for neighboring instance pairs to share many labels; this problem is not encountered in the multi-class setting.", "We overcome this issue by learning new output label vectors for each support set instance.", "Assuming a single head, our method can be re-written as $\hat{y} = \sigma\big( W_c^1 g(x) + b_c + \sum_{s_k \in S} a(x, s_k) \hat{y}_{s_k} \big)$ (4), where $\hat{y}_{s_k}$ is the learned label vector for support instance $s_k$.", "Next, we define $\hat{y}_{s_k}$, the learned support set vectors, as $\hat{y}_{s_k} = W_c^2 h(g(s_k))$ (5), where both $W_c^1$ and $W_c^2$ are submatrices of $W_c$.", "Using this reformulation, we can now see that our method's main components (equations (1)-(3)) are equivalent to this more explicit matching network formulation (equations (4)-(5)).", "Intuitively, our method combines a traditional output layer (the first half of equation (4)) with a matching network where the support set label vectors are learned to better match the labels of their nearest neighbors.", "In this section, we compare our work with prior state-of-the-art medical coding methods.", "In Section 4.1 we discuss the two publicly available datasets we use.", "Next, Section 4.2 describes the implementation details of our model.", "We summarize the various baselines and models we compare against in Section 4.3.", "The evaluation metrics are described in Section 4.4.", "Finally, we discuss how our method performs in Section 4.5.", "EMR data is generally not available for public use, especially if it involves textual notes.", "Therefore, we focus on the publicly available Medical Information Mart for Intensive Care (MIMIC) datasets for benchmarking purposes.", "We evaluate using two versions of MIMIC: MIMIC II (Lee et al., 2011) and MIMIC III (Johnson et al., 2016), where the former is a relatively smaller and older dataset and the latter is the most recent version.", "Table 1: Dataset statistics, listing the number of training examples (# Train), the number of test examples (# Test), the number of labels (# Labels), label cardinality (LC), and the average number of instances per label (AI/L). MIMIC II: 18,822 train, 2,282 test, 7,042 labels, LC 36.7, AI/L 118.8. MIMIC III: 37,016 train, 2,755 test, 6,932 labels, LC 13.6, AI/L 80.8.", "Following prior work (Perotte et al., 2013; Vani et al., 2017), we use the free-text discharge summaries in MIMIC to predict the ICD-9-CM codes.", "The dataset statistics are shown in Table 1.", "For comparison purposes, we use the same MIMIC II train/test splits as Perotte et al. (2013).", "Specifically, we use discharge reports collected from 2001 to 2008 from the intensive care unit (ICU) of the Beth Israel Deaconess Medical Center.", "Following Perotte et al.
(2013), the labels for each discharge summary are extended using the parent of each label in the label set.", "The parents are based on the ICD-9-CM hierarchy.", "We use the hierarchical label expansion to maximize the prior work we can compare against.", "The MIMIC III dataset has been extended to include health records of patients admitted to the Beth Israel Deaconess Medical Center from 2001 to 2012 and hence provides a test bed for more advanced learning methods.", "Unfortunately, it does not have a standard train/test split for comparison against prior work, given that we believe we are the first to use it for this purpose.", "Hence, we use both MIMIC II and MIMIC III for comparison purposes.", "Furthermore, we do not use the hierarchical label expansion on the MIMIC III dataset.", "Before we present our results, we discuss an essential distinction between the MIMIC II and MIMIC III datasets.", "Particularly, we are interested in the differences concerning label imbalance.", "From Table 1, we find that MIMIC III has almost twice as many examples as MIMIC II.", "However, MIMIC II on average has more instances per label.", "Thus, although MIMIC III has more examples, each label is used fewer times on average compared to MIMIC II.", "(In 2015, a federal mandate was issued that requires the use of ICD-10 instead of ICD-9.", "However, because of this recent change, ICD-10 training data is limited.", "Therefore, we use publicly available ICD-9 datasets for evaluation.)", "Preprocessing: Each discharge summary was tokenized using a simple regex tokenization scheme (\w\w+).", "Also, each word/token that occurs fewer than five times in the training dataset was replaced with the UNK token.", "Model Details: For our CNN, we used convolution filters of sizes 3, 4, and 5 with 300 filters for each filter size.", "We used 300-dimensional skip-gram (Mikolov et al., 2013) word embeddings pre-trained on PubMed.", "The Adam optimizer (Kingma and Ba, 2015) was used for training with a learning rate of 0.0001.", "The mini-batch size was set to 4, the number of nearest neighbors per instance ($e$) was set to 16, and the number of heads ($u$) was set to 8.", "Our code is available at: https://github.com/bionlproc/med-match-cnn", "4.3 Baseline Methods", "In this paper, we focused on comparing our method to state-of-the-art methods for diagnosis code prediction, such as grounded recurrent neural networks (GRNN) (Vani et al., 2017) and multi-label CNNs (Baumel et al., 2018).", "We also compare against traditional binary relevance methods, where independent binary classifiers (L1-regularized linear models) are trained for each label.", "Next, we compare against hierarchical SVM (Perotte et al., 2013), which incorporates the ICD-9-CM label hierarchy.", "Finally, we also report the results of the traditional matching network with one modification: we train the matching network with the multi-label loss presented in Section 3.4 and threshold using the MetaLabeler described in Section 3.3.", "We also present two versions of our model: Match-CNN and Match-CNN-Ens.", "Match-CNN is the multi-head matching network introduced in Section 3.", "
Match-CNN-Ens is an ensemble that averages three Match-CNN models, each initialized using a different random seed.", "We evaluate our method using a wide variety of standard multi-label evaluation metrics.", "We use the popular micro- and macro-averaged F1 measures to assess how our model (with the MetaLabeler) performs when thresholding predictions.", "Table 2: Results on MIMIC II, reporting F1, AUC (PR), AUC (ROC), P@k, R@k, and precision.", "For problems with large label spaces that suffer from significant imbalances in label distributions, the default threshold of 0.5 generally performs poorly (hence our use of the MetaLabeler).", "To remove the thresholding effect bias, we also report different versions of the area under the precision-recall (PR) and receiver operating characteristic (ROC) curves.", "Finally, in a real-world setting, our system would not be expected to replace medical coders.", "We would expect medical coders to use our system to become more efficient in coding EMRs.", "Therefore, we would rank the labels based on model confidence, and medical coders would choose the correct labels from the top few.", "To understand if our system would be useful in a real-world setting, we evaluate with precision at k (P@k) and recall at k (R@k).", "Having high P@k and R@k is critical to effectively encourage the human coders to use and benefit from the system.", "We show experimental results on MIMIC II in Table 2.", "Overall, our method improves on prior work across a variety of metrics.", "With respect to micro F1, we improve upon GRNN-128 by over 1%.", "Also, while macro F1 is still low in general, we improve macro F1 compared to state-of-the-art neural methods by more than 1%.", "In general, both micro and macro F1 are highly dependent on the thresholding methodology.", "Rather than thresholding at 0.5, we rank the labels and pick the top k based on a trained regression output layer.", "Can we do better than using a MetaLabeler?", "To measure this, we look at the areas under the PR/ROC curves.", "Regarding micro and macro PR-AUC, we improve on prior work by 2.5%.", "This suggests that, via better thresholding, the chances of improving both micro and macro F1 are higher for Match-CNN compared to other methods.", "Finally, we are also interested in metrics that evaluate how this model would be used in practice.", "We perform comparably with prior work on P@k.", "We show strong improvements in R@k, with over a 4% improvement in R@40 compared to grounded RNNs and over a 1% improvement when compared with Baumel et al. (2018).", "Our method also outperforms matching networks across every evaluation measure.", "We present MIMIC III results in Table 3.", "
We reiterate that MIMIC III does not have a standard train/test split.", "Hence we compare our model to our implementations of methods from prior efforts.", "Table 4: Ablation results for the MIMIC III dataset, reporting micro F1, macro F1, P@8, P@40, R@8, R@40, micro AUC (PR), and macro AUC (PR). Match-CNN: 0.456, 0.041, 0.557, 0.206, 0.413, 0.670, 0.421, 0.119. Matching: 0.429, 0.034, 0.534, 0.196, 0.395, 0.636, 0.376, 0.095. MetaLabeler: 0.391, 0.026, 0.557, 0.206, 0.413, 0.670, 0.421, 0.119. Multi-Head: 0.450, 0.034, 0.548, 0.202, 0.403, 0.656, 0.417, 0.113.", "For MIMIC III, we also show improvements in multiple evaluation metrics.", "Interestingly, our method performs much better than the standard CNN on MIMIC III, compared to the relative performances of the two methods on MIMIC II.", "Match-CNN improves on the CNN in R@40 by almost 5% on the MIMIC III dataset.", "The gain in R@40 is more than the 1% improvement found on MIMIC II.", "We hypothesize that the improvements on MIMIC III are because the label imbalance found in MIMIC III is higher than in MIMIC II.", "Increased label imbalance means that more labels occur only rarely.", "Therefore, we believe our model works better with fewer training examples per label compared to the standard CNN model.", "In Table 4, we analyze each component of our model using an ablation analysis on the MIMIC III dataset.", "First, we find that removing the matching component significantly affects our performance, reducing micro PR-AUC by almost 5%.", "Regarding micro and macro F1, we also notice that the MetaLabeler heuristic substantially improves on default thresholding (0.5).", "Finally, we see that the multi-head matching component provides reasonable improvements to our model across multiple evaluation measures.", "For example, P@8 and P@40 decrease by around 1% when we use attention with a single input representation.", "In this paper, we introduce a semi-parametric multi-head matching network with a specific application to EMR coding.", "We find that by combining the non-parametric properties of matching networks with a traditional classification output layer, we improve metrics for both frequent and infrequent labels in the dataset.", "In the future, we plan to investigate three limitations of our current model.", "1. Improving the support set sampling method could substantially improve performance.", "2. We hypothesize that a more sophisticated thresholding method could have a significant impact on the micro and macro F1 measures.", "As we show in Table 4, the MetaLabeler outperforms naive thresholding strategies.", "However, given that our method shows non-trivial gains in PR-AUC compared to micro/macro F1, we believe better thresholding strategies are a worthy avenue to seek improvements.", "3. 
Both the MIMIC II and MIMIC III datasets have around 7,000 labels, but ICD-9-CM contains over 16,000 labels and ICD-10-CM has nearly 70,000 labels.", "In future work, we believe significant attention should be given to zero-shot learning applied to EMR coding.", "To predict labels that have never occurred in the training dataset, we think it is vital to take advantage of the ICD hierarchy.", "Baker and Korhonen (2017) improve neural network training by incorporating hierarchical label information to create better weight initializations.", "However, this does not help with respect to zero-shot learning.", "If we can better incorporate expert knowledge about the label space, we may be able to infer labels we have not seen before.", "We thank the anonymous reviewers for their thorough reviews and constructive criticism, which helped improve the clarity of the paper (especially leading to the addition of Section 3.5 in the revision).", "This research is supported by the U.S. National Library of Medicine through grant R21LM012274.", "We also gratefully acknowledge the support of the NVIDIA Corporation for its donation of the Titan X Pascal GPU used for this research." ]
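The multi-head matching computation in equations (1)-(3) above can be made concrete with a short sketch. The following PyTorch snippet is a minimal illustration, not the released med-match-cnn code: the class name, the toy dimensions, and the use of a negative squared Euclidean distance inside the softmax are assumptions read off the surrounding text.

```python
import torch
import torch.nn.functional as F

class MultiHeadMatcher(torch.nn.Module):
    """Sketch of the multi-head matching layer: u input heads attend over
    projected support instances, and the attended features are concatenated
    with the input CNN features before a sigmoid output layer."""

    def __init__(self, v: int, z: int, u: int, num_labels: int):
        super().__init__()
        self.support_proj = torch.nn.Linear(v, z)  # h(g(s_k))
        self.input_projs = torch.nn.ModuleList(
            [torch.nn.Linear(v, z) for _ in range(u)])  # p_i(g(x))
        self.out = torch.nn.Linear(u * z + v, num_labels)  # W_c, b_c

    def forward(self, gx: torch.Tensor, gS: torch.Tensor) -> torch.Tensor:
        # gx: (v,) CNN features of the input; gS: (|S|, v) support features
        S_bar = F.relu(self.support_proj(gS))  # (|S|, z)
        heads = torch.stack([F.relu(p(gx)) for p in self.input_projs])  # (u, z)
        d = torch.cdist(heads, S_bar) ** 2  # squared Euclidean distances, (u, |S|)
        A = F.softmax(-d, dim=1)  # attention scores A_{i,k}; near neighbors score high
        q = (A @ S_bar).reshape(-1)  # vec(A S_bar), shape (u * z,)
        h = torch.cat([q, gx])  # q || g(x), eq. (2)
        return torch.sigmoid(self.out(h))  # estimate of P(y | x, S), eq. (3)

# Toy usage with random features standing in for the CNN outputs g(.)
model = MultiHeadMatcher(v=32, z=16, u=8, num_labels=50)
y_hat = model(torch.randn(32), torch.randn(20, 32))
```

For mini-batch training, `gx` would gain a batch dimension and the distance and attention steps would be batched accordingly.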
[ "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "method", "objective", "method", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "method", "abstain", "other", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "objective", "abstain", "objective", "result", "result", "abstain", "result", "method", "abstain", "abstain", "result", "other", "other", "other" ]
[ "The embedding-based large-scale query-document retrieval problem is a hot topic in the information retrieval (IR) field.", "Considering that pre-trained language models like BERT have achieved great success in a wide variety of NLP tasks, we present a QuadrupletBERT model for effective and efficient retrieval in this paper.", "Unlike most existing BERT-style retrieval models, which only focus on the ranking phase in retrieval systems, our model makes considerable improvements to the retrieval phase and leverages the distances between simple negative and hard negative instances to obtaining better embeddings.", "Experimental results demonstrate that our QuadrupletBERT achieves state-of-the-art results in embedding-based large-scale retrieval tasks.", "Large-scale retrieval systems such as search engines have been a vital tool to help people access the massive amount of online information.", "Various techniques have been developed to improve retrieval quality in the last decades.", "Due to the difficulty of computing search intent from the query text and accurately representing the semantic meaning of document requirements, most previous studies are based on classic term-weighting methods such as BM-25 (Robert-son and Zaragoza, 2009) or TF-IDF (Sprck Jones, 1972, 2004) or simple context-free word embedding (Mikolov et al., 2013) that perform well for the cases that keyword matching can address.", "However, these models only accept sparse handcrafted features and cannot capture complex semantic features.", "Considering that pre-trained language models like BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019) have achieved great success in a wide Corresponding author InvertedIndex Retriever Query Top K Results Ranker Indexing Fine-tunedBERT Result Retrieval Phase Ranking Phase Figure 1: The architecture of large-scale retrieval systems.", "variety of NLP tasks, more and more researchers propose BERT-style models to solve large-scale retrieval problems.", "Some previous work has confirmed the effectiveness of BERT for enhancing retrieval systems.", "For example, Yilmaz et al. (2019) apply a BERT-style model to document retrieval via integration with the open-source anserini information retrieval toolkit to demonstrate end-to-end search over large document collections.", "Yang et al. (2019) build a BERT-based reader to identify answers from a large corpus of Wikipedia articles in an end-to-end fashion.", "Padaki et al. 
(2020) use query expansion to generate better queries for BERT-based Rankers in retrieval.", "Mass and Roitman (2020) describe a weakly-supervised method for training BERT-style models for ad hoc document retrieval.", "In BERT, the prediction function $f(query, doc)$ is a pre-trained deep bidirectional Transformer model (Vaswani et al., 2017).", "While the above BERT-style models are very successful, this approach cannot be directly applied to large-scale retrieval problems because predicting $f$ for every possible document can be prohibitively expensive.", "Thus, the methods mentioned above first use a less powerful but more efficient retrieval algorithm (Retriever), such as an inverted index, to reduce the solution space and then use the BERT-style model to re-rank the retrieved documents.", "As shown in Figure 1, we refer to all such BERT-style retrieval models as Rankers.", "Unlike these Rankers, which have recently seen significant advances, constructing a BERT-style Retriever is a new topic in the large-scale retrieval field, on which few studies have thus far focused.", "For example, Reimers and Gurevych (2019) present a modification of the pre-trained BERT network that uses siamese and triplet network structures to derive semantically meaningful sentence embeddings that can be compared using cosine similarity.", "Chang et al. (2020) build a two-tower Transformer model with more pre-training data, which can significantly outperform the widely used BM25 algorithm.", "Lu et al. (2020) distill knowledge from BERT into a two-tower architecture network for efficient retrieval.", "As shown in Figure 2 (a) and (b), the existing BERT-style Retrievers mentioned above simply build a two- or three-tower network structure to compute distances between positive and negative instances, which ignores the fact that there are not only simple negative instances in the dataset: some instances are seemingly positive but actually negative, which we call hard negative instances.", "The Retriever should have high recall; otherwise, many positive instances will not even be considered in the ranking phase.", "However, because hard negative instances are lexically related to the query, treating them the same as simple negative instances may harm the embeddings of positive instances and lead the model to mistakenly identify positive instances as negative ones.", "The key to solving the problem mentioned above is incorporating the distances between hard negative and simple negative instances into the training step.", "Our intuition is that hard negative instances are negative compared to positive instances but should be considered positive compared to simple negative instances.", "Therefore, we explore a new way to incorporate distances between hard negative and simple negative instances into the training process and build a four-tower BERT-style model named QuadrupletBERT.", "We have evaluated our model on two Retrieval Question-Answering (ReQA) benchmarks.", "Experimental results show that our model registers large improvements over existing BERT-style Retriever models and achieves state-of-the-art results.", "Our main contributions are as follows: 1. We propose a new four-tower BERT-style model named QuadrupletBERT, which is very easy to use and improves substantially over existing BERT-style Retriever models.", "2. 
We find that leveraging distances between hard negative and simple negative instances in the training process helps improve the Retriever model.", "Large-scale retrieval problems can be defined as follows: given a query, return the most relevant documents from a large corpus, where the size of the corpus can be hundreds of thousands or more.", "The embedding-based retrieval model jointly embeds queries and documents in the same embedding space and uses an inner product or cosine distance to measure the similarity between queries and documents.", "Since embeddings of all candidate documents can be precomputed and indexed, inference can be made efficient with approximate nearest neighbor search algorithms in the embedding space (Shrivastava and Li, 2014; Guo et al., 2016).", "Let the query embedding model be $\phi(\cdot)$ and the document embedding model be $\psi(\cdot)$.", "The distance function can be defined as: $f(query, doc) = \langle \phi(query), \psi(doc) \rangle$ (1).", "In this paper, we are interested in parameterizing the encoders $\phi$ and $\psi$ as a four-tower BERT which incorporates the distances between hard negative and simple negative instances into the training step.", "As shown in Figure 2 (c), the core of our model is a four-tower sentence-level BERT relevance encoder.", "Each tower of our retrieval model follows the architecture and hyper-parameters of the 12-layer BERT model (https://github.com/google-research/bert).", "Note that we pre-train all BERT baselines on the specific downstream datasets with the Masked LM and Next Sentence Prediction tasks (Devlin et al., 2019).", "The embedding dimension is 768.", "The sequence length for the encoder is set to 64.", "For all towers, we take the average of the encoding layer's hidden states along the time axis as the final embedding.", "One unique advantage of the multi-tower retrieval model compared with classic IR algorithms is the ability to train it for specific tasks.", "In this paper, our training data can be defined as quadruplet query-document pairs: $\mathcal{X} = \{(q_i, p_i, n_i, hn_i)\}_{i=1}^{|\mathcal{X}|}$ (2), where $q$, $p$, $n$, and $hn$ represent the query, positive document, negative document, and hard negative document, respectively.", "We estimate the model parameters by minimizing the following loss function: $loss = \sum_{i=1}^{|\mathcal{X}|} \max(loss_i^h, loss_i^n)$, with $loss_i^h = \max(d_i^p - d_i^h + m, 0)$, $loss_i^n = \max(d_i^p - d_i^n + m, 0)$, $d_i^p = f(q_i, p_i)$, $d_i^n = f(q_i, n_i)$, and $d_i^h = f(hn_i, n_i)$ (3), where $d_i^p$ is the distance between $q_i$ and $p_i$, $d_i^n$ is the distance between $q_i$ and $n_i$, and $d_i^h$ is the distance between $hn_i$ and $n_i$.", "This loss function is constructed from two parts, where both $loss_i^h$ and $loss_i^n$ aim to minimize $d_i^p$.", "In addition, $loss_i^h$ aims to maximize $d_i^h$, and $loss_i^n$ aims to maximize $d_i^n$.", "$m$ is the margin enforced between positive, negative, and hard negative documents.", "This loss function's intuition is to cluster the query and positive documents and to separate the positive and hard negative documents from the negative documents by a distance margin.", "The distance function $f$ we select is cosine distance, which can be defined as follows: $f(X, Y) = 1 - \frac{X \cdot Y}{\|X\| \|Y\|}$ (4).", "3.2 Inference", "First, we pre-compute all the document embeddings.", "Then, given an unseen query $q$, we only need to rank the documents based on their cosine distance to the query embedding.", "To make our QuadrupletBERT applicable in resource-restricted and time-sensitive systems, such as query understanding in search engines (Nakamura et al., 2019), we
deployed an inverted-index-based ANN (approximate nearest neighbor) search algorithm in our model.", "We employed the Faiss library (Johnson et al., 2017) to quantize the vectors and then implemented efficient embedding search in our model.", "We consider the Retrieval Question-Answering (ReQA) benchmark proposed by Ahmad et al. (2019).", "The two QA datasets we consider are SQuAD and Natural Questions.", "Note that each entry of the QA datasets is a tuple $(q, a, e)$, where $q$ is the question, $a$ is the answer span, and $e$ is the evidence passage containing $a$.", "Following Ahmad et al. (2019), we split a passage into sentences $e = s_1 s_2 \ldots s_n$ and transform the original entry into a new tuple $(q, s_i)$.", "The retrieval phase is different from the ranking phase of large-scale retrieval: given a question $q$, retrieve the correct sentence $s$ from all candidates.", "For each evidence passage $e$, we create a set of candidate sentences $s_i$, and the retrieval candidate set is built by combining such sentences for all passages.", "To construct our training quadruplet pairs $(q_i, p_i, n_i, hn_i)$, for a specific question $q_i$ we define the gold sentence containing $a_i$ as $p_i$ and randomly select a sentence not containing $a_i$ as $n_i$.", "We first train our model with $loss_i^h = 0$ until the loss converges.", "Then we use the trained model to mine the hard negative sentences $hn_i$.", "Table 1: Recall@k on the two datasets, where Three-T Emb represents the three-tower word embedding retrieval method (Huang et al., 2020) and Three-T BERT represents the three-tower Sentence-BERT (Reimers and Gurevych, 2019). Columns are R@1, R@10, R@50, and R@100. SQuAD, 5%/95% split: Three-T Emb 1.02, 3.41, 7.05, 9.34; Three-T BERT 1.13, 5.28, 12.14, 17.08; QuadrupletBERT 6.28, 9.59, 16.41, 21.62. SQuAD, 80%/20% split: Three-T Emb 18.25, 41.08, 61.39, 68.41; Three-T BERT 21.04, 43.29, 64.17, 71.79; QuadrupletBERT 28.15, 59.64, 75.39, 81.11. Natural Questions, 5%/95% split: Three-T Emb 0.26, 1.04, 1.99, 2.53; Three-T BERT 0.39, 1.92, 2.98, 3.08; QuadrupletBERT 3.11, 5.76, 7.84, 9.19. Natural Questions, 80%/20% split: Three-T Emb 9.59, 33.94, 50.21, 55.18; Three-T BERT 16.88, 41.27, 59.28, 65.56; QuadrupletBERT 19.84, 50.33, 68.82, 74.83.", "For each dataset, we consider different training/test splits of the data (5%/95% and 80%/20%) in the fine-tuning stage, and 10% of the training set is held out as a validation set for hyper-parameter tuning.", "The split is created assuming a cold-start retrieval scenario where the queries in the test (query, document) pairs are not seen in training.", "We compare our method against two well-known embedding-based large-scale retrieval baselines: (1) the recent three-tower word embedding retrieval method proposed for Facebook Search (Huang et al., 2020); (2) the state-of-the-art three-tower Sentence-BERT proposed by Reimers and Gurevych (2019).", "Since the goal of the Retriever is to capture the positives in the top-k results, we select Recall@k as the evaluation metric.", "The following equation computes Recall@k: $\mathrm{Recall}@k = \frac{1}{|D|} \sum_{x_i \in D} \frac{\sum_{y_i \in R_k} l_{\langle x_i, y_i \rangle}}{\sum_{y_i \in D} l_{\langle x_i, y_i \rangle}}$ (5), where $R_k$ is the top-k results recalled by our model.", "$D$ is the dataset.", "$x_i$ and $y_i$ are the $i$-th question and the $i$-th answer, respectively.", "1. The results of both Sentence-BERT and our QuadrupletBERT surpass those of the three-tower word embedding method, which confirms the effectiveness of BERT-style retrieval models.", "2. 
Our four-tower QuadrupletBERT models gain improvements over the three-tower BERT.", "It is worth noting that the only difference between them is that our model leverages distances between hard negative and simple negative instances in the training process via an extra tower, which verifies our assumption.", "3. Our QuadrupletBERT models surpass all the baseline models in all tasks, which verifies our method's effectiveness again.", "In particular, the results on cold-start retrieval (5%/95% training/test split) tasks demonstrate that our model maintains its improvements even in data-scarce scenarios.", "The experimental results in this paper are statistically significant with $p < 0.05$.", "As a key hyper-parameter of our QuadrupletBERT model, $m$ denotes the margin enforced between positive, hard negative, and negative instances.", "We further investigated the influence of $m$ on our model.", "With the SQuAD and Natural Questions datasets, we train models with $m$ set to 0, 0.1, 0.2, 1, 1.5, and 2, respectively.", "The experimental results are shown in Table 2.", "We found that tuning the margin value is important: the optimal margin varies a lot across different training tasks, and different margin values result in a 5-10% recall variance.", "We have covered research on embedding-based large-scale retrieval in Section 1; related work that inspired our technical design is introduced in the following:", "Reimers and Gurevych (2019) present a modification of the pre-trained BERT network that uses multi-tower network structures to derive semantically meaningful sentence embeddings that can be compared using cosine similarity.", "Huang et al. (2020) present a multi-tower word embedding retrieval method successfully applied in Facebook's online search.", "In addition, they mention that shuffling hard negative and simple negative instances in the training sets may help model learning, which inspired us to further investigate the effectiveness of hard negative instances.", "We have presented our four-tower QuadrupletBERT model and demonstrated its usage and effectiveness on large-scale retrieval.", "Unlike many widely used BERT-style Ranker models for large-scale retrieval tasks, our model focuses on the retrieval phase.", "The multi-tower architecture makes it extremely easy to apply in retrieval systems.", "Moreover, incorporating distances between hard negative and simple negative instances into the training process shows significant benefits for improving Retriever model performance.", "We hope our work can inspire more sophisticated techniques for leveraging BERT-style models in large-scale retrieval.", "Leveraging hard negative instances for other natural language processing tasks, such as text generation and information extraction, is also worth investigating.", "This research was supported by the National Key Research and Development Program of China (No. 2019YFB1405802), the 2019 Industrial Internet Innovation and Development Project No. TC190H46G/1, and the central government guided local science and technology development fund projects (science and technology innovation base projects) No. 206Z0302G." ]
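Equations (3) and (4) of the QuadrupletBERT excerpt above translate directly into a few lines of PyTorch. This is a plausible sketch of the quadruplet loss under the stated reading of the margins, not the authors' implementation; the four encoder towers are stubbed out as precomputed embeddings and the margin value is illustrative.

```python
import torch
import torch.nn.functional as F

def cosine_distance(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # Eq. (4): f(X, Y) = 1 - (X . Y) / (||X|| ||Y||), computed row-wise
    return 1.0 - F.cosine_similarity(x, y, dim=-1)

def quadruplet_loss(q, p, n, hn, margin: float = 0.2) -> torch.Tensor:
    """q, p, n, hn: (batch, dim) embeddings of the query, positive document,
    simple negative document, and hard negative document (one per tower)."""
    d_p = cosine_distance(q, p)    # query vs. positive
    d_n = cosine_distance(q, n)    # query vs. simple negative
    d_h = cosine_distance(hn, n)   # hard negative vs. simple negative
    # Hinge terms: shrink d_p while pushing d_h and d_n past the margin m
    loss_h = torch.clamp(d_p - d_h + margin, min=0.0)
    loss_n = torch.clamp(d_p - d_n + margin, min=0.0)
    return torch.maximum(loss_h, loss_n).sum()

# Toy usage with random vectors standing in for the four tower outputs
q, p, n, hn = (torch.randn(8, 768) for _ in range(4))
print(quadruplet_loss(q, p, n, hn).item())
```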
[ "abstain", "result", "result", "objective", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "objective", "method", "result", "objective", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "other", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "method", "abstain", "method", "other", "other", "objective", "objective", "method", "abstain", "abstain", "abstain", "abstain", "other", "other" ]
[ "Abstract", "Verifying the correctness of a textual statement requires not only semantic reasoning about the meaning of words, but also symbolic reasoning about logical operations like count , superlative , aggregation , etc.", "In this work, we propose LogicalFactChecker , a neural network approach capable of leveraging logical operations for fact checking.", "It achieves the state-of-the-art performance on TABFACT, a large-scale, benchmark dataset built for verifying a textual statement with semi-structured tables.", "This is achieved by a graph module network built upon the Transformer-based architecture.", "With a textual statement and a table as the input, LogicalFactChecker automatically derives a program (a.k.a. logical form) of the statement in a semantic parsing manner.", "A heterogeneous graph is then constructed to capture not only the structures of the table and the program, but also the connections between inputs with different modalities.", "Such a graph reveals the related contexts of each word in the statement, the table and the program.", "The graph is used to obtain graph-enhanced contextual representations of words in Transformer-based architecture.", "After that, a program-driven module network is further introduced to exploit the hierarchical structure of the program, where semantic compositionality is dynamically modeled along the program structure with a set of function-specific modules.", "Ablation experiments suggest that both the heterogeneous graph and the module network are important to obtain strong results.", "Fact checking for textual statements has emerged as an essential research topic recently because of the unprecedented amount of false news and rumors spreading through the internet (Thorne et al., 2018; Work done while this author was an intern at Microsoft", "Chen et al., 2019; Goodrich et al., 2019; Nakamura et al., 2019; Kryscinski et al., 2019; Vaibhav et al., 2019).", "Online misinformation may manipulate peo-ple's opinions and lead to significant influence on essential social events like political elections (Faris et al., 2017).", "In this work, we study fact checking, with the goal of automatically assessing the truthfulness of a textual statement.", "The majority of previous studies in fact checking mainly focused on making better use of the meaning of words, while rarely considered symbolic reasoning about logical operations (such as count , superlative , aggregation ).", "However, modeling logical operations is an essential step towards the modeling of complex reasoning and semantic compositionality.", "Figure 1 shows a motivating example for table-based fact checking, where the evidence used for verifying the statement comes from a semi-structured table.", "We can see that correctly verifying the statement In 2004, the score is less than 270 requires a system to not only discover the connections between tokens in the statement and the table, but more importantly understand the meaning of logical operations and how they interact in a structural way to form a whole.", "Under this Table Statement In 2004, the score is less than 270.", "consideration, we use table-based fact checking as the testbed to investigate how to exploit logical operations in fact checking.", "In this paper, We present LogicalFactChecker , a neural network approach that leverages logical operations for fact checking when semi-structured tables are given as evidence.", "Taking a statement and a table as the input, it first derives a program, also known as the logical form, 
in a semantic parsing manner (Liang, 2016).", "Then, our system builds a heterogeneous graph to capture the connections among the statement, the table and the program.", "Such connections reflect the related context of each token in the graph, which are used to define attention masks in a Transformer-based (Vaswani et al., 2017) framework.", "The attention masks are used to learn graph-enhanced contextual representations of tokens 1 .", "We further develop a program-guided neural module network to capture the structural and compositional semantics of the program for semantic compositionality.", "(Socher et al., 2013; Andreas et al., 2015).", "Graph nodes, whose representations are computed using the contextual representations of their constituents, are considered as arguments, and logical operations are considered as modules to recursively produce representations of higher level nodes along the program.", "Experiments show that our system outperforms previous systems and achieves the state-of-the-art verification accuracy.", "The contributions of this paper can be summarized as follows: We propose LogicalFactChecker , a graph-based neural module network, which utilizes logical operations for fact-checking.", "Our system achieves the state-of-the-art performance on TABFACT, a large-scale and benchmark dataset for table-based fact checking.", "Experiments show that both the graph-enhanced contextual representation learning mechanism and the program-guided semantic compositionality learning mechanism improve the performance.", "We study the task of table-based fact checking in this paper.", "This task is to assess the veracity of a statement when a table is given as evidence.", "Specifically, we evaluate our system on TABFACT (Chen et al., 2019), a large benchmark dataset for table-based fact checking.", "With a given semi-structured table and a statement, systems are required to perform reasoning about the structure and content of the table and assess whether the statement is ENTAILED or REFUTED by the table.", "The official evaluation metric is the accuracy for the two-way classification ( ENTAILED / REFUTED ).", "TABFACT consists of 118,439 statements and 16,621 tables from Wikipedia.", "More details about the dataset are given in Appendix A. 
3 LogicalFactChecker: Methodology In this section, we present our approach LogicalFactChecker , which simultaneously considers the meaning of words, inner structure of tables and programs, and logical operations for fact-checking.", "One way to leverage program information is to use standard semantic parsing methods, where automatically generated programs are directly executed on tables to get results.", "However, TABFACT does not provide annotated programs.", "This puts the problem in a weak-supervised learning setting, which is one of the major challenges in the semantic parsing field.", "In this work, we use programs in a soft way that programs are represented with neural modules to guide the reasoning process between a textual statement and a table.", "Figure 2 gives an overview of our approach.", "With a statement and a corresponding table, our system begins with program generation, which synthesizes a program.", "Then, we build a heterogeneous graph for capturing the inner structure of the input.", "With the constructed graph, we incorporate a graph-based attention mask into the Transformer for learning graph-enhanced token representations.", "Lastly, we learn the semantic compositionality by developing a program-guided neural module network and make the final prediction.", "This section is organized as follows.", "We first describe the format of the program ( 3.1) for a more transparent illustration.", "After that, the graph construction approach ( 3.2) is presented first, followed by a graph-enhanced contextual representation learning mechanism ( 3.3).", "Moreover, we introduce how to learn semantic compositionality over the program by neural module network ( 3.4).", "At last, we describe how to synthesize programs by our semantic parsing model ( 3.5).", "Before presenting the technical details, we first describe the form of the program (also known as logical form) for clearer illustrations.", "With a given natural language statement, we be-gin by synthesizing the corresponding semantic representation (LISP-like program here) using semantic parsing techniques.", "Following the notation defined by Chen et al. (2019), the functions (logical operations) formulating the programs come from a fixed set of over 50 functions, including count and argmax , etc.", "The detailed description of the functions is given in Appendix C. 
Each function takes arguments of predefined types like string, number, bool or sub-table as input.", "The programs have a hierarchical structure because the functions can be nested.", "Figure 3 shows an example of a statement and a generated program, accompanied by the derivation of the program and its semantic structure.", "The details of the generation of a program for a textual statement are introduced in 3.5.", "(a) Derivation with basic operations", "(b) The structure of compositionality", "Figure 3: An example of a program with its semantic structure and derivation with basic logical operations.", "In this part, we introduce how to construct a graph to explicitly reveal the inner structure of programs and tables, and the connections among the statement, the table and the program.", "Figure 4 shows an example of the graph.", "Figure 4: An example graph built from the statement 'In 2004, the score is less than 270.', its program, and a table with columns Year, Venue, Winner and Score (Row 0: 2005, Arlandastad, David Patrick, 272; Row 1: 2004, Arlandastad, Matthew King, 270; Row 2: 2003, Falsterbo, Titch Moore, 273; Row 3: 2002, Halmstad, Thomas Besancenez, 279).", "Specifically, with a statement, a table and a program, our system operates in the following steps.", "For a table, we define nodes as columns, cells, and rows, which is partly inspired by the design of the graph for table-based question answering (Müller et al., 2019).", "As shown in Figure 4, each cell is connected to its corresponding column node and row node.", "Cell nodes in the same row are fully connected to each other.", "A program is a naturally structured representation consisting of functions and arguments.", "In the program, functions and arguments are represented as nodes, and they are hierarchically connected along the structure.", "Each node is connected to its direct parents and children.", "Arguments are also linked to the corresponding column names of the table.", "By default, all tokens in the statement are the related context of each other, so they are connected.", "To further leverage the connections from the statement to the table and the program, we add links for statement tokens that match cells or columns in the table, or legitimate arguments in the program.", "After these processes, the extracted graph not only maintains the inner structure of tables and programs but also captures the connections among aligned entities mentioned in different contents.", "We describe how to utilize the graph structure for learning graph-enhanced contextual representations of tokens.", "One way to learn contextual representations is to concatenate all the contents as a single string and use the original attention mask in the Transformer, where all tokens are regarded as the context of each token.", "However, this simple way fails to capture the semantic structure revealed in the constructed graph.", "For example, according to Figure 4, the content 2004 exists in the statement, the program and the table.", "These aligned entity nodes for 2004 should be more related to each other when our model calculates contextual representations.", "To address this problem, we use the graph structure to re-define the related contexts of each token for learning a graph-enhanced representation.", "Specifically, we present a graph-based mask matrix for the self-attention mechanism in the Transformer.", "The graph-based mask matrix $G$ is a 0-1 matrix of shape $N \times N$, where $N$ denotes the total number of tokens in the sequence.", "This graph-based mask matrix records which tokens are the related context of the current token.", "
$G_{ij}$ is assigned as 1 if token $j$ is a related context of token $i$ in the graph, and 0 otherwise.", "Then, the constructed graph-based mask matrix is fed into BERT (Devlin et al., 2018) for learning graph-enhanced contextual representations.", "We use the graph-based mask to control the contexts that each token can attend to in the self-attention mechanism of BERT during the encoding process.", "(In this work, tokens include word pieces in the statement, column names, row names and contents of cells in the table, and function names in the program.", "All the contents indicate texts in the concatenated sequence of the linearized table, the statement, and the sequence of the linearized program.)", "BERT maps the input $x$ of length $T$ into a sequence of hidden vectors as follows: $h(x) = [h(x)_1, \ldots, h(x)_T]$.", "In the previous subsection, we described how our system learns the graph-enhanced contextual representations of tokens.", "The process mentioned above learns token-level semantic interactions.", "In this subsection, we make a further improvement by learning logic-level semantics using program information.", "Our motivation is to utilize the structures and logical operations of programs for learning logic-enhanced compositional semantics.", "Since the logical operations forming the programs come from a fixed set of functions, we design a modular and composable network, where each logical operation is represented as a tailored module and modules are composed along the program structure.", "We first describe how we initialize the representation for each entity node in the graph (3.4.1).", "After that, we describe how to learn semantic compositionality based on the program, including the design of each neural module (3.4.2) and how these modules are composed recursively along the structure of the program (3.4.3).", "In a program, entity nodes denote a set of entities (such as David Patrick) from the input contexts, while function nodes denote a set of logical operations (such as filter_equal), both of which may contain multiple words/word-pieces.", "Therefore, we take the graph-enhanced contextual representations described in 3.3 to initialize the representations of entity nodes.", "Specifically, we initialize the representation $h_e$ of each entity node $e$ by averaging the projected hidden vectors of the words contained in $e$ as follows: $h_e = \frac{1}{n} \sum_{i=0}^{n} \mathrm{relu}(W_e h(x)_{p_i^e})$ (2), where $n$ denotes the total number of tokens in the span of entity $e$, $p_i^e$ denotes the position of the $i$-th token, $W_e \in \mathbb{R}^{F \times D}$ is a weight matrix, $F$ is the dimension of the feature vectors of arguments, $D$ is the dimension of the hidden vectors of BERT, and $\mathrm{relu}$ is the activation function.", "In this part, we present function-specific modules, which are used as the basic computational units for composing all the required configurations of module network structures.", "Inspired by the neural module network (Andreas et al., 2015) and the recursive neural network (Socher et al., 2013), we implement each module with the same neural architecture but with different function-specific parameters.", "All the modules are trained jointly.", "Each module corresponds to a specific function, where the function comes from the fixed set of over 50 functions described before.", "In a program, each logical operation has the format FUNCTION(ARG0, ARG1, ...
), where each function may have variable-length arguments.", "For example, the function hop has two arguments, while the function count has one argument.", "To handle variable-length arguments, we develop each module as follows.", "We first calculate the composition for each function-argument pair and then produce the overall representation by combining the representations of the pairs.", "The calculation for each function-argument pair is implemented as matrix-vector multiplication, where each function is represented as a matrix and each argument is represented as a vector.", "This is inspired by vector-based semantic composition (Mitchell and Lapata, 2010), which states that matrix-vector multiplication can be viewed as the matrix modifying the meaning of the vector.", "Specifically, the output $y_m$ of module $m$ is computed with the following formula: $y_m = \frac{1}{N_m} \sum_{i=0}^{N_m} \sigma(W_m v_i + b_m)$ (3), where $W_m \in \mathbb{R}^{F \times F}$ is a weight matrix and $b_m$ is a bias vector for a specific module $m$.", "$N_m$ denotes the number of arguments of module $m$, and each $v_i \in \mathbb{R}^{F}$ is the feature vector representing the $i$-th input.", "$\sigma$ is the activation function.", "Under the aforementioned settings, modules can compose into a hierarchical network determined by the semantic structure of the parsed program.", "In this part, we introduce how to compose a program-guided neural module network based on the structure of programs and the predefined modules.", "Taking the structure of the program and the representations of all the entity nodes as the input, the composed neural module network learns the compositionality of the program for the final prediction.", "Figure 5 shows an example of a composed network based on the structure of the program.", "
order.", "Specifically, our semantic parser works in a top-down manner in a sequence-to-sequence paradigm.", "The generation of a program follows an ASDL grammar (Yin and Neubig, 2018), which is given in Appendix C. At each step in the generation phase, candidate tokens to be generated are only those legitimate according to the grammar.", "Parent feeding (Yin and Neubig, 2017) is used for directly passing information from parent actions.", "We further regard column names of the table as a part of the input (Zhong et al., 2017) to generate column names as program arguments.", "We implement the approach with the LSTM-based recurrent network and Glove word vectors (Pennington et al., 2014) in this work, and the framework could be easily implemented with Transformer-based framework.", "Following Chen et al. (2019), we employ the label of veracity to guide the learning process of the semantic parser.", "We also employ programs produced by LPA (La-tent Program Algorithm) for comparison, which is provided by Chen et al. (2019).", "In the training process, we train the semantic parser and the claim verification model separately.", "The training of semantic parser includes two steps: candidate search and sequence-to-action learning.", "For candidate search, we closely follow LPA by first collecting a set of programs which could derive the correct label and then using the trigger words to reduce the number of spurious programs.", "For learning of the semantic parser, we use the standard way with back propagation, by treating each (claim, table, positive program) as a training instance.", "We evaluate our system on TABFACT (Chen et al., 2019), a benchmark dataset for table-based fact checking.", "Each instance in TABFACT consists of a statement, a semi-structured Wikipedia table and a label (ENTAILED or REFUTED) indicates whether the statement is supported by the table or not.", "The primary evaluation metric of TABFACT is label accuracy.", "The statistics of TABFACT are given in Appendix A. Detailed hyper-parameters for model training are given in Appendix B for better reproducibility of experiments.", "We compare our system with following baselines, including the textual matching based baseline Table-BERT and semantic parsing based baseline LPA, both of which are developed by Chen et al. 
(2019).", "Table-BERT tackles the problem as a matching problem.", "It takes the linearized table and the statement as the input and employs BERT to predict a binary class.", "Latent Program Algorithm (LPA) formulates the verification problem as a weakly supervised semantic parsing problem.", "With a given statement, it operates in two step: (1) latent program search for searching executable program candidates and (2) transformer-based discriminator selection for selecting the most consistent program.", "The final prediction is made by executing the selected program.", "In Table 1, we compare our model ( LogicalFactChecker ) with baselines on the development set and test set.", "It is worth noting that complex test set and simple test set are partitioned based on its collecting channel, where the former involves higher-order logic and more complex semantic understanding.", "As shown in Table 1, our model with programs generated by Sequence-to-Action model, significantly outperforms previous systems with 71.8% label accuracy on the development set and 71.7% on the test set, and achieves the state-of-the-art performance on the TABFACT dataset.", "We conduct ablation studies to evaluate the effectiveness of different components in our model.", "As shown in Table 2, we evaluate LogicalFactChecker under following settings: (1) removing the graph-based mask described in 3.3 (the first row); (2) removing the program-guided compositionality learning mechanism described in 3.4 (the second row).", "Table 2 shows that, eliminating the graph-based mask drops the accuracy by 1.56% on test set.", "Removing the program-guided compositionality learning mechanism drops the accuracy by 2.08% on test set, which reflects that the neural module network plays a more important role in our approach.", "This observation verifies that both mechanisms are beneficial for our task.", "We conduct a case study by giving an example shown in Figure 6.", "From the example, we can see that our system synthesizes a semantic-consistent program of the given statement and make the correct prediction utilizing the synthesized program.", "This observation reflects that our system has the ability to (1) find a mapping from the textual cues to a complex function (such as the mapping from most points to function argmax ) and (2) derive the structure of logical operations to represent the semantic meaning of the whole statement.", "The dominant type of errors is caused by the misleading programs generated by the semantic parser.", "As shown in the example in Figure 7", "(a), the semantic parser fails to generate a semantically correct program because it lacks the external knowledge about the date in the table and the new year eve in the statement.", "The second type of errors is caused by semantic compositionality, even though Date Visiting Team Host Team Score Sep. 25 New York Giants San Diego Chargers 23-45 Oct. 16 Houston Texans Seattle Seahawks 10-42 Dec. 11 Detroit Lions Green Bay Packers 13-16 Jan. 1 St. Louis Rams Dallas Cowboys 20-10 The visiting team is the New York Giant on new year eve and St. 
Louis Rams in New Year's day Player Country Score JuliInkster United States 65 Momoko Ueda Japan 66 Laura Diaz United States 66 Ji Young South Korea 66 There are 3 players total from the United States.", "programs are correctly predicted.", "As shown in Figure 7", "(b), the program involves operations requiring complex reasoning, like counting the exact number of rows.", "Potential ways to alleviate this problem is to design more function-specific modules like Andreas et al. (2015).", "The third type of errors is caused by the coverage of the logical operations we used.", "In this work, we follow Chen et al. (2019) and use exactly the same functions.", "However, as shown in 7", "(c), understanding this statement requires the function of difference time , which is not covered by the current set.", "There is a growing interest in fact checking in NLP with the rising importance of assessing the truthfulness of texts, especially when pre-trained language models (Radford et al., 2019; Zellers et al., 2019; Keskar et al., 2019) are more and more powerful in generating fluent and coherent texts.", "Previous studies in the field of fact checking differ in the genres of supporting evidence used for verification, including natural language (Thorne et al., 2018), semi-structured tables (Chen et al., 2019), and images (Zlatkova et al., 2019; Nakamura et al., 2019).", "The majority of previous works deal with textual evidence.", "FEVER (Thorne et al., 2018) is one of the most influential datasets in this direction, where evidence sentences come from 5.4 million Wikipedia documents.", "Systems developed on FEVER are dominated by pipelined approaches with three separately trained models, i.e. document retrieval, evidence sentence selection, and claim verification.", "There also exist approaches (Yin and Roth, 2018) that attempt to jointly learn evidence selection and claim verification.", "More recently, the second FEVER challenge (Thorne et al., 2019) is built for studying adversarial attacks in fact checking 4 .", "Our work also relates to fake news detection.", "For example, Rashkin et al. (2017) study fact checking by considering stylistic lexicons, and Wang (2017) builds LIAR dataset with six fine-grained labels and further uses meta-data features.", "There is a fake news detection challenge 5 hosted in WSDM 2019, with the goal of the measuring the truthfulness of a new article against a collection of existing fake news articles before being published.", "There are very recent works on assessing the factual accuracy of the generated summary in neural abstractive summarization systems (Goodrich et al., 2019; Kryscinski et al., 2019), as well as the use of this factual accuracy as a reward to improve abstractive summarization (Zhang et al., 2019).", "Chen et al. 
(2019) recently released TABFACT, a large dataset for table-based fact checking.", "Along with releasing this great dataset, they provide two baselines: Table-BERT and LPA.", "Table-BERT is a textual matching based approach, which takes the linearized table and statement as inputs and predicts the veracity.", "However, Table-BERT fails to utilize logical operations.", "LPA is a semantic parsing based approach, which first synthesizes programs by latent program search and then ranks candidate programs with a neural-based discriminator.", "However, the ranking step in LPA does not consider the table information.", "Our approach simultaneously utilizes the logical operations for semantic compositionality and the connections among tables, programs, and statements.", "Results show that our approach achieves the state-of-the-art performance on TABFACT.", "In this paper, we present LogicalFactChecker, a neural network based approach that considers logical operations for fact checking.", "We evaluate our system on TABFACT, a large-scale benchmark dataset for verifying textual statements over semi-structured tables, and demonstrate that our approach achieves the state-of-the-art performance.", "LogicalFactChecker has a sequence-to-action semantic parser for generating programs, and builds a heterogeneous graph to capture the connections among statements, tables, and programs.", "We utilize the graph information with two mechanisms, including a mechanism to learn graph-enhanced contextual representations of tokens with a graph-based attention mask matrix, and a neural module network which learns semantic compositionality in a bottom-up manner with a fixed set of modules.", "We find that both graph-based mechanisms are beneficial to the performance, and our sequence-to-action semantic parser is capable of generating semantically consistent programs.", "Wanjun Zhong, Jiahai Wang and Jian Yin are supported by the National Natural Science Foundation of China (U1711262, U1611264, U1711261, U1811261, U1811264, U1911203), the National Key R&D Program of China (2018YFB1004404), the Guangdong Basic and Applied Basic Research Foundation (2019B1515130001), and the Key R&D Program of Guangdong Province (2018B010107005).", "The corresponding author is Jian Yin." ]
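A minimal sketch of the Table-BERT baseline described above: linearize the table, pair it with the statement, and let BERT classify the pair as refuted or entailed. The row-by-row "header is cell" template, the label order, and the example data are illustrative assumptions, not the exact recipe of Chen et al. (2019).

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

def linearize(table):
    # Join "header is cell" templates row by row -- an assumed template.
    rows = ["; ".join(f"{h} is {c}" for h, c in zip(table["header"], row))
            for row in table["rows"]]
    return " . ".join(rows)

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # untrained head: refuted vs. entailed

table = {"header": ["Player", "Country", "Score"],
         "rows": [["Juli Inkster", "United States", "65"],
                  ["Momoko Ueda", "Japan", "66"]]}
statement = "There is 1 player from the United States."

inputs = tokenizer(statement, linearize(table),
                   return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1)  # [1, 2] verification scores
```

In practice the classification head would be finetuned on TABFACT's statement/table pairs; the sketch only shows the input construction and forward pass.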
[ "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "result", "abstain", "objective", "method", "abstain", "method", "abstain", "abstain", "objective", "abstain", "abstain", "result", "objective", "result", "objective", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "method", "method", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "method", "objective", "abstain", "method", "result", "other", "other" ]
[ "Translation quality evaluation plays a crucial role in machine translation.", "According to the input format, it is mainly separated into three tasks, i.e. , reference-only, source-only and source-reference-combined.", "Recent methods, despite their promising results, are specifically designed and optimized on one of them.", "This limits the convenience of these methods, and overlooks the commonalities among tasks.", "In this paper, we propose UniTE, which is the first unified framework engaged with abilities to handle all three evaluation tasks.", "Concretely, we propose monotonic regional attention to control the interaction among input segments, and unified pretraining to better adapt multi-task learning.", "We testify our framework on WMT 2019 Metrics and WMT 2020 Quality Estimation benchmarks.", "Extensive analyses show that our single model can universally surpass various state-of-the-art or winner methods across tasks.", "Both source code and associated models are available at https://github.com/NLP2CT/UniTE.", "Automatically evaluating the translation quality with the given reference segment(s), is of vital importance to identify the performance of Machine Translation (MT) models (Freitag et al., 2020; Mathur et al., 2020a; Zhao et al., 2020; Kocmi et al., 2021).", "Based on the input contexts, translation evaluation can be mainly categorized into three classes: 1) reference-only evaluation ( REF ) approaches like BLEU (Papineni et al., 2002) and BLEURT (Sellam et al., 2020a), which evaluate the hypothesis by referring the golden reference at target side; 2) source-only evaluation ( SRC ) methods like YiSi-2 (Lo, 2019) and TransQuest (Ranasinghe Work was done when Yu Wan was interning at DAMO Academy, Alibaba Group. Dayiheng Liu and Derek F. Wong are co-corresponding authors. et al., 2020b), which are also referred as quality estimation (QE).", "These methods estimate the quality of the hypothesis based on the source sentence without using references; 3) source-reference-combined evaluation ( SRC +R EF ) works like COMET (Rei et al., 2020), where the evaluation exploits information from both source and reference.", "With the help of powerful pretrained language models (PLMs, Devlin et al., 2019; Conneau et al., 2020), model-based approaches ( e.g. , BLEURT, TransQuest, and COMET) have shown promising results in recent WMT competitions (Ma et al., 2019; Mathur et al., 2020b; Freitag et al., 2021; Fonseca et al., 2019; Specia et al., 2020, 2021).", "Nevertheless, each existing MT evaluation work is usually designed for one specific task, e.g. 
, BLEURT is only used for the REF task and cannot support the SRC and SRC+REF tasks.", "Moreover, those approaches preserve the same core: evaluating the quality of a translation by referring to the given segments.", "We believe that it is valuable, as well as feasible, to unify the capabilities of all MT evaluation tasks (REF, SRC and SRC+REF) into one model.", "Among the promising advantages are ease of use and improved robustness through knowledge transfer across evaluation tasks.", "To achieve this, two important challenges need to be addressed: 1) How to design a model framework that can unify all translation evaluation tasks?", "2) How to make the powerful PLMs better adapt to the unified evaluation model?", "In this paper, we propose UniTE (Unified Translation Evaluation), a novel approach which unifies the functionalities of the REF, SRC and SRC+REF tasks into one model.", "To solve the first challenge as mentioned above, based on a multilingual PLM, we utilize layerwise coordination, which concatenates all input segments into one sequence as the unified input form.", "To further unify the modeling of the three evaluation tasks, we propose a novel Monotonic Regional Attention (MRA) strategy, which allows partial semantic flows for a specific evaluation task.", "For the second challenge, a multi-task learning-based unified pretraining is proposed.", "To be concrete, we collect high-quality translations from NMT models and degrade a portion of them into low-quality translations as synthetic data.", "Then we propose a novel ranking-based data labeling strategy to provide the training signal.", "Finally, the multilingual PLM is continuously pretrained on the synthetic dataset in a multi-task learning manner.", "Besides, our proposed models, named UniTE-MRA and UniTE-UP respectively, can benefit from finetuning with human-annotated data over three tasks at once, not requiring extra task-specific training.", "Experimental results demonstrate the superiority of UniTE.", "Compared to various strong baseline systems on each task, UniTE, which unifies the REF, SRC and SRC+REF tasks into one single model, achieves consistent absolute improvements in Kendall's correlation of 1.1, 2.3 and 1.1 points on English-targeted translation directions of the WMT 2019 Metrics Shared task (Fonseca et al., 2019), respectively.", "Meanwhile, after introducing multilingual-targeted support for our unified pretraining strategy, a single model named UniTE-MUP also gives dominant results against existing methods on non-English-targeted translation evaluation tasks.", "Furthermore, our method can also achieve competitive results on the WMT 2020 QE task compared with the winner submission (Ranasinghe et al., 2020b).", "Ablation studies reveal that the proposed MRA and unified pretraining strategies are both important for model performance, allowing the model to preserve outstanding performance and multi-task transferability concurrently.", "In this section, we briefly introduce the three directions of translation evaluation.", "REF assesses the translation quality via comparing the translation candidate and the given reference.", "In this setting, the two inputs are written in the same language, thus being easily applied in most of the metric tasks.", "In the early stages, statistical methods were the dominant solutions due to their strengths in wide language support and intuitive design.", "These methods measure the surface text similarity for a range of linguistic features, including n-gram (BLEU, Papineni et al., 2002), token (TER, Snover et
al., 2006), and character (ChrF & ChrF++, Popović, 2015, 2017).", "However, recent studies pointed out that these metrics have low consistency with human judgments and are insufficient for evaluating high-quality MT systems (Freitag et al., 2020; Rei et al., 2020; Mathur et al., 2020a).", "Consequently, with the rapid development of PLMs, researchers have been turning their attention to model-based approaches.", "The basic idea of these studies is to collect sentence representations for similarity calculation (BERTScore, Zhang et al., 2020) or for evaluating probabilistic confidence (PRISM-ref, Thompson and Post, 2020; BARTScore, Yuan et al., 2021).", "To further improve the model, Sellam et al. (2020a) pretrained a specific PLM for translation evaluation (BLEURT), while Lo (2019) combined statistical and representative features (YiSi-1).", "Both of these methods achieve higher correlations with human judgments than their statistical counterparts.", "SRC, which is also referred to as quality estimation, is an important translation evaluation task, especially for the scenario where the ground-truth reference is unavailable.", "It takes the source-side sentence and the translation candidate as inputs for the quality estimation.", "To achieve this, the methods are required to model cross-lingual semantic alignments.", "Similar to reference-only evaluation, statistical-based (Ranasinghe et al., 2020b), model-based (TransQuest, Ranasinghe et al., 2020b; PRISM-src, Thompson and Post, 2020), and feature-combination (YiSi-2, Lo, 2019) methods are typical and advanced in this task.", "Aside from the above tasks that only consider either the source or the target side at one time, SRC+REF takes both source and reference sentences into account.", "In this way, methods in this context can evaluate the translation candidate via utilizing the features from both sides.", "As a rising paradigm among translation evaluation tasks, SRC+REF also profits from the development of cross-lingual PLMs.", "For example, finetuning PLMs over human-annotated datasets (COMET, Rei et al., 2020) achieves new state-of-the-art results among all evaluation approaches in WMT 2020 (Mathur et al., 2020b).", "As mentioned above, many methods have been proposed for different automatic evaluation tasks.", "On the one hand, it is inconvenient and expensive to develop and employ different metrics for different evaluation scenarios.", "On the other hand, separate models overlook the commonalities among these evaluation tasks, whose knowledge potentially benefits all three tasks.", "In order to fulfill the aim of unifying the functionalities of REF, SRC, and SRC+REF into one model, in this section, we introduce UniTE (Figure 1).", "Receiving a data example composed of hypothesis, source, and reference segments, UniTE first modifies it into a concatenated sequence following the given setting (REF, SRC, or SRC+REF):", "$x_{\mathrm{REF}} = \mathrm{Concat}(h, r) \in \mathbb{R}^{l_h + l_r}$, $x_{\mathrm{SRC}} = \mathrm{Concat}(h, s) \in \mathbb{R}^{l_h + l_s}$, $x_{\mathrm{SRC+REF}} = \mathrm{Concat}(h, s, r) \in \mathbb{R}^{l_h + l_s + l_r}$, (1)", "where $h$, $s$ and $r$ are the hypothesis, source and reference segments, with the corresponding sequence lengths being $l_h$, $l_s$ and $l_r$, respectively.", "The input sequence is then fed to the PLM to derive representations $H$.", "Take REF as an example: $H_{\mathrm{REF}} = \mathrm{PLM}(x_{\mathrm{REF}}) \in \mathbb{R}^{(l_h + l_r) \times d}$, (2) where $d$ is the model size of the PLM.", "Compared to existing methods (Zhang et al., 2020; Rei et al., 2020) which take sentence-level representations for evaluation, the advantages of our
architecture design are as follows.", "First, our UniTE model can benefit from layer-coordinated semantic interactions inside every PLM layer, which is proven effective for capturing diverse linguistic features (He et al., 2018; Lin et al., 2019; Jawahar et al., 2019; Tenney et al., 2019; Rogers et al., 2020).", "Second, for the unified approach of our model, the concatenation provides the unifying format for all task inputs, turning our model into a more general architecture.", "When conducting different evaluation tasks, our model requires no further modification inside.", "Note here, to keep the consistency across all evaluation tasks, as well as to ease the unified learning, $h$ is always located at the beginning of the input sequence.", "After deriving $H_{\mathrm{REF}}$, a pooling block is arranged after the PLM, which gives the sequence-level representation $\bar{H}_{\mathrm{REF}}$.", "Finally, a feedforward network takes $\bar{H}_{\mathrm{REF}}$ as input, and gives a scalar $p$ as the prediction: $\bar{H}_{\mathrm{REF}} = \mathrm{Pool}(H_{\mathrm{REF}}) \in \mathbb{R}^d$, (3) $p_{\mathrm{REF}} = \mathrm{FeedForward}(\bar{H}_{\mathrm{REF}}) \in \mathbb{R}^1$. (4)", "For training, we encourage the model to reduce the mean squared error with respect to the given score $q$: $\mathcal{L}_{\mathrm{REF}} = (p_{\mathrm{REF}} - q)^2$. (5)", "However, for the pretraining of most PLMs (e.g., XLM-R, Conneau et al., 2020), the input patterns are designed to receive two segments at most.", "Thus there exists a gap between the pretraining of the PLM and the joint training of UniTE, where the concatenation of three fragments is used as input.", "Moreover, a previous study (Takahashi et al., 2020) shows that directly training over SRC+REF by following such a design leads to worse performance than the REF scenario.", "To alleviate this issue, we propose two strategies: Monotonic Regional Attention as described in §3.2 and Unified Pretraining in §3.3.", "To fill the modeling gap between the pretraining of the PLM and the joint training of the three downstream tasks, a natural idea is to unify the number of involved segments when modeling semantics for the SRC, REF and SRC+REF tasks.", "Following this, we propose to modify the attention mask of SRC+REF to simulate the modeling of two segments in SRC and REF.", "Specifically, when calculating the attention logits, semantics from a specific segment are only allowed to derive information from two segments at most.", "Consider the conventional attention module: $A = \mathrm{Softmax}(QK^{\top} / \sqrt{d}) \in \mathbb{R}^{L \times L}$, (6) where $L$ is the sequence length of the input, and $Q, K \in \mathbb{R}^{L \times d}$ are the query and key representations, respectively.", "As to monotonic regional attention (MRA), we simply add a mask $M$ to the softmax logits to control attention flows: $A = \mathrm{Softmax}(QK^{\top} / \sqrt{d} + M) \in \mathbb{R}^{L \times L}$, (7) where $M_{ij} = -\infty$ if $(i, j) \in U$ and $0$ otherwise, (8) and where $U$ stores the index pairs of all masked areas.", "Following this idea, the key of MRA is how to design $U$.", "For the cases of interactions inside each segment, we believe that these self-interactions are beneficial to the modeling.", "For the other cases, where interactions are arranged across segments, three patterns are included: hypothesis-reference, source-reference, and hypothesis-source.", "Intuitively, the former two parts are beneficial for model training, since they might contribute monolingual signals and cross-lingual disambiguation to the evaluation, respectively.", "This leaves only one case, which our experimental analysis also verifies (see §5.1): interaction between hypothesis and source leads to a performance decrease for the SRC+REF task, thus hindering the unification.", "To give more fine-grained designs, we propose two approaches for UniTE-MRA, which
apply the MRA mechanism to the UniTE model (Figure 2): Hard MRA.", "Only monotonic attention flows are allowed.", "Interactions between any two segments are strictly unidirectional through the entire PLM, where $U$ stores the index pairs of the unidirectional interactions $h \rightarrow r$, $s \rightarrow r$ and $h \rightarrow s$, where $\rightarrow$ denotes the direction of attention flows.", "Soft MRA.", "Specific attention flows are forbidden inside each attention module.", "The two involved segments may still interact inside a higher layer.", "In practice, index pairs denoting $h \rightarrow s$ or $s \rightarrow h$ between source and hypothesis are stored in $U$.", "Note that, although the processing of source and reference may be affected because their positions are not indexed from the start, related studies on positional embeddings reveal that PLMs can well capture relative positional information (Wang and Chen, 2020), which dispels this concern.", "To further bridge the modeling gap between the PLM and the joint training of UniTE mentioned in §3.1, we propose a unified pretraining strategy including the following main stages: 1) collecting and downgrading synthetic data; 2) labeling examples with a novel ranking-based strategy; 3) multi-task learning for unified pretraining and finetuning.", "Synthetic Data Collection As our approach aims at evaluating the quality of translations, hypotheses generated with NMT models are ideal synthetic data.", "To further improve the diversity of synthetic data quality, we follow existing experience (Sellam et al., 2020a; Wan et al., 2021) and apply a word and span dropping strategy to downgrade a portion of the hypotheses.", "The collected data contains $N$ triplets composed of hypothesis, source and reference segments, formed as $D' = \{\langle h_i, s_i, r_i \rangle\}_{i=1}^{N}$.", "Data Labeling After obtaining the synthetic data, the next step is to augment each data pair with a label which serves as the signal of unified pretraining.", "To stabilize the model training, as well as to normalize the distributions across all score systems and languages, we propose a novel ranking-based approach.", "This method is based on the idea of the Borda count (Ho et al., 1994; Emerson, 2013), which provides more precise and well-distributed synthetic data labels than Z-score normalization.", "Specifically, we first use available approaches to derive the predicted score $q_i$ for each item, yielding labeled synthetic quadruple examples formed as $D'' = \{\langle h_i, s_i, r_i, q_i \rangle\}_{i=1}^{N}$.", "Then, we tag each example with its rank index $\tilde{q}_i$ referring to $q_i$: $\tilde{q}_i = \mathrm{IndexOf}(q_i, Q)$, (9) where $Q$ is the list storing all the $q_i$ sorted in descending order.", "Then, we use the conventional Z-score strategy to normalize the scores: $\hat{q}_i = (\tilde{q}_i - \mu) / \sigma$, (10) where $\mu$ and $\sigma$ are the mean and the standard deviation of the values in $Q$, respectively.", "The dataset thus updates its format to $D = \{\langle h_i, s_i, r_i, \hat{q}_i \rangle\}_{i=1}^{N}$.", "Note here that an example with a higher $q_i$ is assigned a higher rank index $\tilde{q}_i$, and thus a larger value of $\hat{q}_i$.", "Compared to related approaches which apply Z-score normalization (Bojar et al., 2018), or leave the conventional labeled scores as signals for learning (i.e.
, knowledge distillation, Kim and Rush, 2016; Phuong and Lampert, 2019), our approach can alleviate the bias of the chosen labeling model and the prior distributional disagreement of scores.", "For example, different methods may give scores with different distributions.", "Especially for low-resource translation directions, scores may follow a skewed distribution (Sellam et al., 2020a), which disagrees with rich-resource scenarios.", "Our method can unify the distribution of all labeled data onto the same scale, and can also easily be combined with an ensembling strategy.", "We apply multi-task learning for both pretraining and finetuning.", "For each step, we arrange three substeps for all input formats, yielding $\mathcal{L}_{\mathrm{REF}}$, $\mathcal{L}_{\mathrm{SRC}}$, and $\mathcal{L}_{\mathrm{SRC+REF}}$, respectively.", "The final learning objective is to reduce the summation of all losses: $\mathcal{L} = \mathcal{L}_{\mathrm{REF}} + \mathcal{L}_{\mathrm{SRC}} + \mathcal{L}_{\mathrm{SRC+REF}}$.", "Benchmarks Following Rei et al. (2020) and Yuan et al. (2021), we examine the effectiveness of the proposed method on the WMT 2019 Metrics task (Ma et al., 2019).", "For the metrics benchmark, we follow the common practice in COMET (Rei et al., 2020) to collect and preprocess the dataset.", "The official variant of the Kendall's Tau correlation (Ma et al., 2019) is used for evaluation.", "We evaluate our methods on all of the REF, SRC and SRC+REF scenarios.", "For the SRC scenario, we further report results on the WMT 2020 QE task (Specia et al., 2020), referring to Ranasinghe et al. (2020a) for data collection and preprocessing.", "Following the official report, the Pearson's correlation is used for evaluation.", "Model Pretraining As mentioned in §3.3, we continuously pretrain PLMs using synthetic data.", "The data is constructed from the WMT 2021 News Translation task, where we collect the training sets from five translation tasks.", "Among those tasks, the target sentences are all in English (En), and the source languages are Czech (Cs), German (De), Japanese (Ja), Russian (Ru), and Chinese (Zh).", "Specifically, we follow Sellam et al.
(2020a) to use Transformer-base (Vaswani et al., 2017) MT models to generate translation candidates, and use the checkpoints trained via the UniTE-MRA approach for synthetic data labeling.", "We pretrain two kinds of models: one is pretrained on English-targeted language directions, and the other is a multilingual version trained on bidirectional data.", "Note that, for a fair comparison, we filter out all pretraining examples that are involved in the benchmarks.", "Model Setting We implement our approach upon the COMET repository (Rei et al., 2020; https://github.com/Unbabel/COMET) and follow their work in choosing XLM-R (Conneau et al., 2020) as the PLM.", "The feedforward network consists of 3 linear transitions, where the dimensionalities of the", "corresponding outputs are 3,072, 1,024, and 1, respectively.", "Between any two adjacent linear modules inside, a hyperbolic tangent function is arranged as the activation.", "During both the pretraining and finetuning phases, we divide the training examples into three sets, where each set only serves one scenario among REF, SRC and SRC+REF to avoid learning degeneration.", "During finetuning, we randomly extract 2,000 training examples from the benchmarks as the development set.", "Besides UniTE-MRA and UniTE-UP, which are derived with MRA (§3.2) and Unified Pretraining (§3.3), we also extend the latter with multilingual-targeted unified pretraining, thus obtaining the UniTE-MUP model.", "Baselines As to REF approaches, we select BLEU (Papineni et al., 2002), ChrF (Popović, 2015), YiSi-1 (Lo, 2019), BERTScore (Zhang et al., 2020), BLEURT (Sellam et al., 2020a), PRISM-ref (Thompson and Post, 2020), BARTScore (Yuan et al., 2021), XLM-R+Concat (Takahashi et al., 2020), and RoBERTa+Concat (Takahashi et al., 2020) for comparison.", "For SRC methods, we report results of both metric and QE methods, including YiSi-2 (Lo, 2019), XLM-R+Concat (Takahashi et al., 2020), PRISM-src (Thompson and Post, 2020) and the multilingual-to-multilingual MTransQuest (Ranasinghe et al., 2020b).", "For SRC+REF, we use XLM-R+Concat (Takahashi et al., 2020) and COMET (Rei et al., 2020) as strong baselines.", "English-Targeted Results on the English-targeted metric task are reported in Table 1.", "Among all involved baselines, for REF methods, BARTScore (Yuan et al., 2021) performs better than the other statistical and model-based metrics.", "As to the SRC scenario, MTransQuest (Ranasinghe et al., 2020b) gives the dominant performance.", "Further, COMET (Rei et al., 2020) performs better than XLM-R+Concat (Takahashi et al., 2020) in the SRC+REF scenario.", "As for our methods, we can see that UniTE-MRA achieves better results on all tasks, demonstrating the effectiveness of monotonic attention flows for cross-lingual interactions.", "Moreover, the proposed model UniTE-UP, which unifies REF, SRC, and SRC+REF learning in both pretraining and finetuning, yields better results in all evaluation settings.", "Most importantly, UniTE-UP is a single model which surpasses all the different state-of-the-art models on the three tasks, showing its advantage in both convenience and effectiveness.", "Multilingual-Targeted As seen in Table 2, the multilingual-targeted UniTE-MUP gives better performance than all strong baselines on REF, SRC and SRC+REF, demonstrating the transferability and effectiveness of our approach.", "Besides, UniTE-UP also gives strong results, revealing improvements of 0.6, 0.3 and 0.9 averaged Kendall's correlation points, respectively.", "However, we
find that UniTE-MUP outperforms strong baselines but is slightly worse than UniTE-UP on English-targeted translation directions (see Table 3).", "We think the reason lies in the curse of multilingualism and vocabulary dilution (Conneau et al., 2020).", "Quality Estimation The results for the UniTE approach on the WMT 2020 QE task are summarized in Table", "4. As seen, it achieves competitive results on the QE task compared with the winning submission (Ranasinghe et al., 2020b).", "In this section, we conduct ablation studies to investigate the effectiveness of regional attention patterns (§5.1), unified training (§5.2), and ranking-based data labeling (§5.3).", "All experiments are conducted following the English-targeted setting.", "To investigate the effectiveness of MRA, we report further experiments in Table", "5. As seen, MRA gives performance improvements over full attention, and preventing the interactions between the hypothesis and source segments improves the performance most.", "We think the reasons behind this are twofold.", "First, the source side is written in a different language, whose semantic information is weaker than that of the reference side.", "Second, by preventing direct interactions between source and hypothesis,", "semantics inside the former must be passed through the reference, which is helpful for disambiguation.", "Besides, not allowing the source to derive information from the hypothesis is better than the opposite direction.", "Wang and Chen (2020) found that the positional embeddings in PLMs encode strong adjacency information.", "We think the reason why $s \rightarrow h$ performs worse than $h \rightarrow s$ lies in the skipping of indexes, which corrupts positional information.", "Additionally, when we combine the two methods together, i.e.
, unified pretraining and finetuning with the SRC+REF UniTE-MRA setting, model performance drops to 34.9 over English-targeted tasks on average.", "We think that both methods intend to solve the problem of the unseen SRC+REF input format, and MRA may not be necessary if massive data examples can be obtained for pretraining.", "Nevertheless, UniTE-MRA has the advantage of wide applicability without requiring pseudo-labeled data.", "Experiments comparing unified and task-specific training are summarized in Table 6.", "As seen, when using the unified pretraining checkpoint to finetune over a specific task, the performance of all three models drops con-", "sistently, indicating that the unified finetuning is helpful for model learning.", "This also verifies our hypothesis that the cores of the REF, SRC, and SRC+REF tasks are identical to each other.", "Moreover, unified pretraining and finetuning are complementary to each other.", "Also, utilizing task-specific pretraining instead of the unified one yields worse performance.", "To sum up, unifying both pretraining and finetuning yields a single model, showing its advantage in generalization across all tasks, where one unified model can cover all functionalities of the REF, SRC and SRC+REF tasks concurrently.", "To verify the effectiveness of ranking-based labeling, we collect the results of models applying different pseudo-labeling strategies.", "After deriving the original scores from a well-trained UniTE-MRA checkpoint, we use the Z-score and the proposed ranking-based normalization methods to label synthetic data.", "For both methods, we also apply an ensembling strategy to assign training examples averaged scores derived from 3 UniTE-MRA checkpoints.", "Results show that Z-score normalization reveals a performance drop when applying score ensembling with multiple models.", "Our proposed ranking-based normalization can boost the UniTE-UP model training, and its ensembling approach can further improve the performance.", "In the past decades, automatic translation evaluation has mainly been divided into the REF, SRC and SRC+REF tasks, each of which has developed independently and been tackled by various task-specific methods.", "We suggest that the three tasks can be handled by a unified framework, which is easy to use and facilitates knowledge transfer.", "The contributions of our work are mainly threefold:", "(a) We propose a flexible and unified translation evaluation model, UniTE, which can be adapted to the three tasks at once;", "(b) Through in-depth analyses, we point out that the main challenge of unifying the three tasks stems from the discrepancy between vanilla pretraining and multi-task finetuning, and we fill this gap via monotonic regional attention (MRA) and unified pretraining (UP);", "(c) Our single model consistently outperforms a variety of state-of-the-art or winner systems across high-resource and zero-shot evaluation in the WMT 2019 Metrics and WMT 2020 QE benchmarks, showing its flexibility and effectiveness.", "We hope our new insights can contribute to subsequent studies in the translation evaluation community.", "The authors would like to thank all reviewers and the meta-reviewer for their insightful comments.", "This work was supported in part by the Science and Technology Development Fund, Macau SAR (Grant No. 0101/2019/A2), the Multi-year Research Grant from the University of Macau (Grant No.
MYRG2020-00054-FST), National Key Research and Development Program of China (No. 2018YFB1403202), and Alibaba Group through Alibaba Research Intern Program." ]
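As a concrete illustration of the monotonic regional attention of Eq. (7)-(8) above, the sketch below builds the additive mask M for a concatenated [h; s; r] sequence. It blocks hypothesis-source interactions in both directions for brevity; the paper's soft MRA forbids either h→s or s→h, so blocking both at once is a simplifying assumption.

```python
import torch

def mra_mask(l_h, l_s, l_r, forbid=("h->s", "s->h")):
    # Additive attention mask for the concatenation [h; s; r] (Eq. 7-8):
    # M[i, j] = -inf for index pairs in the forbidden regions U, else 0.
    L = l_h + l_s + l_r
    spans = {"h": (0, l_h), "s": (l_h, l_h + l_s), "r": (l_h + l_s, L)}
    M = torch.zeros(L, L)
    for rule in forbid:
        src, dst = rule.split("->")  # "src attends to dst" is blocked
        (a0, a1), (b0, b1) = spans[src], spans[dst]
        M[a0:a1, b0:b1] = float("-inf")
    return M

M = mra_mask(l_h=4, l_s=5, l_r=6)
# Then, per Eq. (7): A = softmax(Q @ K.transpose(-1, -2) / d ** 0.5 + M)
```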
[ "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "result", "other", "abstain", "other", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "objective", "objective", "objective", "abstain", "method", "objective", "abstain", "objective", "abstain", "abstain", "result", "result", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "result", "objective", "other", "other" ]
[ "Implicit discourse relation recognition (IDRR) aims to identify logical relations between two adjacent sentences in the discourse.", "Existing models fail to fully utilize the contextual information which plays an important role in interpreting each local sentence.", "In this paper, we thus propose a novel graph-based Context Tracking Network (CT-Net) to model the discourse context for IDRR.", "The CT-Net firstly converts the discourse into the paragraph association graph (PAG), where each sentence tracks their closely related context from the intricate discourse through different types of edges.", "Then, the CT-Net extracts contextual representation from the PAG through a specially designed cross-grained updating mechanism, which can effectively integrate both sentence-level and token-level contextual semantics.", "Experiments on PDTB 2.0 show that the CT-Net gains better performance than models that roughly model the context.", "Implicit discourse relation recognition (IDRR) aims to identify logical relations between two adjacent sentences in discourse without the guidance of connectives (e.g., because, but), which is one of the major challenges in discourse parsing.", "With the rise of deep learning, lots of sentence-modeling based methods (Liu and Li, 2016; Rnnqvist et al., 2017; Bai and Zhao, 2018; Xu et al., 2019; Shi and Demberg, 2019) have emerged in the field of IDRR.", "These methods typically focus on modeling the local semantics of these two sentences, without considering wider discourse context.", "Contextual information plays an important role in understanding sentences.", "Take the paragraph P = { S 1 , S 2 , S 3 , S 4 } in Figure 1 as an example, the ground-truth relation between S 3 and S 4 is Comparison.", "Combining the contextual information carried by S 1 and S 2 , we can more easily identify the Comparison relation reflected by The manufacturer went public at $15.75 a share in August 1987.", "achieve that price", "(rising: $15.75 a share to $29 per-share)", "and softened", "(falling: $29 per-share to $25 a share).", "Dai and Huang", "(2018)", "move one step on utilizing wider discourse context, where they use a hierarchical BiLSTM", "(H-LSTM)", "to model the whole paragraph rather than only the two sentences, to obtain context-aware sentence representation.", "However, there are still two limitations in their model.", "First, they roughly merge all the information in the paragraph, which dilutes the role of key context that closely related to the current sentence.", "Second, the H-LSTM suffers from the long-distance forgetting problem, which may fail to model the long-distance and non-continuous dependency across multiple sentences", "(like green lines in Figure 1).", "To overcome these limitations, we propose a novel Context Tracking Network", "(CT-Net), which can track essential context for each sentence from the intricate discourse, without being affected by the spatial distance.", "The CT-Net computes contextual representation through two main steps.", "Firstly, it converts the paragraph into the paragraph association graph", "(PAG)", "(Figure 1), which contains three types of edges between sentences, namely", "(1)", "adjacency edge", "(black lines): connecting adjacent sentences,", "(2)", "co-reference edge", "(purple lines): connecting sentences with co-reference associations, and", "(3)", "lexical chain edge", "(green lines): connecting sentences containing related words.", "Each sentence can track closely related context along these S \" S \" S \" S \" C l 
"edges, including long-distance sentences involving the same object or topic.", "Secondly, the CT-Net extracts contextual representations over the PAG.", "To effectively incorporate fine-grained information carried by tokens, we propose the cross-grained updating mechanism, which is executed for multiple recurrent rounds.", "At each round, it performs semantic exchange via three processes: Token-to-Sentence Updating: updating the sentence representation with its tokens to grasp fine-grained semantics.", "Sentence-to-Sentence Updating: performing interaction between sentences on the PAG to get context-aware sentence representations.", "Sentence-to-Token Updating: using the context-aware sentence representation to update tokens, so that each token can also incorporate contextual information.", "The obtained context-aware token representations will be used for the computation of the next round.", "After multiple rounds, the CT-Net obtains contextual representations that fully combine sentence-level and token-level contextual semantics.", "Our main contributions are twofold.", "First, we propose a novel CT-Net for IDRR, which builds the PAG to track closely related context for each sentence in the intricate discourse, and incorporates multi-grained contextual semantics via the cross-grained updating mechanism.", "Second, experiments on PDTB 2.0 demonstrate that the CT-Net gains better performance than a variety of approaches that roughly model the discourse context.", "The input of the CT-Net is a paragraph $P =$", "$(S_1, S_2, \ldots, S_{n-1}, S_n)$", ".", "Here, $S_{n-1}$ and $S_n$ are the adjacent sentences to be classified, while $S_1, \ldots, S_{n-2}$ are context with background information.", "Our goal is to identify the relation between $S_{n-1}$ and $S_n$.", "We firstly build a paragraph association graph", "(PAG)", "for $P$", "(Section 2.1), then employ the cross-grained updating mechanism on the PAG to extract the contextual representations of $S_{n-1}$ and $S_n$", "(Section 2.2).", "The contextual representations are then used for the final classification", "(Section 2.3).", "The CT-Net firstly converts $P$ into a PAG $G =$", "$(V, E)$", ", where $V$ and $E$ are the sets of nodes and edges respectively.", "As shown in Figure 2, the PAG contains sentence nodes", "(blue)", "and token nodes", "(orange).", "Each token node is connected with its corresponding sentence node.", "We carefully design the edges between sentence nodes so that each sentence only connects to the ones that are closely related to it.", "Specifically, there are three types of edges between sentence nodes in the PAG: Adjacency Edge", "(black edges).", "Adjacent sentences tend to carry important contextual information.", "Therefore, we add adjacency edges between the neighbors in the discourse.", "Co-reference Edge", "(purple edges).", "Sentences with co-reference associations tend to involve the same object and be highly related, so we add a co-reference edge between them.", "Lexical Chain Edge", "(green edges).", "A lexical chain tracks related words that run through the whole paragraph.", "Sentences containing the same words or synonyms", "(except stop words)", "tend to involve the same topic; therefore, we add a lexical chain edge between them.", "We give more details of the PAG in Section 3.2.", "The CT-Net then extracts the contextual representations of $S_{n-1}$ and $S_n$ from the PAG $G$ through the cross-grained updating mechanism, which is executed for $T$ rounds.", "At the $t$-th round, we denote the state of the $i$-th sentence
node as $g_i^t$, and the state of the $j$-th token node of the $i$-th sentence as $h_{i,j}^t$.", "The state transition from the", "$(t-1)$-th to the $t$-th round consists of three computation processes: token-to-sentence updating, sentence-to-sentence updating and sentence-to-token updating.", "The first two processes are responsible for updating sentence nodes, while the last one is for updating token nodes.", "Node Initialization.", "When $t = 0$, we initialize token nodes with the concatenation of char, GloVe", "(Pennington et al., 2014)", "and ELMo", "(Peters et al., 2018)", "embeddings.", "The dimension is then reduced: $h_{i,j}^0 = x_{i,j} = W[x_{i,j}^{\mathrm{char}}; x_{i,j}^{\mathrm{glove}}; x_{i,j}^{\mathrm{elmo}}] + b$ (1)", "where $W$, $b$ are parameters.", "The sentence node $g_i^0$ is initialized as the average of its token nodes.", "Token-to-Sentence Updating.", "This process updates the sentence state $g_i^t$ with the token states of the last round $h_{i,j}^{t-1}$.", "We employ the Sentence-state LSTM", "(SLSTM)", "(Zhang et al., 2018)", "to achieve this.", "The SLSTM is a novel graph RNN that converts a sentence into a graph with one global sentence node and several local word nodes, just like the sub-graph in the PAG", "(inside the dotted ellipse in Figure 2).", "At the $t$-th round, the hidden state of the $i$-th sentence $g_i^t$ is computed as follows: $g_i^t = \mathrm{SLSTM}_{h \rightarrow g}$", "$(h_{i,0}^{t-1}, h_{i,1}^{t-1}, \ldots, h_{i,|S_i|}^{t-1}, g_i^{t-1})$", "(2)", "where $\mathrm{SLSTM}_{h \rightarrow g}$ represents the process of updating the sentence state with token states by the SLSTM, and its detailed equations are shown in Appendix A. $|S_i|$ is the number of tokens in $S_i$.", "Sentence-to-Sentence Updating.", "After merging token semantics, sentences further grasp sentence-level contextual semantics through the interaction between sentence nodes on the PAG.", "Since there are three types of edges, we employ the Multi-Relational GCN", "(Schlichtkrull et al., 2018)", "to get the contextual sentence representation $c_i^t$ of $S_i$: $c_i^t = \sigma\big(W_g g_i^t + \sum_{r \in R} \sum_{j \in N_i^r} \tfrac{1}{|N_i^r|} W_r g_j^t\big)$ (3)", "where $W_g$, $W_r$ are model parameters.", "$R$ is the set of edge types between sentence nodes.", "$N_i^r$ denotes the neighbours of the $i$-th sentence node under relation $r$, where $r \in R$.", "$\sigma$ is the ReLU function.", "Sentence-to-Token Updating.", "This process is for updating token states.", "It conveys the sentence-level contextual information $c_i^{t-1}$ to the tokens, and is also achieved by the SLSTM.", "At the $t$-th round, the hidden state of each token $h_{i,j}^t$ is computed as follows: $h_{i,j}^t = \mathrm{SLSTM}_{g \rightarrow h}(\cdot)$ (4)", "where $x_{i,j}$ is the initial token embedding.", "We show the detailed equations of $\mathrm{SLSTM}_{g \rightarrow h}$ in Appendix A.
Then, the obtained $h_{i,j}^t$ is used for the token-to-sentence updating of the next round.", "After $T$ rounds, we get $c_{n-1}^T$ and $c_n^T$ as the final contextual representations of $S_{n-1}$ and $S_n$, respectively, which fully combine token-level and sentence-level contextual semantics.", "After obtaining the global contextual representations $c_{n-1}^T$ and $c_n^T$, we use a one-layer BiLSTM", "(Hochreiter and Schmidhuber, 1997)", "to encode $S_{n-1}$ into $l_{n-1}$ by concatenating the last hidden states in the two directions, and encode $S_n$ into $l_n$ in the same way.", "$l_{n-1}$ and $l_n$ are local representations without considering the wider context.", "We then concatenate global and local features as follows: $X_{\mathrm{cls}} = \mathrm{concat}(l_{n-1}, l_n, c_{n-1}^T, c_n^T)$ (5)", "$X_{\mathrm{cls}}$ is then fed into a two-layer MLP", "(a fully-connected layer with ReLU activation followed by a softmax output layer)", "for classification.", "Multi-Task Training.", "Following previous works", "(Dai and Huang, 2018; Nguyen et al., 2019), we apply multi-task learning to improve the performance.", "The main task is implicit discourse relation recognition", "(IDRR), while the auxiliary tasks are explicit discourse relation recognition", "(EDRR)", "and connective prediction", "(CP).", "These three tasks share the same encoder but use three different MLPs.", "The objective function is as follows: $\mathcal{L} = -\alpha \sum_{j=1}^{C_{\mathrm{idrr}}} y_{\mathrm{idrr}}^j \log \hat{y}_{\mathrm{idrr}}^j - \beta \sum_{j=1}^{C_{\mathrm{edrr}}} y_{\mathrm{edrr}}^j \log \hat{y}_{\mathrm{edrr}}^j - \gamma \sum_{j=1}^{C_{\mathrm{cp}}} y_{\mathrm{cp}}^j \log \hat{y}_{\mathrm{cp}}^j$", "(6)", "where $\alpha$, $\beta$, $\gamma$ are adjustable hyper-parameters.", "$y_{\mathrm{idrr}}$, $y_{\mathrm{edrr}}$ and $y_{\mathrm{cp}}$ are the ground-truth labels of IDRR, EDRR and CP respectively, while $\hat{y}_{\mathrm{idrr}}$, $\hat{y}_{\mathrm{edrr}}$ and $\hat{y}_{\mathrm{cp}}$ are the corresponding predictions.", "$C_{\mathrm{idrr}}$, $C_{\mathrm{edrr}}$ and $C_{\mathrm{cp}}$ represent the number of classes of IDRR, EDRR, and CP respectively.", "We conduct experiments on PDTB 2.0", "(Prasad et al., 2008), which contains 16,224 implicit instances and 18,459 explicit instances.", "We perform one-vs-others binary classification and 4-way classification on the 4 top-level discourse relations: comparison", "(Comp.), contingency", "(Cont.), expansion", "(Exp.), and temporal", "(Temp.).", "Following Pitler et al.", "(2009), we use sections 2-20 for training, sections 21-22 for testing and sections 0-1 for validation.", "The metric is the F1 score, and for 4-way classification, we calculate the macro-average F1 score.", "Details of the PAG.", "We set the number of sentences used to build PAGs to 6, and use zero padding when the text has fewer than 6 sentences.", "When building the PAG, we employ spaCy", "(https://spacy.io/)", "to identify co-reference chains, use simple matching to recognize the same words and use WordNet", "(Miller, 1995)", "to recognize synonyms.", "WordNet covers 59.38% (7558/12632) of the training samples, 59.05% (699/1183) of the development samples, and 56.98% (596/1046) of the testing samples.", "The average number of edges in a PAG is 11.", "Details of Parameters and Training.", "For the node embedding initialization, we use a 150-dimensional char embedding obtained by a CNN", "(Kim, 2014)", "with kernel window sizes of [1, 2, 3], a 300-dimensional GloVe embedding, and ELMo with 1024 dimensions", "(the output of the second layer of its BiLSTM).", "We reduce the dimension of node states to 512, so that the dimensions of the SLSTM and MR-GCN are also 512.", "The number of iteration rounds of the cross-grained updating mechanism is set to 6.", "The size of the BiLSTM which is used
to compute local features", "(Section 2.3)", "is 128.", "For multi-task learning, we set $\alpha$, $\beta$, $\gamma$ to 1.0, 0.5 and 0.5, respectively.", "The learning rate is 0.001 with a batch size of 64.", "The number of parameters of the CT-Net is about 16M. We use the F1 score as the criterion when manually tuning the hyper-parameter values.", "The", "whole model is trained end to end with the ADAM optimizer", "(Kingma and Ba, 2014)", "on two Tesla P40s with 24 GB of GPU memory, and the average runtime is about 6 hours.", "Main Results", "(Table 1).", "We carefully design four baselines with different paragraph encoders for a full comparison:", "(1)", "NoContext, the model only using a BiLSTM to get local features without considering the wider context.", "(2)", "BiLSTM, the model using a BiLSTM to encode the paragraph.", "(3)", "H-LSTM, the model using a hierarchical BiLSTM as the paragraph encoder.", "(4)", "FCG-Net, the model replacing the PAG in the CT-Net with a fully-connected graph (FCG).", "Except for the way of encoding the paragraph, the other settings of these models are the same as the CT-Net.", "We can draw the following three conclusions.", "First, NoContext obtains the worst performance in most cases, demonstrating the necessity of using contextual representations.", "Second, the CT-Net gains better performance than the models with sequential paragraph encoders, BiLSTM and H-LSTM, which proves the superiority of our graph-based CT-Net.", "The reason is that the CT-Net can track and model closely related context for sentences, including long-distance ones.", "Third, replacing the PAG in the CT-Net with the FCG (FCG-Net) brings a quality drop, which proves the PAG effectively picks out appropriate context that benefits sentence understanding.", "We also performed a paired t-test between the CT-Net and these 4 baselines.", "The CT-Net is significantly better than these baselines.", "Analysis of the PAG (Table 2).", "The PAG contains three types of edges: adjacency edges (Adj.), co-reference edges (Coref.) and lexical chain edges (Lex.).", "To understand the impact of these edges, we conduct ablation experiments on 4-way classification.", "Rows 1-3 report the results of removing Adj., Coref., and Lex. respectively.", "Removing", "Adj. brings the biggest drop (0.97%), which reflects that the adjacency edge plays the most important role in the PAG.", "We also explore the impact of the number of sentences in the PAG.", "Rows 4-6 report the results.", "The CT-Net gains the best performance when the PAG contains 6 sentences, and modeling a longer paragraph of 8 sentences causes a decline.", "We hypothesize that modeling a paragraph that is too long may introduce some irrelevant context, resulting in a reduction in performance.", "Comparison with Existing Systems (Table 3).", "Table 3 shows the comparison with existing systems.", "Our method outperforms other models on 4-way classification, and also gains the best performance on the binary classifications of temporal (Temp.) and expansion (Exp.).", "Ablation Study of Multi-task Learning (Table 4).", "Following Dai and Huang (2018) and Nguyen et al.
(2019), we utilize explicit discourse relation recognition (EDRR) and connective prediction (CP) as auxiliary tasks to help implicit discourse relation recognition (IDRR).", "We conduct ablation experiments on the two auxiliary tasks on 4-way classification (Table", "4) to show their impact.", "Row 1 is the performance of the CT-Net.", "Rows 2-3 report the performance of removing each auxiliary task.", "As expected, EDRR contributes more to IDRR than CP does, which is because EDRR is a task more similar to IDRR.", "We propose a novel graph-based Context Tracking Network (CT-Net) to model the context for implicit discourse relation classification.", "The CT-Net first converts the paragraph into the paragraph association graph (PAG), where each sentence tracks its appropriate context through different edges, then employs the cross-grained updating mechanism to combine sentence-level and token-level contextual information.", "Experiments on PDTB 2.0 demonstrate that the CT-Net captures more effective contextual information than carefully designed baselines with different context encoders." ]
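A sketch of the PAG edge construction described in the CT-Net section above. Co-reference detection is stubbed out via a `coref_pairs` argument (the paper derives chains with spaCy), and the lexical-chain test uses plain same-word overlap only, omitting the WordNet synonym matching; the stop-word list and example sentences are toy assumptions.

```python
from itertools import combinations

STOP = {"the", "a", "an", "of", "to", "in", "is", "and"}  # toy stop-word list

def build_pag_edges(sentences, coref_pairs=()):
    # sentences: list of token lists; coref_pairs: sentence-index pairs
    # that share a co-reference chain (stubbed here).
    n = len(sentences)
    edges = {"adjacency": set(), "coreference": set(coref_pairs), "lexical": set()}
    for i in range(n - 1):
        edges["adjacency"].add((i, i + 1))    # neighbors in the discourse
    content = [set(t.lower() for t in s) - STOP for s in sentences]
    for i, j in combinations(range(n), 2):
        if content[i] & content[j]:           # shared non-stop word
            edges["lexical"].add((i, j))
    return edges

sents = [["The", "manufacturer", "went", "public", "in", "August"],
         ["Analysts", "expected", "the", "manufacturer", "to", "rise"]]
print(build_pag_edges(sents, coref_pairs={(0, 1)}))
```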
[ "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "objective", "objective", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain" ]
[ "Languages evolve and diverge over time.", "Their evolutionary history is often depicted in the shape of a phylogenetic tree.", "Assuming parsing models are representations of their languages grammars, their evolution should follow a structure similar to that of the phylogenetic tree.", "In this paper, drawing inspiration from multi-task learning, we make use of the phylogenetic tree to guide the learning of multi-lingual dependency parsers leveraging languages structural similarities.", "Experiments on data from the Universal Dependency project show that phylogenetic training is beneficial to low resourced languages and to well furnished languages families.", "As a side product of phylogenetic training, our model is able to perform zero-shot parsing of previously unseen languages.", "Languages change and evolve over time.", "A community that spoke once a single language can be split geographically or politically, and if the separation is long enough their language will diverge in direction different enough so that at some point they might not be intelligible to each other.", "The most striking differences between related languages are often of lexical and phonological order but grammars also change over time.", "Those divergent histories are often depicted in the shape of a tree in which related languages whose common history stopped earlier branch off higher than languages that have shared a longer common trajectory (Jger, 2015).", "We hypothesize that building on this shared history is beneficial when learning dependency parsing models.", "We thus propose to use the phylogenetic structure to guide the training of multi-lingual graph-based neural dependency parsers that will tie parameters between languages according to their common history.", "As our phylogenetic learning induces parsing models for every inner node in the phylogenetic tree, it can also perform zero-shot dependency parsing of unseen languages.", "Indeed, one can use the model of the lowest ancestor (in the tree) of a new language as an approximation of that language grammar.", "We assess the potential of phylogenetic training with experiments on data from the Universal Dependencies project version 2.2.", "Our results show that parsers indeed benefit from this multi-lingual training regime as models trained with the phylogenetic tree outperform independently learned models.", "The results on zero-shot parsing show that a number of factors such as the genre of the data and the writing system have a significant impact on the quality of the analysis of an unseen language, with morphological analysis being of great help.", "The remaining of this paper is organized as follows.", "Section 2 presents both the neural parsing model as well as the phylogenetic training procedure.", "Section 3 presents some experiments over data from UD 2.2.", "Section 4 presents some related works on multi-task learning and multilingual parsing.", "Finally, Section 5 closes the paper and gives some future perspectives.", "We propose a multi-task learning framework that shares information between tasks using a tree structure.", "The tree structure allows us to both share model parameters and training samples between related tasks.", "We instantiate it with a graph-based neural parser and use the language phylogenetic tree to guide the learning process, but it can in principle be used with any tree that encodes tasks re-lateness and any learning algorithm that supports fine-tuning.", "In this section we first describe the intuition 
"Figure 1: A possible phylogenetic tree for languages in the Slavic family (Proto-Slavic splitting into East-Slavic: Belarusian (be), Russian (ru), Ukrainian (uk); South-Slavic: Slovenian (sl), Croatian (hr), Serbian (sr), Bulgarian (bg), Old Church Slavonic (cu); West-Slavic: Czech (cs), Slovak (sk), Polish (pl), Upper Sorbian (hsb)).", "Languages evolve from earlier stages, and sometimes a language will change differently in different places, leading to different languages with a common ancestor.", "This evolution process is often depicted in the shape of a tree in which leaves are actual languages and inner nodes can be either attested ancestral languages or their idealized reconstructions.", "Figure 1 gives an example of such a tree for a subset of the Slavic family of Indo-European languages (Simons and Fennig, 2018).", "Just as languages evolve and diverge, so do their grammars.", "Assuming a parsing model is a parameterized representation of a grammar, we can expect those models to evolve in a similar way.", "We thus take a multi-task approach to the problem of multi-lingual dependency parsing.", "What was once a single problem (e.g. parsing sentences in Proto-West-Slavic) becomes a set of distinct but related problems (parsing sentences in Czech, Polish, Slovak and Sorbian) as Proto-West-Slavic evolved into its modern descendants.", "We assume that the grammar of the last common ancestor is a good approximation of those languages' grammars.", "Thus it should be easier to learn a language's grammar starting from its ancestor's grammar than from scratch.", "There are however some issues with this assumption.", "First, a language's grammar can be very different from its ancestor's grammar from two millennia earlier.", "Consider the difference between modern French and early Classical Latin, for example: in two millennia, Latin has witnessed the loss of its case system and a complete refoundation of its verbal system.", "Moreover, last common ancestors can have very different ages depending on the languages we consider.", "We expect the common ancestor of Tagalog and Indonesian to be much older than the common ancestor of Portuguese and Galician.", "Second, a lot of languages have only started to be recorded very recently, thus lacking historical data altogether.", "And when historical records are available, much work still needs to be done to render those data usable by parsers.", "For example, the Universal Dependencies project (Nivre et al., 2018) only has annotated corpora for Latin, Old Greek, Old Church Slavonic and Sanskrit.", "And even for those classical languages, it is not clear to what extent their modern counterparts really descend from them.", "Thus we need to find another way to access an ancestor language's grammar than using historical data.", "We propose to use all the data from descendant languages to represent an ancestor language.", "In principle, one could give more weight to older languages or to languages that are known to be more conservative, but this knowledge is not available for all language families.", "Thus we resort to using all the available data from descendant languages without distinction (this construction is sketched below).", "Another problem is that the tree view is too simple to represent the complete range of phenomena involved in language evolution, such as language contact.", "Furthermore, languages do not evolve completely randomly, but follow some linguistic universals and have to keep a balance between speakability, learnability and understandability.",
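As a concrete illustration of this data-sharing construction, here is a minimal Python sketch; the `PhyloNode` class and the `datasets` mapping are our own illustrative names, not artifacts of the paper.

```python
from dataclasses import dataclass, field

@dataclass
class PhyloNode:
    """A node of the phylogenetic tree: leaves are attested languages,
    inner nodes are hypothesized (proto-)language ancestors."""
    name: str
    children: list = field(default_factory=list)

    def leaves(self):
        """Yield every attested language at or below this node."""
        if not self.children:
            yield self.name
        else:
            for child in self.children:
                yield from child.leaves()

def ancestor_dataset(node, datasets):
    """Build D_p as the union of the annotated sets D_l of all descendants."""
    return [sent for lang in node.leaves() for sent in datasets.get(lang, [])]

# Toy example mirroring the West-Slavic branch of Figure 1.
west_slavic = PhyloNode("Proto-West-Slavic", [
    PhyloNode("Proto-Czechoslovak", [PhyloNode("cs"), PhyloNode("sk")]),
    PhyloNode("pl"),
    PhyloNode("hsb"),
])
datasets = {"cs": ["cs-sent-1"], "sk": ["sk-sent-1"], "pl": ["pl-sent-1"], "hsb": []}
print(ancestor_dataset(west_slavic, datasets))  # all West-Slavic training sentences
```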
"Thus, languages can share grammatical features without necessarily being genetically related, either through contact or by mere chance.", "However, the tree model is still a good starting point in practice, and language families align well with grammatical similarity, as recent works on typological analysis of UD treebanks have shown (Chen and Gerdes, 2017; Schluter and Agić, 2017).", "We thus make the simplifying assumption that a language's grammar evolves only from an older stage and can be approximated by that previous stage.", "Our scoring model is an edge-factored graph-based neural model in the vein of recent works by Dozat et al. (2017).", "Figure 2: BiLSTM architecture for character-based word representation.", "There are two major differences here compared to the parser of Dozat et al.", "The first difference is in individual word representation, for which we use only the UPOS tag (universal part-of-speech, a set of 17 tags that does not encode morphology), the morphological information provided by UD treebanks, and a character-based word representation, whilst Dozat et al. also use the XPOS tag (language-specific part-of-speech, which might include morphological information but is not available for all languages) and holistic word vectors (from Word2Vec (Mikolov et al., 2013) and their own), and do not use morphological information beside what might already be given by the XPOS.", "The second difference is the scoring function proper.", "While they use biaffine scoring functions and decouple edge scoring from label scoring, we use a simple multi-layer perceptron to compute label scores and pick the max over labels as the edge score.", "Let $x = (w_1 w_2 \dots w_l)$ be a sentence of length $l$.", "Each word $w_i$ is represented as the concatenation of 3 subvectors, one for its part-of-speech, one for its morphological attributes and one for its form: $w_i = \mathrm{pos}_i \oplus \mathrm{morph}_i \oplus \mathrm{char}_i$.", "The part-of-speech vector ($\mathrm{pos}_i$) comes from a lookup table.", "The morphological vector ($\mathrm{morph}_i$) is the sum of the representations $\mathbf{m}$ of each morphological attribute $m$ of the word given by the treebanks: $\mathrm{morph}_i = \sum_{m \in \mathrm{morph}_i} \mathbf{m}$.", "We add a special dummy attribute representing the absence of morphological attributes.", "The form vector ($\mathrm{char}_i$) is computed by a character BiLSTM (Hochreiter and Schmidhuber, 1997).", "Characters are fed one by one to the recurrent neural network in each direction.", "The actual form vector is then the concatenation of the outputs of the forward character LSTM and of the backward character LSTM, as depicted in Figure 2.",
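This word representation could look as follows; a PyTorch sketch under assumed dimensions, whereas the paper's implementation uses DyNet, so all names and sizes here are illustrative.

```python
import torch
import torch.nn as nn

class WordRepresentation(nn.Module):
    """w_i = pos_i (+) morph_i (+) char_i, as in the parser description."""
    def __init__(self, n_pos, n_morph, n_chars, pos_dim=32, morph_dim=32, char_dim=64):
        super().__init__()
        self.pos_emb = nn.Embedding(n_pos, pos_dim)
        # One embedding per morphological attribute; index 0 is reserved for
        # the dummy attribute marking the absence of morphology.
        self.morph_emb = nn.Embedding(n_morph, morph_dim)
        self.char_emb = nn.Embedding(n_chars, char_dim)
        self.char_lstm = nn.LSTM(char_dim, char_dim, bidirectional=True, batch_first=True)

    def forward(self, pos_id, morph_ids, char_ids):
        pos = self.pos_emb(pos_id)                    # (pos_dim,)
        morph = self.morph_emb(morph_ids).sum(dim=0)  # sum over attributes
        chars = self.char_emb(char_ids).unsqueeze(0)  # (1, n_chars, char_dim)
        _, (h_n, _) = self.char_lstm(chars)
        # h_n: (2, 1, char_dim) -> concat of final forward and backward states.
        char = torch.cat([h_n[0, 0], h_n[1, 0]], dim=-1)
        return torch.cat([pos, morph, char], dim=-1)

# Example: one word with UPOS id 3, two morphological attributes, four characters.
repr_layer = WordRepresentation(n_pos=17, n_morph=100, n_chars=260)
w = repr_layer(torch.tensor(3), torch.tensor([5, 12]), torch.tensor([10, 42, 42, 7]))
print(w.shape)  # pos_dim + morph_dim + 2 * char_dim = 192
```

Summing the attribute embeddings keeps the representation independent of how many morphological attributes a word carries, which is what the dummy attribute relies on.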
"Once each word has been given a representation in isolation, those representations are passed to two other BiLSTMs.", "Each word is then represented as the concatenation of its contextualized vectors from the forward and backward layers: $c_i = \mathrm{forward}(w_1, \dots, w_i) \oplus \mathrm{backward}(w_i, \dots, w_l)$.", "We actually train two different BiLSTMs, one representing words as dependents ($\check{c}$) and one representing words as governors ($\bar{c}$).", "An edge score is then computed as follows.", "To score the edge attaching word $w_i$ to a candidate governor $w_j$, the governor word vector $\bar{c}_j$ and the dependent word vector $\check{c}_i$ are concatenated and fed to a two-layer perceptron (whose weights are $L_1$ and $L_2$) with a rectifier (noted $[\cdot]_+$) after the first layer, in order to compute the score $s_{ijl}$ of the edge for every possible relation label $l$: $s_{ij} = \max_l s_{ijl} = \max_l \big(L_2\,[L_1(\bar{c}_j \oplus \check{c}_i)]_+\big)_l$.", "All the neural model parameters (part-of-speech, character and morphological embeddings; the character, dependent and governor BiLSTMs; and the two-layer perceptron weights) are trained end-to-end via backpropagation, one sentence at a time.", "Given a sentence $x$, writing $j$ for the index of the governor of $w_i$ and $l$ for the relation label of $w_i$, the loss function is: $\mathrm{loss}(x) = \sum_{w_i} \Big[ \sum_{j' \neq j,\, j' \neq i} \max(0,\, s_{ij'} - s_{ij} + 1)^2 + \sum_{l' \neq l} \max(0,\, s_{ijl'} - s_{ijl} + 1)^2 \Big]$.", "For each word, there are two terms.", "The first term enforces that for all potential governors that are neither the word itself nor its actual governor, their highest score (irrespective of the relation label) should be smaller than the score of the actual governor and actual label by a margin of 1.", "The second term is similar and enforces that, for the actual governor, any label that is not the true label should have a score smaller than the score of the actual label, again by a margin of 1.",
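For concreteness, this squared hinge loss can be computed as in the following sketch, assuming a precomputed score tensor `scores[i, j, l]` (our own layout: word i, candidate governor j, label l); root handling is omitted.

```python
import torch

def sentence_loss(scores, gold_heads, gold_labels):
    """Squared hinge loss of the parser for one sentence.

    scores:      (n, n, L) tensor, scores[i, j, l] = score of attaching
                 word i to candidate governor j with relation label l.
    gold_heads:  (n,) gold governor index j for each word i.
    gold_labels: (n,) gold relation label l for each word i.
    """
    n, _, L = scores.shape
    loss = scores.new_zeros(())
    for i in range(n):
        j, l = gold_heads[i].item(), gold_labels[i].item()
        gold = scores[i, j, l]
        # Term 1: every other candidate governor, at its best label,
        # must score at least 1 below the gold edge score.
        head_best = scores[i].max(dim=-1).values          # (n,) = s_{ij'}
        mask = torch.ones(n, dtype=torch.bool)
        mask[i] = mask[j] = False
        loss = loss + torch.clamp(head_best[mask] - gold + 1, min=0).pow(2).sum()
        # Term 2: for the gold governor, every wrong label must score
        # at least 1 below the gold label score.
        label_scores = scores[i, j]                       # (L,) = s_{ijl'}
        lmask = torch.ones(L, dtype=torch.bool)
        lmask[l] = False
        loss = loss + torch.clamp(label_scores[lmask] - gold + 1, min=0).pow(2).sum()
    return loss
```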
"2.3 Phylogenetic Training", "Let $L = \{l_1, l_2, \dots, l_{n_l}\}$ be a set of $n_l$ languages and let $P = \{p_1, p_2, \dots, p_{n_p}\}$ be a set of $n_p$ proto-languages (hypothesized ancestors of the languages in $L$).", "Let $T$ be a tree over $L' = L \cup P$ such that the languages of $L$ are leaves and the proto-languages of $P$ are inner nodes.", "This means that we assume no two languages in $L$ share a direct parenthood relation; at best they both descend from a hypothesized parent.", "We could in principle have data appearing only in inner nodes.", "Tree $T$ has a single root, a proto-language from $P$ that all grammars descend from.", "This ancestor of all languages shall model linguistic universals (this does not imply anything about our belief in the monoglottogenesis hypothesis) and ensures we deal with a well-formed tree.", "We use the notation $p > l$ for the fact that language/node $l$ descends from language/node $p$.", "For each language $l \in L$, we assume access to a set of $n$ annotated examples $D_l$.", "For each proto-language $p \in P$, we create an annotated set $D_p = \bigcup_{p > l} D_l$ as the union of its descendants' sets.", "For each language $l \in L$, we want to learn a parsing model $\theta_l$.", "The main idea behind phylogenetic training is to initialize a new model with the model of its parent, thus effectively sharing information between languages and letting models diverge and specialize over time.", "The training procedure is summarized in Algorithm 1.", "At the beginning, we initialize a new blank/random model that will be the basic parsing model for all the world's languages.", "Then, we sample sentences (we discuss sampling issues in the next section) randomly from all the available languages, parse them, compute the loss and update the model accordingly.", "Since the training sentences are sampled from all the available languages, the model will learn to be as good as possible for all the languages at the same time.", "When the model $\theta_p$ has reached an optimum (as defined hereafter), we pass a copy of it to each of its children.", "Thus, for each child $c$ of $p$, we initialize $\theta^0_c = \theta_p$ to its parent's final state.", "Each model $\theta_c$ is then refined on its own data set $D_c$, which is a subset of $D_p$, until it reaches its own optimum state and is passed down to its own children.", "This process is repeated until the model reaches a leaf language, where the model $\theta_c$ is eventually refined over its mono-lingual data set $D_c$.", "By passing down optimal models from older/larger language sets to newer/smaller ones, models get the chance to learn relevant information from many different languages while specializing as time goes by.", "The question now is when to pass down a model to its children.", "In other words, at which stage has a model learned the most it could from its data and should start to diverge to improve again?", "Following the principle of cross-validation, we propose to let held-out data decide the right time to pass the model down.", "Let $D'_p$ be a set of held-out sentences from the same languages as $D_p$.", "Then, after every epoch $i$ of $k$ training examples, we freeze the model $\theta^i_p$ and test it on $k'$ sentences from $D'_p$.", "This gives a score $a_i$ (UAS/LAS) to the current model.", "If the score is higher than the score of the previous model $\theta^{i-1}_p$, training goes on; otherwise we discard it and retrain $\theta^{i-1}_p$ for another $k$ sentences.", "If after having discarded $r$ epochs in a row we have not yet found a better model, we assume we have reached an optimal model $\theta^{i-1}_p$ and pass it on to its children (unless it is a leaf, in which case training is over for that language).",
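The overall procedure (Algorithm 1) then amounts to a depth-first recursion over the tree with this patience-based stopping rule; the sketch below assumes caller-supplied `train_epoch`/`evaluate` callables and reuses the hypothetical `PhyloNode` from the earlier snippet.

```python
import copy

def train_until_optimum(model, train_epoch, evaluate, r=5):
    """Refine a model until its held-out score fails to improve r epochs in a row.

    train_epoch(model) performs one epoch of k sampled sentences in place;
    evaluate(model) returns a held-out UAS/LAS score (both caller-supplied).
    """
    best, best_score, discarded = model, float("-inf"), 0
    while discarded < r:
        candidate = copy.deepcopy(best)
        train_epoch(candidate)
        score = evaluate(candidate)
        if score > best_score:
            best, best_score, discarded = candidate, score, 0
        else:
            discarded += 1  # discard this epoch and retrain from the previous best
    return best

def phylogenetic_training(model, node, make_train_epoch, make_evaluate, models):
    """Train top-down over the tree: each child starts from a copy of its
    parent's optimum, refined on its own (smaller) data set."""
    optimum = train_until_optimum(model, make_train_epoch(node), make_evaluate(node))
    models[node.name] = optimum  # inner-node models later enable zero-shot parsing
    for child in node.children:
        phylogenetic_training(copy.deepcopy(optimum), child,
                              make_train_epoch, make_evaluate, models)
```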
"There are a few things we should consider when drawing examples from a proto-language distribution.", "Beside the question of whether some languages are more conservative than others with respect to their ancestor, which we have simplified by treating all languages as equally representative of their ancestors, there is the problem of data imbalance and tree imbalance.", "Sampling sentences uniformly across languages is not a viable option, for the sizes of the datasets vary a lot across languages and do not correlate with how close a language is to its ancestor.", "For example, there are 260 Belarusian training sentences against 48,814 Russian ones.", "The basic question is thus whether one should draw examples from languages or from branches.", "Basic linguistic intuition tells us that drawing should be performed on branches.", "The distribution of modern languages has no reason to reflect their proximity to their ancestor language.", "Amongst Indo-European languages, there are one or two Armenian languages as well as one or two Albanian languages (depending on the criteria for being a language), while there are tens of Slavic and Romance languages.", "However, there is no reason to believe that Slavic or Romance languages are better witnesses of Proto-Indo-European than Armenian or Albanian.", "Drawing examples from languages would bias the intermediate models toward families that have more languages (or more treebanks).", "It might be a good bias depending on the way one computes the overall accuracy of the system.", "If one uses the macro-average of the individual language parsers, then biasing models toward families with many members should improve the accuracy overall.", "The scheme we adopt is to sample uniformly at random over the branches spanning from the current node, then uniformly at random over languages, and then uniformly at random over sentences (a sampler along these lines is sketched below).", "It boils down to flattening the subtree below an inner node to have a maximum depth of 2.", "For example in Figure 1, at the root (Proto-Slavic) we pick a branch at random (e.g. Proto-South-Slavic), then a language at random (e.g. Croatian), then a sentence at random.", "Given that we have picked the Proto-South-Slavic branch, all South-Slavic languages are then equally likely to be chosen.", "This slightly biases the model toward bigger subfamilies.", "In our example, Croatian and Serbian each have the same chance of being sampled as Slovenian; therefore their family, Proto-Serbocroatian, is twice as likely to be chosen as Slovenian, despite being at the same depth in the tree.", "We could otherwise sample over branches, then over sub-branches again and again until we reach a leaf, and only then pick a sentence.", "In this case, Proto-Serbocroatian and Slovenian would have the same probability of being chosen.", "This would give much more weight to languages high in the tree than to languages low in the tree.", "While this would give more balance to the actual model, it could be detrimental to the averaged results since the data distribution is itself unbalanced.", "It would of course be possible to try any variation between those two, for example picking sub-branches according to a probability that depends on the number of languages in that family, thereby mitigating the imbalance problem.",
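A sketch of this flattened sampling, again over the hypothetical `PhyloNode`/`datasets` structures; branches without any annotated descendant are assumed to be filtered out beforehand.

```python
import random

def sample_sentence(node, datasets):
    """Flattened (depth-2) sampling below an inner node: pick a branch
    uniformly, then a language uniformly inside it, then a sentence."""
    if node.children:
        branch = random.choice(node.children)
        langs = [l for l in branch.leaves() if datasets.get(l)]
        lang = random.choice(langs)  # assumes the branch has annotated data
    else:
        lang = node.name
    return lang, random.choice(datasets[lang])

def sample_epoch(node, datasets, k=500):
    """Draw the k training sentences of one epoch for a (proto-)language."""
    return [sample_sentence(node, datasets) for _ in range(k)]
```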
"An interesting property of the phylogenetic training procedure is that it provides a model for each inner node of the tree, and thus for each intermediary grammar.", "If one were to bring a new language with its position in the tree, we could use the pre-trained model of its direct ancestor as an initialization instead of learning a new model from scratch.", "Similarly, one can use this ancestor model directly to parse the new language, effectively performing zero-shot dependency parsing.", "We investigate this possibility in the experiment section.", "To assess the potential of phylogenetic training, both in terms of multi-task learning and of zero-shot parsing capabilities, we experimented with data from the Universal Dependencies project version 2.2 (Nivre et al., 2018).", "When several corpora are available for a language, we chose one so as to keep a good balance between morphological annotation and number of sentences.", "For example, the Portuguese GSD treebank has slightly more sentences than the Bosque treebank, but it is not as well morphologically annotated.", "The zero-shot parsing models have been directly tested on languages that lack a training set.", "The treebank names are given in the tree and in the result table (Table 1).", "3.1 Setting", "As some languages have no training data and unique writing systems that make the character model inefficient for them, we resorted to using gold parts-of-speech and morphological attributes rather than predicted ones.", "For example, Thai has no training data, no close relative in the data and a unique script, which altogether make it really hard to parse (from a phylogenetic perspective).", "The phylogenetic tree used for the experiment is adapted from the Ethnologue (2018).", "For space reasons, it is reported in the appendix in Figures 4 and 5.", "We tried to have a tree as consensual as possible, but there are still a few disputable choices, mostly about granularity and consistency.", "Sanskrit could have its own branch in the Indic family, just as Latin has in the Romance family, but because Sanskrit has no training data, that would not actually change the results.", "Likewise, as Czechoslovak and Dutch-Afrikaans have their own branches, Scandinavian languages could also be distributed between East and West Scandinavian.", "As an English-based creole, Naija could as well be put in the Germanic family, but we kept it as a separate (Creole) family.", "Regarding model training proper, we used $k = 500$ training sentences per iteration, $k' = 500$ held-out sentences from the development set to compute the running LAS, and a maximum number of reboots $r = 5$.", "Following Dozat et al. (2017), we use Eisner's algorithm (Eisner, 1996) at test time to ensure outputs are well-formed trees.", "The neural model is implemented in DyNet (Neubig et al., 2017) and we use Adadelta with default parameters as our trainer.", "We averaged the results over 5 random initializations.", "Independent models are trained in the same manner but with mono-lingual data only.", "We report both labeled and unlabeled edge prediction accuracy (UAS/LAS).", "In the appendix we also report results averaged per family.", "Table 1 reports parsing results for languages that have a training set.", "Note that a few languages do not have a separate development set; for those we used the training set for both training and validation.", "The training set size of those languages is reported in square brackets.", "This has low to no impact on other languages' results, but it can be problematic for the language itself, as the model can over-fit its training data, especially when they are very few, as is the case of Buryat for example.", "To be fair, we report two different averages.", "Avg is the average over languages that have a separate development set, and Avg No Dev is the average over languages that do not.", "For each language, the best UAS/LAS are reported in bold.", "On average, phylogenetic training improves parsing accuracy, both labeled and unlabeled.", "This is especially true for languages that have very small training sets (50 sentences or less) and lack a development set.", "Those languages show an average 7-point improvement, and up to 15 points (hsb, kmr).", "Since independent mono-lingual models follow the exact same training procedure but without phylogenetic initialization, and since every sentence is seen several times both at training and validation, the sampling method cannot explain such a difference.", "This shows that the ancestor's model is a good initialization and acts as a form of regularization, slowing down over-fitting.", "Phylogenetic training is also beneficial as one gains information from related languages.", "Indo-European languages gain from sharing information.",
"This is especially true for Balto-Slavic (sk +5.82, lt +5.07 UAS) and Indo-Iranian languages (mr +2.05 UAS).", "It is less consistent for Romance and Germanic languages.", "This might be due to the tree not representing typology well for those families.", "Typically, English tends to group syntactically with Scandinavian languages more than with West-Germanic ones.", "Turkic and Uralic languages show the same benefits overall (ug +2.67, fi +3.39 UAS).", "Dravidian and Afro-Asiatic languages are not as consistent.", "While Telugu seems to gain from Tamil data, the reverse is not true.", "Result variations for Arabic, Hebrew and Coptic are marginal.", "This is likely due to the fact that we only have three quite different languages from that family, and that they all have their own script.", "Results are more contrasted for the languages without close relatives.", "While Buryat (bxr), which has a very small training set, benefits from universal linguistic information and gains almost 11 points UAS, Basque (eu), which has a very different grammatical structure from the other languages and enough training data (5,396 sentences), loses 3.25 LAS.", "Gains and losses are marginal for the other five languages (id, ja, ko, vi, zh).", "Overall results are a bit below the state of the art, but the model is very simple and relies on gold morphology, so it is not really comparable.", "Table 2 reports parsing results for languages that do not have a training set.", "Because of phylogenetic training and the tree structure that guides it, it can happen that a language's ancestor model is in fact trained on data accounting only for a narrow range of later stages.", "For example, while Faroese uses the North-Germanic model refined on both Norwegian treebanks plus Swedish and Danish data, Tagalog uses the Austronesian model refined with Indonesian data only, thus making it more an Indonesian model than an actual Austronesian model.", "Those cases are marked by an asterisk in the table.", "The Komi (kpv) model is refined on Finno-Samic data, the Breton (br) model on Irish data, and the Cantonese (yue) model on Mandarin data.", "Looking at Table 2, we make the following observations.", "As expected, scores are on average lower than for languages with training data; moreover, the UAS/LAS gap is substantially bigger, from 6.781 to 17.08 points.",
"It is hard to compare to other works on zero-shot parsing since they use different data and scores span a big range, but our results are comparable to those of Aufrant et al. (2016) and Naseem et al. (2012), while our zero-shot models are given for free by the phylogenetic training method.", "On a language-per-language basis, we see that there are a few important factors, the most striking being genre.", "Tagalog (tl) and, more surprisingly, Warlpiri (wbp) have relatively high parsing accuracy despite being either completely isolated or having only one relative (Indonesian).", "This is likely because their data are well-annotated stereotypical sentences extracted from grammars, thus making them easy to parse.", "Then we see that Naija (pcm) and Yoruba (yo) are about 25 points higher than Thai (th), despite all three having low morphology (in the treebanks).", "As they have different genres (spoken, bible, news and wiki), without a deeper look at the trees themselves, our best guess is that this is due to Thai having a different script.", "Naija and Yoruba both use the Latin alphabet, and as such they can rely to some extent on the character model to share information with other languages, at least to organise the character space.", "This analysis also carries over to Cantonese (yue).", "It is a morphologically simple language, and despite having a relative (Mandarin), its score is rather low.", "The genre alone (spoken) would not explain everything, as Naija also has a spoken treebank and a higher score.", "The writing system might be to blame once again.", "Indeed, Chinese characters are very different from alphabetic characters and are much harder to use in character models because of sparsity.", "Comparing the Mandarin and Cantonese test sets with the Mandarin train set, the amount of out-of-vocabulary words is 32.47% of types (11.90% of tokens) for Mandarin and 54.88% of types (56.50% of tokens) for Cantonese.", "The results for out-of-vocabulary characters are even more striking, with 3.73% of types (0.49% of tokens) for Mandarin and 12.97% of types (34.29% of tokens) for Cantonese.", "This shows not only that there are a lot of OOV items in the Cantonese test set, but also that those words/characters are common ones, as the 12.97% of missing character types make up more than a third of all character tokens; on the contrary, Mandarin OOV items are seldom and account for a smaller percentage of tokens than of types.", "This is one more argument supporting the importance of the character vector.",
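This out-of-vocabulary comparison is easy to reproduce; a small sketch that reports OOV rates over both types and tokens, for word-level or character-level units alike (the toy data is illustrative).

```python
from collections import Counter

def oov_rates(train_units, test_units):
    """Percentage of test types and test tokens unseen in training."""
    train_vocab = set(train_units)
    test_counts = Counter(test_units)
    oov = {u: c for u, c in test_counts.items() if u not in train_vocab}
    type_rate = 100 * len(oov) / len(test_counts)
    token_rate = 100 * sum(oov.values()) / sum(test_counts.values())
    return type_rate, token_rate

train = "the cat sat on the mat".split()
test = "the dog sat on a log".split()
print("OOV: %.2f%% of types, %.2f%% of tokens" % oov_rates(train, test))
# Passing lists of characters instead of words gives the character-level rates.
```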
"Other important factors are typology and morphology.", "Amharic (am), despite its unique script, has a higher score than Cantonese, which actually shares its script (to some extent, as we have seen) with Mandarin.", "The key point for Amharic's score is that all its relatives (Hebrew, Arabic and Coptic) have their own scripts and are morphologically rich; thus the model learns to use morphological information.", "The analysis is similar for Komi, which on top of sharing morphology with its relatives also shares the writing system, which provides it an extra gain.", "However, this might work in the opposite direction as well, as we can see with Faroese, Breton and Sanskrit.", "Faroese (fo) is morphologically rich and that should help; however, its North-Germanic relatives are morphologically much simpler.", "Thus the model does not learn to rely as much on morphological attributes, nor on word endings for the character model.", "The same is true for Sanskrit (sa), which is morphologically richer than its modern Indic relatives, with an extra layer of specific writing systems.", "Finally, the Breton (br) model is refined over Irish data only, and while Irish is a typological outlier amongst Indo-European languages because of its Verb-Subject-Object word order, Breton has the standard Subject-Verb-Object order, so using Irish data might actually be detrimental.", "These arguments show the respective importance of the writing system, the genre of the data, the morphological analysis and the typology in phylogenetic zero-shot dependency parsing.", "Those factors can either work together positively (Komi) or negatively (Cantonese), or cancel each other out (Amharic, Faroese).", "The goal of multi-task learning is to learn related tasks (either sharing their input and/or output space, or participating in the same pipeline) jointly, in order to improve their models over independently learned ones (Caruana, 1997).", "In Søgaard et al. (2016), the task hierarchy is directly encoded in the neural model, allowing tasks with different output spaces to share parts of their parameters (POS tagging comes at a lower level than CCG parsing and only back-propagates to lower layers).", "Likewise, in Johnson et al. (2017), the encoder/decoder architecture allows learning encoders that target several output languages and decoders that handle data from various input languages.", "However, in the multi-task learning literature, task relationships are often fixed.", "In Cavallanti et al. (2010), tasks with the same output spaces share parameter updates through a fixed similarity graph.", "In this work, changing level in the tree can be seen as splitting the similarity graph into disjoint subgraphs.", "It is a way to have task relationships evolve during training and to encode information about task evolution that other multi-task methods lack.", "In multi-lingual parsing, Ammar et al. (2016) propose to train a single model to parse many languages, using typological information, cross-lingual word representations and language-specific information.", "While their model gives good results, they only apply it to 7 Germanic and Romance languages.", "It would be worth running the experiment with 50+ languages to see how the results would change.", "However, because of the language-specific information, their model would probably become very big.", "In this work, language-specific information is not added on top of the model; there is just language-generic information that refines over time.", "Che et al. (2017; 2018) and Stymne et al. (2018) propose to train parsers on several concatenated treebanks, either from the same language or from related languages, and to fine-tune the parsers on individual treebanks afterward to fit specific languages/domains.", "The main difference with our method is that instead of one step of fine-tuning, we perform as many fine-tuning steps as there are ancestors in the tree, each time targeting more and more specific data.", "This in turn requires that we handle data imbalance, hence the use of sampling rather than plain concatenation.", "Aufrant et al. (2016) propose to tackle zero-shot parsing by rewriting source treebanks to better fit the target language's typology.", "Assuming that typology is homogeneous within a language family, the phylogeny should drive models to be typologically aware.", "However, as we have seen for Breton and Irish, that assumption might not always hold.", "Finally, the closest work to ours in spirit is that of Berg-Kirkpatrick et al. (2010).",
"They use a phylogenetic tree to guide the training of unsupervised dependency parsing models of several languages, using ancestor models to tie descendant ones.", "The main difference here, besides supervision, is that we do not use ancestor models as biases but rather as initializations of descendant models.", "We have presented a multi-task learning framework that allows one to train models for several tasks that have diverged over time.", "Leveraging their common evolutionary history through a phylogenetic tree, models share parameters and training samples until they need to diverge.", "As a by-product of this phylogenetic training, we are provided with intermediary models that can be used to zero-shot a new related task, given its position in the evolutionary history.", "We have applied this framework to dependency parsing, using a graph-based neural parser and the phylogenetic tree of the languages from UD 2.2 to guide the training process.", "Our results show that phylogenetic training is beneficial for well-populated families such as Indo-European and Uralic.", "It also helps generalization and prevents over-fitting when very little data is available.", "For zero-shot parsing, genre, writing system and morphology are crucial factors for the quality of parse predictions.", "Some work has been done on automatically learning task relationships in a multi-task setting.", "It would be interesting to see how an algorithm could figure out when and how to cluster languages automatically, as phylogenetic trees do not directly depict grammar evolution.", "Our model does not know that Latin came before Old French and before modern French, or that, despite being Germanic, English underwent a heavy Romance influence.", "It would be worth investigating softening the tree constraints and injecting more evolutionary information into the structure.", "Another important point is that we use gold part-of-speech and morphological information, which is unlikely to be available in real scenarios.", "However, our new training procedure can be applied to any task, so future work could use it to perform phylogenetic POS tagging.", "Other directions for the future are designing better sampling methods as well as better ways to measure training convergence at each level.", "This work was supported by ANR Grant GRASP No. ANR-16-CE33-0011-01 and by a grant from CPER Nord-Pas de Calais/FEDER DATA Advanced data science and technologies 2015-2020.", "We also thank the reviewers for their valuable feedback." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "method", "abstain", "abstain", "result", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "method", "other", "method", "method", "other", "other", "abstain", "abstain", "other", "method", "method", "abstain", "objective", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "other", "other", "other" ]
[ "2 Data Science Lab, JD.com, Beijing, China 3 Institute for AI Industry Research, Tsinghua University, Beijing, China zhanhaolan316@gmail.com, zhanghainan6@jd.com, ac@chenhongshen.com, dingzhuoye@jd.com, lanyanyan@tsinghua.edu.cn", "Abstract Knowledge data are massive and widespread in the real-world, which can serve as good external sources to enrich conversations.", "However, in knowledge-grounded conversations, current models still lack the fine-grained control over knowledge selection and integration with dialogues, which finally leads to the knowledge-irrelevant response generation problems:", "1) knowledge selection merely relies on the dialogue context, ignoring the inherent knowledge transitions along with conversation flows;", "2) the models often over-fit during training, resulting with incoherent response by referring to unrelated tokens from specific knowledge content in the testing phase;", "3) although response is generated upon the dialogue history and knowledge, the models often tend to overlook the selected knowledge, and hence generates knowledge-irrelevant response.", "To address these problems, we proposed to explicitly model the knowledge transition in sequential multi-turn conversations by abstracting knowledge into topic tags.", "Besides, to fully utilizing the selected knowledge in generative process, we propose pretraining a knowledge-aware response generator to pay more attention on the selected knowledge.", "In particular, a sequential knowledge transition model equipped with a pre-trained knowledge-aware response generator (SKT-KG) formulates the high-level knowledge transition and fully utilizes the limited knowledge data.", "Experimental results on both structured and unstructured knowledge-grounded dialogue benchmarks indicate that our model achieves better performance over baseline models.", "Knowledge-grounded conversations (Long et al., 2017; Liu et al., 2018; Niu et al., 2019; Xu et al., 2020), aiming at improving the informativeness", "and specificity of dialogue generation by exploiting external knowledge sources, has attracted much attention as a potential solution to relieve the common response problem (Li et al., 2015; Zhang et al., 2018a; Ren et al., 2020) in dialogue generation, i.e., I don't know.' and What do you mean?' 
.", "Typically, knowledge-grounded conversation is decomposed into two sub-processes (Dinan et al., 2018; Wu et al., 2019): knowledge selection (KS) based on dialogue context, and response generation with reference to the selected knowledge.", "Therefore, to select relevant knowledge and then incorporate it efficiently, is of great significance for multi-turn knowledge-grounded dialogue generation task.", "Although external knowledge sources are widespread in the real-world, in fact, current knowledge-grounded conversations still lack the fine-grained control over knowledge selection and integration with dialogues.", "Most existing works (Liu et al., 2018; Niu et al., 2019) select knowledge according to the given dialogue context (Lian et al., 2019; Kim et al., 2020).", "However, the sequential transition characteristic of knowledge (also known as knowledge shift) along multiple sequential conversation turns is neglected.", "As shown in Figure 1, two people are talking about an actor from the knowledge astrological sign \" to another knowledge blood typology \", which is a natural transition in human personality chat (Mayo et al., 1978; Miller, 2014).", "By nature, taking the knowledge sequential transition characteristic into account is of tremendous benefits to the knowledge grounded conversations.", "What's more, knowledge-irrelevant response generation problem also hampers the performance of existing models.", "This is caused by two reasons.", "The first reason is that current models often over-fit during training, resulting with incoherent response by referring to unrelated tokens from specific knowledge content in testing phase.", "To resolve this problem, we propose to calculate the knowledge transition probability among different turns on a high-level representation, i.e., knowledge topic tag.", "With such concise high-level knowledge representation, our model is not limited to conventional structured knowledge-grounded conversation but can be easily adapted to unstructured knowledge-based conversations.", "For example, in structured triple data, i.e.,{obj, relation, content}, we can utilize the relation\" as the high-level topic tag to model the sequential knowledge transition process in conversations. As shown in Figure 1, the topic migrates from the astrological sign ' \" tag to the blood typology \" tag, and then moves to the masterpiece \".", "In the unstructured dataset like 'Wizard of Wikipedia' (Dinan et al., 2018), we can utilize topic models, such as LDA (Blei et al., 2003), to obtain the knowledge tag for each turn, and then calculate the sequential transition probability among these tags.", "Since the number of tag categories is limited, it can be well employed to model the knowledge transition.", "Moreover, the second reason is that the models often tends to overlook the selected knowledge, and hence generates knowledge-irrelevant response.", "To address this problem, we propose pre-training a knowledge-aware response generator, aiming at generating a natural sentence based on a given knowledge, in order to make full use of the limited knowledge data.", "For example in Figure 1, given the triple {Chao Wu, astrological sign, Aries}' , the knowledge-aware generator is optimized to generate a sentence Chao Wu's astrological sign is Aries.' .", "Obviously, the generator should also has the ability to generate Zhiling Lin's astrological sign is Virgo.' 
"Moreover, the second issue is that models often tend to overlook the selected knowledge, and hence generate knowledge-irrelevant responses.", "To address this problem, we propose pre-training a knowledge-aware response generator, aiming at generating a natural sentence based on a given piece of knowledge, in order to make full use of the limited knowledge data.", "For example, in Figure 1, given the triple '{Chao Wu, astrological sign, Aries}', the knowledge-aware generator is optimized to generate the sentence 'Chao Wu's astrological sign is Aries.'.", "Obviously, the generator should also be able to generate 'Zhiling Lin's astrological sign is Virgo.' when given '{Zhiling Lin, astrological sign, Virgo}'.", "In effect, the knowledge-aware response generator learns how to generate a natural sentence based on a relation tag rather than on the knowledge content.", "It is analogous to a student learning grammar rules rather than memorizing specific examples when learning a foreign language.", "Therefore, even with limited data, the generator can still generate relevant sentences about the given knowledge.", "In this paper, we propose a sequential knowledge transition model equipped with a pre-trained knowledge-aware response generator (SKT-KG), which models the high-level knowledge transition in conversation and makes full use of the limited knowledge data.", "Specifically, we first pre-train a transformer-based response generator based on the knowledge.", "Then, we utilize a BiLSTM-CRF (Huang et al., 2015) network to model the knowledge transition process, and select the knowledge tag with the maximum score together with its corresponding knowledge content.", "Finally, we feed the dialogue utterances and the selected knowledge content together into the pre-trained knowledge-aware response generator to generate the final response.", "In our experiments, we use two public knowledge-grounded dialogue datasets to evaluate our proposed models, i.e., the structured DuConv corpus and the unstructured Wizard of Wikipedia (WoW) corpus.", "The results show that our SKT-KG model is able to produce more diverse and suitable responses than traditional knowledge-grounded models.", "Besides, we conduct an analysis of knowledge selection, and the results show that the SKT-KG model obtains higher ranking measures than the baselines, which indicates that the knowledge selected by our model is reasonable.", "Recently, dialogue systems have gained increasing attention in both the research community (Vougiouklis et al., 2016; Liu et al., 2018; Zhou et al., 2018; Shen et al., 2019; Shen and Feng, 2020) and industry (Xu et al., 2020; Zhao et al., 2020), because of their practicality in real applications such as chatbots and customer service (Chen et al., 2020; Liu et al., 2020; Shen et al., 2021; Zhang et al., 2019; Chen et al., 2018; Zhang et al., 2020).", "With external knowledge sources, dialogue systems can generate more specific and informative responses, which has great potential to resolve the common response problem (Zhang et al., 2018b; Ren et al., 2020).", "The majority of previous works decomposed the knowledge-grounded dialogue generation task into two sub-problems: knowledge selection and response selection.", "In knowledge selection, previous works proposed to use keyword matching (Ghazvininejad et al., 2018; Liu et al., 2018), information retrieval (Young et al., 2018) and entity diffusion (Liu et al., 2018) to detect the relevant knowledge based on the dialogue context, and finally feed both dialogue utterances and the selected knowledge into generative models.", "Specifically, Zhou et al. (2018) proposed to employ a graph attention mechanism to encode the retrieved relevant knowledge graph, which can augment the semantic understanding of the dialogue context.",
"Lian et al. (2019) proposed to use the prior and posterior distributions over knowledge to facilitate knowledge selection.", "Although these works are able to model the relationship between context and knowledge, they still ignore the knowledge transition characteristic, which is important for knowledge selection.", "Human dialogue depends on both local and global information.", "Peng et al. (2019) also pointed out that natural language understanding requires a coherent understanding of a series of events or actions: not only what events have appeared, but also what is likely to happen next.", "Therefore, it is critical to obtain natural and relevant knowledge for knowledge-grounded dialogue generation.", "Sun et al. (2020) proposed to recurrently update the knowledge based on the conversation history and progressively incorporate it into the history step by step.", "However, they only consider the relationship from history to knowledge.", "Moreover, these models may suffer from a knowledge sparsity problem, due to the low-resource limitations of reality (Zhao et al., 2020).", "In reality, sufficient knowledge-grounded dialogue data are difficult to obtain.", "To tackle this practical challenge, Su et al. (2020) proposed to augment dialogue generation with external non-conversational text, which may however introduce much noise.", "Li et al. (2020) proposed to pre-train the knowledge encoder with unstructured knowledge and fine-tune the model using the limited knowledge-grounded training examples.", "In our work, we propose to make full use of our training data and model the high-level knowledge transition process, which can resolve the sparsity problem in knowledge-grounded dialogue data.", "In this section, we propose a novel sequential knowledge transition model with a pre-trained knowledge-aware response generator (SKT-KG), as shown in Figure 2.",
"This model contains three major parts: a pre-trained knowledge-aware response generator, a sequential knowledge transition module, and a transformer decoder.", "Specifically, we first pre-train a transformer-based knowledge-aware response generator on knowledge items and their corresponding natural sentences.", "Then, we utilize a BiLSTM-CRF (Huang et al., 2015) network to model the knowledge transition process and select the knowledge tag with the maximum score together with its corresponding knowledge content.", "Finally, we feed the context utterances and this selected knowledge content into the knowledge-aware response generator to fine-tune it.", "After fine-tuning, a response can be generated given the selected knowledge tag, the corresponding content, and the history dialogue utterances.", "First, we introduce the data formulation of our model.", "Given the history knowledge content $K = \{k_1, \dots, k_n\}$, the history context $C = \{c_1, \dots, c_n\}$ and the candidate knowledge set for the response $CK = \{ck_1, \dots, ck_m\}$, the goal of our model is to select the most relevant and natural knowledge $ck_t \in CK$ based on the sequential $K$ and $C$, and then generate the response $Y = \{y_1, \dots, y_{|L|}\}$ based on the selected knowledge $ck_t$ and context $C$.", "It is worth noting that each history utterance $c_i$ is related to a history knowledge item $k_i$, and each knowledge item $k_i$ has a knowledge tag $t_i \in T$, which is explicit in structured knowledge (such as the 'relation' in triple knowledge, as shown in Figure 1) and implicit in unstructured knowledge, where it is abstracted by a topic model, i.e., LDA (Blei et al., 2003).", "The knowledge tag set $T = \{t_1, \dots, t_N\}$ contains $N$ different knowledge tags.", "We utilize classical transformer blocks as the backbone framework.", "To generate the response $Y$, the original input is the concatenation of the selected knowledge tag $s_t$, the selected knowledge content $ck_t$ and the history context utterances $\{c_1, \dots, c_n\}$.", "We use three different embedding methods for the original input: token embedding, role embedding and position embedding, as shown in Figure 3.", "For knowledge content and dialogue utterances, we utilize the word embedding of each token as the token embedding.", "For the knowledge tag, we map each tag category to its own token embedding.", "A special end-of-knowledge token [EOK] is inserted between the knowledge and the utterance context to mark the boundary.", "An end-of-utterance token [EOU] is added at the end of each history dialogue utterance.", "Role embeddings are employed to differentiate knowledge content and dialogue utterances.", "The role embedding $E_K$ is added for the knowledge content, while dialogue utterances are represented by the role embedding $E_C$.", "Position embeddings are added according to the token position in each utterance.", "Note that for the special knowledge tag token, its corresponding role and position embeddings are both set to zero (the input assembly is sketched below).", "In our pre-trained knowledge-aware response generator, there are two essential phases to consider: the pre-training phase and the fine-tuning response generation phase.", "In the pre-training phase, given the knowledge tag and knowledge content, our generator focuses on generating the relevant sentence, as shown in the left of Figure 2.", "In the fine-tuning response generation phase, given the context utterances, the knowledge tag and the selected knowledge content, our generator focuses on generating a natural and relevant response, as shown in the top-right of Figure 2.",
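To make the input layout concrete, here is a minimal sketch of assembling token, role and position ids; the vocabulary, the separate id space for tags (tags have their own embedding table), and the helper name are our own illustrative assumptions.

```python
def build_input(tag_id, knowledge_tokens, utterances, vocab):
    """Concatenate [TAG] knowledge [EOK] u_1 [EOU] ... u_n [EOU] with
    parallel role ids (knowledge vs. utterance) and per-utterance positions."""
    EOK, EOU = vocab["[EOK]"], vocab["[EOU]"]
    ROLE_KLG, ROLE_CTX = 1, 2   # E_K and E_C; 0 is reserved for the tag token

    tokens, roles, positions = [tag_id], [0], [0]  # tag: role/position zeroed

    klg_ids = [vocab[w] for w in knowledge_tokens] + [EOK]
    tokens += klg_ids
    roles += [ROLE_KLG] * len(klg_ids)
    positions += list(range(1, len(klg_ids) + 1))

    for utt in utterances:      # positions restart inside each utterance
        utt_ids = [vocab[w] for w in utt] + [EOU]
        tokens += utt_ids
        roles += [ROLE_CTX] * len(utt_ids)
        positions += list(range(1, len(utt_ids) + 1))
    return tokens, roles, positions

vocab = {w: i for i, w in enumerate(
    "[EOK] [EOU] who is she actress her sign aries".split())}
print(build_input(7, ["sign", "aries"], [["who", "is", "she"]], vocab))
```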
"To unify the pre-training phase and the fine-tuning phase, we propose to utilize a flexible self-attention mask mechanism to distinguish the input representation in these two phases, as shown in Figure 3.", "In the pre-training phase, we apply a self-attention mask to the history dialogue utterances, in order to train the knowledge-aware response generator independently.", "Given the knowledge content $k_i \in K$, its knowledge tag $t_i \in T$ and its corresponding sentence $c_i = \{x_{i1}, \dots, x_{iN}\}$, we choose the negative log-likelihood loss as our training objective.", "In this section, we introduce the knowledge selection process, including the utterance encoding and transition modules.", "To obtain the next knowledge tag, we should consider both the sequential knowledge tags and the sequential context utterances, as shown in Figure 4.", "Utterance Encoding.", "To build the sequential context representation, we use the standard BERT-base model with average pooling (Cer et al., 2018) followed by a BiLSTM.", "Given the context utterances $C = \{c_1, \dots, c_n\}$, where $c_i$ is composed of a group of words $\{x_{i1}, \dots, x_{iN}\}$, we utilize a standard BERT model to encode each utterance $c_i$ as a sentence embedding $u^i_c$.", "Then we apply a BiLSTM on these sentence embeddings to obtain the context sequential representation: $H^i_c = \mathrm{BERT}_{base}([x_{i1}, \dots, x_{iN}])$, $u^i_c = \mathrm{averpool}(H^i_c)$, $h^i_c = \mathrm{BiLSTM}(u^i_c, h^{i-1}_c)$.", "Knowledge Transition.", "We model the knowledge tag transition process with the assistance of a Conditional Random Field (CRF) (Lafferty et al., 2001).", "We combine a BiLSTM network and a CRF network to form a BiLSTM-CRF model, as shown in Figure 4.", "This network can efficiently use past input features via the BiLSTM layer and sentence-level tag information via the CRF layer.", "Each BiLSTM cell outputs a score for each tag.", "Given a context representation $h^i_c$, the corresponding tag scores are: $\mathrm{score}_{i+1}[t_{i+1}] = \mathrm{softmax}(W_1 h^i_c + b_1)$, where $W_1$ and $b_1$ are trainable parameters.", "$\mathrm{score}_{i+1}[t_{i+1}]$ denotes the output score of knowledge tag $t_{i+1}$ at the $(i+1)$-th step.", "The CRF layer models the sequential tag relationship by maximizing a global score $C(t_1, t_2, \dots, t_n)$.", "This global score combines a transition score $T[i, j]$ with the matrix of emission scores.", "$T[i, j]$ models the transition probability from the $i$-th tag to the $j$-th tag for a pair of consecutive steps.", "The score matrix records the tag transition path along the context sentences (a sketch of this module is given below).", "3.4 Fine-tuning and Response Generation", "The flexible self-attention mask mechanism enables our pre-trained generator to consider the dialogue history in the response generation phase.", "Given the generated knowledge tag $s_t$, its corresponding knowledge content $ck_t$, and the dialogue contexts $\{c_1, \dots, c_n\}$, the fine-tuning procedure is carried out with the following training objective to generate the response $y = \{y_1, \dots, y_N\}$: $\mathcal{L}_{NLL}(\theta) = -\sum_{t=1}^{N} \log p(y_t \mid y_{<t}, ck_t, s_t, c_1, \dots, c_n; \theta)$.", "The process is shown in the right part of Figure 2.",
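A rough sketch of the utterance encoder and tag scorer (HuggingFace Transformers plus PyTorch; the model name and dimensions are illustrative, and the CRF is reduced to a learned transition matrix with a path-scoring helper rather than a full forward algorithm).

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class KnowledgeTransition(nn.Module):
    """BERT average-pooled utterance vectors -> BiLSTM -> per-turn tag scores,
    with a CRF-style transition matrix over consecutive tags."""
    def __init__(self, n_tags, bert_name="bert-base-uncased", hidden=256):
        super().__init__()
        self.tokenizer = AutoTokenizer.from_pretrained(bert_name)
        self.bert = AutoModel.from_pretrained(bert_name)
        self.bilstm = nn.LSTM(self.bert.config.hidden_size, hidden,
                              bidirectional=True, batch_first=True)
        self.emit = nn.Linear(2 * hidden, n_tags)               # W_1 h + b_1
        self.trans = nn.Parameter(torch.zeros(n_tags, n_tags))  # T[i, j]

    def forward(self, utterances):
        embs = []
        for utt in utterances:
            enc = self.tokenizer(utt, return_tensors="pt")
            h = self.bert(**enc).last_hidden_state              # (1, len, d)
            embs.append(h.mean(dim=1))                          # average pooling
        seq = torch.cat(embs, dim=0).unsqueeze(0)               # (1, n, d)
        h_c, _ = self.bilstm(seq)                               # (1, n, 2*hidden)
        return self.emit(h_c).squeeze(0)                        # per-turn tag scores

    def global_score(self, emissions, tags):
        """CRF global score of one tag path: emissions plus transitions."""
        idx = torch.arange(len(tags))
        s = emissions[idx, torch.tensor(tags)].sum()
        for a, b in zip(tags, tags[1:]):
            s = s + self.trans[a, b]
        return s
```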
"After the fine-tuning phase, a response can be generated given the selected knowledge tag, the corresponding knowledge content, and the history dialogue context.", "Therefore, our final selected knowledge tag $s_t$ is the one maximizing the CRF global score: $s_t = \arg\max_{t \in T} C(t_1, \dots, t_n, t)$.", "Once we get the knowledge tag $s_t$, we are able to pick out the corresponding knowledge content $ck_t$ from the candidate knowledge set $CK$.", "If there are multiple knowledge contents with the same tag $s_t$, we apply a coarse-to-fine knowledge matching module and select the knowledge content with the maximum score as $ck_t$.", "Coarse-to-fine Knowledge Matching.", "To select the final knowledge content from multiple candidates with the same knowledge tag, we adopt BM25 (Robertson and Zaragoza, 2009) as the supporting coarse-to-fine matching model.", "Given a knowledge content and dialogue context pair $(ck_i, c)$, the matching model outputs a matching score.", "We choose the knowledge content with the highest score as the final knowledge content (sketched below).", "Knowledge Transition Loss.", "In the training phase, we adopt a two-level knowledge loss to optimize the sequential selection process.", "The knowledge tag loss $\mathcal{L}_{kgtag}(\theta)$ is a log-likelihood loss that minimizes the difference between the true and predicted tag labels.", "The knowledge content loss $\mathcal{L}_{kgcont}(\theta)$ is a cross-entropy loss that minimizes the divergence between the true and predicted knowledge sentences.", "Therefore, the total knowledge transition loss is defined as: $\mathcal{L}_{trans}(\theta) = \mathcal{L}_{kgtag}(\theta) + \mathcal{L}_{kgcont}(\theta)$.",
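The coarse-to-fine matching step can be sketched with an off-the-shelf BM25 implementation; here we use the rank_bm25 package, and whitespace tokenization is a simplification.

```python
from rank_bm25 import BM25Okapi

def match_knowledge(candidates, context):
    """Pick, among same-tag candidates, the knowledge content whose BM25
    score against the dialogue context is highest."""
    tokenized = [c.split() for c in candidates]
    bm25 = BM25Okapi(tokenized)
    scores = bm25.get_scores(context.split())
    best = max(range(len(candidates)), key=lambda i: scores[i])
    return candidates[best], scores[best]

candidates = [
    "Chao Wu's astrological sign is Aries.",
    "Chao Wu's blood type is O.",
]
print(match_knowledge(candidates, "what is his astrological sign"))
```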
"Dataset.", "We employ two public knowledge-grounded dialogue benchmarks in our experiments.", "The structured DuConv dataset consists of 29,000 context-response pairs.", "The corresponding knowledge pool contains 32 different knowledge tags.", "We randomly divided the corpus into training, validation and testing sets, containing 25,000, 2,000, and 2,000 pairs respectively.", "The Wizard of Wikipedia (WoW) dataset is constructed from dialogues about diverse topics comprising 201,999 turns.", "We randomly split this corpus into 18,430 dialogues for training, 1,948 dialogues for validation and 1,933 dialogues for test.", "The test set is split into two subsets: Test Seen and Test Unseen.", "Test Seen contains 965 dialogues on topics overlapping with the training set, while Test Unseen contains 968 dialogues on topics never seen in the training or validation sets.", "Baselines.", "We compare our SKT-KG model with several state-of-the-art models, including", "(i) Transformer: a fully self-attentional model (Vaswani et al., 2017);", "(ii) MemNet: the E2E Transformer with a memory mechanism (Dinan et al., 2018), which uses a Transformer memory network for knowledge selection and a Transformer decoder for utterance prediction;", "(iii) PostKS: Posterior Knowledge Selection (Lian et al., 2019), which uses the posterior knowledge distribution as a pseudo-label for knowledge selection;", "(iv) SLKS: the sequential latent knowledge selection model (Kim et al., 2020), which keeps track of the prior and posterior distributions over knowledge, sequentially updated considering the contexts of previous turns.", "We also employ a degraded SKT-KG model to investigate the effect of our proposed pre-trained knowledge-aware response generator: SKT is the model without the pre-trained knowledge-aware response generator, which only uses the knowledge transition to select the knowledge and then generates the response with a transformer decoder.", "For DuConv, we set the vocabulary size to 21,128 (following https://github.com/ymcui/Chinese-BERT-wwm).", "To fairly compare our model with all baselines, the number of hidden units is set to 512 and the batch size to 128 for all models.", "The maximum sentence length is set to 30 and the maximum number of dialogue turns to 8.", "The LDA topic number for the WoW dataset is set to 50.", "We use Adam (Kingma and Ba, 2014) for gradient optimization in our experiments.", "The learning rate is set to 0.001.", "We run all models on a Tesla P40 GPU.", "Evaluation Measures.", "We use both quantitative evaluation and human judgements in our experiments.", "Specifically, we use BLEU-1/2, Distinct-1/2 and embedding metrics (average, extrema and greedy; computed with https://github.com/Maluuba/nlg-eval).", "We also measure the precision and F1 score of knowledge selection between the predicted and ground-truth knowledge.", "For human evaluation, we randomly sampled 300 generated responses and invited six annotators (all CS-major students) to rate the relevance, informativeness and naturalness of the generated responses with respect to their contexts.", "The ratings range from 0 to 3 for relevance, informativeness and naturalness, respectively.", "The metric-based evaluation results are shown in Table 1 and Table 2.", "From the results, we can see that the sequential knowledge models, i.e., SLKS and our SKT models, perform better than the traditional knowledge-grounded dialogue models, i.e., MemNet and PostKS, in terms of the BLEU and Distinct measures.", "This is because the sequential characteristic of knowledge is significant and beneficial for the knowledge selection process.", "Our proposed SKT-KG model obtains good results.", "Taking the BLEU-2 value on the DuConv dataset as an example, the BLEU-2 of SKT-KG is 26.31, which is better than that of the baseline models.", "The Distinct-2 value of our model is also higher than that of the other baseline models, indicating that our model can generate more diverse responses.", "For the unigram F1 score of knowledge selection in Table 3, the F1 score of SKT-KG is 19.26, better than the other models, showing that our model can extract more relevant and natural knowledge than the baselines.", "Compared with the ablation model SKT, we find that the pre-trained knowledge-aware response generator in our model improves the Distinct measures and the unigram F1 score, indicating that the model with the pre-trained generator is able to generate more diverse responses.", "We also conducted a significance test.", "The experimental results show that the improvement of our model is significant on both datasets, i.e., p-value < 0.01.", "In summary, our SKT-KG model is able to generate more relevant and more diverse responses than the baselines.", "The results of the human evaluation are shown in Table 4.",
"The rating scores are given to evaluate the relevance, informativeness and naturalness of the generated responses.", "From the experimental results, the relevance (Rel), informativeness (Info) and naturalness (Nat) scores of our model are greater than those of MemNet, PostKS and SLKS, indicating that our SKT-KG model is better than the baseline methods.", "Taking DuConv as an example, the relevance and informativeness scores of SKT-KG are 2.3 and 2.6, respectively, while those of SLKS are 2.2 and 2.1, indicating that our model can generate more informative responses than SLKS.", "In addition, for the naturalness comparison, the score of SKT-KG is 2.3, which is larger than that of SLKS, i.e., 2.1, showing that the high-level knowledge transition is effective for the knowledge-grounded dialogue generation task and that our SKT-KG model can generate more natural responses with more information.", "The Kappa (Fleiss, 1971) value demonstrates the consistency of the different annotators.", "We also conducted a significance test, and the improvement of our model is significant on both datasets, i.e., p-value < 0.01.", "To facilitate a better understanding of our model, we present some examples in Figure 5.", "From the multi-turn dialogues, we can see that the knowledge topic moves from 'reviews of Mengyao Xi', to 'the master work of her', and then to 'the master work of Sui He'.", "The knowledge tag of the ground truth is 'reviews of Sui He'.", "Table 5 (the ranking evaluation of knowledge selection on the DuConv and WoW datasets) reports, as P/R/F@1, P/R/F@2 and P/R/F@5: on DuConv, PostKS(fusion) 0.23/0.19/0.21, 0.23/0.33/0.27, 0.22/0.71/0.34; SLKS 0.26/0.21/0.23, 0.25/0.35/0.29, 0.25/0.74/0.37; SKT-KG 0.29/0.22/0.25, 0.27/0.38/0.32, 0.26/0.77/0.39; on WoW Test Seen, PostKS(fusion) 0.21/0.17/0.19, 0.22/0.29/0.25, 0.19/0.67/0.30; SLKS 0.23/0.19/0.20, 0.21/0.34/0.26, 0.19/0.69/0.30; SKT-KG 0.26/0.21/0.23, 0.25/0.35/0.29, 0.21/0.73/0.33.", "From the generation results, we can see that the sequential-based models perform better than the selection models, i.e., MemNet and PostKS.", "Taking an example in Figure 5, unnatural responses are generated by MemNet and PostKS, such as 'Area of Sui He' and 'Height of Sui He'.", "However, the sequential model can generate more natural and relevant responses, such as 'Yes, she is the angel of China' and 'He Sui was the girl named as the angel of China'.", "This is mainly because the sequential model is able to locate the 'reviews' knowledge, which is more natural for the contexts.", "Moreover, our high-level transition model with the pre-trained knowledge-aware response generator can generate more informative responses than SLKS, as shown in Figure 5.", "4.3 Analysis on Knowledge Selection.", "To verify whether the performance improvements are owing to the knowledge transition module, we conduct a further data analysis.", "Specifically, we randomly sample 300 examples from the DuConv and WoW datasets to evaluate the performance of the knowledge selection process in the baselines and our model.", "As knowledge-grounded dialogue models select the relevant knowledge from the candidate knowledge set based on the dialogue contexts, we can treat knowledge selection as a ranking task.", "Ranking evaluation measures, such as precision, recall and F1 score, are used for quantitative evaluation.", "Then we calculate the precision, recall and F1 score at top 1, 2 and 5 for PostKS, SLKS and our SKT-KG model.", "The results are shown in Table 5.",
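A minimal sketch of the P/R/F@k computation used when knowledge selection is treated as a ranking task; the candidate ids and the single gold entry in the usage line are hypothetical.

```python
def ranking_prf_at_k(ranked, relevant, k):
    """Precision/recall/F1 at k: `ranked` is the list of candidate knowledge ids
    sorted by model score, `relevant` the set of gold knowledge ids."""
    top = set(ranked[:k])
    rel = set(relevant)
    hits = len(top & rel)
    p = hits / k
    r = hits / len(rel) if rel else 0.0
    f = 2 * p * r / (p + r) if p + r > 0 else 0.0
    return p, r, f

# Hypothetical example: one gold knowledge entry among five ranked candidates.
print(ranking_prf_at_k(["k3", "k1", "k7", "k2", "k9"], ["k1"], k=2))
```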
"We can see that the sequential knowledge selection models, such as SLKS and SKT-KG, perform better than the traditional selection model, i.e., PostKS, validating the effectiveness of sequential knowledge modeling.", "These results indicate that our proposed sequential knowledge transition module is capable of selecting more relevant knowledge content than the baseline models.", "In this paper, we propose a sequential knowledge transition model with a knowledge-aware response generator to model the high-level knowledge transition and fully utilize low-resource knowledge data.", "SKT-KG models can abstract knowledge into tags, which makes our model easy to apply to both structured and unstructured knowledge-grounded conversations.", "Besides, we propose a pre-trained knowledge-aware response generator, aiming at generating a natural sentence based on a given piece of knowledge, to make full use of the limited data.", "Experimental results on both structured and unstructured knowledge-grounded dialogue datasets show that our SKT-KG model outperforms the baseline models.", "As for future work, we intend to apply variational autoencoders to unstructured datasets, in order to empower models to learn the knowledge topics by themselves.", "The authors would like to thank all the anonymous reviewers for their constructive comments and suggestions.", "This work was partially supported by the National Key R&D Program of China under Grants No. 2019AAA0105200 and 2016QY02D0405, the Beijing Academy of Artificial Intelligence (BAAI) (No. BAAI2020ZJ0303), and the National Natural Science Foundation of China (NSFC) (No. 61722211, 61773362, 61872338, 61906180)." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "result", "result", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "method", "method", "objective", "result", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "method", "method", "method", "method", "abstain", "method", "method", "other", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "result", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "result", "other", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "objective", "method", "objective", "result", "method", "other", "other" ]
[ "Recent studies have shown remarkable success in end-to-end task-oriented dialog system.", "However, most neural models rely on large training data, which are only available for a certain number of task domains, such as navigation and scheduling.", "This makes it difficult to scalable for a new domain with limited labeled data.", "However, there has been relatively little research on how to effectively use data from all domains to improve the performance of each domain and also unseen domains.", "To this end, we investigate methods that can make explicit use of domain knowledge and introduce a shared-private network to learn shared and specific knowledge.", "In addition, we propose a novel Dynamic Fusion Network (DF-Net) which automatically exploit the relevance between the target domain and each domain.", "Results show that our model outperforms existing methods on multi-domain dialogue, giving the state-of-the-art in the literature.", "Besides, with little training data, we show its transferability by outperforming prior best model by 13.9% on average.", "Task-oriented dialogue systems (Young et al., 2013) help users to achieve specific goals such as restaurant reservation or navigation inquiry.", "In recent years, end-to-end methods in the literature usually take the sequence-to-sequence (Seq2Seq) model to generate a response from a dialogue history (Eric and Manning, 2017; Eric et al., 2017; Madotto et al., 2018; Wen et al., 2018; Gangi Reddy et al., 2019; Qin et al., 2019b; Wu et al., 2019a).", "Taking the dialogue in Figure 1 as an example, to answer the driver's query about the gas station , the end-to-end dialogue system directly generates system response given the query and a corresponding knowledge base (KB).", "Though achieving promising performance, end-to-end models rely on a considerable amount of labeled data, which limits their usefulness for new and extended domains.", "In practice, we cannot col-lect rich datasets for each new domain.", "Hence, it is important to consider methods that can effectively transfer knowledge from a source domain with suf-ficient labeled data to a target domain with limited or little labeled data.", "Existing work can be classified into two main categories.", "As shown in Figure", "2(a), the first strand of work (Eric and Manning, 2017; Eric et al., 2017; Madotto et al., 2018; Wu et al., 2019a) simply combines multi-domain datasets for training.", "Such methods can implicitly extract the shared features but fail to effectively capture domain-specific knowledge.", "As shown in Figure", "2(b), The second strand of work (Wen et al., 2018; Qin et al., 2019b) trains model separately for each domain, which can better capture domain-specific features.", "However, those methods ignore shared knowledge between different domains (e.g. 
"We consider addressing the limitation of existing work by modeling knowledge connections between domains explicitly.", "In particular, a simple baseline to incorporate domain-shared and domain-private features is the shared-private framework (Liu et al., 2017; Zhong et al., 2018; Wu et al., 2019b).", "Shown in Figure 2(c), it includes a shared module to capture domain-shared features and a private module for each domain.", "The method explicitly differentiates shared and private knowledge.", "However, this framework still has two issues: (1) given a new domain with extremely little data, the private module can fail to effectively extract the corresponding domain knowledge; (2) the framework neglects the fine-grained relevance across certain subsets of domains (e.g., the schedule domain is more relevant to the navigation domain than to the weather domain).", "To address the above issues, we further propose a novel Dynamic Fusion Network (DF-Net), which is shown in Figure 2(d).", "In contrast to the shared-private model, a dynamic fusion module (see §2.3) is further introduced to explicitly capture the correlation between domains.", "In particular, a gate is leveraged to automatically find the correlation between a current input and all domain-specific models, so that a weight can be assigned to each domain for extracting knowledge.", "Such a mechanism is adopted for both the encoder and the decoder, and also for a memory module that queries knowledge base features.", "Given a new domain with little or no training data, our model can still make the best use of existing domains, which cannot be achieved by the baseline model.", "We conduct experiments on two public benchmarks, namely SMD (Eric et al., 2017) and MultiWOZ 2.1 (Budzianowski et al., 2018).", "Results show that our framework consistently and significantly outperforms the current state-of-the-art methods.", "With limited training data, our framework outperforms the prior best methods by 13.9% on average.", "To the best of our knowledge, this is the first work to effectively explore the shared-private framework in multi-domain end-to-end task-oriented dialog.", "In addition, when given a new domain with few-shot or zero-shot data, our extended dynamic fusion framework can utilize fine-grained knowledge to obtain desirable accuracies, which makes it more adaptable to new domains.", "All datasets and code are publicly available at: https://github.com/LooperXX/DF-Net.", "We build our model based on a Seq2Seq dialogue generation model (§2.1), as shown in Figure 3(a).", "To explicitly integrate domain awareness, as shown in Figure 3(b), we first propose to use a shared-private framework (§2.2) to learn shared and corresponding domain-specific features.", "Next, we further use a dynamic fusion network (§2.3) to dynamically exploit the correlation between all domains for fine-grained knowledge transfer, which is shown in Figure 3(c).", "In addition, adversarial training is applied to encourage the shared module to generate domain-shared features.", "We define Seq2Seq task-oriented dialogue generation as finding the system response Y according to the input dialogue history X and the KB B.", "Formally, the probability of a response is defined as p(Y | X, B) = ∏_{t=1}^{n} p(y_t | y_1, ..., y_{t-1}, X, B), (1) where y_t represents an output token.",
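To see the chain-rule factorization of Eq. (1) in working form, here is a tiny sketch; the per-token probabilities are made up and would come from a decoder in practice.

```python
import math

def response_log_prob(step_log_probs):
    """Eq. (1): log p(Y|X,B) decomposes as a sum of per-token conditional
    log-probabilities log p(y_t | y_1..y_{t-1}, X, B)."""
    return sum(step_log_probs)

# Hypothetical per-token probabilities from a decoder for a 3-token response.
probs = [0.6, 0.3, 0.8]
log_p = response_log_prob([math.log(p) for p in probs])
print(math.exp(log_p))  # equals 0.6 * 0.3 * 0.8
```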
"In a vanilla Seq2Seq task-oriented dialogue system (Eric and Manning, 2017), a long short-term memory network (LSTM, Hochreiter and Schmidhuber (1997)) is used to encode the dialogue history X = (x_1, x_2, ..., x_T) (T is the number of tokens in the dialogue history) to produce shared context-sensitive hidden states H = (h_1, h_2, ..., h_T): h_i = BiLSTM_enc(emb(x_i), h_{i-1}), (2) where emb(·) represents the word embedding matrix.", "An LSTM is also used to repeatedly predict the outputs (y_1, y_2, ..., y_{t-1}) from the decoder hidden states (h_{dec,1}, h_{dec,2}, ..., h_{dec,t}).", "For the generation of y_t, the model first calculates an attentive representation h′_{dec,t} of the dialogue history over the encoding representation H.", "Then, the concatenation of h_{dec,t} and h′_{dec,t} is projected to the vocabulary space V by U: o_t = U [h_{dec,t}, h′_{dec,t}], (3) where o_t is the score (logit) for the next-token generation.", "p(y_t | y_1, ..., y_{t-1}, X, B) = Softmax(o_t). (4)", "Different from typical text generation with a Seq2Seq model, successful conversations in a task-oriented dialogue system heavily depend on accurate knowledge base (KB) queries.", "We adopt the global-to-local memory pointer mechanism (GLMP) (Wu et al., 2019a) to query the entities in the KB, which has shown the best performance.", "An external knowledge memory is proposed to store the knowledge base (KB) B and the dialogue history X.", "The KB memory is designed for the knowledge source, while the dialogue memory is used for directly copying history words.", "The entities in the external knowledge memory are represented in a triple format and stored in the memory module, which can be denoted as M = [B; X] = (m_1, ..., m_{b+T}), where m_i is one of the triplets of M, and b and T denote the sizes of the KB and the dialogue history, respectively.", "For a k-hop memory network, the external knowledge is composed of a set of trainable embedding matrices C = (C^1, ..., C^{k+1}).", "We can query knowledge in both the encoder and the decoder process to enhance the model's interaction with the knowledge module.", "In addition, the model can loop over k hops and compute the attention weights at each hop k using p_i^k = Softmax((q^k)^⊤ c_i^k), (5) where c_i^k is the embedding in the i-th memory position under the embedding matrix C^k.", "We obtain the global memory pointer G = (g_1, ..., g_{b+T}) by applying g_i^k = Sigmoid((q^k_enc)^⊤ c_i^k), which is used to filter the external knowledge for information relevant to decoding.", "Finally, the model reads out the memory o^k as the weighted sum over C^{k+1} and updates the query vector q^{k+1}_enc.", "Formally, o^k_enc = Σ_i p_i^k c_i^{k+1}, q^{k+1}_enc = q^k_enc + o^k_enc.", "q^{k+1}_enc can be seen as the encoded KB information, and is used to initialize the decoder.",
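A rough numpy sketch of the k-hop memory read in Eqs. (5)-(7); the real GLMP implementation differs in details (e.g., how triples are embedded), and the sizes and values below are made up.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def memory_hops(query, C, hops=3):
    """k-hop memory read: at each hop, attend over memory embeddings C[k]
    (one (b+T) x d matrix per hop), read out a weighted sum from C[k+1],
    and update the query. C holds hops+1 embedding matrices."""
    q = query
    for k in range(hops):
        p = softmax(C[k] @ q)      # Eq. (5): attention over memory slots
        o = C[k + 1].T @ p         # Eq. (6): weighted-sum readout
        q = q + o                  # Eq. (7): query update
    return q  # encoded KB information, used to initialize the decoder

# Hypothetical sizes: 6 memory slots, embedding dim 4, 3 hops.
rng = np.random.default_rng(0)
C = [rng.normal(size=(6, 4)) for _ in range(4)]
print(memory_hops(rng.normal(size=4), C))
```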
"Query Knowledge in Decoder.", "We use sketch tags to denote all the possible slot types; each sketch tag starts with a special token (e.g., @address stands for all addresses).", "When a sketch tag is generated by Eq. 4 at timestep t, we use the concatenation of the hidden state h_{dec,t} and the attentive representation h′_{dec,t} to query knowledge, which is similar to the process of querying knowledge in the encoder: q^1_dec = [h_{dec,t}, h′_{dec,t}], (8) and p_i^k = Softmax((q^k_dec)^⊤ c_i^k · g_i^k).", "Here, we can treat P_t = (p_1^k, ..., p_{b+T}^k) as the probabilities of the queried knowledge, and select the word with the highest probability from the query result as the generated word.", "The model in Section 2.1 is trained over mixed multi-domain datasets, and the model parameters are shared across all domains.", "We call such a model the shared encoder-decoder model.", "Here, we propose to use a shared-private framework that includes a shared encoder-decoder for capturing domain-shared features and a private model for each domain to consider the domain-specific features explicitly.", "Each instance X goes through both the shared and its corresponding private encoder-decoder.", "Enhancing Encoder.", "Given an instance along with its domain, the shared-private encoder-decoder generates a sequence of encoder vectors denoted as H^{s,d}_enc, including the shared and domain-specific representations from the corresponding encoders: H^{s,d}_enc = (h^{s,d}_{enc,1}, ..., h^{s,d}_{enc,T}) = BiLSTM^{s,d}_enc(X).", "The shared and domain-specific encoder states are then fused into H^f_enc via a shared-specific fusion function.", "In addition, self-attention has been shown to be useful for obtaining context information (Zhong et al., 2018).", "Finally, we follow Zhong et al. (2018) and use self-attention over H^f_enc to get the context vector c^f_enc.", "We replace h_T with c^f_enc in Eq. 5.",
"This makes our query vector combine the domain-shared feature with the domain-specific feature.", "We also apply the shared-specific fusion function to the decoder hidden states, and the mixture vector is: shprivate: (h^s_{dec,t}, h^d_{dec,t}) → h^f_{dec,t}.", "Similarly, we obtain the fused attentive representation h^{f′}_{dec,t} by applying attention from h^f_{dec,t} over H^f_enc.", "Finally, we replace [h_{dec,t}, h′_{dec,t}] in Eq. 8 with [h^f_{dec,t}, h^{f′}_{dec,t}], which incorporates shared and domain-specific features.", "The shared-private framework can capture the corresponding specific features, but it neglects the fine-grained relevance across certain subsets of domains.", "We further propose a dynamic fusion layer to explicitly leverage all domain knowledge, which is shown in Figure 4.", "Figure 4: The dynamic fusion layer for fusing domain-shared and domain-specific features.", "Given an instance from any domain, we first feed it to the multiple private encoder-decoders to obtain domain-specific features from all domains.", "Next, all domain-specific features are fused by a dynamic domain-specific feature fusion module, followed by a shared-specific feature fusion for obtaining shared-specific features.", "Dynamic Domain-Specific Feature Fusion.", "Given the domain-specific features from all domains, a Mixture-of-Experts (MoE) mechanism (Guo et al., 2018) is adapted to dynamically incorporate all domain-specific knowledge for the current input in both the encoder and the decoder.", "Here, we give a detailed description of the fusion at timestep t of decoding; the fusion process for the encoder is the same.", "Given all domain feature representations at decoding step t, {h^{d_i}_{dec,t}}_{i=1}^{|D|}, where |D| represents the number of domains, an expert gate E takes {h^{d_i}_{dec,t}} as input and outputs a softmax score α_{t,i} that represents the degree of correlation between each domain and the current input token.", "We achieve this with a simple feedforward layer: α_t = Softmax(W h^d_{dec,t} + b).", "The final domain-specific feature vector is a mixture of all domain outputs, dictated by the expert gate weights α_t = (α_{t,1}, ..., α_{t,|D|}), which can be written as h^{d_f}_{dec,t} = Σ_i α_{t,i} h^{d_i}_{dec,t}.", "During training, taking the decoder as an example, we apply the cross-entropy loss L^{moe}_{dec} as the supervision signal for the expert gate to predict the domain of each token in the response, where the expert gate output α_t can be treated as the t-th token's domain probability distribution predicted by the multiple private decoders.", "Hence, the more accurate the domain prediction is, the more correct the chosen expert becomes: L^{moe}_{dec} = − Σ_{t=1}^{n} Σ_{i=1}^{|D|} e_i log(α_{t,i} | θ_s, θ_{mdec}), (16) where θ_s represents the parameters of the encoder-decoder model, θ_{mdec} represents the parameters of the MoE module (Eq. 15) in the decoder, and e_i ∈ {0, 1} represents whether the response with n tokens belongs to domain d_i.",
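A minimal numpy sketch of the expert-gate fusion just described (the Eq. 16 supervision is omitted). The exact form of the gate input is not fully specified in the text above, so scoring each domain feature with a shared weight vector W is an assumption.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def moe_fusion(domain_feats, W, b):
    """Expert gate: score each domain-specific feature, normalize with softmax
    (the gate weights alpha), and mix the features into one vector.
    domain_feats: |D| x d matrix, one row per private encoder/decoder."""
    alpha = softmax(domain_feats @ W + b)  # one weight per domain
    fused = alpha @ domain_feats           # h^{d_f} = sum_i alpha_i * h^{d_i}
    return fused, alpha

# Hypothetical setup: 3 domains, feature dim 4; W maps a feature to a scalar score.
rng = np.random.default_rng(1)
feats = rng.normal(size=(3, 4))
fused, alpha = moe_fusion(feats, rng.normal(size=4), 0.0)
print(alpha, fused.shape)
```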
"Similarly, we can get L^{moe}_{enc} for the encoder and sum them up as: L^{moe} = L^{moe}_{enc} + L^{moe}_{dec}.", "L^{moe} is used to encourage samples from a certain source domain to use the correct expert, so that each expert learns the corresponding domain-specific features.", "When a new domain has little or no labeled data, the expert gate can automatically calculate the correlation of the different domains with the target domain and thus better transfer knowledge from the different source domains in both the encoder and decoder modules.", "Shared-Specific Feature Fusion.", "We directly apply the shprivate operation to fuse the shared and the final domain-specific features: shprivate: (h^s_{dec,t}, h^{d_f}_{dec,t}) → h^f_{dec,t}.", "Finally, we denote the dynamic fusion function as dynamic(h^s_{dec,t}, {h^{d_i}_{dec,t}}_{i=1}^{|D|}).", "Similar to Section 2.2, we replace [h_{dec,t}, h′_{dec,t}] in Eq. 8 with [h^f_{dec,t}, h^{f′}_{dec,t}].", "The other components are kept the same as in the shared-private encoder-decoder framework.", "Adversarial Training.", "To encourage the model to learn domain-shared features, we apply adversarial learning on the shared encoder and decoder modules.", "Following Liu et al. (2017), a gradient reversal layer (Ganin and Lempitsky, 2014) is introduced after the domain classifier layer.", "The adversarial training loss is denoted as L_adv.", "We follow Qin et al. (2019a), and the final loss function of our Dynamic Fusion Network is defined as: L = λ_b L_basic + λ_m L^{moe} + λ_a L_adv, (18) where L_basic is kept the same as in GLMP (Wu et al., 2019a), and λ_b, λ_m and λ_a are hyper-parameters.", "More details about L_basic and L_adv can be found in the appendix.", "Two publicly available datasets are used in this paper: SMD (Eric et al., 2017) and an extension of Multi-WOZ 2.1 (Budzianowski et al., 2018) in which we equip every dialogue with its corresponding KB.", "The detailed statistics are also presented in Table 1.", "We follow the same partition as Eric et al. (2017), Madotto et al. (2018) and Wu et al. (2019a) on SMD, and as Budzianowski et al. (2018) on Multi-WOZ 2.1.",
"The dimensionality of the embeddings and LSTM hidden units is 128.", "The dropout ratio we use in our framework is selected from {0.1, 0.2} and the batch size from {16, 32}.", "In the framework, we adopt the weight tying trick (Wu et al., 2019a).", "We use Adam (Kingma and Ba, 2015) to optimize the parameters in our model and adopt the suggested hyper-parameters for optimization.", "All hyper-parameters are selected according to the validation set.", "More details about the hyper-parameters can be found in the appendix.", "We compare our model with the following state-of-the-art baselines.", "Mem2Seq (Madotto et al., 2018): the model takes the dialogue history and KB entities as input and uses a pointer gate to control whether to generate a vocabulary word or to select an input token as the output.", "DSR (Wen et al., 2018): the model leverages a dialogue state representation to retrieve the KB implicitly and applies a copying mechanism to retrieve entities from the knowledge base while decoding.", "KB-retriever (Qin et al., 2019b): the model adopts a retriever module to retrieve the most relevant KB row and filter out irrelevant information for the generation process.", "GLMP (Wu et al., 2019a): the framework adopts the global-to-local pointer mechanism to query the knowledge base during decoding and achieves state-of-the-art performance.", "For Mem2Seq, DSR and KB-retriever, we adopt the reported results from Qin et al. (2019b) and Wu et al. (2019a).", "For GLMP, we rerun their public code to obtain results on the same datasets.", "For the Multi-WOZ 2.1 dataset, most dialogues are supported by more than a single KB row, which cannot be processed by KB-retriever, so we compare our framework with it only on the SMD and CamRest datasets.",
"Note that we find that Wu et al. (2019a) report the macro entity F1 as the micro F1, so we rerun their models (https://github.com/jasonwu0731/GLMP) to obtain results.", "Following prior work (Eric et al., 2017; Madotto et al., 2018; Wen et al., 2018; Wu et al., 2019a; Qin et al., 2019b), we adopt the BLEU and micro entity F1 metrics to evaluate model performance.", "The results on the two datasets are shown in Table 2, and we can observe that: 1) the basic shared-private framework outperforms the best prior model GLMP on all the datasets.", "This indicates that the combination of domain-shared and domain-specific features can better enhance each domain's performance compared with only utilizing the implicit domain-shared features.", "2) Our framework achieves state-of-the-art performance on the two multi-domain task-oriented dialog datasets, namely SMD and Multi-WOZ 2.1.", "On the SMD dataset, our model has the highest BLEU compared with the baselines, which shows that our framework can generate more fluent responses.", "More importantly, our model outperforms GLMP by 2.0% overall, 3.3% in the Navigate domain, 1.1% in the Weather domain and 0.6% in the Schedule domain on the entity F1 metric, which indicates that considering the relevance between the target-domain input and all domains is effective for enhancing the performance of each domain.", "On the Multi-WOZ 2.1 dataset, the same trend of improvement is witnessed, which further shows the effectiveness of our framework.", "We study the strengths of our model from several perspectives on the SMD dataset.", "We first conduct several ablation experiments to analyze the effect of different components in our framework.", "Next, we conduct domain adaptation experiments to verify the transferability of our framework given a new domain with little or no labeled data.", "In addition, we provide a visualization of the dynamic fusion layer and a case study to better understand how the module affects and contributes to the performance.", "Several ablation experiments and their results are shown in Table 3.", "In detail, 1) w/o Domain-shared Knowledge Transfer denotes that we remove the domain-shared features and just keep the fused domain-specific features for generation.", "2) w/o Domain Fusion Mechanism denotes that we simply sum all domain-specific features rather than use the MoE mechanism to dynamically fuse them.", "3) w/o Multi-Encoder represents that we remove the multi-encoder module and adopt one shared encoder in our framework.", "4) w/o Multi-Decoder represents that we remove the multi-decoder module and adopt one shared decoder.", "5) w/o Adversarial Training denotes that we remove the adversarial training from the experimental setting.", "Generally, all the proposed components contribute to the final performance.", "Specifically, we can clearly observe the effectiveness of our dynamic fusion mechanism, where removing domain-specific knowledge fusion causes a 1.8% drop, and the same trend holds for removing domain-shared knowledge fusion.", "This further verifies that domain-shared and domain-specific features are beneficial for each domain's performance.", "Figure 7: Distribution of the mixture-of-experts mechanism across source domains for 100 randomly selected examples in each domain on the SMD dataset.", "Low-Resource Setting.", "To simulate the low-resource setting, we keep two domains unchanged, and the ratio of kept data for the remaining domain varies over [1%, 5%, 10%, 20%, 30%, 50%] of the original data.", "The results are shown in Figure 5.",
"We can find that: (1) our framework outperforms the GLMP baseline at all ratios of the original dataset.", "When the data is only 5% of the original dataset, our framework outperforms GLMP by 13.9% on average across all domains.", "(2) Our framework trained with 5% of the training data can achieve comparable and even better performance compared to GLMP with 50% of the training data on some domains.", "This implies that our framework effectively transfers knowledge from other domains to achieve better performance for the low-resource new domain.", "Zero-Shot Setting.", "Specially, we further evaluate the domain adaptation ability in the zero-shot setting given an unseen domain.", "We randomly remove one domain from the training set, and the data of the other domains remains unchanged for training the model.", "During testing, for the unseen-domain input we use the MoE to automatically calculate the correlation between the other domains and the current input and obtain the results.", "Results are shown in Figure 6; we can see that our model significantly outperforms GLMP on the three domains, which further demonstrates the transferability of our framework.", "To better understand what our dynamic fusion layer has learned, we visualize the gate distribution for each domain in the low-resource (5%) setting, which fuses domain-specific knowledge among various cases.", "As shown in Figure 7, for a specific target domain, different examples may have different gate distributions, which indicates that our framework successfully learns how to transfer knowledge between different domains.", "For example, the navigation column contains 100 examples from its test set, and each row shows the corresponding expert value.", "More specifically, in the navigation column, we observe that the expert value of the schedule domain is bigger than that of the weather domain, which indicates that the schedule domain transfers more knowledge to navigation than the weather domain does.", "Furthermore, we provide one case for the navigation domain and its corresponding expert gate distribution.", "The cases are generated with 5% training data in the navigation domain while the other two domain datasets are kept the same, which can better show how the other two domains transfer knowledge to the low-resource domain.", "Figure 8: Case of expert gate distribution in the SMD dataset (the figure shows a weather case, Driver: 'Manhattan, please, will it be cloudy on Monday?', Car: 'Monday will be foggy', and a navigation case, 'Find location and address to home that is nearest me', together with the expert gate weights over the navigation, weather and schedule domains).", "Text segments in red represent words appearing in both the schedule and navigation domains.", "As shown in Figure 8, the expert value of the schedule domain is bigger than that of the weather domain, which indicates that schedule contributes more than the weather domain.", "In further exploration, we find that the words 'location' and 'set' appear in both the navigation and schedule domains, which shows that schedule has a closer relation with navigation than weather, and indicates that our model successfully transfers knowledge from the closest domain.", "We provide human evaluation of our framework and the other baseline models.", "We randomly generated 100 responses.", "These responses are based on distinct dialogue histories from the SMD test data.",
"Following Wen et al. (2018) and Qin et al. (2019b), we hired human experts and asked them to judge the quality of the responses according to correctness, fluency, and human-likeness on a scale from 1 to 5.", "Results are illustrated in Table 4.", "We can see that our framework outperforms GLMP on all metrics, which is consistent with the automatic evaluation.", "Existing end-to-end task-oriented systems can be classified into two main classes.", "One series of work trains a single model on the mixed multi-domain dataset.", "Eric et al. (2017) augments the vocabulary distribution by concatenating KB attention to generate entities.", "Lei et al. (2018) first integrates the tracking of dialogue beliefs into end-to-end task-oriented dialog.", "Madotto et al. (2018) combines an end-to-end memory network (Sukhbaatar et al., 2015) with sequence generation.", "Gangi Reddy et al. (2019) proposes a multi-level memory architecture which first addresses queries, followed by results, and finally each key-value pair within a result.", "Wu et al. (2019a) proposes a global-to-local pointer mechanism to query the knowledge base.", "Compared with their models, our framework can not only explicitly utilize domain-specific knowledge but also consider the different degrees of relevance between domains.", "Another series of work trains a model on each domain separately.", "Wen et al. (2018) leverages a dialogue state representation to retrieve the KB implicitly.", "Qin et al. (2019b) first adopts a KB-retriever to explicitly query the knowledge base.", "Their works consider only domain-specific features.", "In contrast, our framework explicitly leverages domain-shared features across domains.", "The shared-private framework has been explored for many other task-oriented dialog components.", "Liu and Lane (2017) applies a shared-private LSTM to generate shared and domain-specific features.", "Zhong et al. (2018) proposes a global-local architecture to learn shared features across all slots and specific features for each slot.", "More recently, Zhang et al. (2018) utilizes the shared-private model for text style adaptation.", "In our work, we explore the shared-private framework in end-to-end task-oriented dialog to better transfer domain knowledge for querying the knowledge base.",
"In addition, we take inspiration from Guo et al. (2018), who successfully apply the mixture-of-experts (MoE) mechanism to multi-source domain and cross-lingual adaptation tasks.", "Our model not only combines the strengths of MoE to incorporate domain-specific features, but also applies adversarial training to encourage the generation of shared features.", "To the best of our knowledge, we are the first to effectively explore the shared-private framework in multi-domain end-to-end task-oriented dialog.", "In this paper, we propose to use a shared-private model to investigate explicit modeling of domain knowledge for multi-domain dialog.", "In addition, a dynamic fusion layer is proposed to dynamically capture the correlation between a target domain and all source domains.", "Experiments on two datasets show the effectiveness of the proposed models.", "Besides, our model can quickly adapt to a new domain with little annotated data.", "We thank Min Xu, Jiapeng Li, Jieru Lin and Zhouyang Li for their insightful discussions.", "We also thank all the anonymous reviewers for their constructive comments.", "This work was supported by the National Natural Science Foundation of China (NSFC) via grants 61976072, 61632011 and 61772153.", "Besides, this work also received support via a Westlake-BrightDreams Robotics research grant." ]
[ "abstain", "abstain", "abstain", "abstain", "objective", "objective", "result", "result", "abstain", "other", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "method", "result", "result", "objective", "objective", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "other", "abstain", "result", "abstain", "result", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "objective", "method", "method", "objective", "objective", "abstain", "abstain", "objective", "other", "other", "other", "other" ]
[ "In traditional Distributional Semantic Models (DSMs) the multiple senses of a polysemous word are conflated into a single vector space representation.", "In this work, we propose a DSM that learns multiple distributional representations of a word based on different topics.", "First, a separate DSM is trained for each topic and then each of the topic-based DSMs is aligned to a common vector space.", "Our unsupervised mapping approach is motivated by the hypothesis that words preserving their relative distances in different topic semantic sub-spaces constitute robust semantic anchors that define the mappings between them.", "Aligned cross-topic representations achieve state-of-the-art results for the task of contextual word similarity.", "Furthermore, evaluation on NLP downstream tasks shows that multiple topic-based embeddings outperform single-prototype models.", "Word-level representation learning algorithms adopt the distributional hypothesis (Harris, 1954), presuming a correlation between the distributional and the semantic relationships of words.", "Typically, these models encode the contextual information of words into dense feature vectorsoften referred to as embeddings of a k -dimensional space, thus creating a Vector Space Model (VSM) of lexical semantics.", "Such embeddings have been successfully applied to various natural language processing applications, including information retrieval (Manning et al., 2008), sentiment analysis (Tai et al., 2015), and machine translation (Amiri et al., 2016; Sharaf et al., 2017).", "Despite their popularity, traditional DSMs rely solely on models where each word is uniquely represented by one point in the vector space.", "From a The research was performed when the author was an undergraduate researcher at School of ECE, NTUA in Athens, Greece.", "linguistic perspective, these models cannot capture the distinct meanings of polysemous words (e.g., bank or cancer ), resulting in conflated word representations of diverse contextual semantics.", "To alleviate this problem, DSMs with multiple representations per word have been proposed in the literature, based on clustering local contexts of individual words (Reisinger and Mooney, 2010; Tian et al., 2014; Neelakantan et al., 2014).", "An alternative way to train multiple representation DSMs is to utilize semantic lexical resources (Rothe and Schutze, 2015; Pilehvar and Collier, 2016).", "Christopoulou et al. 
"Christopoulou et al. (2018), based on the assumption that words typically appear with a specific sense within each topic, proposed a topic-based semantic mixture model that exploits a combination of similarities estimated on topic-based DSMs for the computation of semantic similarity between words.", "Their model performs well on a variety of semantic similarity tasks; however, it lacks a unified representation of multiple senses in a common semantic space.", "The problem of defining transformations between embeddings (trained independently on different corpora) has been previously examined in various works, such as machine translation (Mikolov et al., 2013b; Xing et al., 2015; Artetxe et al., 2016), the induction of historical embeddings (Hamilton et al., 2016) and the enrichment of lexical resources (Prokhorov et al., 2017).", "Following this line of research, we induce the creation of multiple cross-topic word embeddings by projecting the semantic representations of topic-based DSMs to a unified semantic space.", "We investigate different ways to perform the mappings from the topic sub-spaces to the unified semantic space, and propose a completely unsupervised approach to extract semantic anchors that define those mappings.", "Furthermore, we claim that polysemous words change their meaning in different topic domains; this is reflected in relative shifts of their distributional representations in different topic-based DSMs.", "On the other hand, semantic anchors should have consistent semantic relationships regardless of the domain they reside in.", "Hence, their distributions of similarity values should also be similar across different domains.", "Finally, we apply a smoothing technique to each word's set of topic embeddings, resulting in representations with fine-grained semantics.", "To our knowledge, this is the first time that mappings between semantic spaces are applied to the problem of learning multiple embeddings for polysemous words.", "Our multi-topic word representations are evaluated on the contextual semantic similarity task and yield state-of-the-art performance compared to other unsupervised multi-prototype word embedding approaches.", "We further perform experiments on two NLP downstream tasks, text classification and paraphrase identification, and demonstrate that our learned word representations consistently provide higher performance than single-prototype word embedding models.", "The code of the present work is publicly available at https://github.com/Elbria/utdsm_naacl2018.", "Methods that assign multiple distributed representations per word can be grouped into two broad categories.", "(We limit our discussion to related works that use monolingual DSMs and corpora.)", "Unsupervised methods induce multiple word representations without leveraging semantic lexical resources.", "Reisinger and Mooney (2010) were the first to create a multi-prototype DSM with a fixed number of vectors assigned to each word.", "In their model, the centroids of context-dependent clusters were used to create a set of sense-specific vectors for each target word.", "Based on similar clustering approaches, follow-up works introduced neural network architectures that incorporated both local and global context in a joint training objective (Huang et al., 2012), as well as methods that jointly performed word sense clustering and embedding learning, as in Neelakantan et al. (2014) and Li and Jurafsky (2015).",
"A probabilistic framework was introduced by Tian et al. (2014), where the Skip-Gram model of Word2Vec was modified to learn multiple embedding vectors.", "Furthermore, latent topics were integrated into the Skip-Gram model, resulting in topical word embeddings which modeled the semantics of a word under different contexts (Liu et al., 2015b,a; Nguyen et al., 2017).", "Another topic-related embedding creation approach was proposed in Christopoulou et al. (2018), where a mixture of topic-based semantic models was extracted by topical adaptation of in-domain corpora.", "Other approaches used autoencoders (Amiri et al., 2016), convolutional neural networks designed to produce context representations that reflect the order of words in a context (Zheng et al., 2017), and reinforcement learning (Lee and Chen, 2017; Guo et al., 2018).", "Supervised approaches, based on prior knowledge acquired from sense inventories (e.g., WordNet) along with word sense disambiguation algorithms, were also introduced for sense-specific representation extraction (Chen et al., 2014; Iacobacci et al., 2015).", "In other works, pre-trained word embeddings have been extended to embeddings of lexemes and synsets (Rothe and Schütze, 2015) or were de-conflated into their constituent sense representations (Pilehvar and Collier, 2016) by exploiting semantic lexical resources.", "Our system follows a four-step approach:", "1. Global Distributional Semantic Model.", "Given a large collection of text data we train a DSM that encodes the contextual semantics of each word into a single representation, also referred to as the Global-DSM.", "2. Topic-based Distributional Semantic Models.", "Next, a topic model is trained using the same corpus.", "The topic model splits the corpus into K (possibly overlapping) sub-corpora.", "A DSM is then trained on each sub-corpus, resulting in K topic-based DSMs (TDSMs).", "The topical adaptation of the semantic space takes into account the contextual variations a word exhibits under different thematic domains and therefore leads to the creation of topic-specific vectors (topic embeddings).", "3. Mapping TDSMs to a unified semantic space.", "Each of the topic-based DSMs is aligned to a common vector space via an unsupervised self-learning scheme.", "In the unified semantic space each word is represented by a set of topic embeddings that were previously isolated in distinct vector spaces, thus creating a Unified multi-Topic DSM (UTDSM).",
"4. Smoothing of topic embeddings.", "As an optional step, we employ a smoothing approach in order to cluster a word's topic embeddings into N Gaussian distributions via a Gaussian Mixture Model (GMM).", "This step lessens the noise introduced to our system through the semantic mappings and sparse training data.", "Figure 1: Simplified depiction summarizing the intuition behind the alignment process of topic embeddings.", "In the unified vector space, the polysemous word cancer is represented by two topic vectors that capture different semantic properties of the word under a zodiacal and a medical topic.", "The words astrology and tumor are examples of semantic anchors that define the mappings.", "The first step towards the thematic adaptation of the semantic space is the induction of in-domain corpora using Latent Dirichlet Allocation (LDA) (Blei et al., 2003).", "LDA is a generative probabilistic model of a corpus.", "Its core idea is that documents are represented as random mixtures over topics, where each topic is defined as a probability distribution over a collection of words.", "Given as input a corpus of documents, LDA trains a topic model and creates a distribution of words for each topic in the corpus.", "Using the trained LDA model, we infer a topic distribution for each sentence in the corpus.", "Afterward, following a soft clustering scheme, each sentence is included in a topic-specific corpus when the posterior probability for the corresponding topic exceeds a predefined threshold.", "The resulting topic sub-corpora are then used to train topic-based DSMs.",
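A small sketch of the sub-corpora induction described above, using Gensim's LdaModel; the toy sentences and parameter values other than the soft-clustering threshold are placeholders, not the paper's setup.

```python
from gensim import corpora, models

# Hypothetical toy corpus; the paper uses sentences from English Wikipedia.
sentences = [["stocks", "fell", "on", "monday"], ["the", "tumor", "was", "benign"]]
dictionary = corpora.Dictionary(sentences)
corpus = [dictionary.doc2bow(s) for s in sentences]
lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, passes=5)

# Soft clustering: a sentence joins every topic sub-corpus whose posterior
# probability exceeds the threshold (0.1 in the paper), so sub-corpora may overlap.
threshold = 0.1
sub_corpora = {k: [] for k in range(lda.num_topics)}
for sent, bow in zip(sentences, corpus):
    for k, prob in lda.get_document_topics(bow, minimum_probability=0.0):
        if prob > threshold:
            sub_corpora[k].append(sent)
```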
"Any of the DSM training algorithms proposed in the literature can be used for this purpose; in this paper, we opt for the Word2Vec model (Mikolov et al., 2013a).", "The intrinsic non-determinism of the Word2Vec algorithm leads to the creation of continuous vector spaces that are not naturally aligned to a unified semantic reference space, precluding the comparison between words of different thematic domains.", "To circumvent this limitation, we need to map the word representations of the TDSMs to a shared vector space.", "In particular, we hypothesize that TDSMs capture meaningful variations in the usage of polysemous words, while the relative semantic distance between monosemous words is preserved.", "This hypothesis motivated us to think of monosemous words as anchors between semantic spaces, as illustrated in Figure 1.", "One way to retrieve the list of anchors is to extract monosemous words from lexical resources such as WordNet (Prokhorov et al., 2017).", "However, this method is restricted to languages where such lexical resources exist and depends on the lexical coverage and quality of such resources.", "To overcome the above limitations, we propose a fully unsupervised method for semantic anchor induction.", "Although the embeddings of the topic and global semantic vector spaces are not aligned, their corresponding similarity matrices (once normalized) are.", "Based on this observation, we compute the similarity between a given word and every other word in the vocabulary (its similarity distribution) for the different topic and global spaces.", "Then, we assume that good semantic anchors should have similar similarity distributions across the topic-specific and the global space, as illustrated in Figure 2.", "Artetxe et al. (2018) relied on a similar observation to align semantic vector spaces in a bilingual machine translation context.", "Let V be the intersection of the Global-DSM and the K TDSM vocabularies and d the embedding dimension.", "We then define X_k ∈ R^{|V|×d} as the embedding matrix of the k-th TDSM, and Y ∈ R^{|V|×d} as the embedding matrix of the global DSM, where the i-th row of each matrix corresponds to the unit-normalized representation of a word in V.", "Then, we define S_k = X_k X_k^⊤ and S_g = Y Y^⊤ ∈ R^{|V|×|V|} to be the similarity distribution matrices of the k-th TDSM and the Global-DSM, respectively.", "Our objective is then to extract a list of semantic anchors A that minimizes the Euclidean distance between the two different similarity distributions.", "Specifically, for every word i we calculate the average semantic distribution across all topics, ⟨s_ik⟩_k = (1/K) Σ_{k=1}^{K} s_ik, (1) and the criterion ‖⟨s_ik⟩_k − s_ig‖_2, i = 1, ..., |V|, (2) where s_ig and s_ik are the i-th rows of the S_g and S_k similarity matrices, respectively, representing the similarity distribution between word i and every other word in the vocabulary V.", "We then choose the |A| anchors as the words with the smallest values according to criterion (2).",
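The anchor criterion of Eqs. (1)-(2), together with the closed-form SVD solution of the orthogonal mapping problem introduced next (Eq. (3), orthogonal Procrustes), can be sketched in numpy as follows; vocabulary size, dimensionality and anchor count are illustrative.

```python
import numpy as np

def select_anchors(topic_embs, global_emb, n_anchors):
    """Eqs. (1)-(2): rank words by the distance between their average topic
    similarity distribution and their global similarity distribution.
    topic_embs: list of K |V| x d matrices (rows unit-normalized); global_emb: |V| x d."""
    S_topics = np.mean([X @ X.T for X in topic_embs], axis=0)  # Eq. (1), rows are <s_ik>_k
    S_global = global_emb @ global_emb.T
    dist = np.linalg.norm(S_topics - S_global, axis=1)         # Eq. (2), one value per word
    return np.argsort(dist)[:n_anchors]                        # smallest distances

def orthogonal_map(source, target):
    """Closed-form orthogonal Procrustes solution (Schonemann, 1966) of Eq. (3):
    M = U V^T from the SVD of target^T source maps source anchors onto target ones."""
    u, _, vt = np.linalg.svd(target.T @ source)
    return u @ vt

# Hypothetical usage with one topic space and a global space.
rng = np.random.default_rng(0)
Xk = rng.normal(size=(100, 8)); Xk /= np.linalg.norm(Xk, axis=1, keepdims=True)
Y = rng.normal(size=(100, 8));  Y /= np.linalg.norm(Y, axis=1, keepdims=True)
anchors = select_anchors([Xk], Y, n_anchors=20)
M = orthogonal_map(Xk[anchors], Y[anchors])
projected = Xk @ M.T   # Eq. (4): x'_k = M_k x_k, applied row-wise
```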
"Furthermore, we assume that there exists an orthogonal transformation matrix between the topic embeddings of the extracted semantic anchors of each TDSM (source space) and the corresponding representations of the Global-DSM (target space).", "The orthogonality constraint on the transformation matrix is widely adopted in the literature for various semantic space alignment tasks (Xing et al., 2015; Artetxe et al., 2016; Hamilton et al., 2016).", "Assume a_jk ∈ R^d is the vector representation of the j-th anchor word in the source space and a_jg ∈ R^d is its corresponding vector representation in the target space.", "The transformation matrix M_k ∈ R^{d×d} that projects the first space onto the latter is learned by solving the following constrained optimization problem: min_{M_k} Σ_{j=1}^{|A|} ‖M_k a_jk − a_jg‖²₂, s.t. M_k M_k^⊤ = I. (3)", "(This problem is known as the orthogonal Procrustes problem and has a closed-form solution, as proposed in Schönemann (1966).)", "The induction of multiple topic embeddings in the unified vector space is achieved by applying Equation 3 to each TDSM.", "Specifically, given a word and its k-th topic distributed representation x_k ∈ R^d, we compute its projected representation x′_k ∈ R^d as follows: x′_k = M_k x_k. (4)", "3.3 Smoothing of Topic Embeddings.", "Starting from the set of aligned topic embeddings {x′_k}_{k=1}^{K} for each word, we learn a Gaussian Mixture Model with N components, where closely positioned topic embeddings are assigned to the same component.", "This step operates as an implicit way of segmenting the space of topic embeddings for each word in order to capture more useful hyper-topics (unions of topics) which better represent their different meanings.", "We suggest that each Gaussian distribution forms a semantically coherent unit that corresponds to closely related semantics of the target word.", "Subsequently, the mean vector of each Gaussian distribution is used as the representative vector of each component, leading to a new set of smoothed topic embeddings {x_n}_{n=1}^{N} for each word, where x_n ∈ R^d.", "As our initial corpus we used the English Wikipedia, containing 8.5 million articles (Turney, 2012).", "During the training of the topic model, we used the articles found in the Wikipedia corpus and employed the Gensim implementation of LDA, setting the number of topics K to 50.", "Using a threshold of 0.1, we followed a soft-clustering approach to bootstrap the creation of topic sub-corpora using our trained topic model.", "Finally, we used Gensim's implementation of Word2Vec with the Continuous Bag-of-Words method to train both the Global-DSM and the TDSMs.", "The context window parameter of Word2Vec is set to 5, while the dimensionality d of all the constructed DSMs is equal to 300 or 500.", "(Any parameter not mentioned is set to the default values of the corresponding implementations, e.g., Word2Vec and Gensim LDA.)", "4.2 Semantic Anchors.", "The number of semantic anchors that determine the mappings between our source and target spaces is set to |A| = 5,000 according to our unsupervised approach (criterion 2).", "(We experimented with anchor counts from {1,000, 2,000, 3,000, 4,000, 5,000} and report results for the best setup.)", "These are selected from the common set of words that are represented in all semantic spaces, with |V| ≈ 12,000.", "As a second experiment, we randomly sample |A| words from the vocabulary of each TDSM to define its transformation matrix.", "We repeat this experiment 10 times, every time sampling a different list from the corresponding vocabulary, and report average performance results.", "To apply the smoothing technique on the set of a word's topic embeddings we use the Scikit-learn implementation of the Gaussian Mixture Model clustering algorithm (Pedregosa et al., 2011).", "We initialize the mean vector of each component using the k-means algorithm, and the parameters of the model are estimated using the Expectation-Maximization (EM) algorithm.",
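Mirroring the Scikit-learn setup just described, here is a minimal sketch of the smoothing step; sizes such as K=50, d=300 and N=2 are placeholders for one word's set of aligned topic embeddings.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# One word's K aligned topic embeddings (toy random stand-ins).
rng = np.random.default_rng(0)
topic_embeddings = rng.normal(size=(50, 300))

# Fit N Gaussian components (k-means initialization, EM estimation) and keep
# the component means as the smoothed topic embeddings (one per hyper-topic).
gmm = GaussianMixture(n_components=2, init_params='kmeans', max_iter=100)
gmm.fit(topic_embeddings)
smoothed = gmm.means_   # N x d representative vectors
```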
"To estimate the semantic similarity between a pair of words provided in sentential context, we use the standard evaluation Stanford Contextual Word Similarity (SCWS) dataset (Huang et al., 2012), which consists of 2,003 word pairs with assigned semantic similarity scores computed as the average estimations of several human annotators.", "Following the evaluation guidelines proposed in the literature, we employ the AvgSimC and MaxSimC contextual metrics, first discussed in Reisinger and Mooney (2010).", "In particular, given the word pair (w, w′) and their provided contexts (c, c′), we define: AvgSimC(w, w′) = (1/K²) Σ_{j=1}^{K} Σ_{k=1}^{K} p(j|w,c) p(k|w′,c′) d(x′_j(w), x′_k(w′)), (5) and MaxSimC(w, w′) = d(x′(w), x′(w′)). (6)", "Following the notation used in 3.2, K is the number of topics returned by the trained LDA model, x′_j is the word embedding trained on the sub-corpus corresponding to the j-th topic after being projected to the unified vector space, p(j|w,c) denotes the posterior probability of topic j returned by LDA given as input the context c of word w, d denotes the cosine similarity between the two input representations, and finally x′(w) = x′_ĵ(w), with ĵ = argmax_{1≤j≤K} p(j|w,c), is the vector representation of word w that corresponds to the topic with the maximum posterior for c.", "Intuitively, a higher score in MaxSimC indicates the existence of more robust multi-topic word representations.", "On the other hand, AvgSimC provides a topic-based smoothed result across different embeddings.",
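A compact numpy sketch of Eqs. (5)-(6), assuming the rows of the topic-embedding matrices are unit-normalized so that dot products equal the cosine similarity d; the toy posteriors and embeddings are made up.

```python
import numpy as np

def avg_max_simc(p_w, p_v, X_w, X_v, K):
    """Eqs. (5)-(6): p_w, p_v are LDA posteriors p(j|w,c) over the K topics for
    each word's context; X_w, X_v are K x d matrices of aligned topic embeddings."""
    sim = X_w @ X_v.T                               # all pairwise d(x'_j(w), x'_k(w'))
    avg_simc = (p_w @ sim @ p_v) / K**2             # Eq. (5), with the 1/K^2 factor
    max_simc = sim[np.argmax(p_w), np.argmax(p_v)]  # Eq. (6), dominant-topic vectors
    return avg_simc, max_simc

# Toy example with K=3 topics and d=4.
rng = np.random.default_rng(0)
Xw = rng.normal(size=(3, 4)); Xw /= np.linalg.norm(Xw, axis=1, keepdims=True)
Xv = rng.normal(size=(3, 4)); Xv /= np.linalg.norm(Xv, axis=1, keepdims=True)
print(avg_max_simc(np.array([0.7, 0.2, 0.1]), np.array([0.1, 0.8, 0.1]), Xw, Xv, K=3))
```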
"Using a single Gaussian distribution (UTDSM + GMM (1)) at the smoothing step of our method produces similar results to the baseline model.", "This is anticipated, as both methods provide a centroid representation of a word's diverse semantics.", "In terms of MaxSimC, the model consistently yields higher performance when the list of semantic anchors is induced via our unsupervised method instead of using randomly selected anchor words (UTDSM Random).", "We also observe that random anchoring performs slightly worse than UTDSM with respect to AvgSimC.", "This result validates our hypothesis that the representations of words which share consistent similarity distributions across different topic domains constitute informative semantic anchors that determine the mappings between semantic vector spaces.", "Furthermore, we observe that GMM smoothing has a different effect on the MaxSimC and AvgSimC metrics.", "Specifically, for AvgSimC we consistently report lower results when GMM smoothing is applied, for different numbers of components.", "We attribute this behavior to a possible loss of model capacity (a decrease in the number of topic embeddings) that would otherwise capture additional topic information.", "At the same time, our smoothing technique greatly improves the performance of MaxSimC for all possible configurations.", "Given that this metric is more sensitive to noisy word representations, this result indicates that our technique lessens the noise introduced to our system and captures finer-grained topic senses of words.", "Overall, the performance of our model is highly competitive with the state-of-the-art models in terms of AvgSimC, for 500-dimensional topic embeddings.", "We also achieve state-of-the-art performance for the MaxSimC metric, using smoothed topic embeddings of 300 or 500 dimensions with 2 or 3 Gaussian components.", "Evaluation results on text classification are presented in Table 2.", "We observe that our model performs better than the baseline across all metrics for both averaging approaches (AvgC_D, Avg_D), while the usage of dominant topics appears to have lower performance (MaxC_D).", "Specifically, we get an improvement of 2-2.5% on the topic-based average and 0.5-1% on the simple average combination compared to using Global-DSM.", "Results for the paraphrase identification task are presented in Table 3."
"Avg_D yields the best results, especially on the F1 metric, showing that cross-topic representations are semantically richer than the single-embedding baseline (Global-DSM).", "Although we apply the topic distributions p(k|D) extracted from LDA (a document-level model) to a sentence-level task, improvements over the baseline are also shown in the AvgC_D and MaxC_D cases.", "Overall, the proposed UTDSM model outperforms the baseline Global-DSM model on both semantic similarity and downstream tasks.", "Finally, we carry out a cross-domain semantic analysis to detect the variations of a word's meaning in different topic domains.", "To that end, we use a list of known polysemous words and measure the semantic similarity between different topic representations of the same ambiguous word.", "The ultimate goal of this analysis is to validate that our model captures known thematic variations in the semantics of polysemous words.", "Table 4 includes examples of our analysis.", "The most probable words of the topics (second column) give an intuitive sense of their major contexts, while their nearest neighbors (third column) reveal the sense of the target word in the corresponding topic domain.", "For example, the word 'drug' is mostly related to medication in a broad medical domain; however, it experiences a slight shift from this meaning when it resides in a topic about illegal substances.", "Furthermore, the highly polysemous word 'act' shifts from meaning 'statute' to meaning 'performance' under the corresponding law and art topics.", "Similar semantic variations are observed for the words 'python', 'rock' and 'nursery'.", "Moreover, in Figure 3 we visualize the topic embeddings of seven words before and after projecting the topic-based DSMs to the unified space, using principal component analysis.", "We additionally depict the Gaussian distribution learned from the topic representations of each word, reflecting the uncertainty of their meanings.", "The center of each distribution is specified by the mean vector, and its contour surface by the covariance matrix.", "On the left, we depict the position of words prior to applying the unsupervised mapping approach, where the topic sub-spaces are unaligned.", "In the unaligned space, words demonstrate similar area coverage regardless of their polysemy.", "After the mappings, we see on the right that the area under a word's distribution is indicative of its degree of polysemy.", "[Footnote 7] Similar results were obtained for each metric using smoothed word embeddings.", "Also, there are no standard evaluation approaches for comparison of previous works on downstream tasks.", "[Footnote 8] Note that a topic domain is described as a distribution over words in our model.", "Specifically, we observe that the variance of the learned representations becomes larger for the cases of polysemous words such as 'python', 'java', 'adobe', in order to assign some probability to their diverse meanings.", "Monosemous words such as 'snake', 'microsoft' and 'malay' have smaller variances.", "Furthermore, we observe that the semantic relationships between words are much better captured by their corresponding positions in the aligned space.", "We present an unsupervised approach for mapping multiple topic-based DSMs to a unified vector space in order to capture different contextual semantics of words.", "We assume that words having consistent similarity distributions regardless of the domain they exist in could be considered informative semantic anchors that determine the mappings between semantic spaces."
"The projected word embeddings yield state-of-the-art results on contextual similarity compared to previously proposed unsupervised approaches for creating multiple word embeddings, while they also outperform single vector representations in downstream NLP tasks.", "In addition, we provide insightful visualizations and examples that demonstrate the capability of our model to capture variations in the topic semantics of words.", "As future work, one can hypothesize that the area a word covers in the mapped space reveals its semantic range.", "In this direction, a refinement of the semantic anchor selection approach could be explored in an iterative way, assuming that the variance of a word's Gaussian distribution denotes its degree of polysemy (Vilnis and McCallum, 2015).", "Moreover, we would like to explore a more sophisticated smoothing technique where the number of Gaussian components is adapted for each word.", "Given that Gaussian mixture embeddings could capture the uncertainty of a word's representation in the semantic space, one could also investigate different metrics for measuring the semantic relationship between word pairs that go beyond their point-wise comparison.", "Finally, it may be helpful to investigate non-linear mappings between semantic spaces using deep neural network architectures.", "Acknowledgments This work has been partially funded by the BabyRobot project, supported by the EU Horizon 2020 Programme under grant #687831.", "Zellig S. Harris. 1954. Distributional structure. Word, 10(2-3):146-162.", "Ignacio Iacobacci, Mohammad Taher Pilehvar, and Roberto Navigli. 2015. SensEmbed: Learning sense embeddings for word and relational similarity. In Proc. Annual Meeting of the Association for Computational Linguistics (ACL), pages 95-105.", "Guang-He Lee and Yun-Nung Chen. 2017. MUSE: Modularizing unsupervised sense embeddings. In Proc. Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 327-337.", "Jiwei Li and Dan Jurafsky. 2015. Do multi-sense embeddings improve natural language understanding? In Proc. Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1722-1732.", "Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. 2015a. Learning context-sensitive word embeddings with neural tensor skip-gram model. In Proc. International Joint Conference on Artificial Intelligence (IJCAI), pages 1284-1290.", "Yang Liu, Zhiyuan Liu, Tat-Seng Chua, and Maosong Sun. 2015b. Topical word embeddings. In Proc. AAAI Conference on Artificial Intelligence, pages 2418-2424.", "Christopher D. Manning, Prabhakar Raghavan, and Hinrich Schütze. 2008. Introduction to Information Retrieval. Cambridge University Press.", "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. CoRR, abs/1301.3781.", "Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013b. Linguistic regularities in continuous space word representations. In HLT-NAACL.", "Dai Quoc Nguyen, Dat Quoc Nguyen, Ashutosh Modi, Stefan Thater, and Manfred Pinkal." ]
[ "abstain", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "abstain", "abstain", "result", "objective", "method", "objective", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "other", "method", "method", "abstain", "abstain", "other", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "result", "result", "abstain", "result", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain" ]
[ "We present a new dataset and models for comprehending paragraphs about processes (e.g., photosynthesis), an important genre of text describing a dynamic world.", "The new dataset, ProPara, is the first to contain natural (rather than machine-generated) text about a changing world along with a full annotation of entity states (location and existence) during those changes (81k datapoints).", "The end-task, tracking the location and existence of entities through the text, is challenging because the causal e ects of actions are often implicit and need to be inferred.", "We find that previous models that have worked well on synthetic data achieve only mediocre performance on ProPara, and introduce two new neural models that exploit alternative mechanisms for state prediction, in particular using LSTM input encoding and span prediction.", "The new models improve accuracy by up to 19%.", "The dataset and models are available to the community at http://data.allenai.org/propara .", "Building a reading comprehension (RC) system that is able to read a text document and to answer questions accordingly has been a long-standing goal in NLP and AI research.", "Impressive progress has been made in factoid-style reading comprehension, e.g., (Seo et al., 2017a; Clark and Gardner, 2017), enabled by well-designed datasets and modern neural network models.", "However, these models still struggle with questions that require inference (Jia and Liang, 2017).", "Consider the paragraph in Figure 1 about photosynthesis.", "While top systems on SQuAD (Ra-jpurkar et al., 2016) can reliably answer lookup questions such as: Q1 : What do the roots absorb?", "(A: water, minerals) they struggle when answers are not explicit, e.g., Q2 : Where is sugar produced?", "Bhavana Dalvi Mishra and Lifu Huang contributed equally to this work.", "1 For example, the RC system BiDAF (Seo et al., 2017a) answers glucose to this question.", "Chloroplasts in the leaf of the plant trap light from the sun.", "The roots absorb water and minerals from the soil.", "This combination of water and minerals flows from the stem into the leaf.", "Carbon dioxide enters the leaf .", "Light, water and minerals, and the carbon dioxide all combine into a mixture.", "This mixture forms sugar (glucose) which is what the plant eats.", "To answer Q2, it appears that a system needs knowledge of the world and the ability to reason with state transitions in multiple sentences: If carbon dioxide enters the leaf (stated), then it will be at the leaf (unstated), and as it is then used to produce sugar, the sugar production will be at the leaf too.", "This challenge of modeling and reasoning with a changing world is particularly pertinent in text about processes , demonstrated by the paragraph in Figure 1. 
Understanding what is happening in such texts is important for many tasks, e.g., procedure execution and validation, effect prediction.", "However, it is also difficult because the world state is changing, and the causal effects of actions on that state are often implicit.", "To address this challenging style of reading comprehension problem, researchers have created several datasets.", "The bAbI dataset (Weston et al., 2015) includes questions about objects moved throughout a paragraph, using machine-generated language over a deterministic domain with a small lexicon.", "[Figure 2: A (simplified) annotated paragraph from ProPara.]", "The SCoNE dataset (Long et al., 2016) contains paragraphs describing a changing world state in three synthetic, deterministic domains, and assumes that a complete and correct model of the initial state is given for each task.", "However, approaches developed using synthetic data often fail to handle the inherent complexity in language when applied to organic, real-world data (Hermann et al., 2015; Winograd, 1972).", "In this work, we create a new dataset, ProPara (Process Paragraphs), containing 488 human-authored paragraphs of procedural text, along with 81k annotations about the changing states (existence and location) of entities in those paragraphs, with an end-task of predicting location and existence changes that occur.", "This is the first dataset containing annotated, natural text for real-world processes, along with a simple representation of entity states during those processes.", "A simplified example is shown in Figure 2.", "When applying existing state-of-the-art systems, such as Recurrent Entity Networks (Henaff et al., 2016) and Query Reduction Networks (Seo et al., 2017b), we find that they do not perform well on ProPara, and the results are only slightly better than the majority baselines.", "As a step forward, we propose two new neural models that use alternative mechanisms for state prediction and propagation, in particular using LSTM input encoding and span prediction.", "The new models improve accuracy by up to 19%.", "Our contributions in this work are twofold: (1) we create ProPara, a new dataset for process paragraph comprehension, containing annotated, natural language paragraphs about real-world processes, and (2) we propose two new models that learn to infer and propagate entity states in novel ways, and outperform existing methods on this dataset.", "Datasets: Large-scale reading comprehension datasets, e.g., SQuAD (Rajpurkar et al., 2016), TriviaQA (Joshi et al., 2017), have successfully driven progress in question answering, but largely target explicitly stated facts.", "Often, the resulting systems can be fooled (Jia and Liang, 2017), prompting efforts to create harder datasets where a deeper understanding of the text appears necessary (Welbl et al., 2017; Araki et al., 2016).", "Procedural text is a genre that is particularly challenging, because the worlds it describes are largely implicit and changing.", "While there are few large datasets in this genre, two exceptions are bAbI (Weston et al., 2015) and SCoNE (Long et al., 2016), described earlier.", "bAbI has helped advance methods for reasoning over text, such as memory network architectures (Weston et al., 2014), but has also been criticized for using machine-generated text over a simulated domain.", "SCoNE is closer to our goal, but has a different task (given a perfect world model of the initial state, predict the end state) and a different motivation (handling ellipsis and coreference in context).
", "It also used a deterministic, simulated world to generate data.", "Models: For answering questions about procedural text, early systems attempted to extract a process structure (events, arguments, relations) from the paragraph, e.g., ProRead (Berant et al., 2014), and for newswire (Caselli et al., 2017).", "This allowed questions about event ordering to be answered, but not about state changes, which remain unmodelled by these approaches.", "More recently, several neural systems have been developed to answer questions about the world state after a process, inspired in part by the bAbI dataset.", "Building on the general Memory Network architecture (Weston et al., 2014) and gated recurrent models such as GRU (Cho et al., 2014), Recurrent Entity Networks (EntNet) (Henaff et al., 2016) is a state-of-the-art method for bAbI.", "EntNet uses a dynamic memory of hidden states (memory blocks) to maintain a representation of the world state, with a gated update at each step.", "Memory keys can be preset ('tied') to particular entities in the text, to encourage the memories to record information about those entities.", "Similarly, Query Reduction Networks (QRN) (Seo et al., 2017b) track state in a paragraph, represented as a hidden vector h.", "[Footnote 2] The ProcessBank (Berant et al., 2014) dataset is smaller and does not address state change, instead containing 585 questions about event ordering and event arguments.", "QRN performs gated propagation of h across each time-step (corresponding to a state update), and uses h to modify (reduce) the query to keep pointing to the answer at each step (e.g., 'Where is the apple?' at step 1 might be modified to 'Where is Joe?' at step 2 if Joe picks up the apple).", "A recent proposal, Neural Process Networks (NPN) (Bosselut et al., 2018), also models each entity's state as a vector (analogous to EntNet's tied memories).", "NPN computes the state change at each step from the step's predicted action and affected entities, then updates the entity vectors accordingly, but does not model different effects on different entities by the same action.", "Both EntNet and QRN find a final answer by decoding the final vector(s) into a vocabulary entry via softmax classification.", "In contrast, many of the best performing factoid QA systems, e.g., (Seo et al., 2017a; Clark and Gardner, 2017), return an answer by finding a span of the original paragraph using attention-based span prediction, a method suitable when there is a large vocabulary.", "We combine this span prediction approach with state propagation in our new models.", "Task: Our dataset, ProPara, focuses on a particular genre of procedural text, namely simple scientific processes (e.g., photosynthesis, erosion).", "A system that understands a process paragraph should be able to answer questions such as: What are the inputs to the process?, What is converted into what?, and Where does the conversion take place?
Many of these questions reduce to understanding the basic dynamics of entities in the process, and we use this as our task: Given a process paragraph and an entity e mentioned in it, identify: (1) Is e created (destroyed, moved) in the process?", "(2) When (step #) is e created (destroyed, moved)?", "(3) Where is e created (destroyed, moved from/to)?", "If we can track the entities' states through the process and answer such questions, many of the higher-level questions can be answered too.", "To do this, we now describe how these states are represented in ProPara, and how the dataset was built.", "Each participant is an entity (a span in the paragraph, typically a noun phrase) that undergoes some creation, destruction, or movement in the process.", "Each row denotes the states of all the participants after a step.", "Each sentence is a step that may change the state of one or more participants.", "Therefore, a process paragraph with m sentences and n participants will result in an (m+1) × n grid representation.", "Each cell l_ij in this grid records the location of the j-th participant after the i-th step, and l_0j stores the location of the j-th participant before the process.", "Figure 2 shows one example of this representation.", "Paragraph Authoring: To collect paragraphs, we first generated a list of 200 process-evoking prompts, such as 'What happens during photosynthesis?', by instantiating five patterns with nouns of the corresponding type from a science vocabulary, followed by manual rewording.", "Then, crowdsourcing (MTurk) workers were shown one of the prompts and asked to write a sequence of event sentences describing the process.", "Each prompt was given to five annotators to produce five (independent) paragraphs.", "Short paragraphs (4 or fewer sentences) were then removed for a final total of 488 paragraphs describing 183 processes.", "An example paragraph is the one shown earlier in Figure 1.
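A minimal sketch of the (m+1) × n grid just described, as a plain Python data structure; the class and the sentinel values are hypothetical, chosen only to mirror the 'not exists' and 'unknown location' annotations.

```python
NOT_EXISTS = "-"   # participant does not exist at this point
UNKNOWN = "?"      # participant exists, location unstated

class ProcessGrid:
    """(m+1) x n location grid for a paragraph with m sentences (steps)
    and n participants: row 0 holds locations before the process, and
    row i holds each participant's location after step i (cell l_ij)."""

    def __init__(self, steps, participants):
        self.steps = steps                  # m sentences
        self.participants = participants    # n column headers
        self.cells = [[UNKNOWN] * len(participants)
                      for _ in range(len(steps) + 1)]

    def set_location(self, i, j, loc):
        self.cells[i][j] = loc

    def location_after(self, i, j):
        """Location of participant j after step i (i=0: before process)."""
        return self.cells[i][j]
```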
Grid and Existence: Once the process paragraphs were authored, we asked expert annotators to create the initial grids.", "First, for each paragraph, they listed the participant entities that underwent a state change during the process, thus creating the column headers.", "They then marked the steps where a participant was created or destroyed.", "All state cells before a Create or after a Destroy marker were labeled as 'not exists'.", "Each initial grid annotation was checked by a second expert annotator.", "Locations: Finally, MTurk workers were asked to fill in all the location cells.", "A location can be 'unknown' if it is not specified in the text, or a span of the original paragraph.", "Five grids for the same paragraph were completed by five different Turkers, with an average pairwise inter-annotator agreement of 0.67.", "The end result was 81,345 annotations over 488 paragraphs about 183 processes.", "[Footnote 4] We only trace locations in this work, but the representation can easily be extended to store other properties (e.g., temperature) of the participants.", "The dataset was then split 80/10/10 into train/dev/test by process prompt, ensuring that the test paragraphs were all about processes unseen in train and dev.", "Table 1 compares our dataset with bAbI and SCoNE.", "We present two new models for this task.", "The first, ProLocal, makes local state predictions and then algorithmically propagates them through the process.", "The second, ProGlobal, is an end-to-end neural model that makes all state predictions using global information.", "The design of ProLocal consists of two main components: local prediction and commonsense persistence.", "The former infers all direct effects of individual sentences, and the latter algorithmically propagates known values forwards and backwards to fill in any remaining unknown states.", "The intuition for local prediction is to treat it as a surface-level QA task.", "BiLSTMs with span prediction have been effective at answering surface-level questions, e.g., given 'Roots absorb water.' and 'Where is the water?', they can be reliably trained to answer 'Roots' (Seo et al., 2017a).
", "We incorporate a similar mechanism here.", "Given a sentence (step) and a participant e in it, the local prediction model makes two types of predictions: the change type of e (one of: no change, created, destroyed, moved) and the locations of e before and after this step.", "The change type prediction is a multi-class classification problem, while the location prediction is viewed as a SQuAD-style surface-level QA task with the goal of finding a location span in the input sentence.", "The design of this model is depicted in Figure 3(a); it adapts a bidirectional LSTM (Hochreiter and Schmidhuber, 1997) recurrent neural network architecture (biLSTM) with attention for input encoding.", "The prediction tasks are handled by two different output layers.", "We give the details of these layers below.", "Input Encoding: Each word w_i in the input sentence is encoded with a vector x_i = [v_w : v_e : v_v], the concatenation of a pre-trained GloVe (Pennington et al., 2014) word embedding v_w, an indicator variable v_e for whether w_i is the specified participant, and v_v for whether w_i is a verb (via a POS tagger).", "Context Encoding: A biLSTM is used to contextualize the word representations in a given sentence.", "h_i denotes the concatenated output of the bidirectional LSTM for the embedded word x_i, and encodes the word's meaning in context.", "Bilinear Attention: Given the participant and verb, the role of this layer is to identify which contextual word embeddings to attend to for generating the output.", "We first create h_ev by concatenating the contextual embeddings of the participant and verb.", "We then use a bilinear similarity function sim(h_i, h_ev) = h_i^T B h_ev + b, similar to (Chen et al., 2016), to compute attention weights A_i over each word w_i in the sentence.", "For state change type prediction, the words between the verb and participant may be important, while for the location tagging, contextual cues such as 'from' and 'to' could be more predictive.", "Hence, we train two sets of attention parameters, resulting in weights A1 and A2, which are combined with the contextual vectors h_i as described below to produce hidden states o1 and o2 that are fed to the output layers.", "Here, |step| refers to the number of words in the given step or sentence.", "o1 = Σ_i A1_i h_i and o2 = [(A2_1 h_1) : (A2_2 h_2) : ... : (A2_|step| h_|step|)].
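A hedged PyTorch sketch of the bilinear attention just described; it illustrates sim(h_i, h_ev) = h_i^T B h_ev + b with two independently parameterised heads, and is not the authors' implementation. Note that o2 is kept as a matrix of per-token scaled vectors rather than an explicit concatenation.

```python
import torch
import torch.nn as nn

class BilinearAttention(nn.Module):
    """Bilinear similarity over one sentence's contextual vectors,
    normalised with a softmax to give attention weights A_i."""

    def __init__(self, hidden_dim, ev_dim):
        super().__init__()
        self.B = nn.Parameter(torch.randn(hidden_dim, ev_dim) * 0.01)
        self.b = nn.Parameter(torch.zeros(1))

    def forward(self, H, h_ev):
        # H: (len, hidden_dim) contextual embeddings h_i of the step
        # h_ev: (ev_dim,) concatenated participant and verb embedding
        scores = H @ self.B @ h_ev + self.b   # (len,) bilinear scores
        return torch.softmax(scores, dim=0)   # attention weights A_i

def combine(H, A1, A2):
    # o1: attention-weighted sum; o2: per-token scaled vectors
    o1 = (A1.unsqueeze(1) * H).sum(dim=0)     # (hidden_dim,)
    o2 = A2.unsqueeze(1) * H                  # (len, hidden_dim)
    return o1, o2
```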
", "Output 1: State Change Type: We apply a feed-forward network on hidden state o1 to derive the probabilities of the four state change type categories: Create, Destroy, Move and None.", "Output 2: Location Spans: The second output is computed by predicting BIO tags (one of five tags: B-Before-LOC, I-Before-LOC, B-After-LOC, I-After-LOC, O) for each word in the sentence.", "We apply a feed-forward network on hidden state o2_i for word i to derive the probabilities of these location tags.", "Notice that if the change type is predicted as Create (or Destroy), then only the after (or before) location prediction is used.", "Training: We train the state change type prediction and location tag prediction models jointly, where the loss is the sum of their negative log likelihood losses.", "We use Adadelta (Zeiler, 2012) with learning rate 0.2 to minimize the total loss.", "The local prediction model will partially fill in the state change grid, showing the direct locational effects of actions (including 'not exists' and 'unknown location').", "To complete the grid, we then algorithmically apply a commonsense rule of persistence that propagates locations forwards and backwards in time where locations are otherwise missing.", "Figure 3(b) shows an example of applying this rule, where '?' indicates 'unknown location'.", "This corresponds to a rule of inertia: things are by default unchanged unless told otherwise.", "If there is a clash, then the location is predicted as unknown.", "4.2 ProGlobal: A Global Prediction Model Unlike ProLocal, the design principle behind ProGlobal is to model the persistence of state information within the neural model itself, rather than as a post-processing step.", "ProGlobal infers the states of all participants at each step, even if they are not mentioned in the current sentence, using: (1) the global context (i.e., previous sentences), and (2) the participant's state from the previous step.", "Given a sentence (step) with its context (paragraph) and a participant e, ProGlobal predicts the existence and location of e after this step in two stages.", "It first determines the state of e as one of the classes (not exist, unknown location, known location).", "A follow-up location span prediction is made if the state is classified as 'known location'.", "Figure 4 shows ProGlobal's neural architecture, where the left side is the part for state prediction at each step, and the right side depicts the propagation of hidden states from one step to the next.", "We discuss the details of this model below.", "Input Encoding: Given a participant e, for each step i, we take the entire paragraph as input.", "Each word w in the paragraph is represented with three types of embeddings: the general word embedding v_w, a position embedding v_d which indicates the relative distance to the participant in the paragraph, and a sentence indicator embedding v_s which shows the relative position (previous, current, following) of each sentence with respect to the current step i.", "Both the position embedding and the sentence indicator embedding are of size 50 and are randomly initialized and automatically trained by the model.", "We concatenate these three types of embeddings to represent each word: x = [v_w : v_d : v_s].", "Context Encoding: Similar to ProLocal, we use a biLSTM to encode the whole paragraph, and use h to denote the biLSTM output for each word.", "State Prediction: As discussed earlier, we first predict the location state of a participant e.
", "[Figure 4 (example paragraph: 'Roots absorb water from the soil. The water flows to the leaf. Light from the sun and CO2 enter the leaf. The light, water and CO2 combine into a mixture. Mixture forms sugar.'): ProGlobal predicts a participant's state (type and location) after a given step using bilinear attention over the entire paragraph, combined with its predictions from the previous step.]", "[Figure 5: Details of the LSTM + Softmax unit, used for predicting the start/end words of a location.]", "Let H^P_i = [h^1_i, h^2_i, ..., h^{|P|}_i] denote the hidden vectors (contextual embeddings) for words at step i with respect to participant e, where h^t_i denotes the t-th word representation output by the biLSTM layer and P is the whole paragraph.", "We then apply max pooling to derive a paragraph representation: π^P_i = max(H^P_i).", "To incorporate the category prediction of the previous step, step i-1, we concatenate its probability vector c^P_{i-1} ∈ R^3 with π^P_i, and apply a feed-forward network to derive the probabilities of the three categories: c^P_i = softmax(W_c [π^P_i : c^P_{i-1}] + b_c).", "Location Span Prediction (Figure 5): To predict the location span, we predict the start word of the span (by generating a probability distribution over words) and the end word.", "To predict the location start, we take two types of information as input: the start probability distribution s^P_{i-1} ∈ R^{|P|} predicted at step i-1, and the contextual embeddings H^P_i of words at the current step i: Ĥ_i = Σ_{t=1}^{|P|} s^t_{i-1} H^t_i and h̃^t_i = LSTM([H^t_i : Ĥ_i]), where Ĥ_i is a sum of word vectors in the paragraph, weighted by the start probabilities from step i-1, and h̃^t_i is the encoded vector representation for the t-th word in the paragraph.", "We then concatenate H̃^P_i (the stacked h̃^t_i) with H^P_i and apply a feed-forward network to obtain the start probability distribution for step i: s^P_i = softmax(W_s [H̃^P_i : H^P_i] + b_s).", "Similarly, to predict the end word of the span, we use the start probability distribution s^P_i of step i and H^P_i, and apply another LSTM and feed-forward networks to obtain the probabilities.", "For state 0 (the initial location before any steps), we directly feed the sequence of vectors from the encoding layer to a linear transformation to predict the location start, and apply the same architecture to predict the location end.", "Training: For each participant e of paragraph P, the objective is to optimize the sum of the negative log likelihood of the category classification and location span prediction.", "We use Adadelta to optimize the models with learning rate 0.5.
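To make the recurrent dependency on the previous category distribution concrete, here is a small PyTorch sketch of the max-pooling and category-propagation step (the c^P_i equation above); the module and variable names are assumptions, and the span-prediction branch is omitted.

```python
import torch
import torch.nn as nn

class CategoryPropagation(nn.Module):
    """One step of ProGlobal-style state prediction: max-pool the
    paragraph encoding, concatenate the previous step's category
    distribution, and emit p(not exist / unknown / known location)."""

    def __init__(self, hidden_dim, n_classes=3):
        super().__init__()
        self.ff = nn.Linear(hidden_dim + n_classes, n_classes)

    def forward(self, H, c_prev):
        # H: (|P|, hidden_dim) biLSTM outputs for the whole paragraph
        # c_prev: (n_classes,) category probabilities from step i-1
        pi = H.max(dim=0).values                 # max pooling: pi^P_i
        return torch.softmax(self.ff(torch.cat([pi, c_prev])), dim=0)

# Unrolling over steps keeps the recurrent dependency on c_{i-1}:
#   c = torch.full((3,), 1 / 3)
#   for H_i in paragraph_encodings:   # one paragraph encoding per step
#       c = model(H_i, c)
```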
", "5 Experiments and Analysis 5.1 Tasks & Evaluation Metrics As described in Section 3, the quality of a model is evaluated based on its ability to answer three categories of questions with respect to a given participant e: (Cat-1) Is e created (destroyed, moved) in the process? (Cat-2) When (step #) is e created (destroyed, moved)? (Cat-3) Where is e created (destroyed, moved from/to)?", "[Footnote 8] We compute the loss for location span prediction only when the category is annotated as 'known location'.", "Table 2: Design decisions in the four neural models.
Model | Sentence Encoding | Intermediate State Representation | Propagation through time | Answer Decoding
EntNet | positional encoding | dynamic memory blocks | gated propagation | softmax classification
QRN | positional encoding | single latent vector h | gated propagation of h | softmax classification
ProLocal | LSTM | explicit symbolic | algorithmic | span prediction
ProGlobal | LSTM | distribution over spans | LSTM | span prediction", "These questions are answered by simple scans over the state predictions for the whole process.", "(Cat-1) is asked over all participants, while (Cat-2) and (Cat-3) are asked over just those participants that were created (destroyed, moved).", "The accuracy of the answers is used as the evaluation metric, except for questions that may have multiple answers (e.g., 'When is e moved?').", "In this case, we compare the predicted and gold answers and use the F1 score as the 'accuracy' of the answer set prediction.", "For questions in category (3), an answer is considered correct if the predicted location is identical to, or a sub-phrase of, the labeled location (typically just one or two words), after stop-word removal and lemmatizing.", "5.2 Baseline Methods We compare our models with two top methods inspired by the bAbI dataset, Recurrent Entity Networks (EntNet) and Query Reduction Networks (QRN), described earlier in Section 2.", "Both models make different use of gated hidden states to propagate state information through time, and generate answers using softmax.", "The detailed comparisons of their designs are shown in Table 2.", "We use the released implementations (with default hyper-parameter values), and retrained them on our dataset, adapted to the standard bAbI QA format.", "Specifically, we create three separate variations of data by adding three bAbI-style questions after each step in a paragraph, respectively: Q1. Does e exist? (yes/no)", "Q2 will only be present in the training data if Q1 is yes, and similarly Q3 is only present if Q2 is yes.", "These three variations of data are used to train three different models from the same method.", "At test time, we cascade the questions (e.g., ask Q2 only if the answer of the Q1 model is yes), and combine the model outputs accordingly to answer the questions in our target tasks (Section 5.1).", "We also compare against a rule-based baseline and a feature-based baseline.", "The rule-based method, called ProComp, uses a set of rules that map (an SRL analysis of) each sentence to its effects on the world state, e.g., IF X moves to Y THEN after: at(X,Y).", "The rules were extracted from VerbNet (Schuler, 2005) and expanded.", "A full description of ProComp is available in (Clark et al., 2018).", "The feature-based method uses a Logistic Regression (LR) classifier to predict the state change type (Move, Create, etc.) for each participant + sentence pair, then a NER-style CRF model to predict the from/to locations as spans of the sentence.
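A toy sketch of such a feature-based state-change classifier using scikit-learn; the miniature training data and the pseudo-token encoding of the participant-position feature are purely illustrative, not the authors' feature set.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def featurize(sentence, participant_before_verb):
    # Append the discrete position feature as a pseudo-token so a plain
    # bag-of-words vectorizer can pick it up.
    flag = "PART_BEFORE_VERB" if participant_before_verb else "PART_AFTER_VERB"
    return sentence + " " + flag

X = [featurize("roots absorb water", False),
     featurize("the mixture forms sugar", False)]
y = ["MOVE", "CREATE"]

clf = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(X, y)
print(clf.predict([featurize("carbon dioxide enters the leaf", True)]))
```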
", "The LR model uses bag-of-words features from the sentence, along with a discrete feature indicating whether the participant occurs before or after the verb in the given sentence.", "The CRF model uses standard NER features including capitalization, a verb indicator, the previous 3 words, and the POS tag of the current and previous word.", "Similar to our ProLocal model, we apply commonsense persistence rules (Section 4.1.2) to complete the partial state-change grids predicted by both these baselines.", "Parameter settings: Both our models use GloVe embeddings of size 100 pretrained on the Wikipedia 2014 and Gigaword 5 corpora.", "[Footnote 11] https://nlp.stanford.edu/projects/glove", "The number of hidden dimensions for the biLSTM is set to 50 (ProLocal) and 100 (ProGlobal).", "Dropout rates (Srivastava et al., 2014) for the contextual encoding layer are 0.3 (ProLocal) and 0.2 (ProGlobal).", "ProGlobal uses word position and sentence indicator embeddings each of size 50, and span prediction LSTMs with a hidden dimension of 10.", "The learning rates for the Adadelta optimizer were 0.2 (ProLocal) and 0.5 (ProGlobal).", "Table 3: Model accuracy on the end task (test partition of ProPara).
Question type (# questions) | Majority | QRN | EntNet | Rule-based | Feature-based | ProLocal | ProGlobal | Human Upper Bound
Cat-1 (750) | 51.01 | 52.37 | 51.62 | 57.14 | 58.64 | 62.65 | 62.95 | 91.67
Cat-2 (601) | - | 15.51 | 18.83 | 20.33 | 20.82 | 30.50 | 36.39 | 87.66
Cat-3 (823) | - | 10.92 | 7.77 | 2.4 | 9.66 | 10.35 | 35.9 | 62.96
macro-avg | - | 26.26 | 26.07 | 26.62 | 29.7 | 34.50 | 45.08 | 80.76
micro-avg | - | 26.49 | 25.96 | 26.24 | 29.64 | 33.96 | 45.37 | 79.69", "Our models are trained on the train partition and the parameters tuned on the dev partition.", "Table 3 compares the performance of various models on the ProPara test partition.", "For the first category of questions, we also include a simple majority baseline.", "We aggregate results over the questions in each category, and report both macro- and micro-averaged accuracy scores.", "From Table 3, we can see that EntNet and QRN perform comparably when applied to ProPara.", "However, despite being the top-performing systems for the bAbI task, when predicting whether a participant entity is created, destroyed or moved, their predictions are only slightly better than the majority baseline.", "Compared to our local model ProLocal, EntNet and QRN are worse in predicting the exact step where a participant is created, destroyed or moved, but better in predicting the location.", "The weak performance of EntNet and QRN on ProPara is understandable: both systems were designed with a different environment in mind, namely a large number of examples from a few conceptual domains (e.g., moving objects around a house), covering a limited vocabulary.", "As a result, they might not scale well when applied to real procedural text, which justifies the importance of having a real challenge dataset like ProPara.", "Although the rule-based baseline (Clark et al., 2018) uses rules mapping SRL patterns to state changes, its performance appears limited by the incompleteness and approximations in the rulebase, and by errors made by the SRL parser.", "The feature-based baseline performs slightly better, but its performance is still poor compared to our neural models.", "This suggests that it has not generalized as well to unseen vocabulary (25% of the test vocabulary is not present in the train/dev partitions of ProPara).
", "When comparing our two models, it is interesting that ProGlobal performs substantially better than ProLocal.", "One possible cause of this is cascading errors in ProLocal: if a local state prediction is wrong, it may still be propagated to later time steps without any potential for correction, thus amplifying the error.", "In contrast, ProGlobal makes a state decision for every participant entity at every time-step, taking the global context into account, and thus appears more robust to cascading errors.", "Furthermore, ProGlobal's gains are mainly in Cat-2 and Cat-3 predictions, which rely more heavily on out-of-sentence cues.", "For example, 30% of the time the end-location is not explicitly stated in the state-change sentence, meaning ProLocal cannot predict the end-location in these cases (as no sentence span contains the end location).", "ProGlobal, however, uses the entire paragraph and may identify a likely end-location from earlier sentences.", "Finally, we computed a human upper bound for this task (last column of Table 3).", "During dataset creation, each grid was fully annotated by 5 different Turkers (Section 3).", "Here, for each grid, we identify the Turker whose annotations result in the best score for the end task with respect to the other Turkers' annotations.", "The observed upper bound of 80% suggests that the task is both feasible and well-defined, and that there is still substantial room for creating better models.", "To further understand the strengths and weaknesses of our systems, we ran the simplified paragraph in Figure 2 verbatim through the models learned by ProLocal and ProGlobal.", "The results are shown in Figure 6, with errors highlighted in red.", "ProLocal correctly interprets 'Light from the sun and CO2 enters the leaf.' to imply that the light was at the sun before the event.", "In addition, as there were no earlier mentions of light, it propagates this location backwards in time, (correctly) concluding that the light was initially at the sun.", "However, it fails to predict that 'combine' (after state 3) destroys the inputs, resulting in continued prediction of the existence and locations for those inputs.", "[Figure 6: ProLocal (top) and ProGlobal (bottom) predictions on a simple paragraph (errors in red).]", "One contributing factor is that ProLocal's predictions ignore surrounding sentences (context), potentially making it harder to distinguish destructive vs. non-destructive uses of 'combine'.", "ProGlobal also makes some errors on this text, most notably not realizing the light and CO2 exist from the start (rather, they magically appear at the leaf).", "Adding global consistency constraints may help avoid such errors.", "It is able to predict the sugar is formed at the leaf, illustrating its ability to persist and transfer location information from earlier sentences to draw correct conclusions.", "We additionally randomly selected 100 prediction errors from the dev set for ProGlobal, and identified four phenomena contributing to errors: (1) Implicit Creation/Destruction: In 37% of the errors, the information about the creation or destruction of a participant is implicit or missing, which resulted in existence classification errors.", "For example, in the sentences 'A fuel goes into the generator. The generator converts mechanical energy into electrical energy.', fuel is implicitly consumed as the generator converts mechanical energy into electrical energy.
", "(2) Location Errors: In 27% of the examples, the location spans were not perfectly identified, as follows: wrong location span prediction (17%), longer span prediction (6%), and location prediction of different granularity (4%).", "(3) In other cases, the moving participant and its target location are separated by a wide context within a sentence, making it harder for the model to locate the location span.", "(4) Propagation: ProGlobal tends to propagate the previous location state to the next step, which may override locally detected location changes or propagate the error from a previous step to later steps.", "9% of the errors are caused by poor propagation.", "This analysis suggests several future directions: Enforcing global consistency constraints: e.g., it does not make sense to create an already-existing entity, or destroy a non-existent entity.", "Global constraints were found useful in the earlier ProRead system (Berant et al., 2014).", "Data augmentation through weak supervision: additional training data can be generated by applying existing models of state change, e.g., from VerbNet (Kipper et al., 2008), to new sentences to create additional sentence + state pairs.", "Propagating state information backwards in time: if e_j is at l_ij after step i, it is likely to also be there at step i-1, given no information to the contrary.", "ProGlobal, EntNet, and QRNs are inherently unable to learn such a bias, given their forward-propagating architectures.", "New datasets and models are required to take reading comprehension to a deeper level of machine understanding.", "As a step in this direction, we have created the ProPara dataset, the first to contain natural text about a changing world along with an annotation of entity states during those changes.", "We have also shown that this dataset presents new challenges for previous models, and presented new models that exploit ideas from surface-level QA, in particular LSTM input encoding and span prediction, producing performance gains.", "The dataset and models are available at http://data.allenai.org/propara .", "We are grateful to Paul Allen, whose long-term vision continues to inspire our scientific endeavors.", "We also thank Oren Etzioni, Carissa Schoenick, Mark Neumann, and Isaac Cowhey for their critical contributions to this project." ]
[ "objective", "abstain", "abstain", "objective", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "result", "objective", "abstain", "objective", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "other", "other", "other" ]
[ "To make machines better understand sentiments, research needs to move from polarity identification to understanding the reasons that underlie the expression of sentiment.", "Categorizing the goals or needs of humans is one way to explain the expression of sentiment in text.", "Humans are good at understanding situations described in natural language and can easily connect them to the character's psychological needs using commonsense knowledge.", "We present a novel method to extract, rank, fil-ter and select multi-hop relation paths from a commonsense knowledge resource to interpret the expression of sentiment in terms of their underlying human needs.", "We efficiently integrate the acquired knowledge paths in a neural model that interfaces context representations with knowledge using a gated attention mechanism.", "We assess the model's performance on a recently published dataset for categorizing human needs.", "Selectively integrating knowledge paths boosts performance and establishes a new state-of-the-art.", "Our model offers interpretability through the learned attention map over commonsense knowledge paths.", "Human evaluation highlights the relevance of the encoded knowledge.", "Sentiment analysis and emotion detection are essential tasks in human-computer interaction.", "Due to its broad practical applications, there has been rapid growth in the field of sentiment analysis (Zhang et al., 2018).", "Although state-of-the-art sentiment analysis can detect the polarity of text units (Hamilton et al., 2016; Socher et al., 2013), there has been limited work towards explaining the reasons for the expression of sentiment and emotions in texts (Li and Hovy, 2017).", "In our work, we aim to go beyond the detection of sentiment, toward explaining sentiments.", "Such explanations can range from detecting overtly expressed explanations or reasons for sentiments towards specific aspects of, e.g., products or films, as in user reviews to the explanation of the underlying reasons for emotional reactions of characters in a narrative story.", "The latter requires understanding of stories and modeling the mental state of characters.", "Recently, Ding and Riloff (2018) proposed to categorize affective events with categories based on human needs, to provide explanations of people's attitudes towards such events.", "Given an expression such as I broke my leg , they categorize the reason for the expressed negative sentiment as being related to a need concerning health'.", "In this paper we focus on the Modelling Naive Psychology of Characters in Simple Commonsense Stories dataset of Rashkin et al. (2018), which contains annotations of a fully-specified chain of motivations and emotional reactions of characters for a collection of narrative stories.", "The stories are annotated with labels from multiple theories of psychology (Reiss, 2004; Maslow, 1943; Plutchik, 1980) to provide explanations for the emotional reactions of characters.", "Similar to Ding and Riloff (2018), we hypothesize that emotional reactions (joy, trust, fear, etc.) 
", "However, predicting categories of human needs that underlie the expression of sentiment is a difficult task for a computational model.", "It not only requires detecting surface patterns in the text, but also commonsense knowledge about how a given situation may or may not satisfy specific human needs of a character.", "Such knowledge can be diverse and complex, and will typically be implicit in the text.", "In contrast, human readers can make use of relevant information from the story and associate it with their knowledge about human interaction, desires and human needs, and thus will be able to infer underlying reasons for emotions indicated in the text.", "In this work, we propose a computational model that aims to categorize human needs of story characters by integrating commonsense knowledge from ConceptNet (Speer and Havasi, 2012).", "Our model aims to imitate human understanding of a story, by (i) learning to select relevant words from the text, (ii) extracting pieces of knowledge from the commonsense inventory, and (iii) associating them with human need categories put forth by psychological theories.", "Our assumption is that by integrating commonsense knowledge in our model we will be able to overcome the lack of textual evidence in establishing relations between expressed emotions in specific situations and the inferable human needs of story characters.", "In order to provide such missing associations, we leverage the graph structure of the knowledge source.", "Since these connections can be diverse and complex, we develop a novel approach to extract and rank multi-hop relation paths from ConceptNet using graph-based methods.", "Our contributions are: (i) We propose a novel approach to extract and rank multi-hop relation paths from a commonsense knowledge resource using graph-based features and algorithms.", "(ii) We present an end-to-end model enhanced with attention and a gated knowledge integration component to predict human needs in a given context.", "To the best of our knowledge, our model is the first to leverage commonsense knowledge for this task.", "(iii) We conduct experiments that demonstrate the effectiveness of the extracted knowledge paths and show significant performance improvements over the prior state-of-the-art.", "(iv) Our model provides interpretability in two ways: by selecting relevant words from the input text and by choosing relevant knowledge paths from the imported knowledge.", "In both cases, the degree of relevance is indicated via an attention map.", "(v) A small-scale human evaluation demonstrates that the extracted multi-hop knowledge paths are indeed relevant.", "Our code is made publicly available.", "2 Related Work Sentiment Analysis and Beyond.", "Starting with Pang et al. (2002), sentiment analysis and emotion detection has grown into a wide research field.
", "Researchers have investigated polarity classification, sentiment and emotion detection and classification (Tang et al., 2015; Yin et al., 2017; Li et al., 2017) at various levels (tokens, phrases, sentences or documents), as well as structured prediction tasks such as the identification of holders and targets (Deng and Wiebe, 2015) or sentiment inference (Choi et al., 2016).", "[Footnote 1] https://github.com/debjitpaul/Multi-Hop-Knowledge-Paths-Human-Needs", "Our work goes beyond the analysis of overtly expressed sentiment and aims at identifying goals, desires or needs underlying the expression of sentiment.", "Li and Hovy (2017) argued that the goals of an opinion holder can be categorized by human needs.", "There has been work related to goal, desire, and wish detection (Goldberg et al., 2009; Rahimtoroghi et al., 2017).", "Most recently, Ding and Riloff (2018) propose to categorize affective events into physiological needs to explain people's motivations and desires.", "Rashkin et al. (2018) published a dataset for tracking emotional reactions and motivations of characters in stories.", "In this work, we use this dataset to develop a knowledge-enhanced system that 'explains' sentiment in terms of human needs.", "Integrating Structured Knowledge into Neural NLU Systems.", "Neural models aimed at solving NLU tasks have been shown to profit from the integration of knowledge, using different methods: Xu et al. (2017) show that injecting loosely structured knowledge with a recall-gate mechanism is beneficial for conversation modeling; Mihaylov and Frank (2018) and Weissenborn et al. (2017) propose integration of commonsense knowledge for reading comprehension: the former explicitly encode selected triples from ConceptNet using attention mechanisms, the latter enriches question and context embeddings by encoding triples as mapped statements extracted from ConceptNet.", "Concurrently to our work, Bauer et al. (2018) proposed a heuristic method to extract multi-hop paths from ConceptNet for a reading comprehension task.", "They construct paths starting from concepts appearing in the question to concepts appearing in the context, aiming to emulate multi-hop reasoning.", "Tamilselvam et al. (2017) use ConceptNet relations for aspect-based sentiment analysis.", "Similar to our approach, Bordes et al. (2014) make use of knowledge bases to obtain longer paths connecting entities appearing in questions to answers in a QA task.", "They also provide a richer representation of answers by building subgraphs of entities appearing in answers.", "[Figure 1: Maslow and Reiss: Theories of Psychology, as presented in Rashkin et al. (2018).]", "In contrast, our work aims to provide information about missing links between sentiment words in a text and underlying human needs by extracting relevant multi-hop paths from structured knowledge bases.
1).", "1 We start with a Bi-LSTM encoder with self-attention as a baseline model, to efficiently categorize human needs.", "We then show how to select and rank multi-hop commonsense knowledge paths from ConceptNet that connect textual expressions with human need categories.", "Finally, we extend our model with a gated knowledge integration mechanism to incorporate relevant multi-hop commonsense knowledge paths for predicting human needs.", "An overview of the model is given in Figure 2. We now describe each component in detail.", "Our Bi-LSTM encoder takes as input a sentence S consisting of a sequence of tokens, denoted as w s 1 , w s 2 ,", "...., w sn , or w s 1: n and its preceding context Cxt , denoted as w cxt 1 , w cxt 2 ,", "...., w cxt m , or w cxt 1: m .", "As further input we read the name of a story character, which is concatenated to the input sentence.", "For 1 Details about the labels are given in the Supplement.", "this input the model is tasked to predict appropriate human need category labels z 2 Z , according to a predefined inventory.", "Embedding Layer: We embed each word from the sentence and the context with a contextualized word representation using character-based word representations (ELMo) (Peters et al., 2018).", "The embedding of each word w i in the sentence and context is represented as e si and e cxti , respectively.", "Encoding Layer: We use a single-layer BiLSTM (Hochreiter and Schmidhuber, 1997) to obtain sentence and context representations h s and h cxt , which we form by concatenating the final states of the forward and backward encoders.", "A Self-Attention Layer allows the model to dynamically control how much each token contributes to the sentence and context representation.", "We use a modified version of self-attention proposed by Rei and Sgaard (2018), where both input representations are passed through a feedforward layer to generate scalar values for each word in context v cxti and sentence v si (cf.", "(2-5)).", "a si = ReLU ( W si h si + b si ) , (2) a cxti = ReLU ( W cxti h cxti + b cxti ) (3) v si = W svi a si + b svi (4) v cxti = W cxtvi a cxti + b cxtvi (5) where, W s , b s , W cxt , b cxt , W sv , W cxtv are trainable parameters.", "We calculate the soft attention weights for both sentence and context: e v i = 1 1 + exp ( \u0000 v i ); v i = e v i P Nk =1 e v k (6) where, e v i is the output of the sigmoid function, therefore e v i is in the range [0,1] and v i is the normalized version of e v i .", "Values v i are used as attention weights to obtain the final sentence and context representations x s and x cxt , respectively: x s = NX i =1 v is h si (7) x cxt = MX i =1 v icxt h cxti (8) with N and M the number of tokens in S and Cxt .", "The output of the self-attention layer is generated by concatenating x s and x cxt .", "We pass this representation through a FF layer of dimension Z : y = ReLU ( W y [ x s ; x cxt ] + b y ) (9) where W y , b y are trainable parameters and ';' denotes concatenation of two vectors.", "Finally, we feed the output layer y to a logistic regression layer to predict a binary label for each class z 2 Z , where Z is the set of category labels for a particular psychological theory (Maslow/Reiss, Fig. 
1).", "3.2 Extracting Commonsense Knowledge To improve the prediction capacity of our model, we aim to leverage external commonsense knowledge that connects expressions from the sentence and context to human need categories.", "For this purpose we extract multi-hop commonsense knowledge paths that connect words in the textual inputs with the offered human need categories, using as resource ConceptNet (Speer and Havasi, 2012), a large commonsense knowledge inventory.", "Identifying contextually relevant information from such a large knowledge base is a non-trivial task.", "We propose an effective two-step method to extract multi-hop knowledge paths that associate concepts from the text with human need categories:", "(i) collect all potentially relevant knowledge relations among concepts and human needs in a subgraph for each input sentence;", "(ii) rank, fil-ter and select high-quality paths using graph-based local measures and graph centrality algorithms.", "ConceptNet is a graph G = ( V, E ) whose nodes are concepts and edges are relations between concepts (e.g. CAUSES , MOTIVATEDBY ).", "For each sentence S we induce a subgraph G 0 = ( V 0 , E 0 ) where V 0 comprises all concepts c 2 V that appear in S and the directly preceding sentence in context Cxt .", "V 0 also includes all concepts c 2 V that correspond to one of the human need categories in our label set Z .", "Fig. 3 shows an example.", "The sub-graph is constructed as follows: Shortest Paths: In a first step, we find all shortest paths p 0 from ConceptNet that connect any concept c i 2 V 0 to any other concept c j 2 V 0 and to each human needs concept z 2 Z .", "We further include in V 0 all the concepts c 2 V which are contained in the above shortest paths p 0 .", "Neighbours: To better represent the meaning of the concepts in V 0 , we further include in V 0 all concepts c 2 V that are directly connected to any c 2 V 0 that is not already included in V 0 .", "Sub-graph: We finally construct a connected sub-graph G 0 = ( V 0 , E 0 ) from V 0 by defining E 0 as the set of all ConceptNet edges e 2 E that directly connect any pair of concepts ( c i , c j ) 2 V 0 .", "Overall, we obtain a sub-graph that contains relations and concepts which are supposed to be useful to explain why and how strongly concepts c i that appear in the sentence and context are associated with any of the human needs z 2 Z .", "We could use all possible paths p contained in the sub-graph G 0 , connecting concepts c i from the text and human needs concepts z contained in G 0 , as additional evidence to predict suitable human need categories.", "But not all of them may be relevant.", "In order to select the most relevant paths, we propose a two-step method:", "(i) we score each vertex with a score ( Vscore ) that reflects its importance in the sub-graph and on the basis of the vertices' Vscores we determine a path score Pscore , as shown in Figure 3;", "(ii) we select the top-k paths with respect to the computed path score ( Pscore ) .", "(i) Vertex Scores and Path Scores: We hypothesize that the most useful commonsense relation paths should include vertices that are important with respect to the entire extracted subgraph.", "We measure the importance of a vertex using different local graph measures: the closeness centrality measure, page rank or personalized page rank .", "Closeness Centrality (CC) (Bavelas, 1950) reflects how close a vertex is to all other vertices in the given graph.", "It measures the average length of the shortest paths between a given 
vertex v i and all other vertices in the given graph G 0 .", "In a connected graph, the closeness centrality CC ( v i ) of a vertex v i 2 G 0 is computed as V score CC ( v i ) = | V 0 | P j d ( v j , v i ) (10) where | V 0 | represents the number of vertices in the graph G 0 and d ( v j , v i ) represents the length of the shortest path between v i and v j .", "For each path we compute the normalized sum of Vscore X of all vertices v j contained in the path, for any measure X 2 { CC, P R, P P R } .", "that are close to the center of the sub-graph G 0 .", "PageRank (PR) (Brin and Page, 1998) is a graph centrality algorithm that measures the relative importance of a vertex in a graph.", "The PageRank score of a vertex v i 2 G 0 is computed as: V score PR ( v i ) = X j u ji v j L j + 1 \u0000 n (12) where L j = P i u ji is the number of neighbors of vertex j , is a damping factor representing the probability of jumping from a given vertex v i to another random vertex in the graph and n represents the number of vertices in G 0 .", "We calculate Pscore PR using Eq.", "11 and order the paths according to their Pscore PR , assuming that relevant paths will contain vertices with high relevance, as reflected by a high number of incoming edges.", "Personalized PageRank (PPR) (Haveliwala, 2002) is used to determine the importance of a vertex with respect to a certain topic (set of vertices).", "Instead of assigning equal probability for a random jump 1 \u0000 n , PPR assigns stronger probability to certain vertices to prefer topical vertices.", "The PPR score of a vertex v 2 G 0 is computed as: V score PPR ( v i ) = X j u ji v j L j + (1 \u0000 ) T (13) where T = 1 | T j | if nodes v i belongs to topic T j and otherwise T = 0 .", "In our setting, T j will contain concepts from the text and human needs, to assign them higher probabilities.", "We calculate Pscore PPR using Eq.", "11 and order the paths according to their scores, assuming that relevant paths should contain vertices holding importance with respect to vertices representing concepts from the text and human needs.", "(ii) Path Selection: We rank knowledge paths based on their Pscore using the above relevance measures, and construct ranked lists of paths of two types:", "(i) paths connecting a human needs concept z 2 Z to a concept mentioned in the text ( p c \u0000 z ) 2 and", "(ii) paths connecting concepts in the text ( p c \u0000 c ) 3 .", "Ranked lists of paths are constructed individually for concepts that constitute the start or endpoint of a path: a human needs concept for p c \u0000 z or any concept from the text for p c \u0000 c .", "Figure 3 illustrates an example where the character Stewart felt joy after winning a gold medal.", "The annotated human need label is status .", "We show the paths selected by our algorithm that connect concepts from the text and the human need status .", "We select the topk paths of type p c \u0000 z for each human need to capture relevant knowledge about human needs in relation to concepts in the text.", "Similarly, we select the topk paths of type p c \u0000 c for each c i to capture relevant knowledge about the text (not shown in Fig. 
3).", "We have seen how to obtain a ranked list of commonsense knowledge paths from a subgraph extracted from ConceptNet that connect concepts from the textual input and possible human needs categories that are the system's classification tar-2", "tar-2 p c \u0000 z denotes path connecting a human needs concept z 2 Z and a concept c mentioned in the text.", "3 p c \u0000 c denotes path connecting a concept c and another concept c mentioned in the text.", "gets.", "Our intuition is that the extracted commonsense knowledge paths will provide useful evidence for our model to link the content expressed in the text to appropriate human need categories .", "Paths that are selected by the model as a relevant connection between the input text and the labeled human needs concept can thus provide explanations for emotions or goals expressed in the text in view of a human needs category .", "We thus integrate these knowledge paths into our model,", "(i) to help the model making correct predictions and", "(ii) to provide explanations of emotions expressed in the text in view of different human needs categories.", "For each input, we represent the extracted ranked list of n commonsense knowledge paths p as a list cr k, 1 , cr k, 2 ,", "...., cr k,n , where each cr k,i 1: l represents a path consisting of concepts and relations, with l the length of the path.", "We embed all concepts and relations in cr k,i 1: l with pretrained GloVe (Penning-ton et al., 2014) embeddings.", "where h k represents the output of the BiLSTM for", "the knowledge path and i its the ranking index.", "Attention Layer: We use an attention layer, where each encoded commonsense knowledge path interacts with the sentence representation x s to receive attention weights ( h k,i ): e h k,i = \u0000 ( x s h k,i ) , h k,i = e h k,i P Ni =1 e h k,i (15) In Eq.", "15, we use sigmoid to calculate the attention weights, similar to Eq.", "6. However, this time we compute attention to highlight which knowledge paths are important for a given input representation ( x s being the final state hidden representation over the input sentence, Eq. 7).", "To obtain the sentence-aware commonsense knowledge representation x k , we pass the output of the attention layer through a feedforward layer.", "W k , b k are trainable parameters.", "In order to incorporate the selected and weighted knowledge into the model, we concatenate the sen-Classification", "We employ a gating mechanism to allow the model to selectively incorporate relevant information from commonsense knowledge x k and from the joint input representation y i (see Eq. 
9) separately .", "We finally pass it to a logistic regression classifier to predict a binary label for each class z in the set Z of category labels z i = \u0000 ( W e y z ( o i \u0000 y i + o i \u0000 x ki ) + b e y z ) (18) where \u0000 represents element-wise multiplication, b e y z , W e y z are trainable parameters.", "Dataset: We evaluate our model on the Modeling Naive Psychology of Characters in Simple Commonsense Stories (MNPCSCS) dataset (Rashkin et al., 2018).", "It contains narrative stories where each sentence is annotated with a character and a set of human need categories from two inventories: Maslow's (with five coarse-grained) and Reiss's (with 19 fine-grained) categories (Reiss's labels are considered as subcategories of Maslow's).", "The data contains the original worker annotations.", "Following prior work we select the annotations that display the major-ity label i.e., categories voted on by \u0000 2 workers.", "Since no training data is available, similar to prior work we use a portion of the devset as training data, by performing a random split, using 80% of the data to train the classifier, and 20% to tune parameters.", "Data statistics is reported in Table 1. Rashkin et al. (2018) report that there is low annotator agreement i.a. between the belonging and the approval class.", "We also find high co-occurrence of the belonging, approval and social contact classes, where belonging and social contact both pertain to the Maslow class Love/belonging while approval belongs to the Maslow class Esteem .", "This indicates that belonging interacts with Love/belonging and Esteem in relation to social contact.", "We further observed during our study that in the Reiss dataset the number of instances annotated with the belonging class is very low (no. of instances in training is 24, and in dev 5).", "The performance for this class is thus severely hampered, with 4.7 F 1 score for BiLSTM+Self-Attention and 7.1 F 1 score for BiLSTM+Self-Attention+Knowledge.", "After establishing benchmark results with prior work (cf. Table 2, including belonging ), we perform all further experiments with a reduced Reiss dataset, by eliminating the belonging class from all instances.", "This impacts the overall number of instances only slightly: by one instance for training and two instances for test, as shown in Table 1. 
Training: During training we minimize the weighted binary cross entropy loss, L = ZX z =1 w z y z log e y z + (1 \u0000 w z )(1 \u0000 y z ) log (1 \u0000 e y z ) (19) w z = 1 1 \u0000 exp \u0000 p P ( y z ) (20) where Z is the number of class labels in the classification tasks and w z is the weight.", "P ( y z ) is the marginal class probability of a positive label for z in the training set.", "Embeddings: To compare our model with prior work we experiment with pretrained GloVe (100d) embeddings (Pennington et al., 2014).", "Otherwise we used GloVe (300d) and pretrained ELMo embeddings (Peters et al., 2018) to train our model.", "Hyperparameters for Knowledge Inclusion: We compute ranked lists of knowledge paths of two types: p c \u0000 z and p c \u0000 c .", "We use the top-3 p c \u0000 z paths for each z using our best ranking strategy (Closeness Centrality + Personalized PageRank) in our best system results (Tables 2, 3, 5), and also considered paths p c \u0000 c (top-3 per pair) when evaluating different path selection strategies (Table 4).", "Evaluation Metrics: We predict a binary label for each class using a binary classifier so the prediction of each label is conditionally independent of the other classes given a context representation of the sentence.", "In all prediction tasks we report the micro-averaged Precision (P), Recall (R) and F 1 scores by counting the number of positive instances across all of the categories.", "All reported results are averaged over five runs.", "More informa-Reiss Maslow Model WE P R F1 P R F1 BiLSTM G 100d 18.35 27.61 22.05 31.29 33.85 32.52 CNN G 100d 18.89 31.22 23.54 27.47 41.01 32.09 REN G 100d 16.79 22.20 19.12 26.24 42.14 32.34 NPN G 100d 13.13 26.44 17.55 24.27 44.16 31.33 BM G 100d 25.08 28.25 26.57 47.65 60.98 53.54 BM + K | G 100d 28.47 39.13 32.96 50.54 64.54 5 6.69 BM ELMo 29.50 44.28 35.41 0 .", "tion on the dataset, metrics and all other training details are given in the Supplement.", "Our experiment results are summarized in Table 2. We benchmark our baseline BiLSTM+Self-Attention model (BM, BM w/ knowledge) against the models proposed in Rashkin et al. 
(2018): a BiLSTM and a CNN model, and models based on the recurrent entity network (REN) (Henaff et al., 2016) and neural process networks (NPN) (Bosse-lut et al., 2017).", "The latter differ from the basic encoding models (BiLSTM, CNN) and our own models by explicitly modeling entities.", "We find that our baseline model BM outperforms all prior work, achieving new state-of-the-art results.", "For Maslow we show improvement of 21.02 pp.", "F 1 score.", "For BM+K this yields a boost of 6.39 and 3.15 pp.", "F 1 score for Reiss and Maslow, respectively.", "When using ELMo with BM we see an improvement in recall.", "However, adding knowledge on top improves the precision by 2.24 and 4.04 pp.", "for Reiss and Maslow.", "In all cases, injecting knowledge improves the model's precision and F 1 score.", "Table 2 (bottom) presents results for the reduced dataset, after eliminating Reiss' label belonging .", "Since belonging is a rare class, we observe further improvements.", "We see the same trend: adding knowledge improves the precision of the model.", "To obtain better insight into the contributions of individual components of our models, we perform an ablation study (Table 3).", "Here and in all later experiments we use richer (300d) GloVe embeddings and the dataset w/o belonging .", "We show results including and not including self-attention WE Atten K Gated P R F1 G 300d --23.31 34.69 27.89 G 300d X -26.09 35.59 30.11 G 300d X X -27.99 37.73 32.14 G 300d X X X 28.65 39.42 33.19 ELMo --32.35 42.66 36.80 ELMo X -31.45 44.29 37.70 ELMo X X -32.65 45.60 38.05 ELMo X X X 36.76 42.53 39.44 Table 3: Model ablations for Reiss Classification on MNPCSCS dataset w/o belonging .", "and knowledge components.", "We find that using self-attention over sentences and contexts is highly effective, which indicates that learning how much each token contributes helps the model to improve performance.", "We observe that integrating knowledge improves the overall F 1 score and yields a gain in precision with ELMo.", "Further, integrating knowledge using the gating mechanism we see a considerable increase of 3.58 and 1.74 pp.", "F 1 score improvement over our baseline model for GloVe and ELMo representations respectively.", "We further examine model performance for", "(i) different variants of selecting commonsense knowledge, including", "(ii) the effectiveness of the relevance ranking strategies discussed in 3.2.2.", "In Table 4, rows 3-4 use our best ranking method: CC+PPR; rows 5-8 show results when using the top-3 ranked p c \u0000 z paths for each human need z with different ranking measures.", "None shows results when no selection is applied to the set of extracted knowledge paths (i.e., using all possible paths from p c \u0000 z and p c \u0000 c ).", "Random randomly selects 3 paths for each human need from the set of paths used in None .", "This yields only a slight drop in performance.", "This suggests that not every path is relevant.", "We evaluate the performance when only considering single-hop paths (now top-3 ranked using CC+PPR) (Single-Hop) .", "We see an improvement over random paths and no selection, but not important enough.", "In contrast, using both single and multi-hop paths in conjunction with relevance ranking improves the performance considerably (rows 4-8).", "This demonstrates that multihop paths are informative.", "We also experimented with p c \u0000 c + p c \u0000 z .", "We find improvement in recall, however the overall performance decreases by 0.2 F 1 score compared to paths p c 
\u0000 z ranked using CC + PPR.", "Among different ranking measures precision for Personalized PageRank performs best in comparison with CC and PR in isolation, and recall for CC in isolation is highest.", "Combining CC and PPR yields the best results among the different ranking strategies (rows 5-8).", "We examined the model performance on each category (cf. Figure 4).", "The model performs well for basic needs like food , safety , health , romance , etc.", "We note that inclusion of knowledge improves the performance for most classes (only 5 classes do not profit from knowledge compared to only using ELMo), especially for labels which are rare like honor, idealism, power.", "We also found that the annotated labels can be subjective.", "For instance, Tom lost his job is annotated with order while our model predicts savings , which we consider to be correct.", "Similar to Rashkin et al. (2018) we observe that preceding context helps the model to better predict the characters' needs, e.g., Context: Erica's [..] class had a reading challenge [..].", "If she was able to read 50 books [..] she won a pizza", "party!; Sentence: She read a book every day for the entire semester is annotated with competition .", "Without context the predicted label is curiosity , however when including context, the model predicts competition, curiosity .", "We measure the models performance when applying it only to the first sentence of each story (i.e., without the context).", "As shown in Table 5, also in this setting the inclusion of knowledge improves the performance.", "We conduct human evaluation to test the effectiveness and relevance of the extracted commonsense knowledge paths.", "We randomly selected 50 sentence-context pairs with their gold labels from the devset and extracted knowledge paths that contain the gold label (using CC+PPR for ranking).", "We asked three expert evaluators to decide whether the paths are relevant to provide information about the missing links between the concepts in the sentence and the human need (gold label).", "The inter-annotator agreement had a Fleiss' = 0.76.", "The result for this evaluation shows that in 34% of the cases computed on the basis of majority agreement, our algorithm was able to select a relevant commonsense path.", "More details about the human evaluation are given in the Supplement.", "Finally we study the learned attention distributions of the interactions between sentence representation and knowledge paths, in order to interpret how knowledge is employed to make predictions.", "Visualization of the attention maps gives evidence of the ability of the model to capture relevant knowledge that connects human needs to the input text.", "The model provides interpretability in two ways: by selecting tokens from the input text using Eq.6 and by choosing knowledge paths from the imported knowledge using Eq.15 as shown in Figure 5. 
Figure 5 shows an example where including knowledge paths helped the model to predict the correct human need category.", "The attention map depicts which exact paths are selected to make the prediction.", "In this example, the model correctly picks up the token exhausting from the input sentence and the knowledge path exhausting is a fatigue causes desire rest .", "We present more examples of extracted knowledge and its attention visualization in the Supplement.", "We have introduced an effective new method to rank multi-hop relation paths from a commonsense knowledge resource using graph-based algorithms.", "Our end-to-end model incorporates multihop knowledge paths to predict human needs.", "Due to the attention mechanism we can analyze the knowledge paths that the model considers in prediction.", "This enhances transparency and interpretability of the model.", "We provide quantitative and qualitative evidence of the effectiveness of the extracted knowledge paths.", "We believe our relevance ranking strategy to select multi-hop knowledge paths can be beneficial for other NLU tasks.", "In future work, we will investigate structured and unstructured knowledge sources to find explanations for sentiments and emotions.", "This work has been supported by the German Research Foundation as part of the Research Training Group Adaptive Preparation of Information from Heterogeneous Sources (AIPHES) under grant No.", "GRK 1994/1.", "We thank NVIDIA Corporation for donating GPUs used in this research.", "We thank Eva Mujdricza-Maydt, Esther van den Berg and Angel Daza for evaluating the paths, and Todor Mihaylov for his valuable feedback." ]
[ "abstain", "abstain", "abstain", "objective", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "method", "method", "objective", "objective", "objective", "method", "objective", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "method", "method", "abstain", "objective", "result", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "method", "result", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "other", "method", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "objective", "method", "method", "abstain", "method", "method", "objective", "other", "other", "other", "other" ]
[ "We propose EASE , a simple diagnostic tool for Visual Question Answering (VQA) which quantifies the difficulty of an image, question sample.", "EASE is based on the pattern of answers provided by multiple annotators to a given question.", "In particular, it considers two aspects of the answers:", "(i) their Entropy;", "(ii) their Semantic content.", "First, we prove the validity of our diagnostic to identify samples that are easy/hard for state-of-art VQA models.", "Second, we show that EASE can be successfully used to select the most-informative samples for training/fine-tuning.", "Crucially, only information that is readily available in any VQA dataset is used to compute its scores.", "1 1 Introduction Visual Question Answering (VQA; Antol et al., 2015) requires models to jointly understand an image and a natural language question.", "This is a challenging task; despite massive training data and recent pre-training strategies (Tan and Bansal, 2019; Lu et al., 2019; Chen et al., 2020) models still struggle to close the gap with oracle performance.", "VQA datasets (e.g., Goyal et al., 2017; Gurari et al., 2018) consist of (cid:104) image, question (cid:105) pairs for which N human annotators have provided an answer in natural language.", "When trained on these samples, VQA models are fed with the most frequently chosen answer in the pattern.", "During inference, the answer with the highest probability is evaluated against the pattern of N ground-truth answers.", "According to the standard VQA metric (An-tol et al., 2015), a model's prediction is considered as perfectly correct if it matches an answer that was frequent in the pattern; less accurate if matching an underrepresented one.", "This metric implies that, for the majority of cases, several annotators agree on the same exact answerand a model can thus achieve 100 % accuracy in the task.", "On the other 1 Code at: github.com/shailzajolly/EaSe Q: What is the pattern of the little girl's dress?", "hand, this suggests that various (cid:104) image, question (cid:105) pairs can have different patterns of answers; i.e., they can be more or less scattered depending on the features of the question, the image, or both.", "In Fig. 1, the annotators did not converge on the same answer for either of the two questions.", "However, while in the top question the 10 annotators provided semantically similar answers (e.g., plaid , plaid and floral , etc.), in the bottom one very different answers were given (e.g., road , sweden ).", "In line with recent work aimed at predicting the agreement between annotators (Gurari and Grau-man, 2017), the distribution of answers for a given (cid:104) image, question (cid:105) pair (Yang et al., 2018), or the difficulty of visual questions (Terao et al., 2020), in this paper we introduce EASE , a diagnostic tool for VQA which is based on the answers provided to a given question.", "We propose that two main features of the answer pattern, E ntropy a nd Se mantic content, are informative of the degree of difficulty of a sample.", "In particular, we conjecture that the more scattered an answer pattern, the more difficult the sample (Fig. 1, down)unless some or all of those answers are semantically similar (Fig. 
1, top).", "By experimenting with various VQA datasets and models, we first assess the effectiveness of our diagnostic to identify the samples that are easy/difficult for a model.", "Second, we use EASE to select increasingly difficult subsets of data that we use to train/fine-tune our models, based on the hypothesis that difficult cases are also more informative during training.", "In both cases, we show that our simple method is very effective: (1) models are shown to struggle with the most difficult samples according to it; (2) training/fine-tuning models with only a little fraction of samplesthe most difficult onesmakes them achieve very high results, which are comparable to models trained/fine-tuned with the whole training data.", "Finally, EASE is shown to correlate with the confidence scores provided by human annotators along with their answers, which reveals that it captures a notion of difficulty in line with that by human speakers.", "We focus on (cid:104) image, question (cid:105) VQA samples and aim to quantify their difficulty , i.e., how challenging it is for a model to answer them correctly.", "We propose that the difficulty of a sample can be quanti-fied based on the (readily available) characteristics of the pattern of answers provided by the annotators, and devise a diagnostic tool that builds on this assumption.", "In particular, we focus on two aspects of the pattern:", "1) its Entropy , i.e., how scattered it is in terms of the number of unique answer strings;", "2) its Semantics , i.e., how (dis)similar are the answers in it with respect to their overall semantic representation.", "We name our diagnostic tool EASE and describe it in detail below.", "Entropy (E) We consider all the answers provided by the annotators for a given sample.", "Similar to Yang et al. (2018), we measure the Entropy of a pattern using Eq.", "1: E ( p f ) = 1 M (cid:88) k =1 p k log ( p k ) (1) where p f is the distribution of the M unique answers based on their frequency, and is the highest possible Entropy value 2 that is used to normalize E in [0 , 1] .", "High E values (close to 1 ) are assigned to highly scattered distributions; vice versa , low values of E (close to 0 ) are assigned to highly con-2 In our data, the maximum Entropy value is equal to 2 .", "Semantics (SE ) E is based on the frequency of unique answer strings in a given pattern.", "As such, it treats various strings as different, regardless of whether strings are semantically similar.", "This, however, is crucial: answers to a given question that are semantically different reveal inconsistencies among annotators, which in turn is indicative of the difficulty of a sample.", "In contrast, semantically similar answers are a proxy for the ease of the sample, though these answers are different in their surface realization (see, e.g., a couple vs. a pair ).", "We use a simple method based on pre-trained word embeddings (Mikolov et al., 2018) to oper-ationalize SE .", "In particular, given a pattern of answers, we perform the following steps to reorganize it by aggregating semantically similar answers and their corresponding frequencies: (1) We compute a representation of each answer in the pattern by averaging its words embeddings, similar to Chao et al. 
(2018); (2) We build an answer's centroid, i.e., an average representation of all the unique answers that encodes the overall semantics in the pattern; (3) We compute the pairwise co-sine similarity ( cos ) between the centroid and each unique answer in the pattern (negative values are clamped to 0 to have similarity in [0 , 1] ); (4) We group together all the answers whose cos with the centroid embedding exceeds a certain threshold.", "The threshold is dynamically set.", "It is computed at the datumlevel to adapt to the features of each datapoint, and is defined by: = cos ( MAX , centroid ) (2) where is a small positive number close to 0 (here we experiment with = 0 . 0001 ), and MAX is the answer with the maximum frequency in the pattern.", "In case more than one MAX is present, the lowest is used.", "Finally, we obtain a new distribution where the answers that are semantically consistent with the pattern's overall content (the centroid) are put together, and their frequencies are summed up.", "EASE diagnostic We take the new distribution of answers after applying SE , p se , and compute EASE , a single value in [0 , 1] which quantifies the ease of a VQA sample.", "We obtain it as follows: EASE ( p se ) = 1 E ( p se ) (3) Method Split VQA2.0 VizWiz T V T V EaSe TH 40522 19805 3201 522 (9%) (9%) (16%) (16%) BH 189281 92606 10443 1646 (43%) (43%) (52%) (52%) E 213954 101943 6356 1005 (48%) (48%) (32%) (32%) Entropy TH 108457 53230 11903 1897 (25%) (25%) (60%) (60%) BH 187287 90896 7337 1165 (42%) (42%) (36%) (37%) E 148013 70228 760 111 (33%) (33%) (4%) (3%) Total 443757 214354 20000 3173 (100%) (100%) (100%) (100%) Table 1: Top: Number of samples in the TH, BH, and E splits of VQA2.0 and VizWiz based on EaSe.", "We experiment with two models: BUTD (Ander-son et al., 2018) and LXMERT (Tan and Bansal, 2019) (LXM).", "BUTD uses a GRU to encode the input questions and to attend the image RoI features, enabling region-based attention to generate the answer.", "LXM is a transformer-based architecture pretrained on several language and vision tasks.", "We use it with the default hyper-parameters set in the original implementation.", "The models are trained (BUTD) or fine-tuned (LXM), and then evaluated, on the datasets described below.", "We experiment with VQA2.0 (Goyal et al., 2017) and VizWiz (VW; Gurari et al., 2018).", "We choose these two datasets since they are very different from each other, both in terms of the images (object-centered vs. everyday-life) and the type and purpose of the questions (written, crowdsourced vs. spoken, goal-oriented) they contain.", "This fundamental diversity is confirmed by a preliminary analysis 3 on the answers to the questions contained in 3 Further details in Appendix B. See also Jolly et al. (2018).", "In VQA2.0, 33 % of the questions are assigned the same answer string by all annotators; as for VizWiz, this percentage drops to only 3 %.", "We take this low agreement as a proxy for the difficulty of the samples in this (and any) dataset: the more disegreement, the harder.", "To preliminarly test our hypothesis, we compute the EASE value for each sample in the train/val partitions of the two datasets and assign the samples into 3 splits based on their EASE value (num-ber of samples per split in Tab. 
1, top): (1) EASY (E) : EASE = 1.0; (2) BOTTOM-HARD (BH) : 0.5 < = EASE < 1.0; (3) TOP-HARD (TH) : EASE < 0.5.", "We then test our trained models on each of our validation splits.", "If our hypothesis is correct, models should struggle with the harder splits selected by our tool.", "Tab.", "2 shows that all models BUTD, LXM and LXM-S, a version of LXM trained from scratch on the taskindeed achieve much lower performance on the hard splits; in TH, their accuracy is halved compared to the entire ( all ) data.", "Moreover, it is interesting to note that, for LXM, pretraining appears to be overall beneficial, with the pretrained version outperforming the non-pretrained one in both datasets and all splits, with a margin of around 8 points on the entire data.", "For comparison, we run the same analysis using Entropy (specifically, 1 Entropy ) instead of EASE .", "As can be seen in Table 1 (bottom), the two methods give rise to very different data dis-Model TD VQA2.0 VizWiz all TH BH E all TH BH EBUTD TH(R)* 50.14 20.46 53.34 53.0 42.75 24.91 40.57 55.58 BUTD TH 44.13 26.1 51.3 41.13 42.46 25.1 39.69 56.02 BUTD TH+BH 56.6 29.73 61.2 57.64 48.58 29.58 47.57 60.1 BUTD TH+BH+E 61.43 29.61 62.81 66.36 50.12 29.56 48.95 62.73 LXM TH(R)* 69.61 34.76 69.44 76.55 46.42 26.03 45.78 58.06 LXM TH 67.24 35.64 67.58 73.02 46.65 26.13 45.79 58.73 LXM TH+BH 69.85 37.05 70.63 75.52 51.65 30.29 50.07 65.36 LXM TH+BH+E 70.57 35.51 70.26 77.65 53.40 32.82 52.26 65.97 Table 3: Accuracy on each split of VQA2.0 and VizWiz obtained by gradually training models first on TH, then adding BH and finally adding E samples.", "tributions.", "For example, in the train partition of VQA2.0, Entropy assigns much more cases than EASE to the TH split (in proportion, 25% cases for Entropy vs. 9% for EASE ) and much less to the E one ( 33% Entropy vs. 48% EASE ).", "On the one hand, this confirms the crucial role of our semantic component in determining EASE scores.", "On the other hand, we notice that the results obtained by the three models on the splits defined by Entropy follow a less clear pattern compared to the EASE ones (see Tab. 4 in Appendix).", "For example, in VizWiz, both BUTD and LXM-S achieve higher results in BH compared to E, which indicates that Entropy is not as effective as our tool in measuring the difficulty of a sample.", "Finally, for sanity check, we also tested model performance on splits having the same size of EASE 's TH, BH and E but including random samples (see Tab. 
5 in Appendix).", "The sampling was performed 10 times and results averaged.", "As expected, no difference in performance between the three splits was observed.", "Overall, this proof-of-concept analysis reveals that current SOTA modelsincluding the extensively pretrained LXMsuffer with samples that are deemed hard by EASE .", "This suggests that our diagnostic tool genuinely selects the most challenging samples of a dataset.", "An intuitive question is whether training a model with these hard samples can make models more robust.", "This is based on the intuition that challenging samples could be more informative during training compared to easy ones.", "We test this hypothesis in the next section, where we use the splits defined by EASE to train models in a HardFirst (HF) approach.", "In HF, we train our VQA models incrementally, first using TH samples only, then adding BH samples, and finally using all training samples.", "The weights for the first stage are initialized randomly; we load the model's weights from previous stages for each incremental stage.", "For VQA2.0, the percentage of samples for each stage is 9.13% (TH), 51.79% (TH+BH), and 100% ( ALL ), and for VizWiz is 16%, 68.22%, and 100%.", "We hypothesize that harder splits, i.e., with low EASE scores, contain richer multimodal information that could be more informative during a model's learning.", "For comparison, we also evaluate models in the TH(R) condition: we train/fine-tune models with a set of data (with the same size as TH) randomly sampled from the training set.", "We repeat the sampling 10 times, and report the average accuracy.", "Results in Tab.", "3 support our hypotheses.", "(1) With only 52% of the training data (TH+BH), BUTD obtains 90% of all validation accuracy (VA) in VQA2.0 compared to the model trained on the Figure 2: Percentage of samples per question type in VQA2.0-train for each of the three splits used in the HF training regime.", "whole data (Table 2).", "This is even more pronounced in VizWiz, where using TH+BH during training ( 68% of total data) leads to a comparable performance as the one obtained with the whole training data.", "Similarly, LXM achieves 98% VA using only 52% of training data for VQA2.0, and 97% VA with 68% training data in VizWiz.", "(2) Compared to the TH(R) condition, models trained/fine-tuned with TH achieve higher results in the TH split of both VQA2.0 and VizWiz, which confirms that TH samples are particularly bene-ficial for dealing with challenging cases.", "At the same time, when evaluated on the entire data ( all ), they perform similarly to TH(R) in VizWiz and slightly worse than TH(R) in VQA2.0.", "This is to be expected: randomly sampling from VizWiz where 68% cases are either BH or THwill likely produce a more similar distribution to that of TH as compared to sampling from VQA2.0, where E cases are 48% of the total.", "Since proportions are the same in the validation set, training/fine-tuning with easier cases in VQA2.0 will have a positive impact on E, which will drive performance on all .", "Overall, these results indicate that the hard samples selected by EASE are more informative than easier ones and help models obtain comparable performance with significantly less training data.", "We explore whether the hard splits selected by EASE contain question types that are known to be particularly challenging for VQA models, e.g.,", "count and whquestions.", "As can be seen in Fig. 
2, a higher proportion of wh( Other ) and count ( Number ) questions is observed in the hardest split compared to the other splits of VQA2.0.", "4 In contrast, polar questions ( Yes/No ) are poorly represented in TH, which indicates they are overall less challenging for humans and less informative for the models.", "We test whether EASE correlates with human intuition of when is difficult to answer a question.", "To this end, we use the confidence scores provided by annotators along with their answers in VQA2.0, 5 which self-evaluate whether annotators are confi-dent in providing their answer.", "We map confidence scores yes , maybe , no to 1 , 0 .", "5 , and 0 , respectively, and compute the average confidence score for each sample.", "We then compute Spearman's correlation between confidence scores and EASE scores, and find a substantial positive correlation both in train ( = 0 . 49 ) and val ( = 0 . 48 ) sets.", "This trend is also clear in Fig. 3, where higher confidence scores correspond to increasingly higher EASE values.", "We present EASE , a simple diagnostic tool which quantifies the difficulty of a VQA sample based on its pattern of answers.", "We show that EASE selects the most informative samples of a dataset, which is helpful to train/fine-tune VQA models more efficiently with less, but highly-informative data.", "In future work, we plan to combine model prediction for difficulty estimation in EASE .", "Shailza Jolly was supported by the TU Kaiserslautern CS Ph.D. scholarship program, the BMBF project XAINES (Grant 01IW20005), and the NVIDIA AI Lab (NVAIL) program.", "Sandro was funded by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 819455 awarded to Raquel Fernndez)." ]
[ "objective", "abstain", "abstain", "abstain", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "objective", "abstain", "objective", "method", "result", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "result", "method", "other", "other" ]
[ "Abstract Meaning Representations (AMRs) are broad-coverage sentence-level semantic graphs.", "Existing approaches to generating text from AMR have focused on training sequence-to-sequence or graph-to-sequence models on AMR annotated data only.", "In this paper, we propose an alternative approach that combines a strong pre-trained language model with cycle consistency-based re-scoring.", "Despite the simplicity of the approach, our experimental results show these models outperform all previous techniques on the English LDC2017T10 dataset, including the recent use of transformer architectures.", "In addition to the standard evaluation metrics, we provide human evaluation experiments that further substantiate the strength of our approach.", "Abstract Meaning Representation (AMR) (Ba-narescu et al., 2013) is a rooted, directed, acyclic graph with labeled edges (relations) and nodes (concepts) expressing who is doing what to whom.", "AMR-to-text generates sentences representing the semantics underlying an AMR graph.", "Initial works in AMR-to-text used transducers (Flanigan et al., 2016), phrase-based machine translation (Pourdamghani et al., 2016) and neural sequence-to-sequence ( seq2seq ) models with linearized graphs (Konstas et al., 2017).", "Cao and Clark (2019) leverage constituency parsing for generation.", "Beck et al. (2018) improve upon prior RNN graph encoding (Song et al., 2018) with Levi Graph Transformations.", "Damonte and Cohen (2019) compare multiple representations and find graph encoders to be the best.", "Guo et al. (2019) use RNN graph encoders with dense graph convolutional encoding.", "Ribeiro et al. (2019) This research was done during an internship at IBM Research AI.", "use RNN encoders with dual graph representations.", "Transformer-based seq2seq (Vaswani et al., 2017) was first applied to AMR-to-text in (Sinh and Le Minh, 2019).", "Zhu et al. (2019) greatly improve over the prior state-of-the-art by modifying self-attention to account for AMR graph structure.", "Using transformers has also been recently explored by Wang et al. 
(2020) who propose a mutli-head graph attention mechanism.", "Pre-trained transformer representations (Rad-ford et al., 2018; Devlin et al., 2019; Radford et al., 2019) use transfer learning to yield powerful language models that considerably outperform the prior art.", "They have also shown great success when fine-tuned to particular text generation tasks (See et al., 2019; Zhang et al., 2019; Keskar et al., 2019).", "Given their success, it would be desirable to apply pre-trained transformer models to a graph-to-text task like AMR-to-text, but the need for graph encoding precludes in princi-ple that option.", "Feeding the network with some sequential representation of the graph, such as a topological sorting, looses some of the graphs representational power.", "Complex graph annotations, such as AMR, also contain many special symbols and special constructs that departure from natural language and may by not interpretable by a pretrained language model.", "In this paper we explore the possibility of directly fine-tuning a pre-trained transformer language model on a sequential representation of AMR graphs, despite the expected difficulties listed above.", "For this we re-purpose a GPT-2 language model (Radford et al., 2019) to yield an AMR-to-text system.", "We show that it is surprisingly easy to fine-tune GPT-2 to learn AMR graph to text mapping that outperforms the previous state-of-the-art on automatic evaluation metrics.", "Since a single graph AMR, graph corresponds to multiple sentences with the same meaning, we also provide human evaluation and semantic similarity metric results (Zhang et al., 2020) which are less dependent on reference text.", "Human evaluation and semantic similarity results highlight the positive impact of a strong language model strategy.", "Finally we also introduce a simple re-scoring technique based on cycle-consistency that further improves performance.", "In order to fine-tune a generative model ( GPT-2 ; Radford et al. (2019)) for conditional text generation, prior works fine-tune the language model to predict target text starting from the additional source text as context.", "In our experiments, we found it beneficial to fine-tune on the joint distribution of AMR and text instead i.e. 
also reconstruct the source.", "Given a tokenized sentence w 1 w N and the sequential AMR representation a 1 a M we maximized the joint probability p GPT-2 ( w , a ) = N (cid:89) j =1 p GPT-2 ( w j | w 1: j 1 , a 1: M ) M (cid:89) i =1 p GPT-2 ( a i | a 1: i 1 ) A special separator token is added to mark the end of the sequential AMR representation.", "Special AMR symbols that should not be interpreted literally are assigned tokens from the GPT-2 unused token list.", "In addition to this, we also observed that freezing the input embeddings when fine-tuning had positive impact in performance.", "At test time, we provide the AMR as context as in conventional conditional text generation: w j = arg max w j { p GPT-2 ( w j | w 1: j 1 , a 1: M ) } 3 Re-scoring via Cycle Consistency The general idea of cycle consistency is to assess the quality of a system's output based on how well an external reverse' system can reconstruct the input from it.", "In previous works, cycle-consistency based losses have been used as part of the training objective in machine translation (He et al., 2016) and speech recognition (Hori et al., 2019).", "It has also been used for filtering synthetic training data for question answering (Alberti et al., 2019).", "Here we propose the use of a cycle consistency measure to re-score the system outputs.", "In particular, we take the top k sentences generated by our system from each gold AMR graph and parse them using an off-the-shelf parser to obtain a second AMR graph.", "We then re-score each sentence using the standard AMR parsing metric Smatch (Cai and Knight, 2013) by comparing the gold and parsed AMRs.", "Following Previous works on AMR-to-text, we Use the standard LDC2017T10 AMR corpus for evaluation of the proposed model.", "This Corpus contains 36,521 training instances of AMR graphs in PENMAN notation and the corresponding texts.", "It also includes 1368 and 1371 development and test instances, respectively.", "We tokenize each input text using The JAMR toolkit (Flanigan et al., 2014).", "The concatenation of an AMR graph and the corresponding text is split into words, special symbols and sub-word units using the GPT-2 to-kenizer.", "We add all arc labels seen in the training set and the root node :root to the vocabulary of the GPT-2 model, but we freeze the embedding layer for training.", "We use the Hugging Face implementation of (Wolf et al., 2019) for GPT-2 small ( GPT-2S ), medium ( GPT-2M ) and large ( GPT-2L ).", "Fine-tuning converges after 6 epochs, which takes just a few hours on a V100 GPU 1 .", "For cycle-consistency re-scoring we use an implementation of Naseem et al. (2019) in Py-Torch.", "For re-scoring experiments, we use a beam size of 15.", "AMR input representation.", "we test three variants of AMR representation.", "First, a depth-first search (DFS) through the graph following Konstas et al. 
(2017), where the input sequence is the path followed in the graph.", "Second, to see if GPT-2 is in fact learning from the graph structure, we remove all the edges from the DFS, keeping only the concept nodes.", "This has the effect of removing the relation information between concepts, such as subject/object relations.", "As a third option, we use the PENMAN representation without any modifi-cation.", "The three input representations are illustrated below: 1 Code for this paper is available at: https:// github.com/IBM/GPT-too-AMR2text Nodes recommend advocate-01 it vigorous DFS recommend :ARG1 advocate-01 :ARG1 it :manner vigorous Penman (r / recommend-01 :ARG1 (a / advocate-01 :ARG1 (i / it) :manner (v / vigorous))) Decoding.", "For generation, we experiment with greedy decoding, beam search, and nucleus sampling (Holtzman et al., 2019).", "For beam search, we explore beam sizes of 5 , 10 and 15 .", "As the system, in some cases, produces repetitive output at the end of the text, we additionally perform a post-processing step to remove these occurrences.", "Metrics.", "We considered the three automatic evaluation metrics commonly used in previous works.", "We compute BLEU (Papineni et al., 2002) using SacreBLEU (Ma et al., 2019).", "We compute chrF++ (Popovic, 2017) using both SacreBLEU and the scripts used by authors of the baseline systems.", "We compute METEOR (Banerjee and Lavie, 2005) with the default values for English of the CMU implementation.", "2 In addition to the standard automatic metrics, we also carry out human evaluation experiments and use the semantic similarity metric BERTScore (Zhang et al., 2020).", "Both metrics arguably have less dependency on the surface symbols of the reference text used for evaluation.", "This is particularly relevant for the AMR-to-text task, since one single AMR graph corresponds to multiple sentences with the same semantic meaning.", "Conventional metrics for AMR-to-text are are strongly influenced by surface symbols and thus do not capture well the ability of the system to produce a diverse sentences with same underlying semantics.", "Human evaluations are carried out by three professional annotators on 51 randomly selected sentences from the 1371 test sentences, on a 6 point scale, ranging from 0 to 5.", "0=Exceptionally poor (No useful information is conveyed at all.) 1=Poor (Fundamental errors in grammar and vocabulary make it difficult to understand the meaning.) 2=Not good enough (Errors in grammar, vocabulary and style make it difficult to understand the meaning.) 3=Good enough (There are errors in the text, but I am reasonably confident that I understand the meaning.) 2 https://www.cs.cmu.edu/alavie/METEOR Model Input BLEU chrF++ GPT-2S Rec.", "4=Very good (There may be minor errors in the text, but I am very confident that I understand the meaning.) 5=Excellent (The information is presented clearly and with appropriate grammar, vocabulary and style.) 
For each system, scores from all annotators are averaged to compute a single score.", "Inter-annotator agreement was 0 .", "7 when measured by Pearson correlation coefficient.", "Our system produces de-tokenized cased output after BPE decoding, whereas previous systems produce traditional tokenized lower-cased output.", "Therefore, we lowercase and tokenize our system outputs to have fair comparisons with previous systems.", "Regarding the type of AMR representation, as shown in Table 1, using directly the PENMAN notation for AMR representation leads to the best results outperforming DFS.", "Edge information, indicating relations between concepts, seems also to play a fundamental role since its absence strongly decreases performance in both DFS and PENMAN representations.", "Penman notation was cho-sen for the rest of the experiments.", "The impact of the use of a reconstruction term explained in 2 is shown in Table 2.", "The model trained using this additional term achieves 30 .", "41 BLEU and 61 .", "36 chrF++, as opposed to 25 .", "73 System Performance BLEU Meteor chrF++ Beck et al. (2018) 23.30 -50.40 Damonte and Cohen (2019) 24.54 24.07 Guo et al. (2019) 27.60 -57.30 Cao and Clark (2019) 26.80 -Sinh and Le Minh (2019) 18.36 -Ribeiro et al. (2019) 27.87 33.21 Cai and Lam (2020) 29.80 35.10 59.4 Zhu et al. (2019) 31.82 36.38 64.05 GPT-2M Rec.", "BLEU and 57 .", "2 chrF++ without the term.", "We therefore use a reconstruction term training in the rest of the experiments.", "Beam search improves system performance greatly over the greedy baseline with 1 .", "91 BLEU points (see Table 2).", "With beam size 10 , we obtain 32 .", "32 BLEU and 62 .", "79 chrF++.", "With nucleus sampling at a cumulative probability mass of 0 .", "9 , performance drops to 28 .", "75 BLEU and 61 .", "19 chrF++.", "Finally, cycle-consistency re-ranking of the beam search outputs improves performance ( 33 . 57 BLEU, 64 . 86 chrF++) over the one best output.", "Table 3 compares the best GPT-2M and GPT-2L results, fine-tuned using the reconstruction term and PENMAN notation.", "For all scores we test statistical significance with a standard two-tailed student t-test.", "Our model achieves a large improvement of 1 .", "2 BLEU and 1 .", "3 METEOR scores over the previous state-of-the-art model using GPT-2L and re-scoring.", "For chrF++, we get different scores from SacreBLEU and the scripts provided by the authors of our baseline systems, achieving comparable results with the former ( 63 . 89 ), and improving over the best score with the latter ( 65 . 01 ) ( P < . 01) .", "Table 4 shows human Evaluation results and semantic similarity scores of GPT-2L and GPT-2M compared to (Zhu et al., 2019; Ribeiro et al., 2019; Guo et al., 2019).", "Our approach produces a large number of high-quality sentences with 41 .", "8% , a significant gain over the previous best system ( 20 . 
26% ).", "Regarding semantic similarity, prior art methods show relatively close scores, a 0 .", "9 points difference, while GPT-2L Rec.", "improves 1 .", "6 points over the best of these models.", "It should be noted that differences with (Zhu et al., 2019) for GPT-2L Rec.", "are statistically significantly with P < .", "05 , while differences for GPT-2M Rec are not significant due to the small sample size.", "In Table 5 we show three nontrivial examples, where we compare our system outputs with those of previous work.", "In the first example, the reference sentence contains a grammatical error.", "Our system not only generates the correct output, but also corrects the error in the reference.", "The proposed system can generate fluent long sentences as shown in example 2.", "The third example shows a sentence where all systems including ours fail to generate a correct text.", "Due to the large amounts of data they are trained on, pre-trained transformer language models can be expected to generate fluent and diverse text (See et al., 2019).", "It should however be highlighted that fine-tuned GPT-2 learns to produce not only fluent but also adequate text, despite using a sequential representation of an AMR graph as input.", "As shown in the experimental setup, encoding of relations plays as well a fundamental role in AMR-to-text performance, indicating that GPT-2 attains a fine-grained understanding of the underlying semantics to reach state of the art performance.", "While a sequence of PENMAN notation to-System Generated text (1) REF: the doctors gave her medication and it 's made her much better .", "kens is far from an optimal encoding of a graph, it is noteworthy how far performance-wise current strong language models can go.", "Furthermore, It is likely that standard metrics (BLEU, Meteor, chrF++) that rely on a reference text do not properly reflect AMR-to-text quality.", "An AMR graph corresponds to multiple sentences with the same semantics and these measures are likely biased towards the single available reference.", "In metrics that are less influenced by the reference text such as human evaluation and semantic similarity, the proposed system shows a larger improvement over the previous systems with close to 50% of the generated sentences considered excellent or good.", "Finally it is worth considering that leveraging pre-trained transformers greatly expands the vocabulary available on AMR-to-text systems.", "A single AMR graph can correspond to multiple sentences with markedly different surface realizations, but manual annotation of AMR is a time consuming task.", "Approaches like the one proposed may be a simple solution for generation of diverse text data for AMR parser training or other applications were diversity play a role.", "In this work, we present a language model-based approach for the AMR-to-text generation task.", "We show that a strong pre-trained transformer language model ( GPT-2 ) can be fine-tuned to generate text directly from the PENMAN notation of an AMR graph.", "Comparison with state-of-the-art models in BLUE, chrF++, METEOR as well as SemSim and human evaluation metrics show that while simple, this approach can outperform existing methods including methods training transformers from scratch.", "We also show that cycle consistency-based re-scoring using a conventional AMR parser and the Smatch metric can notably improve the results.", "Future work will focus on incorporating better encoding of the AMR graph into the current system and exploring data 
"We thank the reviewers for their valuable suggestions.", "We would also like to thank Chunchuan Lyu for his valuable feedback and help." ]
[ "abstain", "abstain", "objective", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "result", "result", "abstain", "result", "abstain", "result", "method", "abstain", "result", "method", "abstain", "abstain", "objective", "result", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "other", "method", "method", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "abstain", "result", "abstain", "other", "other" ]
[ "Dan Schwartz Carnegie Mellon University 5000 Forbes Ave Pittsburgh, PA 15213 drschwar@cs.cmu.edu", "Tom Mitchell Carnegie Mellon University 5000 Forbes Ave Pittsburgh, PA 15213 tom.mitchell@cs.cmu.edu", "Abstract", "Extensions of this analysis that further examine what kinds of information in the model embeddings relate to each ERP have the potential to elucidate the processes involved in human language comprehension.", "Electroencephalography (EEG) recordings of brain activity taken while participants read or listen to language are widely used within the cognitive neuroscience and psycholinguistics communities as a tool to study language comprehension.", "Several time-locked stereotyped EEG responses to word-presentations known collectively as event-related potentials (ERPs) are thought to be markers for semantic or syntactic processes that take place during comprehension.", "However, the characterization of each individual ERP in terms of what features of a stream of language trigger the response remains controversial.", "Improving this characterization would make ERPs a more useful tool for studying language comprehension.", "We take a step towards better understanding the ERPs by fine-tuning a language model to predict them.", "This new approach to analysis shows for the first time that all of the ERPs are predictable from embeddings of a stream of language.", "Prior work has only found two of the ERPs to be predictable.", "In addition to this analysis, we examine which ERPs benefit from sharing parameters during joint training.", "We find that two pairs of ERPs previously identified in the literature as being related to each other benefit from joint training, while several other pairs of ERPs that benefit from joint training are suggestive of potential relationships.", "The cognitive processes involved in human language comprehension are complex and only partially identified.", "According to the dual-stream model of speech comprehension (Hickok and Poeppel, 2007), sound waves are first converted to Figure 1: The electrodes from which each event-related potential was recorded in the data from Frank et al. 
"The mapping of words onto meaning is thought to be subserved by widely distributed regions of the brain that specialize in particular modalities (for example, visual aspects of the word banana reside in the occipital lobe of the brain and are activated when the word banana is heard (Kemmerer, 2014)), and the different representation modalities are thought to be integrated into a single coherent latent representation in the anterior temporal lobe (Ralph et al., 2010).", "While this part of meaning representation in human language comprehension is somewhat understood, much less is known about how the meanings of words are integrated together to form the meaning of sentences and discourses.", "One tool researchers use to study the integration of meaning across words is electroencephalography (EEG), which measures the electrical activity of large numbers of neurons acting in concert.", "EEG has the temporal resolution necessary to study the processes involved in meaning integration, and certain stereotyped electrical responses to word presentations, known as event-related potentials (ERPs), have been identified with some of the processes thought to contribute to comprehension.", "In this work, we consider six ERP components that have been associated in the cognitive neuroscience and psycholinguistics literature with language processing and which we analyze in the data from Frank et al. (2015) (see Figure 1 for spatial and temporal definitions of these ERP components).", "Three of these (the N400, EPNP, and PNP responses) are primarily considered markers for semantic processing, while the other three (the P600, ELAN, and LAN responses) are primarily considered markers for syntactic processing.", "However, the neat division of the ERP responses into either semantic or syntactic categories is controversial.", "The N400 response has been very well studied (for an overview see (Kutas and Federmeier, 2011)) and it is well established that it is associated with semantic complexity, but the features of language that trigger the other ERP responses we consider here are poorly understood.", "We propose to use a neural network pretrained as a language model to probe what features of language drive these ERP responses, and in turn to probe what features of language mediate the cognitive processes that underlie human language comprehension, and especially the integration of meaning across words.", "While a full discussion of each ERP component and the features of language thought to trigger each is beyond the scope of this document (for reviews see e.g. Frank et al. (2015), Kemmerer (2014), Kutas and Federmeier (2011), Kuperberg et al. (2003), and Van Petten and Luka (2012)), we introduce some basic features of ERP components to help in the discussion later.",
"ERP components are electrical potential responses, measured with respect to a baseline, that are triggered by an event (in our case the presentation of a new word to a participant in an experiment).", "The name of each ERP component reflects whether the potential is positive or negative relative to the baseline.", "The N400 is so named because it is Negative relative to a baseline (the baseline is typically recorded just before a word is presented, at an electrode that is not affected by the ERP response) and because it peaks in magnitude at about 400 ms after a word is presented to a participant in an experiment.", "The P600 is Positive relative to a baseline and peaks around 600 ms after a word is presented to a participant (though its overall duration is much longer and less specific in time than the N400).", "The post-N400 positivity (PNP) is so named because it is part of a biphasic response; it is a positivity that occurs after the negativity associated with the N400.", "The early post-N400 positivity (EPNP) is also part of a biphasic response, but the positivity has an earlier onset than the standard PNP.", "Finally, the LAN and ELAN are the left-anterior negativity and early left-anterior negativity, respectively.", "These are named for their timing, spatial distribution on the scalp, and direction of difference from the baseline.", "It is important to note that ERP components can potentially cancel and mask each other, and that it is difficult to precisely localize the neural activity that causes the changes in electrical potential at the electrodes where those changes are measured.", "This work is most closely related to the paper from which we get the ERP data: Frank et al. (2015).", "In that work, the authors relate the surprisal of a word, i.e. the (negative log) probability of the word appearing in its context, to each of the ERP signals we consider here.", "The authors do not directly train a model to predict ERPs.", "Instead, models of the probability distribution of each word in context are used to compute a surprisal for each word, which is input into a mixed effects regression along with word frequency, word length, word position in the sentence, and sentence position in the experiment.", "The effect of the surprisal is assessed using a likelihood-ratio test.", "In Hale et al. (2018), the authors take an approach similar to Frank et al. (2015).", "The authors compare the explanatory power of surprisal (as computed by an LSTM or a Recurrent Neural Network Grammar (RNNG) language model) to a measure of syntactic complexity they call distance, which counts the number of parser actions in the RNNG language model.", "The authors find that surprisal (as predicted by the RNNG) and distance are both significant factors in a mixed effects regression which predicts the P600, while the surprisal as computed by an LSTM is not.", "Unlike Frank et al. (2015) and Hale et al. (2018), we do not use a linking function (e.g. surprisal) to relate a language model to ERPs.", "We thus lose the interpretability provided by the linking function, but we are able to predict a significant proportion of the variance for all of the ERP components, where prior work could not.", "We interpret our results through characterization of the ERPs in terms of how they relate to each other and to eye-tracking data, rather than through a linking function.", "The authors in Wehbe et al. (2014) also use a recurrent neural network to predict neural activity directly.",
"In that work the authors predict magnetoencephalography (MEG) activity, a close cousin to EEG, recorded while participants read a chapter of Harry Potter and the Sorcerer's Stone (Rowling, 1999).", "Their approach to characterization of processing at each MEG sensor location is to determine whether it is best predicted by the context vector of the recurrent network (prior to the current word being processed), the embedding of the current word, or the probability of the current word given the context.", "In future work we also intend to add these types of studies to the ERP predictions.", "Data.", "We use two sources of data for this analysis.", "The primary dataset we use is the ERP data collected and computed by Frank et al. (2015), and we also use behavioral data (eye-tracking data and self-paced reading times) from Frank et al. (2013), which were collected on the same set of 205 sentences.", "In brief, the sentences were selected from sources using British English with a criterion that they be understandable out of context.", "We use the ERP component values as computed by Frank et al. (2015), which have been high-pass filtered at 0.5 Hz to reduce correlation between ERP components and modulus-transformed (John and Draper, 1980) to make the distribution of component values more normal.", "We do not use the 100 ms pre-trial baseline which is made available by Frank et al. (2015) and which they use as a separate input to the mixed effects regression.", "For more information about the ERP datasets and data collection procedures, we refer the reader to the original papers.", "For the behavioral data, we use self-paced reading times and four eye-tracking measures.", "Self-paced reading time is considered a signal of integration difficulty (i.e. as it becomes more difficult to integrate the meaning of the current word into the context, the amount of time a reader spends on the current word increases).", "The eye-tracking measures are intended to capture both early effects (effects modulated primarily by properties of the word independent of its context, such as word frequency and word length) and late effects (effects modulated by the context in which the word is found, i.e. comprehension difficulty) in word processing (Rayner and Pollatsek, 2006).", "In both cases, the eye-tracking measures provide a signal of overt visual attention, which is thought to strongly correlate with covert perceptual attention in normal reading (Rayner, 2009).", "We log-transform the self-paced reading time and the eye-tracking measures.",
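The modulus transform of John and Draper (1980) referenced above is a signed power transform; a minimal sketch follows. The transform parameter `lam` is not specified in this text, so treat it as an assumption, and the plain log is shown for the strictly positive behavioral measures:

```python
import numpy as np

def modulus_transform(x: np.ndarray, lam: float) -> np.ndarray:
    """John & Draper (1980): a signed power transform that makes distributions
    containing both positive and negative values (like ERP amplitudes) more normal."""
    if lam == 0.0:
        return np.sign(x) * np.log1p(np.abs(x))
    return np.sign(x) * ((np.abs(x) + 1.0) ** lam - 1.0) / lam

# Reading times are strictly positive, so they are simply log-transformed:
reading_times_ms = np.array([312.0, 455.0, 290.0])  # toy values
log_rt = np.log(reading_times_ms)
```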
"Model.", "To predict the ERP signals in the data, we start with a 3-layer bidirectional LSTM-based language model encoder using the architecture found in Merity et al. (2017), pretrained on the WikiText-103 dataset (Merity et al., 2016) (we use the pretrained model from Howard and Ruder (2018)).", "The pretraining objective is to minimize the negative log-likelihood of the next word for the forward LSTM and the previous word for the reverse LSTM.", "The word embeddings (input embeddings) in the encoder have 400 components, the hidden-layer outputs have 1150 components each, and the context embeddings output from the encoder have 400 components.", "The forward encoder and backward encoder are independently fine-tuned on the baby version of the British National Corpus (Consortium, 2005) to help with prediction of British English (both the ERP data and eye-tracking data use British English).", "During task training the two encoders' output embeddings are concatenated together and fed into a causal-convolution layer which combines each pair of adjacent timepoints into a single pair-embedding with 10 components.", "The causal convolution (i.e. convolution which is left-padded) ensures that the pair-embeddings are aligned so that the prediction targets correspond to the later word in the pair.", "In other words, the pair can be thought of as representing the 'current' and 'previous' words together.", "A ReLU is applied to the pair-embedding before it, along with the word length and the log probability of the word, is fed into a linear output layer to predict each ERP and behavioral measure (see Figure 2).", "(Figure 2: The model uses an encoder based on the architecture and regularization in Merity et al. (2017) and pretrained by Howard and Ruder (2018).)", "The convolution and linear layers are initialized using the default PyTorch (Paszke et al., 2017) initialization, i.e. the initialization proposed in He et al. (2015).", "The encoder portion of the model includes dropout as applied in Merity et al. (2017), but we use different dropout probabilities when we fit the neural and behavioral data (the dropout probability on the input embeddings was 0.05, 0.4 on the input to the LSTM, 0.4 on LSTM hidden layers, 0.5 on the output of the LSTM, and 0.5 on the recurrent weights).", "We did not find dropout in the decoder to be helpful.", "We use the Adam optimizer (Kingma and Ba, 2014) with β1 = 0.95 and β2 = 0.999 for training, and we use mean squared error as the loss.",
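A minimal PyTorch sketch of the decoder just described is given below. It is not the authors' code: the dimensions (400-component context embeddings, 10-component pair-embeddings) follow the text, while the module and argument names, batch layout, and number of prediction targets are our assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ERPDecoder(nn.Module):
    def __init__(self, enc_dim: int = 400, pair_dim: int = 10, n_targets: int = 6):
        super().__init__()
        # kernel_size=2 combines each pair of adjacent timepoints
        self.pair_conv = nn.Conv1d(2 * enc_dim, pair_dim, kernel_size=2)
        self.out = nn.Linear(pair_dim + 2, n_targets)  # +2: word length, log prob

    def forward(self, fwd, bwd, word_len, log_prob):
        # fwd, bwd: (batch, time, enc_dim); word_len, log_prob: (batch, time)
        x = torch.cat([fwd, bwd], dim=-1).transpose(1, 2)  # (batch, 2*enc_dim, time)
        x = F.pad(x, (1, 0))                               # left pad => causal
        pair = F.relu(self.pair_conv(x)).transpose(1, 2)   # (batch, time, pair_dim)
        feats = torch.stack([word_len, log_prob], dim=-1)  # (batch, time, 2)
        return self.out(torch.cat([pair, feats], dim=-1))  # (batch, time, n_targets)
```

The left padding makes the convolution output at position t depend on positions t-1 and t, so each prediction is aligned with the later word of the pair, as described above.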
"Procedure.", "We begin our training procedure by fine-tuning the forward- and backward-encoders independently on the baby version of the British National Corpus (Consortium, 2005).", "This corpus has British English that may help in modeling the University College London corpus, while not overlapping with it.", "After the model fine-tuning, we estimate how well the model predicts each of the ERP signals and eye-tracking measures by training the model 100 times with different train/test splits and decoder parameter initializations.", "We use 10% of the data for testing and the remainder for training.", "The sentences in the ERP data are split at random.", "After we split the data, we compute the mean and standard deviation of each ERP signal (and each eye-tracking measure and the self-paced reading time) within participant on the training data.", "We use these values to standardize the training data within participant, and then average the data from all of the participants together.", "After we average, we again compute the mean and standard deviation to standardize the average.", "We follow a similar procedure for the test data, but we use the mean and standard deviation from the training data when standardizing.", "Note that we use the log of the behavior measures, and the log is taken before the data standardization.", "In the loss function (and when we evaluate model performance) we only consider content words.", "We mark as a content word any word that is an adjective, adverb, auxiliary verb, noun, pronoun, proper noun, or verb (including to-be verbs).", "All other words are considered function words.", "During the first 20 epochs of training, only the parameters of the decoder are modified.", "Following this, we train the model for an additional 15 epochs, during which the parameters of the decoder and the final layer of the encoder (the final LSTM layer in both the forward and backward encoder) can be modified.", "We also experimented with additional training epochs and allowing all parameters of the model to be modified, but we found that this caused overfitting.", "Comparing models trained with different loss functions.", "To better understand the relationship between ERP signals, and between ERP signals and behavioral data, we train the model with different loss functions that include mean squared error terms corresponding to various combinations of the ERP signals and behavioral data.", "For example, one of the training variations includes a mean squared error term for the P600 and a mean squared error term for the N400 in the loss, but does not use the other signals during training.", "In this variation, for a mini-batch of size B, where example b has $T_b$ content tokens and the superscripts p and a denote the predicted and actual values for a measure respectively, the loss function can be written as: $\frac{1}{\sum_{b=1}^{B} T_b} \sum_{b=1}^{B} \sum_{t=1}^{T_b} \left[ (\text{P600}^p_{b,t} - \text{P600}^a_{b,t})^2 + (\text{N400}^p_{b,t} - \text{N400}^a_{b,t})^2 \right]$ (1).", "For each of the training variations, we repeat the training procedure described above (but fine-tuning the language model on the British National Corpus is done only once).", "We use a consistent train/test split procedure, such that the split for the i-th run of the 100 runs is the same across all training variations, but the split changes between run i and run j.", "This enables us to use paired statistical testing when we test for significance.", "We test whether the proportion of variance explained (computed as $1 - \mathrm{MSE}/\mathrm{variance}$ on the validation set) on each ERP and behavioral measure is significantly different from 0 using a single-sample t-test, controlled for false discovery rate using the Benjamini-Hochberg-Yekutieli procedure (Benjamini and Yekutieli, 2001) with a false discovery rate of 0.01.", "To test whether the proportion of variance explained is different between different training variations (for example, training with just the N400 signal included in the loss vs. training with both the N400 and the LAN included in the loss), we use a paired t-test.", "We then adjust for the false discovery rate again with a rate of 0.01.",
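The testing procedure above can be sketched with SciPy and statsmodels (an assumed workflow, not the authors' scripts); `pove_a` and `pove_b` stand in for the proportion-of-variance-explained values from 100 paired runs of two training variations:

```python
import numpy as np
from scipy.stats import ttest_1samp, ttest_rel
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
pove_a = rng.normal(0.30, 0.05, size=100)  # e.g., LAN trained alone
pove_b = rng.normal(0.32, 0.05, size=100)  # e.g., LAN trained jointly with P600

p_nonzero = ttest_1samp(pove_a, 0.0).pvalue  # is POVE different from 0?
p_paired = ttest_rel(pove_a, pove_b).pvalue  # do the two variations differ?

# Benjamini-Hochberg-Yekutieli FDR control at 0.01 over the family of tests
reject, p_adjusted, _, _ = multipletests([p_nonzero, p_paired],
                                         alpha=0.01, method="fdr_by")
```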
"All ERP components are predictable.", "In the original study on this dataset, the investigators found that, when surprisal was used as a linking function between the language model and the mixed effects regression, the only ERP for which the surprisal showed a significant effect in the regression was the N400 (Frank et al., 2015).", "In contrast, we find that when we directly predict the ERP signals, we are able to predict a significant proportion of the variance for all of them (see Table 1).", "Joint training benefits ERP component prediction.", "To explore the relationship between ERP components, we train $63 = \binom{6}{1} + \binom{6}{2} + \cdots + \binom{6}{6}$ different models using all of the possible combinations of which of the six ERP signals are included in the loss function during training.", "For each of the six ERP components, we look for the best performing models (see Table 1).", "The N400 is best predicted when the model is trained on that component independently, but every other ERP component prediction can be improved by including a second ERP component in the training.", "Thus multitask learning has a clear benefit when applied to the ERP data, and some information is shared between ERP component predictions via the model parameters.", "We also note that it is not the case that training with more ERP components is always better, or that the signals which are most correlated benefit each other most (see Appendix A).", "The relationship between components clearly impacts whether the prediction of one ERP component benefits from the inclusion of others in model training.", "The results suggest that 8 pairs of ERP signals are related to each other: the LAN is paired with the P600, EPNP, and PNP; the ELAN with the N400, EPNP, PNP, and P600; and the EPNP is paired with the P600.", "We discuss these relationships in the Discussion section.", "In an additional analysis, we modified our training procedure slightly to probe how jointly training on multiple ERP components compares to training individually on each ERP component.", "In this analysis we compare only training on each ERP component individually to training on all six ERP components together.", "We also train for a total of 60 epochs (rather than the 35 epochs used elsewhere).", "During the first 20 epochs we allow only the parameters of the decoder to be modified.", "During the next 20 epochs, we allow the parameters of the decoder and the final layer of the encoder (i.e. the final recurrent layer) to be modified.", "During the last 20 epochs, we allow all of the parameters of the model to be modified.",
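A sketch of how the 63 loss variations described above can be enumerated, together with the masked mean-squared-error loss from Eq. (1) generalized to any subset of components. Names and tensor layout are assumptions for illustration:

```python
from itertools import combinations

ERPS = ["ELAN", "LAN", "N400", "EPNP", "P600", "PNP"]

def loss_variations():
    """Yield every non-empty subset of the six ERP components (63 in total)."""
    for k in range(1, len(ERPS) + 1):
        for subset in combinations(ERPS, k):
            yield subset  # one model is trained per subset

def multi_erp_mse(pred, target, subset, mask=None,
                  idx={e: i for i, e in enumerate(ERPS)}):
    """Eq. (1) for a chosen subset: squared error summed over the included
    components, averaged over content tokens.
    pred/target: torch tensors (batch, time, 6); mask: (batch, time) of 0/1
    marking content words."""
    cols = [idx[e] for e in subset]
    se = ((pred[..., cols] - target[..., cols]) ** 2).sum(dim=-1)
    if mask is not None:
        return (se * mask).sum() / mask.sum()
    return se.mean()

assert sum(1 for _ in loss_variations()) == 63
```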
"The mean squared error for each of the ERP components from this analysis is shown for each epoch in Figure 3.", "(Table 1: Proportion of variance explained (POVE) for each of the ERP components, mean of 100 training runs; columns Target, Additional, POVE. ELAN: alone 0.20, +EPNP 0.22, +N400 0.22, +PNP 0.22, +P600 0.22. LAN: alone 0.30, +EPNP 0.31, +PNP 0.32, +P600 0.32, +PNP,N400 0.33. N400: alone 0.26. EPNP: alone 0.34, +LAN 0.35, +GROUP A 0.36. P600: alone 0.27, +EPNP 0.30, +LAN 0.30. PNP: alone 0.33, +LAN 0.36, +GROUP B 0.36.)", "From the loss curves, we make a few observations.", "First, we see inflection points at epochs 20 and 40, when we allow more parameters of the model to be modified.", "The first inflection point indicates that allowing the recurrent layer to be modified benefits the prediction, while the second inflection point shows that overfitting becomes more severe if we allow all parameters of the model to be modified.", "We also see from these curves that part of the benefit of joint training is that it helps reduce overfitting: we see less of a climb in the validation loss after the minimum point in the joint training.", "Beyond this reduction in overfitting severity, we note that for some of the ERP components (the LAN, EPNP and PNP components) joint training actually gives a better overall minimum in prediction error.", "Behavioral data improves prediction of ERP components.", "We are also interested in whether behavioral data can be used to improve ERP prediction, since it should signal both the amount of overt attention required at various points in a sentence as well as integration difficulty.", "To study this question, we again train models using different combinations of training signals that include or do not include the behavioral data predictions in the loss function (see Table 2).", "We see that self-paced reading time indeed can improve prediction of a target ERP component, relative to training on the target ERP component alone, by about the same amount as the best combination of ERP components for all but the N400.", "Eye-tracking data can also improve the prediction accuracy of the ELAN, P600, and PNP components.", "Insensitivity to choice of architecture.", "One potential concern about our results is the degree to which the relationships we see between ERP components, and between ERP components and behavioral data, are an artefact of our rather arbitrary choice of network architecture.", "We partially address this by running the same analysis using", "(i) only the forward direction of the encoder, and", "(ii) only the word-embeddings (the input embeddings) and not the context-embeddings (the output embeddings) of the encoder.", "The proportion of variance explained for each ERP component is lower using these variants of the analysis than using the bidirectional variant (see Appendix A), but qualitatively the relationships are similar.", "We leave further analysis of the sensitivity of our qualitative results to choice of architecture for future work.", "In this work we find that all six of the ERP components from Frank et al. (2015) can be predicted above chance by a model which has been pretrained using a language modeling objective and then directly trained to predict the components.",
"This is in contrast to prior work, which has successfully linked language models to the N400 (Frank et al., 2015) and P600 (Hale et al., 2018) but not the other ERP components.", "We also note that, contrary to Hale et al. (2018), we find that an LSTM does contain information that can be used to predict EEG data, and in particular that it can predict the P600.", "(Figure 3, panel (a): Independently trained.)", "We speculate that the analysis used in Hale et al. (2018) did not find reliable effects because the language models were related to the EEG data through functions chosen a priori (the surprisal, and the 'distance' metric).", "These functions, though interpretable, might be interpretable at the cost of losing much of the information in the representations learned by the network.", "In addition, we show through our multitask learning analysis that information is shared between ERP components, and between ERP components and behavioral data.", "Although these relationships must be viewed with caution until they can be verified across multiple datasets and with more variation in neural network architectures, here we consider some potential reasons for our findings.", "The broad point we wish to make is that by better understanding which ERP components share information with each other and with behavioral data, through the type of analysis we present here (multitask learning) or other means, we can better understand what drives each ERP component and in turn the processes involved in human language comprehension.", "Relationship between ERPs.", "Our findings that the LAN and P600 are related, and that the ELAN and P600 are related, are expected from both a theoretical perspective and from previous work examining the interactions of ERP components (Gunter et al., 1997; Hagoort et al., 2003a; Hahne and Friederici, 1999; Kutas et al., 2006; Palolahti et al., 2005).", "Since the ELAN and LAN have been theorized by some to mark word-category (i.e. part-of-speech) or morpho-syntactic (e.g. subject-verb number agreement) violations (Friederici, 2011; Hahne and Friederici, 2002; Hagoort et al., 2003b), and the P600 is considered a marker for syntactic effort (Coulson et al., 1998; Huettig, 2015; Kemmerer, 2014; Kuperberg, 2007; Kuperberg et al., 2003; Van Petten and Luka, 2012), these signals would naturally be related to each other.",
"The other relationships we find are more surprising.", "Some researchers have speculated that the LAN and ELAN are markers for working memory demands (King and Kutas, 1995; Kutas et al., 2006), and that indeed these might be part of sustained negativities that are frequently masked by the P600 (Kemmerer, 2014).", "If we take this view, then we would expect to find them in the presence of semantic and syntactic complexity, and this might explain why they seem to benefit from joint training with the other ERP component signals (and benefit prediction of other ERP signals with which they are trained).", "However, it is notable that predictions of the LAN and ELAN do not benefit each other in our analysis, and that the N400 (a marker for semantic complexity) is not benefited by the prediction of any other ERP component.", "This absence is by no means definitive, but it undermines the argument that all of these relationships can be explained by complexity and working memory demands alone.", "The relative isolation of the N400 from other ERP components in our analysis is interesting.", "If the N400 is a marker for semantic memory retrieval (Kutas and Federmeier, 2011), then it might be expected to be somewhat isolated from the other components, which may involve syntactic processing or later integration effects.", "Alternatively, the relationships we find in our analysis might be an artefact of the way the ERPs are operationalized in Frank et al. (2015).",
"Several of the pairings we find overlap spatially and are near to each other in time, so the ERP components might spill over into each other.", "Further work is required to disambiguate between these possibilities.", "Relationship between behavioral data and ERPs.", "It is reassuring to see that jointly training models to predict behavioral data along with a target ERP component benefits the prediction of the ERP component compared to training on the target ERP component alone.", "The benefit to prediction in this case cannot be explained as an artefact of how the ERP components are operationalized in the datasets we use for analysis.", "Self-paced reading times widely benefit ERP prediction, while eye-tracking data seems to have a more limited benefit, to just the ELAN, LAN, and PNP ERP components.", "It's difficult to know why this might be the case, but perhaps it is not a coincidence that these three ERP components also show up frequently in the pairs of components that benefit from joint training.", "If indeed the PNP marks semantic role irregularities (Van Petten and Luka, 2012) and the ELAN and LAN mark working memory or look-forward or look-back operations (Kutas et al., 2006), then it's possible that eye movements might be more related to these types of operations than to the general semantic and syntactic complexities marked by other ERP components.", "Self-paced reading might better capture these generic difficulties.", "This explanation is highly speculative, and further work is required to determine whether the relationships between the ERP components and behavioral data are consistent across datasets, and if so, what the explanation is for these relationships.", "Choice of bidirectional architecture.", "We emphasize that the neural network architecture we chose for these analyses was motivated primarily by its success on downstream NLP tasks, public availability of pre-trained models and code, and prior work studying how best to fine-tune the model (Howard and Ruder, 2018; Merity et al., 2017).", "We do not claim that this architecture reflects human processing.", "We experimented with a forward-only model variant of our analysis, and found that the bidirectional model predicts brain activity better than the forward-only version (see Appendix A).", "Although the bidirectional model has access to 'future' language input, it does not have access to future brain activity, so the bidirectional model is not 'cheating' when it makes predictions.", "There are at least three possible explanations for why the bidirectional model performs better than the forward-only model.", "First, it is possible that when a human reads a sentence, he or she predicts the upcoming language input.", "Under this hypothesis, a model with access to the future language input can do a better job of predicting the current brain activity because the future language is reflected in that brain activity.", "Second, it is possible that a bidirectional model is simply able to produce better embeddings for each word in the input because it has more context than a forward-only model.", "For example, the bidirectional model might be (implicitly) better at anaphora resolution given more context.", "Under this hypothesis, the additional context given to the model partially compensates for its relative deficit of real-world knowledge compared to a human.", "Where a human can in many cases solve the anaphora resolution problem by using background knowledge, and does not need to see the future language input, a model benefits from additional context.",
"Finally, in our setup, the bidirectional model has more parameters than the forward-only model, and the additional degrees of freedom might give the model an advantage in predicting brain activity.", "Exploration of why the bidirectional model is better than the forward-only model is an interesting question, but it is left to future work.", "Additionally, as we noted earlier, the qualitative results of our analysis (e.g. how ERP components relate to each other) should be viewed with caution until they are replicated across multiple choices of architecture.", "We have shown that ERP components can be predicted from neural networks pretrained as language models and fine-tuned to directly predict those components.", "To the best of our knowledge, prior work has not successfully used statistical models to predict all of these components.", "Furthermore, we have shown that multitask learning benefits the prediction of ERP components and can suggest how components relate to each other.", "At present, these joint-training benefit relationships are only suggestive, but if these relationships ultimately lead to insights about what drives each ERP component, then the components become more useful tools for studying human language comprehension.", "By using multitask learning as a method of characterization, we have found some expected relationships (LAN+P600 and ELAN+P600) and several more surprising relationships.", "We believe that this is exactly the kind of finding that makes multitask learning an interesting exploratory technique in this area.", "Additionally, we have shown that information can be shared between heterogeneous types of data (eye-tracking, self-paced reading, and ERP components) in the domain of human language processing prediction, and in particular between behavioral and neural data.", "Given the small datasets associated with human language processing, using heterogeneous data is a potentially major advantage of a multitask approach.", "In future work, we will further explore what information is encoded into the model representations when neural and behavioral data are used to train neural networks, and how these representations differ from the representations in a model trained on language alone.", "We thank our reviewers for their valuable feedback.", "This work is supported in part by National Institutes of Health grant number U01NS098969." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "abstain", "result", "objective", "other", "abstain", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "method", "other", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "other", "other", "other", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "method", "abstain", "abstain", "other", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "result", "result", "method", "result", "abstain", "result", "method", "result", "abstain", "objective", "other", "other" ]
[ "In this paper, we explore text classification with extremely weak supervision , i.e., only relying on the surface text of class names.", "This is a more challenging setting than the seed-driven weak supervision, which allows a few seed words per class.", "We opt to attack this problem from a representation learning perspectiveideal document representations should lead to nearly the same results between clustering and the desired classification.", "In particular, one can classify the same corpus differently (e.g., based on topics and locations), so document representations should be adaptive to the given class names.", "We propose a novel framework X-Class to realize the adaptive representations.", "Specifically, we first estimate class representations by incrementally adding the most similar word to each class until inconsistency arises.", "Following a tailored mixture of class attention mechanisms, we obtain the document representation via a weighted average of contextualized word representations.", "With the prior of each document assigned to its nearest class, we then cluster and align the documents to classes.", "Finally, we pick the most confident documents from each cluster to train a text classifier.", "Extensive experiments demonstrate that X-Class can rival and even outperform seed-driven weakly supervised methods on 7 benchmark datasets.", "Weak supervision has been recently explored in text classification to save human effort.", "Typical forms of weak supervision include a few labeled documents per class (Meng et al., 2018; Jo and Cinarel, 2019), a few seed words per class (Meng et al., 2018, 2020a; Mekala and Shang, 2020; Mekala et al., 2020), and other similar open-data (Yin et al.,", "2019).Though much weaker than a fully annotated corpus, these forms still require non-trivial, corpus-specific knowledge from experts.", "For example, nominating seed words requires experts to consider", "their relevance to not only the desired classes but also the input corpus; To acquire a few labeled documents per class, unless the classes are balanced, one needs to sample and annotate a much larger number of documents to cover the minority class.", "In this paper, we focus on extremely weak supervision , i.e., only relying on the surface text of class names.", "This setting is much more challenging than the ones above, and can be considered as almost-unsupervised text classification.", "We opt to attack this problem from a representation learning perspectiveideal document representations should lead to nearly the same result between clustering and the desired classification.", "Recent advances in contextualized representation learning using neural language models have demonstrated the capability of clustering text to domains with high accuracy (Aharoni and Goldberg, 2020).", "Specifically, a simple average of word representations is sufficient to group documents on the same topic together.", "However, the same corpus could be classified using various criteria other than topics, such as locations and sentiments.", "As visualized in Figure 1, such class-invariant representations separate topics well but mix up locations.", "Therefore, it is a necessity to make document representations adaptive to the user-specified class names.", "We propose a novel framework X-Class to conduct text classification with extremely weak supervision, as illustrated in Figure 2.", "Firstly, we esti-sports science arts !", "mate class representations by incrementally adding the most similar word to each class and 
recalculating its representation.", "Following a tailored mixture of class attention mechanisms, we obtain the document representation via a weighted average of contextualized word representations.", "These representations are based on pre-trained neural language models, and they are supposed to be in the same latent space.", "We then adopt clustering methods (e.g., Gaussian Mixture Models) to group the documents into K clusters, where K is the number of desired classes.", "The clustering method is initialized with the prior knowledge of each document assigned to its nearest class.", "We preserve this assignment so we can easily align the final clusters to the classes.", "In the end, we pick confident documents from each cluster to form a pseudo training set, based on which, we can train any document classifier.", "In our implementation, we use BERT as both the pre-trained language model and the text classifier.", "Compared with existing weakly supervised methods, X-Class has a stronger and more consistent performance on 7 benchmark datasets, despite some of them using at least 3 seed words per class.", "It is also worth mentioning that X-Class has a much more mild requirement on the existence of class names in the corpus, whereas existing methods rely on the variety of contexts of the class names.", "Our contributions are summarized as follows.", "We advocate an important but not-well-studied problem of text classification with extremely weak supervision.", "We develop a novel framework X-Class to attack this problem from a representation learning perspective.", "It estimates high-quality, class-oriented document representations based on pre-trained neural language models so that the confident clustering examples could form pseudo training set for any document classifiers to train on.", "We show that on 7 benchmark datasets, X-Class achieves comparable and even better performance than existing weakly supervised methods that require more human effort.", "In this section, we formally define the problem of text classification with extremely weak supervision.", "And then, we brief on some preliminaries about BERT (Devlin et al., 2019), Attention (Luong et al., 2015) and Gaussian Mixture Models.", "Problem Formulation.", "The extremely weak supervision setting confines our input to only a set of documents D i , i { 1 , ..., n } and a list of class names c j , j { 1 , ..., k } .", "The class names here are expected to provide hints about the desired classification objective, considering that different criteria (e.g., topics, sentiments, and locations) could classify the same set of documents.", "Our goal is to build a classifier to categorize a (new) document into one of the classes based on the class names.", "Seed-driven weak supervision requires carefully designed label-indicative keywords that concisely define what a class represents.", "This requires human experts to understand the corpus extensively.", "One of our motivations is to relax this burdensome requirement.", "Interestingly, in experiments, our proposed X-Class using extremely weak supervision can offer comparable and even better performance than the seed-driven methods.", "BERT.", "BERT is a pre-trained masked language model with a transformer structure (Devlin et al., 2019).", "It takes one or more sentences as input, breaks them up into word-pieces, and generates a contextualized representation for each word-piece.", "To handle long documents in BERT, we apply a sliding window technique.", "To retrieve representations for 
"BERT has been widely adopted in a large variety of NLP tasks as a backbone.", "In our work, we will utilize BERT for two purposes: (1) representations for words in the documents and (2) the supervised text classifier.", "Attention.", "Attention mechanisms assign weights to a sequence of vectors, given a context vector (Luong et al., 2015).", "It first estimates a hidden state $\hat{h}_j = K(h_j, c)$ for each vector $h_j$, where $K$ is a similarity measure and $c$ is the context vector.", "Then, the hidden states are transformed into a distribution via a softmax function.", "In our work, we use attention to assign weights to representations, which we then average accordingly.", "Gaussian Mixture Model.", "The Gaussian Mixture Model (GMM) is a traditional clustering algorithm (Duda and Hart, 1973).", "It assumes that each cluster is generated through a Gaussian process.", "Given an initialization of the cluster centers and the co-variance matrix, it iteratively optimizes the point-cluster memberships and the cluster parameters following an Expectation-Maximization framework.", "Unlike K-Means, it does not restrict clusters to have a perfect ball-like shape.", "Therefore, we apply GMM to cluster our document representations.", "As shown in Figure 2, our X-Class framework contains three modules: (1) class-oriented document representation estimation, (2) document-class alignment through clustering, and (3) text classifier training based on confident labels.", "Ideally, we wish to have document representations such that clustering algorithms can find k clusters very similar to the k desired classes.", "We propose to estimate the document representations and class representations based on pretrained neural language models.", "Algorithm 1 is an overview.", "In our implementation, we use BERT as an example.", "For each document, we want its document representation to be similar to the class representation of the class it belongs to.", "Algorithm 1: Class-Oriented Document Representation Estimation. Input: n documents D_i, k class names c_j, the max number of class-indicative words T, and the attention mechanism set M. Compute t_{i,j} (contextualized word representations) and s_w for all words (Eq. 1). Class representation estimation: for l = 1 ... k: initialize K_l = [c_l]; for i = 2 ... T: compute x_l based on K_l (Eq. 2); let w = argmax over words not in K_l of sim(s_w, x_l); compute x'_l based on K_l + [w]; consistency check: if x'_l changes the words in K_l, break; else set K_l = K_l + [w]. Document representation estimation: for i = 1 ... n: for each attention mechanism m in M: rank the words D_{i,j} according to m, giving ranks r_{m,j}; rank D_{i,j} according to the product over m of r_{m,j}, giving the final ranks r_j; compute E_i (Eq. 3). Return all document representations E_i.",
"Aharoni and Goldberg (2020) demonstrated that contextualized word representations generated by BERT can preserve the domain (i.e., topic) information of documents.", "Specifically, they generated document representations by averaging contextualized representations of their constituent words, and they observed these document representations to be very similar among documents belonging to the same topic.", "This observation motivates us to classify documents by topics in an unsupervised way.", "However, this unsupervised method may not work well on criteria other than topics.", "For example, as shown in Figure 1, such document representations work well for topics but poorly for locations.", "We therefore incorporate information from the given class names and obtain class-oriented document representations.", "We break down this module into two parts: (1) class representation estimation and (2) document representation estimation.", "Class Representation Estimation.", "Inspired by seed-driven weakly supervised methods, we argue that a few keywords per class would be enough to understand the semantics of the user-specified classes.", "(Figure 3: Overview of Our Class Representation Estimation.)", "Intuitively, the class name could be the first keyword we can start with.", "We propose to incrementally add new keywords to each class to enrich our understanding.", "Figure 3 shows an overview of our class representation estimation.", "First, for each word, we obtain its static representation via averaging the contextualized representations of all its occurrences in the input corpus.", "For words that are broken into word-piece tokens, we average all the token representations as the word's representation.", "Then, we define the static representation $s_w$ of a word $w$ as $s_w = \frac{\sum_{D_{i,j} = w} t_{i,j}}{\sum_{D_{i,j} = w} 1}$ (1), where $D_{i,j}$ is the j-th word in the document $D_i$ and $t_{i,j}$ is its contextualized word representation.", "Ethayarajh (2019) adopted a similar strategy of estimating a static representation using BERT.", "Such static representations are used as anchors to initialize our understanding of the classes.", "A straightforward way to enrich the class representation is to take a fixed number of words similar to the class name and average them to get a class representation.", "However, it suffers from two issues: (1) setting the same number of keywords for all classes may hurt the minority classes, and (2) a simple average may shift the semantics away from the class name itself.", "As an extreme example, when 99% of documents are talking about sports and the remaining 1% are about politics, it is not reasonable to add as many keywords to politics as to sports: it will diverge the politics representation.", "To address these two issues, we iteratively find the next keyword for each class and recalculate the class representation by a weighted average over all the keywords found.", "We stop this iterative process when the new representation is not consistent with the previous one.",
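A sketch of this iterative expansion is given below, assuming `static_reps` maps each vocabulary word to its static vector from Eq. (1); the 1/i weighting anticipates Eq. (2) in the next paragraph, and the exact form of the consistency check is our approximation of the described stopping rule:

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def class_rep(keywords, static_reps):
    """Weighted average with weight 1/i for the i-th ranked keyword (Eq. 2)."""
    weights = np.array([1.0 / i for i in range(1, len(keywords) + 1)])
    vecs = np.stack([static_reps[w] for w in keywords])
    return (weights[:, None] * vecs).sum(axis=0) / weights.sum()

def top_words(rep, static_reps, n):
    """The n vocabulary words most similar to a representation."""
    return sorted(static_reps, key=lambda w: -cosine(static_reps[w], rep))[:n]

def expand_class(class_name, static_reps, T=100):
    keywords = [class_name]  # the class name is always the first keyword
    while len(keywords) < T:
        x = class_rep(keywords, static_reps)
        cands = [w for w in top_words(x, static_reps, len(keywords) + 1)
                 if w not in keywords]
        if not cands:
            break
        new = keywords + [cands[0]]
        # consistency check: the new representation must still rank the
        # current keyword list as its own nearest words; otherwise stop
        if set(top_words(class_rep(new, static_reps), static_reps,
                         len(new))) != set(new):
            break
        keywords = new
    return keywords
```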
Estimation.", "when the new representation is not consistent with the previous one.", "In this way, different classes will have a different number of keywords adaptively.", "Specifically, we define a comprehensive representation x l for a class l as a weighted average representation based on a ranked list of keywords K l .", "The top-ranked keywords are expected to have more similar static representations to the class representation.", "Assuming that the similarities follow Zipf's laws distribution (Powers, 1998), we define the weight of the i -th keyword as 1 /i .", "That is, x l = (cid:80) |K l | i =1 1 /i s K l,i (cid:80) |K l | i =1 1 /i (2) For a given class, the first keyword in this list is always the class name.", "In the i -th iteration, we retrieve the out-of-list word with the most similar static representation to the current class representation.", "We then calculate a new class representation based on all the i + 1 words.", "We stop this expansion if we already have enough (e.g., T = 100 ) keywords, or the new class representation cannot yield the same set of topi keywords in our list.", "In our experiments, some classes indeed stop before reaching 100 keywords.", "Document Representation Estimation.", "Intuitively, the content of each document should stick to its underlying class.", "For example, in the sentence I cheered for Lakers winning NBA , its content covers sports and happy classes, but not arts , politics , or sad .", "Therefore, we assume that each word in a document is either similar to its desired class's representation or unrelated to all classes.", "Based on this assumption, we upgrade the simple average of contextualized word representations (Aharoni and Goldberg, 2020) to a weighted average.", "Specifically, we follow the popular attention mechanisms to assign weights to the words based on their similarities to the class representations.", "Figure 4 shows an overview of our document representation estimation.", "We propose to employ a mixture of attention mechanisms to make it more robust.", "For the j -th word in the i -th document D i,j = w , there are two possible representations: (1) the contextualized word representation t i,j and (2) the static representation of this word s w .", "The contextualized representations disambiguate words with multiple senses by considering the context, while the static version accounts for outliers that may exist in documents.", "Therefore, it is reasonable to use either of them as the word representation e for attention mechanisms.", "Given the class representations x c , we define two attention mechanisms: one-to-one : h i,j = max c { cos ( e , x c ) } .", "It captures the maximum similarity to one class.", "This is useful for detecting words that are specifically similar to one class, such as NBA to sports .", "one-to-all : h i,j = cos ( e , avg c { x c } ) which is the similarity to the average of all classes.", "This ranks words by how related it is to the general set of classes in focus.", "Combining 2 choices of e and 2 choices of attention mechanisms totals 4 ways to compute each word's attention weight.", "We further fuse these attention weights in an unsupervised way.", "Instead of using the similarity values directly, we rely on the rankings.", "Specifically, we sort the words decreasingly based on attention weights to obtain 4 ranked lists.", "Following previous work (Mekala and Shang, 2020; Tao et al., 2018), we utilize the geometric mean of these ranks for each word and then form a unified ranked list.", "Like 
"Document Representation Estimation.", "Intuitively, the content of each document should stick to its underlying class.", "For example, in the sentence I cheered for Lakers winning NBA, its content covers the sports and happy classes, but not arts, politics, or sad.", "Therefore, we assume that each word in a document is either similar to its desired class's representation or unrelated to all classes.", "Based on this assumption, we upgrade the simple average of contextualized word representations (Aharoni and Goldberg, 2020) to a weighted average.", "Specifically, we follow popular attention mechanisms to assign weights to the words based on their similarities to the class representations.", "Figure 4 shows an overview of our document representation estimation.", "We propose to employ a mixture of attention mechanisms to make it more robust.", "For the j-th word in the i-th document D_{i,j} = w, there are two possible representations: (1) the contextualized word representation t_{i,j} and (2) the static representation of this word s_w.", "The contextualized representations disambiguate words with multiple senses by considering the context, while the static version accounts for outliers that may exist in documents.", "Therefore, it is reasonable to use either of them as the word representation e for the attention mechanisms.", "Given the class representations x_c, we define two attention mechanisms: one-to-one: h_{i,j} = max_c { cos(e, x_c) }.", "It captures the maximum similarity to one class.", "This is useful for detecting words that are specifically similar to one class, such as NBA to sports.", "one-to-all: h_{i,j} = cos(e, avg_c { x_c }), which is the similarity to the average of all classes.", "This ranks words by how related they are to the general set of classes in focus.", "Combining the 2 choices of e and the 2 choices of attention mechanism yields 4 ways to compute each word's attention weight.", "We further fuse these attention weights in an unsupervised way.", "Instead of using the similarity values directly, we rely on the rankings.", "Specifically, we sort the words decreasingly based on attention weights to obtain 4 ranked lists.", "Following previous work (Mekala and Shang, 2020; Tao et al., 2018), we utilize the geometric mean of these ranks for each word and then form a unified ranked list.", "Like class representation estimation, we follow Zipf's law and assign a weight of 1/r to the word ranked at the r-th position in the final list.", "Finally, we obtain the document representation E_i from t_{i,j} with these weights.", "One straightforward idea to align the documents to classes is simply finding the most similar class based on their representations.", "However, document representations do not necessarily distribute in a ball shape around the class representation: the dimensions in the representation can be correlated freely.", "To address this challenge, we leverage the Gaussian Mixture Model (GMM) to capture the co-variances of the clusters.", "Specifically, we set the number of clusters to the number of classes k and initialize the cluster parameters based on the prior knowledge that each document D_i is assigned to its nearest class L_i, as follows.", "We use a tied co-variance matrix across all clusters since we believe classes are similar in granularity.", "We cluster the documents while remembering the class each cluster is initialized to.", "In this way, we can align the final clusters to the classes.", "Considering the potential redundant noise in these representations, we also apply principal component analysis (PCA) for dimension reduction, following the experience in topic clustering (Aharoni and Goldberg, 2020).", "By default, we fix the PCA dimension P = 64.", "The alignment between documents and classes produces high-quality pseudo labels for documents in the training set.", "To generalize such knowledge to unseen text documents, we train a text classifier using these pseudo labels as ground truth.", "This is a classical noisy training scenario (Angluin and Laird, 1987; Goldberger and Ben-Reuven, 2017).", "Since we know how confident we are in each instance (i.e., the posterior probability of its assigned cluster in the GMM), we select the most confident ones to train a text classifier (e.g., BERT).", "By default, we set the confidence threshold to 50%, i.e., the top 50% of instances are selected for classifier training.",
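A minimal sketch of this document-class alignment and confidence-based selection, assuming `doc_reps` and `class_reps` are the class-oriented representations described above (one numpy row per document or class); the names are ours, and the sketch assumes every class is the nearest class of at least one document so each component mean is defined.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

def align_documents_to_classes(doc_reps, class_reps, p=64, select=0.5):
    # reduce redundant dimensions with PCA (default P = 64)
    pca = PCA(n_components=p)
    X = pca.fit_transform(doc_reps)
    C = pca.transform(class_reps)
    # prior: initialize each document at its nearest class (cosine similarity)
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    Cn = C / np.linalg.norm(C, axis=1, keepdims=True)
    nearest = (Xn @ Cn.T).argmax(axis=1)
    k = len(class_reps)
    means = np.stack([X[nearest == l].mean(axis=0) for l in range(k)])
    # tied covariance: classes are assumed similar in granularity; because
    # component l is initialized at class l, cluster index doubles as class index
    gmm = GaussianMixture(n_components=k, covariance_type="tied",
                          means_init=means).fit(X)
    labels = gmm.predict(X)
    confidence = gmm.predict_proba(X).max(axis=1)
    # keep the top `select` fraction of documents as pseudo-labeled training data
    cutoff = np.quantile(confidence, 1.0 - select)
    return labels, confidence >= cutoff
```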
class alignment.", "We present the performance of supervised models, serving as an upper-bound for X-Class.", "Specifically, Supervised refers to a BERT model cross-validated on the training set with 2 folds (matching our confidence selection threshold).", "Many different datasets have been adopted to evaluate weakly supervised methods in different works.", "This makes it hard for systematic comparison.", "In this paper, we pool the most popular datasets to establish a benchmark on weakly supervised text classification.", "Table 1 provides an overview of our carefully selected 7 datasets, covering different text sources (e.g., news, reviews, and Wikipedia articles) and different criteria of classes (e.g., topics, locations, and sentiment).", "AGNews from (Zhang et al., 2015) (used in WeSTClass and LOTClass) is for topic categorization in news from AG's corpus.", "20News from (Lang, 1995) 2 (used in WeSTClass and ConWea) is for topic categorization in news.", "NYT-Small (used in WeSTClass and ConWea) is for classifying topic in New York Times news.", "NYT-Topic (used in (Meng et al., 2020a)) is an-other larger dataset collected from New York Times for topic categorization.", "NYT-Location (used in (Meng et al., 2020a)) is the same corpus as NYT-Topic but for locations.", "It is noteworthy to point out that many documents from this dataset talk about several countries simultaneously, so simply checking the location names will not lead to satisfactory results.", "Yelp from (Zhang et al., 2015) (used in WeSTClass) is for sentiment analysis in reviews.", "DBpedia from (Zhang et al., 2015) (used in LOTClass) is for topic classification based on titles and descriptions in DBpedia.", "For all X-Class experiments, we report the performance under one fixed random seed.", "By default, we set T = 100 , P = 64 , = 50% .", "For contextualized token representations t i,j , we use the BERT-base-uncased to group more occur-2 http://qwone.com/jason/20Newsgroups/", "rences of the same word.", "For supervised model training, we follow BERT fine-tuning (Wolf et al., 2019) with all hyper-parameters unchanged.", "For both WeSTClass and ConWea, we have tried our best to find keywords for the new datasets.", "Table 3 shows an example on the seed words selected for them on the NYT-Small dataset.", "For LOTClass, we tune their hyper-parameters match threshold and mcp epoch , and report the best performance during their self-train process.", "From Table 2, one can see that X-Class achieves the best overall performance.", "It is only 1% to 2% away from LOTClass and ConWea on AGNews and NYT-Topics, respectively.", "Note that, ConWea consumes at least 3 keywords per class.", "It is noteworthy that X-Class can approach the supervised upper bound to a small spread, especially on the NYT-Small dataset.", "Ablation on Modules.", "X-Class-Rep has achieved high scores (e.g., on both NYT-Topics and NYT-Locations) showing success of our class-oriented representations.", "The improvement of X-Class-Align over X-Class-Rep demonstrates the usefulness of our clustering module.", "It is also clear that the classifier training is beneficial by comparing X-Class and X-Class-Align.", "Ablation on Consistency Check.", "The consistency check in class representation estimation allows an adaptive number of keywords for each class.", "Without it leads to a diverged class understanding and degrading performance, as shown in Table 2.", "Ablation on Clustering Methods.", "Table 2 also shows that K-Means performs poorly on most datasets.", 
"This matches our previous analysis as K-Means assumes a hard spherical boundary, while 50 55 60 65 70 75 80 85 90 95 100 AGNews 20News NYT-Small NYT-Topic NYT-Location Yelp DBpedia m i c r o -F 1 Datasets unweighted one-to-one one-to-all one-to-one-static one-to-all-static mixture (default) Figure 6: Effects of Attention Mechanisms.", "In Figure 5, we visualize our class-oriented document representations and the unweighted variants using t-SNE (Rauber et al., 2016).", "We can see that while the simple-average representations are well-separated like class-oriented representations in NYT-Topics, they are much mixed up in NYT-Locations and Yelp.", "We conjecture that this is because BERT representations has topic information as its most significant feature.", "We have also tried using different attention mechanisms in X-Class.", "From the results in Figure 6, one can see that using a single mechanism, though not under-performing much, is less stable than our proposed mixture.", "The unweighted case works well on all four datasets that focus on news topics but not good enough on locations and sentiments.", "Figure 7 visualizes the performance trend w.r.t. to the three hyper-parameters in X-Class, i.e., the limit of class words T in class representation estimation, the PCA dimension P in document-class alignment, and the confidence threshold in text classifier training.", "Intuitively, a class doesn't have too many highly relevant keywords.", "One can confirm this in Figure", "7(a) as the performance of X-Class is relatively stable unless T goes too large to 1000.", "Choosing a proper PCA dimension could prune out redundant information in the embeddings and improve the running time.", "However, if P is too small or too large, it may hurt due to information 65 75 85 95 10 50 100 (default) 1000 m i c r o -F 1 T 16 32 64 (default) 128 256 768 None P 0.1 0.3 0.5 (default) 0.7 0.9", "(a) in Class Rep. Estimation", "(b) in Document-Class Alignment", "For T and P , we report the performance of X-Class-Align to explore their direct effects.", "loss or redundancy.", "One can observe this expected trend in Figure", "7(b) on all datasets.", "Typically, we want to select a reasonable number of confident training samples for the text classifier training.", "Too few training samples (i.e., too large ) would lead to insufficient training data.", "Too many training samples (i.e., too small ) would lead to too noisy training data.", "Figure", "7(c) shows that [0 . 3 , 0 . 
"Figure 7 visualizes the performance trend w.r.t. the three hyper-parameters in X-Class, i.e., the limit of class words T in class representation estimation, the PCA dimension P in document-class alignment, and the confidence threshold in text classifier training.", "Intuitively, a class doesn't have too many highly relevant keywords.", "One can confirm this in Figure 7(a), as the performance of X-Class is relatively stable unless T grows as large as 1000.", "Choosing a proper PCA dimension can prune out redundant information in the embeddings and improve the running time.", "However, if P is too small or too large, it may hurt due to information loss or redundancy.", "Figure 7: Performance trends w.r.t. the three hyper-parameters: (a) T in class representation estimation, (b) P in document-class alignment, and (c) the confidence threshold in text classifier training; for T and P, we report the performance of X-Class-Align to explore their direct effects.", "One can observe this expected trend in Figure 7(b) on all datasets.", "Typically, we want to select a reasonable number of confident training samples for the text classifier training.", "Too few training samples (i.e., too large a threshold) would lead to insufficient training data.", "Too many training samples (i.e., too small a threshold) would lead to overly noisy training data.", "Figure 7(c) shows that a threshold in [0.3, 0.9] is a good choice on all datasets.", "Compared with previous works (Meng et al., 2018; Mekala and Shang, 2020; Meng et al., 2020b), our X-Class has a significantly milder requirement on human-provided class names in terms of quantity and quality.", "We have conducted an experiment in Table 4 for X-Class on 20News and NYT-Small by deleting all but one occurrence of a class name from the input corpus.", "In other words, the user-provided class name only appears once in the corpus.", "Interestingly, the performance of X-Class drops by less than 1%, still outperforming all compared methods.", "In contrast, the most recent work, LOTClass (Meng et al., 2020b), requires a wide variety of contexts of class names from the input corpus to ensure the quality of the generated class vocabulary in its very first step.", "There are two straightforward ways to extend X-Class for hierarchical classification: (1) X-Class-End: we can give all fine-grained class names as input to X-Class and conduct classification in an end-to-end manner; and (2) X-Class-Hier: we can first give only coarse-grained class names to X-Class and obtain coarse-grained predictions.", "Then, for each coarse-grained class and its predicted documents, we further create a new X-Class classifier based on the fine-grained class names.", "We experiment with hierarchical classification on the NYT-Small dataset, which has annotations for 26 fine-grained classes.", "We also introduce WeSHClass (Meng et al., 2019), the hierarchical version of WeSTClass, for comparison.", "LOTClass is not investigated here due to its poor coarse-grained performance on this dataset.", "The results in Table 5 show that X-Class-Hier performs the best, and it is a better solution than X-Class-End.", "We conjecture that this is because the fine-grained classes' similarities are drastically different (a pair of fine-grained classes can be much more similar than another pair).", "Overall, we show that we can apply our method to a hierarchy of classes.",
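A minimal sketch of the two-stage X-Class-Hier strategy just described; `x_class` stands in for one flat X-Class run (documents plus class names in, predicted class names out) and is a hypothetical interface.

```python
def x_class_hier(docs, coarse_to_fine, x_class):
    # stage 1: classify with coarse-grained class names only
    coarse_pred = x_class(docs, list(coarse_to_fine))
    fine_pred = [None] * len(docs)
    # stage 2: one new X-Class run per coarse class, over its fine-grained names
    for coarse, fine_names in coarse_to_fine.items():
        idx = [i for i, c in enumerate(coarse_pred) if c == coarse]
        if idx:
            for i, f in zip(idx, x_class([docs[i] for i in idx], fine_names)):
                fine_pred[i] = f
    return fine_pred
```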
"Weakly supervised text classification.", "Weakly supervised text classification has attracted much attention from researchers (Tao et al., 2018; Meng et al., 2020a; Mekala and Shang, 2020; Meng et al., 2020b).", "The general pipeline is to generate a set of document-class pairs and train a supervised model on them.", "Most previous work utilizes keywords to find such pseudo data for training, which requires an expert who understands the corpus well.", "In this paper, we show that it is possible to reach similar, and often better, performance on various datasets without such guidance from experts.", "A recent work (Meng et al., 2020b) also studied the same topic, extremely weak supervision on text classification.", "It follows a similar idea to (Meng et al., 2020a) and further utilizes BERT to query replacements for class names to find keywords for classes, identifying potential classes for documents via string matching.", "Compared with LOTClass, our X-Class has a less strict requirement that class names be present in the corpus, and can work well even when there is only one occurrence (refer to Section 4.7).", "BERT for topic clustering.", "Aharoni and Goldberg (2020) showed that document representations obtained by an average of token representations from BERT preserve domain information well.", "We borrow this idea to improve our document representations through clustering.", "Our work differs from theirs in that our document representations are guided by the given class names.", "We propose our method X-Class for extremely weak supervision on text classification, which is to classify text with only class names as supervision.", "X-Class leverages BERT representations to generate class-oriented document representations, which we then cluster to form document-class pairs and finally feed to a supervised model for training.", "We further set up benchmark datasets for this task that cover different data (news and reviews) and various class types (topics, locations, and sentiments).", "Through extensive experiments, we show the strong performance and stability of our method.", "There are two directions that are possible to explore.", "First, focusing on the extremely weak supervision setting, we can extend to many other natural language tasks to eliminate human effort, such as Named Entity Recognition and Entity Linking.", "Second, based on the results on extremely weak supervision, we can expect an unsupervised version of text classification, where machines suggest class names and classify documents automatically.", "We thank all reviewers for their constructive comments, and Yu Meng for valuable discussions and comments.", "Our work is supported in part by NSF Convergence Accelerator under award OIA-2040727.", "Any opinions, findings, and conclusions or recommendations expressed herein are those of the authors and should not be interpreted as necessarily representing the views, either expressed or implied, of the U.S. Government.", "The U.S. Government is authorized to reproduce and distribute reprints for government purposes notwithstanding any copyright annotation hereon.", "We do not anticipate any significant ethical concerns; text classification is a fundamental problem in Natural Language Processing.", "The intended use of this work would be to classify documents, such as news articles, efficiently.", "A minor consideration is the potential for certain types of hidden biases to be introduced into our results, such as a biased selection of class names or a language model pre-trained on biased data.", "We did not observe this kind of issue in our experiments, and indeed these considerations seem low-risk for the specific datasets studied here." ]
[ "objective", "abstain", "result", "abstain", "objective", "objective", "result", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "result", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "objective", "method", "objective", "objective", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "result", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "result", "other", "other", "method", "other", "other", "result", "objective", "objective", "method", "abstain", "result", "abstain", "objective", "result", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain" ]
[ "Social biases are encoded in word embeddings.", "This presents a unique opportunity to study society historically and at scale, and a unique danger when embeddings are used in downstream applications.", "Here, we investigate the extent to which publicly-available word embeddings accurately reflect beliefs about certain kinds of people as measured via traditional survey methods.", "We find that biases found in word embeddings do, on average, closely mirror survey data across seventeen dimensions of social meaning.", "However, we also find that biases in embeddings are much more re-flective of survey data for some dimensions of meaning (e.g. gender) than others (e.g. race), and that we can be highly confident that embedding-based measures reflect survey data only for the most salient biases.", "In April of 2015, protests erupted over the murder of Freddie Gray, Jr.", "Questions about what to call those protesting quickly became the focus of a national debate.", "In an interview on CNN with Erin Burnett, 1 Baltimore City Councilman Carl Stokes admonished then-President Barack Obama and then-Mayor Stephanie Rawlings-Blake for using the word thugs to refer to the protesters.", "Burnett challenged Stokes' admonition, claiming the protesters were indeed thugs because They know it's wrong to steal and burn.", "Stokes responded by stating the protesters were ... children who have been set aside [and] marginalized.", "The argument between Burnett and Stokes is over the way we label people, the meanings of those labels, and the impacts these meanings can have.", "Councilman Stokes wants to avoid using the label thug because of its established, negative 1 http://nymag.com/intelligencer/2015/04/carl-stokes-to-cnn-thug-is-racially-charged.html | | | | || | | | || | ||| | | | | | | |||| ||| ||| | |||||| | |||||| | | | | | ||| | | |||| | | | | | | | | ||||| | | || ||||||||||||||| | | | ||| | | | | || | || | | ||| ||||| |||| | ||| | | | | | ||||| | | | | | | | | | | | | || | | | || | | || || | | || || | | | | | | |||||||||||||| | | || | | | || | | | || | | | || ||||| || |||| | || ||| ||| | || | || || | | | | | | ||||||| | thug child judge secretary Potency (weakstrong) Evaluation (badgood) Gender (manwoman) Black (not blackblack) Belief D i m en s i on Figure 1: Beliefs (x-axis) about four identities (sepa-rate plots) along four dimensions of social meaning (y-axis).", "connotation towards black Americans (Dow, 2016).", "The survey data collected for this paper, a sample of which is shown in Figure 1, provides further evidence of this association between thugs and black Americans.", "Respondents to our survey, on average, expected thugs to be bad, and that approximately 42.4% of thugs would be black.", "Of the 57 identities we studied, the only identity perceived to be more black was criminal, at 47.3%.", "The beliefs we have about people who hold particular identities (McCall and Simmons, 1978) are important, because they often determine the behaviors we take towards people who are labeled with those identities (Ridgeway and Smith-Lovin, 1999).", "2 For example, as Councilman Stokes knows, 2 Different kinds of beliefs about identities have different names.", "For example, contextualized beliefs are called impressions (Heise, 1987), and aggregations of beliefs across multiple dimensions of meaning are called stereotypes (Fiske et al., 2002).", "The beliefs we study here are typically called sentiments or associations .", "However, given the distinct meaning of these terms in NLP, we use the general term 
belief in this paper.", "This aligns roughly with the generic use of the terms bias and stereotype in related NLP literature.", "we do not behave the same way towards children as we do towards thugs.", "This is because, as reflected in Figure 1, people generally believe that children are weak and good, whereas thugs are bad and powerful.", "This leads us to want to do things like help children, versus wanting to attack thugs (Heise, 2007).", "However, measuring beliefs is difficult.", "Traditionally, we have relied on surveys to collect these measurements.", "But there are tens of thousands of identities (Joseph et al., 2016; MacKinnon and Heise, 2010), and beliefs about them can form along many different dimensions of sociocultural meaning (e.g. gender, race, and others displayed in Figure 1).", "Measuring beliefs about many identities, on many dimensions, using traditional surveys can therefore be difficult.", "Further, measuring the evolution of beliefs is often impossible with surveys, because survey data is extremely sparse historically (Garg et al., 2018).", "Finally, measuring how these beliefs change with additional contextual information (e.g. beliefs about specific teachers, rather than teachers in general) is notoriously difficult with survey data (Heise, 2007).", "Recognizing these difficulties, scholars have begun to develop NLP tools to measure beliefs about identities historically, at scale, and in context (Joseph et al., 2017; Hoyle et al., 2019; Fast et al., 2016; Garg et al., 2018; Field et al., 2019).", "Most recent methods derive these measures by manipulating word embeddings.", "Studying beliefs enmeshed in word embeddings is also critical because embeddings are widely used in downstream NLP models, which are themselves beginning to label people, for example, as job-worthy or not (De-Arteaga et al., 2019).", "Measuring beliefs about people using embeddings therefore serves the dual purpose of understanding human biases and of ensuring such biases are not propelled further along by algorithms.", "However, work remains to understand when embedding-based measures of beliefs about identities accurately reflect more traditional survey measures, and why some beliefs may be reflected more accurately than others.", "The present work combines new and existing survey data with an extensive set of embedding-based measurement strategies to explore this at both the dimension level and the belief level.", "At the dimension level, for example, we ask, how well do embeddings capture beliefs about gender, relative to race?", "And if differences exist, why?", "Such issues have arisen in existing work, for example, where Garg et al. 
(2018) see correlations of .65 between embedding-based and survey-based measures of beliefs about gender, but only .15 for ethnicity-based beliefs.", "At the belief level, we ask, for example, how much more accurately do we capture beliefs about the Potency (strength) of thugs, relative to beliefs about the Potency of children?", "Accuracy at this level is critical for linking historical trends in social behavior to societal-level beliefs about particular identities.", "We show that what we measure is more important than how we measure it in determining the correlation between embedding-based and survey-based measures of beliefs about people.", "At the dimension level, the beliefs we measure most accurately are also the most important for how we label others.", "At the belief level, assuming we can identify a good measurement model, embedding-based measures are significantly more accurate for more extreme, and more agreed upon, beliefs.", "All code and data necessary to replicate the analyses in this article can be found at https://github.", "Our work is grounded in literature on measuring beliefs about identities in social psychology in general and, more specifically, via word embeddings.", "We address these two literatures separately here.", "A common approach for measuring beliefs about specific identities is to assume a dimensional representation, that is, to assume a set of distinct dimensions of social meaning can be used to characterize how we think and feel about someone who holds a particular identity.", "From this dimensional perspective, two primary questions arise.", "First, what are the dimensions along which beliefs form?", "Social psychologists have identified three classes of important dimensions: traits, affective meanings, and semantic associations.", "Traits represent visible (although also socioculturally defined) characteristics like age, gender, and race (Freeman and Ambady, 2011).", "Affective dimensions of social meaning represent how we feel about a given person and/or identity (Todorov et al., 2015; Fiske et al., 2002; Heise, 2007).", "Here, we use the three affective dimensions proposed by Heise (2007), which are popular in sociology (Rogers et al., 2013): Evaluation (goodness/badness), Potency (strength/weakness), and Activity (active/passive).", "Finally, social psychologists often characterize beliefs about identities in terms of semantic associations to particular concepts (Freeman and Ambady, 2011) or institutions (MacKinnon and Heise, 2010).", "For example, people link the identities brother and sister together because they are both associated with the family institution.", "In the present work, we collect beliefs for seventeen different dimensions of social meaning, incorporating age, race, gender, evaluation, potency, activity, and six institutional associations.", "Second, given a theorized dimension of meaning, how should we measure society-wide beliefs about where particular identities lie on that dimension?", "Here, we adopt perhaps the most common approach, which uses semantic differential scales on surveys (Osgood et al., 1975).", "The semantic differential technique asks respondents to place an identity on a sliding scale with two opposing concepts (e.g. 
weak and strong, see the example in Figure 2A).", "Finally, it is worth noting that here, like in most social psychology research, we assume that responses from survey participants generalize to American culture writ large.", "This assumption is built on the well-established culture-as-consensus paradigm in psychological anthropology (Karabatsos and Batchelder, 2003; Batchelder and Romney, 1988), and empirical work showing that people tend to agree on the vast majority of their beliefs about people (Heise, 2007).", "Nonetheless, many counterexamples exist (Berger et al., 1992; Smith-Lovin and Douglas, 1992).", "We leave questions about how to address these issues to future work.", "Embedding-based approaches to measuring beliefs typically follow a three-step process of corpus/embedding selection, dimension selection, and word position measurement.", "Corpus/Embedding Selection Several recent works have argued that the corpus used can impact measures of beliefs about people derived from word embeddings (Lauscher and Glavas, 2019; Mirzaev et al., 2019; Sweeney and Najafian, 2019).", "For example, Brunet et al. (2019) show how to reduce gender bias in embeddings by removing particular documents from a corpus.", "However, several others have shown that in their analyses, the corpus used does not significantly impact results (Spirling and Rodriguez, 2019; Garg et al., 2018; Kozlowski et al., 2019; Caliskan et al., 2017).", "Differences in the embedding model used have also been observed to impact measurements (Chaloner and Maldonado, 2019).", "Again, though, robustness checks from other studies suggest a limited effect beyond the somewhat general hyperparameters of window size and the number of dimensions estimated (Garg et al., 2018; Kozlowski et al., 2019).", "Dimension Selection To measure beliefs, one first must select a dimension along which the belief is assumed to be held.", "Much of the literature has focused on dimensions related to gender or race.", "Others, however, have seen value in moving beyond these dimensions (Agarwal et al., 2019; Sweeney and Najafian, 2019).", "Most relevant is the work of Kozlowski et al. (2019), who study the association of 59 concepts across 20 different dimensions of sociocultural meaning, and that of An et al. (2018), who induce 732 different dimensions using WordNet to study contextual effects of linguistic meaning.", "While neither work focuses heavily on identities, these efforts complement our goal of studying a broad range of dimensions of social meaning.", "Scholars then identify a direction within the embedding that represents this dimension.", "To do so, an approach similar to the semantic differential idea is used.", "Terms are selected to represent the two ends of the dimension.", "For example, to identify the gender direction, words at one end might be he and him, and words at the other end, she and her.", "Scholarship varies on how these dimension-inducing word sets are selected.", "For example, several scholars have used demographically gendered and/or racialized names (Bolukbasi et al., 2016; Caliskan et al., 2017), while others have relied on careful extraction of concepts from dictionaries and thesauri (Kozlowski et al., 2019).", "Kozlowski et al. 
(2019) find that having more words at each end generally provides better measurements, and others have found a need to use frequently occurring terms (Ethayarajh et al., 2019; Brunet et al., 2019).", "Beyond these observations, however, scholars have generally found stable results as long as reasonable word sets are selected.", "Word Position Measurement Finally, the position of each identity along this direction must be identified.", "Doing so entails two major decisions.", "First, how should one quantify the direction, given the dimension-inducing words?", "For example, Bolukbasi et al. (2016) identify the direction by taking the first dimension of a PCA on the full set of direction words.", "Second, how should one define the position of points along this line?", "For example, several works use the cosine similarity between the identified bias direction and the embedding of each identity.", "Scholars have also recently proposed supervised methods for word position measurement (Sweeney and Najafian, 2019; Agarwal et al., 2019).", "Such approaches are important, but assume the existence of some training data, which may or may not be available in certain measurement contexts.", "We therefore do not explore these methods further in the present work.", "In sum, using embeddings to measure beliefs requires a series of decisions, the impacts of which are still debated.", "Below, we provide the most comprehensive study to date on the importance of these decisions on measurement quality.",
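As an illustration of these two decisions, here is a minimal sketch of one common combination, a PCA-style direction over dimension-inducing word pairs plus cosine similarity; `emb` (a word-to-vector map) and the example pair list are hypothetical placeholders, and this simplified variant is not an exact reproduction of any one published method.

```python
import numpy as np

def bias_direction(emb, word_pairs):
    # direction = first principal component of the differences between
    # words at the two ends of the dimension (a PCA-style quantification)
    diffs = np.stack([emb[a] - emb[b] for a, b in word_pairs])
    _, _, vt = np.linalg.svd(diffs - diffs.mean(axis=0))
    return vt[0]

def word_position(emb, word, direction):
    # position = cosine similarity between the word and the direction
    v = emb[word]
    return float(v @ direction / (np.linalg.norm(v) * np.linalg.norm(direction)))

# hypothetically:
# gender = bias_direction(emb, [("he", "she"), ("him", "her"), ("man", "woman")])
# word_position(emb, "nurse", gender)
```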
these questions.", "The fourth and fifth question used Likert scales to measure beliefs about age and gender, with ends representing young and old and Always male and Always female, respectively.", "The sixth question asked Of all [some identity, e.g., bullies], what percentage of them do you think are... and then provided one slider each for the following ethnic/racial categories drawn from the planned 2020 Census: White, Hispanic or Latino, Asian, Middle Eastern, and Black.", "The seventh question, modeled after the association-based measures from Hill et al. (2015), asked To what extent does thinking about [some identity, e.g., bullies] lead you to think about... and then provided a slider for the following institutional settings: family, politics, (criminal) justice, medicine, business, education, and religion.", "Each slider had qualitative labels ranging from Not at all, to Somewhat, to Immediate response.", "Dimension Identities Politics conservative, Democrat, liberal, Republican, politician, senator Family brother, sister, daughter, son, father, mother Law judge, criminal, lawyer, witness, cop, police officer Medicine doctor, physician, surgeon, nurse, patient, dentist Business executive, consultant, secretary, intern, banker, boss Gender woman, guy, girl, boy, man, lady Age teenager, kid, child, toddler, adult, minor Race & Ethnicity black, white, Hispanic, Asian, Arab, American NegativeEvalua-tion thug, idiot, jerk, goon, punk, bully Random principal, scientist, coach Table 1: The 57 identities we collect data on.", "We collect responses from 402 participants to a pair of identity labeling tasks.", "4 Note that these respondents are different than those who provided the belief measurements .", "Each participant answered a set of 40 hypothetical identity labeling questions.", "Questions could be either an IsA or a SeenWith question.", "An example of an IsA question is given in Figure 1B).", "SeenWith questions were formatted in the same way, except the question text instead says Who would you say is most likely to be seen with a [mother]?", "Questions varied on both the identity provided in the text and the identities serving as potential answers.", "From the 57 identities we study, we create survey questions roughly 5 as follows: for a given identity, we generate 14 random sets of the 56 other identities; each set contains four identities.", "We then generate one IsA and one SeenWith question for each of these sets, where these four identities constitute the possible answers to the question, and the given identity is used in the question text.", "This process is then repeated ten times for each identity.", "This process generates ten questions for each of the 3,192 identity pairs for each type of question.", "4 These identity labeling questions are similar to, but distinct from, those used in our prior work focused on the impact of semantic associations and semantic similarity on identity labeling decisions (Joseph and Carley, 2016).", "5 Due to a bug in Qualtrics, a small percentage of questions were not asked or asked more than once.", "See Appendix for details Variable Description i A social identity (e.g. doctor, author) d A dimension of meaning (e.g. 
"These identity labeling questions are similar to, but distinct from, those used in our prior work focused on the impact of semantic associations and semantic similarity on identity labeling decisions (Joseph and Carley, 2016).", "Due to a bug in Qualtrics, a small percentage of questions were not asked or asked more than once; see the Appendix for details.", "Table 2: Notation. i: a social identity (e.g. doctor, author); d: a dimension of meaning (e.g. gender); r: a survey respondent; S_{d,i,r}: the matrix of survey responses to semantic differential measures on a given dimension d for identity i by respondent r.", "To further substantiate our claims, we ensure our main results hold using three other datasets on beliefs about identities: beliefs about gender for 287 occupational identities from Bolukbasi et al. (2016), beliefs about 195 national and occupational identities on the Big Five Personality Traits from Agarwal et al. (2019), and beliefs about 654 identities on the Evaluation, Potency, and Activity dimensions by Smith-Lovin and Robinson (2015).", "Our primary research question is, how accurately can we recover beliefs measured using surveys with word-embedding based measures?", "We study this first at the dimension level, i.e., how accurately do embedding-based measures reflect survey data across a set of identities on a given dimension of social meaning?", "We then study accuracy at the belief level, i.e., how accurately do embedding-based measures reflect survey data for specific identities on specific dimensions?", "Our approach is straightforward, but is best explained by introducing some additional notation, provided in Table 2.", "Dimension-level analysis.", "At the dimension level, we consider first how different factors relating to the measurement itself impact accuracy.", "We then study why measurements are more accurate for some dimensions than others.", "We do so by connecting the degree of accuracy for a given dimension to how important that dimension is in how survey respondents select identities for others in our identity labeling task.", "As discussed above, the accuracy of embedding-based measurements may vary across properties of the dimension being measured, as well as the way in which the embedding-based measurement is constructed.", "We first study the relative effects of", "a) the dimension ( d ),", "b) the embedding model ( E ),", "c) the dimension-inducing wordset ( dw ), and", "d) the word position measurement model ( wp ) on the accuracy of embedding-based measurements.", "As is standard in the literature, we use the Pearson correlation between the mean survey response and the output of the embedding-based measure as our definition of accuracy.", "That is, for a given dimension d, survey dataset S, embedding-based measure m_{E,dw,wp}, and set of identities of size I, we compute the accuracy of the embedding-based measure as the Pearson correlation between $\{\bar{S}_{d,i_0,\cdot}, \bar{S}_{d,i_1,\cdot}, \ldots, \bar{S}_{d,i_I,\cdot}\}$ and $\{m_{E,dw,wp}(i_0), m_{E,dw,wp}(i_1), \ldots, m_{E,dw,wp}(i_I)\}$.", "We then run a linear regression to understand how accuracy varies across the factors considered.", "Our analysis involves all dimensions of social meaning studied in the four survey datasets described above.", "For embedding models, E, we consider twelve different publicly available corpus/embedding combinations from prior work.", "To construct dimension-inducing wordsets, dw, we use one of three approaches.", "The first is to use the same terms as were placed on the semantic differential scale on the survey (e.g. 
powerless, powerful, little, big for Potency, as in Figure 2a).", "In certain cases, we also include a survey-augmented condition that extends this wordset using a thesaurus, after discussion amongst authors.", "Third, where applicable, we use direction-inducing wordsets from prior work.", "Finally, we consider several of the major established approaches in the literature for word position measurement wp.", "We use the approaches from Kozlowski et al. (2019), Swinger et al. (2019), Ethayarajh et al. (2019), Bolukbasi et al. (2016), and Garg et al. (2018).",
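Whatever the combination of E, dw, and wp, the dimension-level accuracy computation itself is simple; a minimal sketch, where `survey_means` maps identities to mean survey responses and `measure` is any embedding-based scorer (e.g. the hypothetical `word_position` sketched earlier):

```python
import numpy as np
from scipy.stats import pearsonr

def dimension_accuracy(survey_means, measure, identities):
    # accuracy = Pearson correlation between mean survey responses and
    # embedding-based scores, across identities on one dimension
    s = np.array([survey_means[i] for i in identities])
    m = np.array([measure(i) for i in identities])
    r, _ = pearsonr(s, m)
    return r
```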
"As we will show, controlling for E, dw, and wp, there are large differences in accuracy across dimensions.", "To better understand these differences across dimensions, we compute two measurements.", "First, Kozlowski et al. (2019) show that the variance of the survey data on a dimension, that is, $\mathrm{Var}(\bar{S}_{d,i_0,\cdot}, \bar{S}_{d,i_1,\cdot}, \ldots, \bar{S}_{d,i_n,\cdot})$, is strongly correlated with the accuracy of embedding-based measures.", "However, they also note that high explained variance... reveals little about how these valences are deployed in social life (pg. 930).", "Here, we therefore compute a second measure that connects variance of the survey data on a given dimension to a significant social outcome, how strongly people rely on that dimension when labeling other people.", "To do so, we first construct a 57 x 17 matrix X of scaled-and-centered mean survey responses for each identity on each dimension in our survey data, i.e. $X_{i_0,d_0} = \bar{S}_{d_0,i_0,\cdot}$.", "We then construct an observation with a binary outcome that pairs the identity in the question with each possible answer.", "The outcome is 1 if the answer was selected, and 0 otherwise.", "For example, in Figure 2B), the pairings created would be (mother, adult), (mother, sister), (mother, son), and (mother, lady).", "If the respondent answered lady, then the outcomes would be 0, 0, 0, and 1, respectively.", "The 40.3% of questions where respondents answered all are equally unlikely were ignored.", "In total, we obtained 9,597 responses where the respondent did not answer All are equally (un)likely, split roughly evenly between SeenWith and IsA questions.", "We then train a logistic regression model for IsA and SeenWith questions separately, each with seventeen parameters.", "For a given observation, the parameters represent the absolute difference between each dimension, e.g. the first parameter is $|X_{i_q,d_0} - X_{i_a,d_0}|$, where i_q is mother in Figure 2B), i_a is, e.g., adult, and d_0 is, e.g., gender.", "In the Appendix, we provide full results for these regressions.", "Intuitively, larger negative coefficients for a given dimension indicate that the further away two identities are on that dimension, the less likely the respondent is to select them as a pair.", "For example, we find that Evaluation has a strong negative correlation for IsA questions, indicating that respondents typically do not expect two identities to be assigned to the same person if one identity is perceived to be for good people and the other for bad people.", "Positive coefficients imply disassortativity on the dimension.", "For example, for SeenWith questions, Potency has a positive coefficient, implying that we expect powerful identities to be seen with less powerful counterparts.", "The magnitude of these coefficients represents the importance given to that dimension by survey respondents.", "We use the maximum of the two coefficients across SeenWith and IsA questions as a measure of this importance.", "We are also interested in accuracy for specific beliefs.", "For example, how accurately do embedding-based measures reflect survey data on beliefs about the typical age of a boy?", "As an outcome for this belief-level analysis, we use a ranking task similar to prior work (Spirling and Rodriguez, 2019; Kozlowski et al., 2019).", "We describe this outcome by continuing with the example of beliefs about the age of boys.", "We first compute the set of identities N for which $\bar{S}_{age,boy,\cdot} - se(\bar{S}_{age,boy,\cdot}) > \bar{S}_{age,i,\cdot} + se(\bar{S}_{age,i,\cdot})$, where se is the standard error function.", "That is, N represents all identities we are reasonably confident respondents believed to be younger than boys.", "We then determine the subset of N, N_c, where boy is also ranked above those identities in the embedding measure.", "We do the same for identities survey respondents said were older than boys, adding these to N, and to N_c if they are correctly ranked in the embedding measure.", "Finally, we use $|N_c| / |N|$ to study accuracy at the belief level.", "We are interested both in overall levels of accuracy for belief-level measurements, as well as the factors that explain variation in accuracy.", "We consider four factors that might explain this variation (continuing with the age/boy example): $sd(S_{age,boy,\cdot})$, the distance of $\bar{S}_{age,boy,\cdot}$ to the median over all identities on that dimension, the logged frequency of the identity in a large corpus, and the number of synsets for the identity in WordNet.", "To study the impact of these different factors, we use a generalized additive model with a binomial link function where $|N_c| / |N|$ is the outcome and points are weighted by $|N|$.", "Finally, as opposed to considering results across all possible E, dw, and wp, we first select those settings that maximize the Pearson correlation for each dimension.",
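A minimal sketch of this ranking outcome; `mean`, `se`, and `emb_score` are per-identity values for one dimension (the survey mean, its standard error, and the embedding-based score), and the function name is ours.

```python
def ranking_accuracy(target, identities, mean, se, emb_score):
    # |N_c| / |N|: among identities the survey confidently orders against
    # `target`, the fraction the embedding measure ranks the same way
    n = n_correct = 0
    for i in identities:
        if i == target:
            continue
        if mean[target] - se[target] > mean[i] + se[i]:    # confidently above
            n += 1
            n_correct += emb_score[target] > emb_score[i]
        elif mean[target] + se[target] < mean[i] - se[i]:  # confidently below
            n += 1
            n_correct += emb_score[target] < emb_score[i]
    return (n_correct / n if n else float("nan")), n
```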
"Across all conditions and survey datasets, the Pearson correlation between the embedding and survey measures is 0.32 [.31,.33].", "However, considerable variation exists.", "Figure 3 presents results of a regression that attempts to explain the sources of this variance (x-axis) and the effects of each source (y-axis).", "Separate colors represent results from the four different survey datasets analyzed.", "In general, results are largely consistent across the different datasets, and thus we will not emphasize differences across datasets below.", "Figure 3 shows that the embedding model used can decrease correlation by as much as .35.", "As others have found, this effect decreases when one focuses only on 300-dimensional embeddings.", "It is worth noting, however, that no embedding model is universally best.", "For example, nine of the twelve embedding models studied are responsible for producing the highest observed correlation for at least one dimension.", "Selection of the dimension-inducing words, dw, also has a limited effect.", "The one exception is when survey-matched words are used for the Gender dimension, where correlations drop by, on average, around 0.5 relative to the he/she baseline.", "The fact that using the same words as the semantic differential scale is a terrible choice, but for only one of the seventeen dimensions studied, reflects the fact that selection of dw, like elements of other forms of quantitative social science, remains a mix of art and science (Sterling and Taveter, 2009).", "In contrast, even the most scientifically appealing approaches to word position measurement (Ethayarajh et al., 2019) provide marginal gains.", "The only consistent observation we draw is that approaches that normalize measurements across dimensions related to the same overarching concept (e.g. that normalize racialized beliefs across all perceived dimensions of race) perform slightly better.", "Results thus reflect that the details of measurement are less important than what is being measured.", "Reflecting this same fact, the strongest impacts on correlation between the survey and embedding-based measures come from which dimension is being studied.", "Some of these results reflect the salience of these dimensions in social life.", "Associations to institutions, which are most accurately measured on average, are a primary tool we use to sort people into groups (MacKinnon and Heise, 2010).", "And stronger correlations between the embedding and survey-based measures for Evaluation and Potency, relative to Activity, reflect the increased importance in affective perceptions of these two dimensions (Rogers et al., 2013).", "However, scholars largely agree that trait-based beliefs on gender and race serve as default characteristics (Ridgeway and Smith-Lovin, 1999) along which we almost automatically categorize others (Todorov et al., 2015).", "Given their shared salience, why is gender the only trait that can be accurately measured?", "Figure 4A) shows, as first identified by Kozlowski et al. (2019), that much of this is due to the variance of the survey data along that dimension; the correlation between variance and the coefficients in Figure 3 is 0.91.", "However, as discussed above, Kozlowski et al. (2019) study more general concepts on more general dimensions, and note that they have no easy way to connect their measures to how these meanings are deployed in social life.", "Figure 4: In both A) and B), the y-axis gives the coefficient value for the regression presented in Figure 3; in A), the x-axis represents the variance in survey means along the dimension (R = 0.91), and in B), the x-axis gives the dimension's maximum importance in the IsA or SeenWith labeling questions (R = 0.86).",
"In contrast, here, Figure 4B) shows a significant positive correlation between variance in the survey data along a dimension (and hence measurement accuracy) and that dimension's importance in explaining patterns of labeling in our identity labeling task.", "Embedding-based measures of beliefs about identities, we therefore show, are most likely to reflect traditional survey measures particularly when those beliefs are salient for identity labeling.", "Critically, then, results for biases in word embeddings are tied not only to the salience of dimensions in general social life, but also to the identities selected for measurement.", "Selecting only heavily racialized and non-gendered identities, for example, might well reverse the positions of racialized dimensions and gender in Figure 4. This makes it all the more critical to identify theoretically driven concepts (salience in labeling, and variance in measurement) that move beyond measures of specific identities on specific dimensions to help us understand what is measurable and what is not, particularly when survey data is not available.", "As with the dimension-level results, we find that embedding-based measures are generally accurate predictors of survey-based measures for specific beliefs.", "On average, 74.9% of the beliefs collected for this paper are correctly ranked, as are 82.1%, 72.0%, and 71.4% of the beliefs from Bolukbasi et al. (2016), Smith-Lovin and Robinson (2015), and Agarwal et al. (2019), respectively.", "One caveat to keep in mind, however, is that we focus only on the single best embedding measurement approach for each source/dimension combination.", "Regardless, as with the dimension-level results, there is considerable variance at the belief level.", "Some of this variance (approximately 32%; see the Appendix for full regression results) can be explained by the factors we consider.", "The strongest explanation we find for ranking accuracy, reflected in the left-hand plot in Figure 5, is the distance of the survey-based belief measure from the median on its dimension.", "At the extremes, ranking accuracy is almost perfect.", "Because extreme observations are also most likely to be low variance (for example, consider that beliefs at the most extreme values of a scale must have zero variance), a more general claim can be made: word embedding-based measures accurately capture our most extreme and agreed-upon beliefs about people, but show significant unexplained (at least by us) variance for more neutral and/or less-agreed upon beliefs.", "This variance is on display in the right-hand plot in Figure 5, which gives results for the blackness dimension.", "The embedding-based measure captures with perfect accuracy racialized perceptions of the identities thug and criminal, but not, e.g., liberal, which is similar along the other explanatory factors we consider here.", "As far as we are aware, it remains an open question as to why this is the case.", "In this paper, we asked, can we trust measures of beliefs about people derived from word embeddings?", "We find the answer to be yes, at least on average.", "Depending on one's perspective, this could be good or bad.", "From a cultural studies/social psychological perspective, this positive correlation further validates efforts to use word embeddings to study perceptions of people historically, at scale, and in context.", "On the other hand, from the bias perspective, this suggests that a vast array of social biases are encoded in embeddings.",
"However, we also find that some beliefs, specifically extreme beliefs on salient dimensions, are easier to measure than others.", "More generally, across four datasets, we find that what we measure is more important than how we measure it.", "Again, two different perspectives on this are needed.", "With respect to the study of culture and human stereotypes, we may be safest in studying only the most extreme results from embedding models, as has been done by, e.g., Spirling and Rodriguez (2019).", "From the bias perspective, given the rash of recent work on debiasing word embeddings, our results suggest that much more attention needs to be paid to how we are evaluating these approaches.", "Currently, upstream evaluations of debiasing are centered almost exclusively on occupational identities on gender, where some of the most salient social biases we know of exist (Ridgeway, 2011).", "Others have argued that removing these salient beliefs may not remove gender information from embeddings (Gonen and Goldberg, 2019).", "But Gonen and Goldberg's (2019) argument relies on a technical deficiency of existing approaches.", "We can make a similar critique by simply changing what is being measured.", "For example, the correlation between gender beliefs and the gender direction in the Hard-Debiased embeddings of Bolukbasi et al. (2016) is 0.05 (p = .84) using identities in their data, and 0.4 (p < .05) using the identities in our data.", "Similarly, removing gender bias does not remove bias on other dimensions.", "For example, while Sweeney and Najafian (2019) show that the NumberBatch embeddings harbor the least gender bias, we find that they are the only embedding to show consistently high correlations with age, leading to the potential for ageism downstream.", "More generally, stereotypes exist along a network of beliefs (Freeman and Ambady, 2011) reflecting unwarranted correlations between many dimensions (Ridgeway, 2011); we must therefore be careful not to expect that removing meaning along one dimension will expel social biases from our models.", "K.J. was supported by NSF IIS-1939579.", "This research was supported in part by a SUNY Germination Space Grant.", "We thank Lynn Smith-Lovin, Lisa Friedland, Tobias Schroeder, and Yuhao Du for comments on earlier versions of this work." ]
[ "abstain", "abstain", "objective", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "method", "method", "other", "method", "abstain", "method", "other", "other", "other", "other", "abstain", "abstain", "other", "other", "method", "method", "method", "other", "method", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "other", "method", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "method", "method", "abstain", "result", "method", "other", "other", "other" ]
[ "Question answering (QA) and question generation (QG) are closely related tasks that could improve each other; however, the connection of these two tasks is not well explored in literature.", "In this paper, we give a systematic study that seeks to leverage the connection to improve both QA and QG.", "We present a training algorithm that generalizes both Generative Adversarial Network (GAN) and Generative Domain-Adaptive Nets (GDAN) under the question answering scenario.", "The two key ideas are improving the QG model with QA through incorporating additional QA-specific signal as the loss function, and improving the QA model with QG through adding artificially generated training instances.", "We conduct experiments on both document based and knowledge based question answering tasks.", "We have two main findings.", "Firstly, the performance of a QG model (e.g in terms of BLEU score) could be easily improved by a QA model via policy gradient.", "Secondly, directly applying GAN that regards all the generated questions as negative instances could not improve the accuracy of the QA model.", "Learning when to regard generated questions as positive instances could bring performance boost.", "In this work, we consider the task of joint learning of question answering and question generation.", "Question answering (QA) and question generation (QG) are closely related natural language processing tasks.", "The goal of QA is to obtain an answer given a question.", "The goal of QG is almost reverse which is to generate a question from the answer.", "In this work, we consider answer selection (Yang et al., 2015; Balakrishnan et al., 2015) as the QA task, which assigns a numeric score to each candidate answer, and selects the top ranked one as the answer.", "We consider QG as a generation problem and exploit sequence-to-sequence learning (Seq2Seq) (Du et al., 2017; Zhou et al., 2017) as the backbone of the QG model.", "The key idea of this work is that QA and QG are two closely tasks and we seek to leverage the connection between these two tasks to improve both QA and QG.", "Our primary motivations are twofolds.", "On one hand , the Seq2Seq based QG model is trained by maximizing the literal similarity between the generated sentence and the ground truth sentence with maximum-likelihood estimation objective function (Du et al., 2017).", "However, there is no signal indicating whether or not the generated sentence could be correctly answered by the input.", "This problem could be precisely mitigated through incorporating QA-specific signal into the QG loss function.", "On the other hand , the capacity of a statistical model depends on the quality and the amount of the training data (Sun et al., 2017).", "In our scenario, the capacity of the QA model depends on the difference between the positive and negative patterns embodied in the training examples.", "A desirable training dataset should contain the question-answer pairs that are literally similar yet have different category labels, i.e. 
some question-answer pairs are correct and some are wrong.", "However, this kind of dataset is hard to obtain in most situations because of the lack of manual annotation efforts.", "From this perspective, the QA model could benefit from the QG model precisely by incorporating additional question-answer pairs whose questions are automatically generated by the QG model.", "An alternative way is to automatically generate answers for each question; solving the problem in this condition requires an answer generation model (He et al., 2017), which is outside the focus of this work, although our algorithm could also be adapted to this scenario.", "To achieve this goal, we present a training algorithm that improves the QA model and the QG model in a loop.", "The QA model improves QG by introducing an additional QA-specific loss function, the objective of which is to maximize the expectation of the QA scores of the generated question-answer pairs.", "A policy gradient method (Williams, 1992; Yu et al., 2017) is used to update the QG model.", "In turn, the QG model improves QA by incorporating additional training instances.", "Here the key problem is how to label the generated question-answer pair.", "The application of Generative Adversarial Networks (GAN) (Goodfellow et al., 2014; Wang et al., 2017) in this scenario regards every generated question-answer pair as a negative instance.", "On the contrary, Generative Domain-Adaptive Nets (GDAN) (Yang et al., 2017) regard every generated question-answer pair, appended with a special domain tag, as a positive instance.", "However, it is non-trivial to label the generated question-answer pairs because some of them are good paraphrases of the ground truth while others might be negative instances with similar utterances.", "To address this, we bring in a collaboration detector, which takes two question-answer pairs as the input and determines their relation as collaborative or competitive.", "The output of the collaboration detector is regarded as the label of the generated question-answer pair.", "We conduct experiments on both document-based (Yang et al., 2015) and knowledge (e.g., 
web table)-based question answering tasks (Balakrishnan et al., 2015).", "Results show that the performance of a QG model (e.g., in terms of BLEU score) can be consistently improved by a QA model via policy gradient.", "However, regarding all the generated questions as negative instances (competitive) does not improve the accuracy of the QA model.", "Learning when to regard generated questions as positive instances (collaborative) improves the accuracy of the QA model.", "Our work connects to existing works on question answering (QA), question generation (QG), and the use of generative adversarial nets in question answering and text generation.", "We consider two kinds of answer selection tasks in this work: one with a table as the answer (Balakrishnan et al., 2015) and another with a sentence as the answer (Yang et al., 2015).", "In the natural language processing community, there are also other types of QA tasks including knowledge-based QA (Berant et al., 2013), community-based QA (Qiu and Huang, 2015) and reading comprehension (Rajpurkar et al., 2016).", "We believe that our algorithm is generic and could also be applied to these tasks with dedicated QA and QG model architectures.", "Although the use of sophisticated features could yield a more accurate QA model, in this work we implement a simple yet effective neural network-based QA model, which can be conveniently learned jointly with the QG model through back-propagation.", "Question generation has drawn a lot of attention recently, partly influenced by the remarkable success of neural networks in natural language generation.", "There are several works on generating questions from different resources, including a sentence (Heilman, 2011), a topic (Chali and Hasan, 2015), a fact from a knowledge base (Serban et al., 2016), etc.", "In the computer vision community, there are also recent studies on generating questions from an image (Mostafazadeh et al., 2016).", "Our QG model belongs to sentence-based question generation.", "GAN has been successfully applied to computer vision tasks (Goodfellow et al., 2014).", "There are also some recent trials that adapt GAN to text generation (Yu et al., 2017), question answering (Wang et al., 2017), dialogue generation (Li et al., 2016), etc.", "The relationship between the discriminator and the generator in GAN is competitive.", "The key finding of this work is that directly applying the competitive idea of GAN does not improve the accuracy of a QA model.", "We contribute a generative collaborative network that learns when to collaborate and yields empirical improvements on two QA tasks.", "This work relates to recent studies which attempt to improve the performance of a discriminative QA model with generative models (Wang et al., 2017; Yang et al., 2017; Dong et al., 2017; Duan et al., 2017).", "These works regard QA as the primary task and use auxiliary tasks, such as question generation and question paraphrasing, to improve the primary task.", "This is one part of our goal; the other is to improve the QG model with the QA system and, further, to increasingly improve both tasks in a loop.", "In terms of assigning a category label to the generated question, Generative Adversarial Networks (GAN) (Goodfellow et al., 2014; Wang et al., 2017) and Generative Domain-Adaptive Nets (GDAN) (Yang et al., 2017) could be viewed as special cases of our algorithm.", "Our algorithm learns when to assign positive or negative labels, while GAN always assigns negative labels and GDAN always assigns 
positive labels.", "Besides, our work differs from (Wang et al., 2017) in that our question generation model is a generative model while theirs is actually a discriminative matching model.", "The approach of (Dong et al., 2017) learns to generate question from question via paraphrasing, and use the generated questions in the inference process.", "In this work, the QA model and the QG model are applied separately in the inference process.", "This inspires us to jointly conduct QA and QG in the inference process, which we leave as a future work.", "In this section, we first formulate the task of QA and QG, and then present our algorithm that jointly trains the QA and QG models.", "This work involves two tasks: question answering (QA) and question generation (QG).", "There are different kinds of QA tasks in the natural language processing area.", "To verify the scalability of our algorithm, we consider two types of answer selection tasks, both of which are fundamental QA tasks in research community and of great importance in industrial applications including web search and chatbot.", "Both tasks take a question q and a list of candidate answers A = { a 1 , a 2 , ..., a n } as input, and outputs an answer a i which has the largest probability to correctly answer the question.", "The only difference is that the answer in the task of answer sentence selection (Yang et al., 2015) is a natural language sentence, while the answer in table search (Balakr-ishnan et al., 2015) is a structured table consisting of caption, attributes and cells.", "Our QA model is abbreviated as P qa ( a, q ; qa ) , whose output is the probability that q and a being a correct question-answer pair, and the parameter is denoted as qa .", "The task of QG takes an answer a which is a natural language sentence or a structured table, and outputs a natural language question q which could be answered by a .", "Inspired by the remarkable progress of sequence-to-sequence (Seq2Seq) learning in natural language generation, we deal with QG in an end-to-end fashion and develop a generative model based on Seq2Seq learning.", "Our QG model is abbreviated as P qg ( q | a ; qg ) , whose output is the probability of generating q from a and the parameter is denoted as qg .", "We describe the joint learning algorithm in this part.", "The end goal is to leverage the connection of QA and QG to improve the performances on both QA and QG tasks.", "A brief illustration of the training progress is given in Figure 1 , which includes a QA model, a QG model and a collaboration detector (CD).", "A formal description of the algorithm is given in Algorithm 1. 
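Before the formal description, the following is a compact, illustrative sketch of one iteration of the joint loop, not the authors' implementation; the interfaces qa(a, q), qg.generate(a, k), qg.logprob(q, a) and cd(q, q2) are assumed wrappers around the three models, and the hyperparameter names mirror Algorithm 1 below.

from math import log

def joint_step(qa, qg, cd, pos_batch, neg_batch, k=5,
               lambda_qa=0.1, lambda_qg=0.1, b_qa=0.5, b_qg=0.5):
    # Returns losses to minimize; minimizing them maximizes objectives (1)-(2).
    qa_loss, qg_loss = 0.0, 0.0
    for (q_p, a_p), (q_n, a_n) in zip(pos_batch, neg_batch):
        beam = qg.generate(a_p, k)      # generated questions q^g_1 .. q^g_k
        q_g0 = beam[0]                  # top-ranked beam result
        # Supervised QA objective on gold positive and negative pairs.
        qa_loss -= log(qa(a_p, q_p)) + log(1.0 - qa(a_n, q_n))
        # Auxiliary QA objective: label the generated pair by the CD output.
        if cd(q_p, q_g0) > b_qa:        # collaborative -> positive label
            qa_loss -= lambda_qa * log(qa(a_p, q_g0))
        else:                           # competitive -> negative label
            qa_loss -= lambda_qa * log(1.0 - qa(a_p, q_g0))
        # Supervised QG objective plus policy-gradient term with baseline b_qg.
        qg_loss -= qg.logprob(q_p, a_p)
        for q_g in beam:
            reward = qa(a_p, q_g) - b_qg    # QA score as reward
            qg_loss -= lambda_qg * reward * qg.logprob(q_g, a_p)
    return qa_loss, qg_loss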
We can see that the QA model and the QG model both have two training objectives.", "One part is the standard supervised learning objective based on task-specific supervision.", "The other part of the objective is obtained by leveraging auxiliary tasks.", "The supervised objective of the QA model is to maximize the probability of the correct category label, and the supervised objective of the QG model is to maximize the probability of the correct question sequence.", "Since the goal of QA is to predict whether a question-answer pair is correct or not, training the QA model requires negative QA pairs whose labels are zero.", "But these negative QA pairs are not used for training the QG model, as the goal of QG is to generate the correct question.", "The main contribution of this work is to explore effective learning objectives that leverage auxiliary tasks.", "In order to improve the QA model, we generate additional training instances, each of which is composed of a question, an answer and a category label.", "In this work, we clamp the answer part and feed the answer to the QG model to generate the question.", "Algorithm 1: Generative Collaborative Network for QA and QG. 1: Input: training data $D$; the batch size for QG training $m$; the beam size for QG inference $k$; hyperparameters $\lambda_{qa}$ and $\lambda_{qg}$; hyperparameters $b_{qa}$ and $b_{qg}$; pretrained collaboration detector $P_{cd}(q, q')$. 2: Output: QA model $P_{qa}(a, q)$ parameterized by $\theta_{qa}$; QG model $P_{qg}(q \mid a)$ parameterized by $\theta_{qg}$. 3: pretrain $P_{qa}(a, q)$ and $P_{qg}(q \mid a)$ separately on $D$. 4: repeat 5: get a minibatch of positive QA pairs $PD = \{q_i^p, a_i^p\} \subseteq D$, $1 \le i \le m$, in which $a_i^p$ is the answer of $q_i^p$. 6: get a minibatch of negative QA pairs $ND = \{q_i^n, a_i^n\} \subseteq D$, $1 \le i \le m$, in which $a_i^n$ is not the answer of $q_i^n$. 7: apply $P_{qg}(q \mid a)$ on $PD$, resulting in a list of question-answer beams $GD = \{q_{ij}^g, a_i^g\}$, $1 \le i \le m$, $1 \le j \le k$. 8: apply $P_{qa}(a, q)$ on $GD$ to assign a QA-specific score to each generated instance. 9: choose the top-ranked result from each beam in $GD$, and then apply $P_{cd}(q, q')$ on the selected instance. 10: update $\theta_{qa}$ by maximizing the following objective: $\sum_{i=1}^{m} \big( \log P_{qa}(a_i^p, q_i^p) + \log(1 - P_{qa}(a_i^n, q_i^n)) \big) + \lambda_{qa} \sum_{i=1}^{m} \mathbb{I}_{b_{qa}}[P_{cd}(q_i^p, q_i^{g0})] \log P_{qa}(a_i^p, q_i^{g0}) + \lambda_{qa} \sum_{i=1}^{m} \big(1 - \mathbb{I}_{b_{qa}}[P_{cd}(q_i^p, q_i^{g0})]\big) \log\big(1 - P_{qa}(a_i^p, q_i^{g0})\big)$ (1). 11: update $\theta_{qg}$ by maximizing the following objective: $\sum_{i=1}^{m} \log P_{qg}(q_i^p \mid a_i^p) + \lambda_{qg} \sum_{i=1}^{m} \sum_{j=1}^{k} P_{qa}(a_i^p, q_{ij}^g) \log P_{qg}(q_{ij}^g \mid a_i^p)$ (2). 12: until models converge.", "We use beam search and select the top-ranked result as the question.", "Here the issue is how to infer the label of the generated instance.", "We believe that heuristically assigning the label as 0 or 1 is problematic.", "For instance, let us suppose the answer sentence is Microsoft was founded by Paul Allen and Bill Gates on April 4, 1975. 
, and the ground truth question is who founded Microsoft .", "In this case, the generated question who is the founder of Microsoft is a good one, yet who is the founder of Google and how old is Bill Gates are both bad cases.", "To address this, we introduce an additional collaboration detector (CD) to infer the label of the generated instance.", "Intuitively, the CD acts as a discriminative paraphrase model, which measures the semantic similarity between the ground truth question and the generated question.", "In equation (1), $\mathbb{I}_{b_{qa}}(x)$ is an indicator function whose value is 1 if the value of $x$ is larger than a threshold $b_{qa}$, such as 0.5 or 0.3.", "The hyperparameter $\lambda_{qa}$ controls the weight of the auxiliary objective to the QA model.", "In turn, the QA model is used to assign a QA-specific score $P_{qa}(a, q')$ to each generated question $q'$.", "We follow the recent reinforcement learning-based approach for dialogue prediction (Li et al., 2016), and define simple heuristic rewards that characterize good questions.", "We also implemented using all the beam search results or sampling one result from the beam; however, these tricks do not bring a performance boost.", "The goodness of the generated question is measured by the prediction of the QA model.", "Similar to the strategy adopted by Zaremba and Sutskever (2015), we use a baseline strategy $b_{qg}$ (e.g., 0.5) to decrease the learning variance.", "The expected reward (Williams, 1992; Yu et al., 2017) for a generated question is given in Equation (2).", "In this way, the parameters of the QG model can be conveniently updated with stochastic gradient descent.", "We pretrain the QA model and the QG model before the joint learning process.", "The main reason is that a randomized QA model will provide unreliable rewards to the QG model, and a randomized QG model will generate bad questions.", "Our algorithm includes a question answering (QA) model, a question generation (QG) model and a collaboration detector (CD) model.", "We implement these models with dedicated neural networks.", "As we have mentioned before, our training algorithm is applied to both document-based and web table-based question answering tasks.", "In this section, we take the table-based QA and QG tasks to describe the neural architecture of each module.", "A question/query $q$ is a natural language expression consisting of a list of words.", "A table $t$ has a fixed schema including one or more headers, one or more cells, and a caption.", "A header indicates the property of a column, and a cell is a unit where a row and a column intersect.", "The caption is typically an explanatory text about the table.", "We develop a neural network to match a natural language question/query to a structured table.", "Since a table has multiple aspects including headers, cells and the caption, the model is developed to capture the semantic relevance between a query and a table at different levels.", "As the meaning of a query is sensitive to the word order, i.e. 
the intentions of list of flights london to berlin and list of flights berlin to london are totally different, we represent a query with a sequential model.", "In this work, a recurrent neural network (RNN) is used to map a query of variable length to a fixed-length vector.", "We use the gated recurrent unit (GRU) (Cho et al., 2014) as the basic computation unit, which adaptively forgets the history and remembers the input: $z_i = \sigma(W_z e_{q_i} + U_z h_{i-1})$ (3), $r_i = \sigma(W_r e_{q_i} + U_r h_{i-1})$ (4), $\tilde{h}_i = \tanh(W_h e_{q_i} + U_h (r_i \odot h_{i-1}))$ (5), $h_i = z_i \odot \tilde{h}_i + (1 - z_i) \odot h_{i-1}$ (6), where $z_i$ and $r_i$ are the update and reset gates of the GRU.", "We use a bi-directional RNN to get the meaning of a query from the forward and backward directions, and concatenate the two last hidden states as the query vector.", "An important property of a table is that exchanging two rows or two columns does not change its meaning.", "To satisfy this condition, we develop an attention-based approach, in which the headers and cells are regarded as the external memory.", "Each header/cell is represented as a vector.", "Given a query vector, the model calculates the weight of each memory unit and then outputs a query-specific header/cell representation through a weighted average (Bahdanau et al., 2015; Sukhbaatar et al., 2015).", "This process can be repeatedly executed several times, so that more abstract evidence can be retrieved and composed to support the final decision.", "Similar techniques have been successfully applied in table-based question answering (Yin et al., 2015b; Neelakantan et al., 2015).", "We represent the table caption with an RNN, the same strategy we have adopted to represent the query.", "Element-wise multiplication is used to compose the query vector and the caption vector.", "Furthermore, since the number of co-occurred words directly reflects the relatedness between the question and the answer, we incorporate the embedding of the co-occurred word count as an additional feature.", "Finally, we feed the concatenation of all the vectors to a softmax layer whose output length is 2. 
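For clarity, a direct NumPy transcription of equations (3)-(6) follows; parameter shapes and the embedding lookup for $e_{q_i}$ are left abstract, and this is a sketch rather than the authors' code.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(e_q, h_prev, Wz, Uz, Wr, Ur, Wh, Uh):
    z = sigmoid(Wz @ e_q + Uz @ h_prev)              # update gate, eq. (3)
    r = sigmoid(Wr @ e_q + Ur @ h_prev)              # reset gate, eq. (4)
    h_tilde = np.tanh(Wh @ e_q + Uh @ (r * h_prev))  # candidate state, eq. (5)
    return z * h_tilde + (1.0 - z) * h_prev          # new hidden state, eq. (6)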
We have implemented a ranking-based loss function $\ell_{qa} = \max(0, 1 - P_{qa}(a, q) + P_{qa}(a', q))$ and a negative log-likelihood (NLL)-based loss function $\ell_{qa} = -\log(P_{qa}(a, q))$.", "We find that NLL works better and use it in the following experiments.", "Inspired by the notable progress that sequence-to-sequence learning (Seq2Seq) (Sutskever et al., 2014) has made in natural language generation, we implement a table-to-sequence (Table2Seq) approach that generates a natural language question from a structured table.", "Table2Seq is composed of an encoder and a decoder.", "The encoder maps the caption, headers, and cells into continuous vectors, which are fed to the decoder to generate a question in a sequential way.", "Similar to the approach we have adopted in the QA model, we represent the caption with a bidirectional GRU-based RNN.", "The vector of each word in the caption is the concatenation of the hidden states from both directions.", "The vectors of headers and cells are regarded as additional hidden states of the encoder.", "The representation of each cell is also mixed with the corresponding header representation.", "The initial vector of the decoder is the average of the caption vector, header vector, and cell vector.", "The backbone of the decoder is an attention-based GRU RNN, which generates a word at each time step and repeats the process until generating the end-of-sentence symbol.", "We made two modifications to adapt the decoder to the table structure.", "The first modification is that the attention model is calculated over the headers, cells and caption of a table.", "Ideally, the decoder should learn to focus on a region of the table when generating a word.", "The second modification is a table-based copying mechanism.", "It has been proven that the copying mechanism (Gulcehre et al., 2016; Gu et al., 2016) is an effective way to replicate low-frequency words from the source to the target sequence in sequence-to-sequence learning.", "In the decoding process, a word is generated either from the target vocabulary via a standard softmax or from the table via the copy mechanism.", "A neural gate $g_t$ is used to trade off between generating from the target vocabulary and copying from the table.", "The probability of generating a word $y$ is calculated as $p_t(y) = g_t \, \alpha_t(y) + (1 - g_t) \, \beta_t(y)$, where $\alpha_t(y)$ is the attention probability of the word $y$ from the table at time step $t$ and $\beta_t(y)$ is the probability of predicting the word $y$ from the softmax at time step $t$.", "Since every component of Table2Seq is differentiable, the parameters can be optimized in an end-to-end fashion with back-propagation.", "Given a question-answer pair $(x, y)$, the supervised training objective is to maximize the probability of the question word at each time step.", "In the inference process, beam search is used to generate the top-$k$ confident results, where $k$ is the beam size.", "The goal of the collaboration detector is to determine the label of the instance generated by the QG model.", "A positive prediction, namely a predicted value equal to 1, stands for a collaborative relationship between the generated instance and the ground truth, while a negative prediction stands for a competitive relationship.", "We consider this task as predicting the category of two given question-answer pairs, one of which is the ground truth, and the other is the generated question-answer pair.", "Since the answer part is the same, we simplify the problem as classifying two questions as related or not, which is a binary 
classification problem.", "The neural architecture of the collaboration detector (CD) is exactly the same as the caption component in the QA model.", "We represent two questions with bidirectional RNN, and use element-wise multiplication to do the composition.", "The result is further concatenated with a co-occurred word count embedding, followed by a softmax layer.", "The model is trained by minimizing the negative log-likelihood label, which is provided in the training data.", "The training data of the CD model includes two parts.", "The first part is from Quora dataset 3 , which is built for detecting if pairs of question text are semantically equivalent.", "The Quora dataset has 345,989 positive question pairs and 255,027 negative pairs.", "We further obtain the second part of the training data from the web queries, which are more similar to the web queries in our two QA task.", "We obtain the query dataset from query logs through clustering the web queries that click the same web page.", "In this way, we obtain 6,118,023 positive query pairs.", "We use a heuristic rule to generate the negative instances for the query dataset.", "For each pair of query text, we clamp the first query and retrieve a query that is mostly similar to the second query.", "To improve the efficiency of this process, we randomly sample 10,000 queries and define the similarity as the number of co-occurred words in two questions.", "In this way we collect another 6,118,023 negative pairs of query text.", "We initialize the values of word embeddings with 300 d Glove vectors 4 , which is learned on Wikipedia texts.", "We use a held-out data consisting of 20K query pairs to check the performance of the CD model.", "The accuracy of the CD model on the held-out dataset is 83%.", "In the joint training process, we clamp the parameters of the CD model and use its outputs to facilitate the learning of the QA model.", "We conduct experiments on table-based QA and document-based QA tasks.", "We will describe experimental settings and report results on these two tasks in this section.", "Setting We take table retrieval (Balakrishnan et al., 2015) as the table-based QA task.", "Given a query and a collection of candidate table answers, the task aims to return a table that is most relevant to the query.", "Figure 2 gives an example of this task, in which a query matches to different aspects of a table.", "We regard document-based QA tasks as a special case of the table-based QA task, in which the cells and the headers are both empty.", "We conduct experiments on the web data.", "The queries come from real-world user queries which we obtain from the search log of a commercial 3 https://data.quora.com/ First-Quora-Dataset-Release-Question-Pairs 4 https://nlp.stanford.edu/projects/ glove/ 1569 star trek rts Query Star Trek Games for the PC Cells Headers Caption Star Trek: Armada II RTS 2001 65 Star Trek: Away Team RTS 2001 64 Star Trek: Deep Space Nine: Dominion Wars RTS 2001 64 Star Trek: New Worlds RTS 2000 52 Game Genre Year Metascore Figure 2: An example illustrating the table-based QA task.", "search engine.", "We filter them down to only those that are directly answered by a table.", "In this way, we collect 1.49M query-table pairs.", "An example of the data is given in Figure 2. 
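Returning briefly to the negative-mining heuristic used for the CD query data above, a minimal sketch follows; the function names and the pool-sampling details are illustrative assumptions rather than the exact production pipeline.

import random

def word_overlap(a, b):
    return len(set(a.lower().split()) & set(b.lower().split()))

def mine_negative(q1, q2, all_queries, pool_size=10000):
    # For a positive query pair (q1, q2), sample a pool of queries and pick the
    # one most similar to q2 by word overlap, yielding a hard negative (q1, near_miss).
    pool = random.sample(all_queries, min(pool_size, len(all_queries)))
    near_miss = max((q for q in pool if q != q2),
                    key=lambda q: word_overlap(q, q2))
    return (q1, near_miss)  # labeled 0 (competitive) in the CD training data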
We randomly select 1.29M as the training set, 0.1M as the dev set and 0.1M as the test set.", "We evaluate the performance on table-based QA with Mean Average Precision (MAP) and Precision@1 (P@1) (Manning et al., 2008).", "We use the same candidate retrieval method adopted in Yan et al. (2017), namely representing a table as a bag of words, to guarantee the efficiency of the approach.", "Each query has 50 candidate tables on average.", "It is still an open problem to automatically evaluate the performance of a natural language generation system (Lowe et al., 2017).", "In this work, we use the BLEU-4 score (Papineni et al., 2002) as the evaluation metric, which measures the overlap between the generated question and the reference question.", "The hyperparameters are tuned on the validation set and the performance is reported on the test set.", "Results and Analysis. We report the results and our analysis on table-based QA and QG respectively in this part.", "We first report the results of single systems on table-based QA.", "We compare to four single systems implemented by Yan et al. (2017).", "In BM25, each table is represented as a flattened vector, and the similarity between a query and a table is calculated with the BM25 algorithm.", "WordCnt uses the number of co-occurred words in the query-caption pair, query-header pair, and query-cell pair, respectively.", "MT-based PP is a phrase-level feature.", "The features come from a phrase table which is extracted from a bilingual corpus via a statistical machine translation approach (Koehn et al., 2003).", "LambdaMART (Burges, 2010) is used to train the ranker.", "CNN uses a convolutional neural network to measure the similarity between the query and the table caption, table headers, and table cells, respectively.", "TQNN is the table-based QA model implemented in this work, which is regarded as the baseline for the joint learning algorithm.", "Results of single systems are given in Table 1. 
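For reference, the two retrieval metrics reported here can be computed as in the following minimal sketch, which assumes each query's candidates are given as a 0/1 relevance list in model-ranked order; this is a generic implementation, not the authors' evaluation script.

def average_precision(ranked):
    # ranked: 0/1 relevance labels in ranked order for one query.
    hits, total = 0, 0.0
    for i, rel in enumerate(ranked, start=1):
        if rel:
            hits += 1
            total += hits / i
    # All relevant candidates appear in the ranking, so dividing by `hits`
    # equals dividing by the number of relevant items.
    return total / max(hits, 1)

def evaluate(all_ranked):
    ap = [average_precision(r) for r in all_ranked]
    p_at_1 = [r[0] for r in all_ranked]
    return sum(ap) / len(ap), sum(p_at_1) / len(p_at_1)  # (MAP, P@1)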
We can see that BM25 is a simple yet very effective baseline method.", "Our basic model performs better than all the single models in terms of MAP.", "We also implement four different joint learning settings.", "In these settings, the QA model and the QG model are all pretrained, and the same method (policy gradient) is used to improve the QG model via the QA predictions.", "The only difference is how the QA model benefits from the QG model.", "As we use external resources to train the CD model, we also implement Seq2SeqPara for comparison.", "We train a question generator with a Seq2Seq model on the CD training data, and regard the generated questions as positive instances.", "Our generative collaborative network is abbreviated as GCN.", "GCN (competitive) is analogous to Goodfellow et al. (2014), where all the generated questions are regarded as negative instances (with the label as zero).", "On the contrary, GCN (collaborative) is analogous to Yang et al. (2017), where the generated questions are regarded as positive instances.", "Our main observation from Table 1 is that simply regarding all the generated questions as negative instances (competitive) does not bring a performance boost.", "On the contrary, regarding the generated questions as positive ones (collaborative) improves the QA model.", "Our algorithm (GCN) significantly improves the TQNN model.", "Based on these results, we believe that the relationship between the QA model and the QG model should not always be competitive.", "Learning when to collaborate by leveraging a CD model is a practical way to improve the performance on question answering.", "As described in Equation (1), the influence of the CD model on the QA model also depends on the value of the hyperparameter $b_{qa}$.", "A small value of $b_{qa}$ stands for a preference for collaborative, while a large value of $b_{qa}$ represents a preference for competitive.", "Results are given in Figure 3. The GCN model performs better when $b_{qa}$ is in the range $[0.3, 0.5]$, in which the model prefers collaborative.", "We conduct an additional experiment to test whether our algorithm could improve an existing system.", "We take BM25 as the baseline, and incorporate one of the five joint models as an additional feature.", "LambdaMART is used to train the combined ranker.", "Results are given in Table 2. We can see that the baseline system can be dramatically improved by our system, although the improvements of the different approaches are on par.", "Here we show the performances of the different approaches on table-based QG.", "Results in terms of BLEU-4 are given in Table 3. Different from the trends on QA, competitive performs better than collaborative on QG.", "This is reasonable because, as the joint training progresses, the QA model in collaborative keeps telling the QG model that the generated instances are good enough.", "On the contrary, the competitive model is more critical, telling the QG model how wrong the generated questions are.", "In this way, the QG model can be increasingly improved by the QA signal.", "The QG model is easier to improve than the QA model.", "Our GCN approach obtains a significant improvement over the baseline model on this task.", "We also report the learning curve of the GCN model as the joint training progresses.", "The performance on the dev set is given in Figure 4. 
[Figure 4: The learning curve of GCN on the dev data. x-axis: number of training batches (0-80,000); left y-axis: QA performance (MAP, 0.450-0.466); right y-axis: QG performance (BLEU, 16.4-17.6).]", "To test the scalability of the algorithm, we also apply it to document-based QA and QG tasks.", "The QA task is answer sentence selection (Yang et al., 2015).", "[Table 4: The performance on the document-based QA task (p-value < 0.05 with t-test between DQNN and GCN). Method / MAP / P@1: WordCnt 0.395 / 0.179; CDSSM (Shen et al., 2014) 0.442 / 0.228; ABCNN (Yin et al., 2015a) 0.469 / 0.263; DSL (Tang et al., 2017) 0.484 / 0.275; DQNN (baseline) 0.471 / 0.263; Seq2SeqPara 0.470 / 0.260; GCN (competitive) 0.468 / 0.257; GCN (collaborative) 0.476 / 0.272; GCN (final) 0.492 / 0.282.]", "Given a question and a list of candidate answer sentences from a document, the goal is to find the most relevant answer sentence as the answer.", "Since the WikiQA dataset (Yang et al., 2015) is too small to learn a powerful question generator, we use the MARCO dataset (Nguyen et al., 2016), which is originally designed for reading comprehension yet also has manually annotated labels for sentence/passage selection.", "A characteristic of the MARCO dataset is that the ground truth of the test set is invisible to the public.", "Therefore, we follow Tang et al. (2017) and split the original validation set into the dev set and the test set.", "The results on QA and QG are given in Table 4 and Table 5. We can see that the results are almost consistent with the results on the table-based QA and QG tasks.", "Our GCN algorithm achieves promising performance compared to strong baseline methods.", "We present an algorithm dubbed generative collaborative network for jointly training the question answering (QA) model and the question generation (QG) model.", "Different from a standard GAN, the relationship between the QA model (discriminator) and the QG model (generator) in our algorithm is not always competitive.", "We show that collaborative performs better than competitive in terms of QA accuracy, and our algorithm that learns when to collaborate obtains further improvements on both QA and QG tasks.", "This work could be further improved in several directions.", "Our current algorithm focuses on the joint training of the QA and QG models, while the inferences of these two models are independent.", "How to conduct joint inference is an interesting direction for future work.", "Besides, the samples are currently generated from the QG model via beam search.", "Improving the diversity of the samples requires different sampling mechanisms.", "Another potential direction is to jointly learn the collaboration detector together with the QA and QG models." ]
[ "abstain", "result", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "other", "method", "method", "other", "other", "other", "abstain", "other", "other", "other", "abstain", "objective", "abstain", "other", "objective", "method", "abstain", "abstain", "other", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "method", "abstain", "result", "result", "method", "abstain", "abstain", "abstain", "abstain" ]
[ "Many Question-Answering (QA) datasets contain unanswerable questions, but their treatment in QA systems remains primitive.", "Our analysis of the Natural Questions (Kwiatkowski et al., 2019) dataset reveals that a substantial portion of unanswerable questions ( 21%) can be explained based on the presence of unverifiable presuppositions .", "Through a user preference study, we demonstrate that the oracle behavior of our proposed systemwhich provides responses based on presupposition failureis preferred over the oracle behavior of existing QA systems.", "Then, we present a novel framework for implementing such a system in three steps: presupposition generation, presupposition verification, and explanation generation, reporting progress on each.", "Finally, we show that a simple modification of adding presuppositions and their verifiability to the input of a competitive end-to-end QA system yields modest gains in QA performance and unanswerability detection, demonstrating the promise of our approach.", "Many Question-Answering (QA) datasets including Natural Questions (NQ) (Kwiatkowski et al., 2019) and SQuAD 2.0 (Rajpurkar et al., 2018) contain questions that are unanswerable .", "While unanswerable questions constitute a large part of existing QA datasets (e.g., 51% of NQ, 36% of SQuAD 2.0), their treatment remains primitive.", "That is, (closed-book) QA systems label these questions as Unanswerable without detailing why, as in (1): (1)", "Corresponding authors, Work done at Google", "Unanswerable Q: Who is the current monarch of France?", "System: Unanswerable.", "Unanswerability in QA arises due to a multitude of reasons including retrieval failure and malformed questions (Kwiatkowski et al., 2019).", "We focus on a subset of unanswerable questionsnamely, questions containing failed presuppositions (back-ground assumptions that need to be satisfied).", "Questions containing failed presuppositions do not receive satisfactory treatment in current QA.", "Under a setup that allows for Unanswerable as an answer (as in several closed-book QA systems; Figure 1, left), the best case scenario is that the system correctly identifies that a question is unanswerable and gives a generic, unsatisfactory response as in (1-b).", "Under a setup that does not allow for Unanswerable (e.g., open-domain QA), a system's attempt to answer these questions results in an inaccurate accommodation of false presuppositions.", "For example, Google answers the question Which linguist invented the lightbulb?", "with Thomas Edison , and Bing answers the question When did Marie Curie discover Uranium?", "with 1896 (retrieved Jan 2021).", "These answers are clearly inappropriate, because answering these questions with any name or year endorses the false presuppositions Some linguist invented the lightbulb and Marie Curie discovered Uranium .", "Failures of this kind are extremely noticeable and have recently been highlighted by social media (Munroe, 2020), showing an outsized importance regardless of their effect on benchmark metrics.", "We propose a system that takes presuppositions into consideration through the following steps (Fig-ure 1, right):", "2. Presupposition verification: Some linguist invented the lightbulb.", "Not verifiable", "3. 
Explanation generation: (Some linguist invented the lightbulb, Not verifiable) → This question is unanswerable because there is insufficient evidence that any linguist invented the lightbulb.", "Our contribution can be summarized as follows: We identify a subset of unanswerable questions, namely questions with failed presuppositions, that are not handled well by existing QA systems, and quantify their role in naturally occurring questions through an analysis of the NQ dataset (§2, §3).", "We outline how a better QA system could handle questions with failed presuppositions, and validate that the oracle behavior of this proposed system is more satisfactory to users than the oracle behavior of existing systems through a user preference study (§4).", "We propose a novel framework for handling presuppositions in QA, breaking down the problem into three parts (see steps above), and evaluate progress on each (§5).", "We then integrate these steps end-to-end into a competitive QA model and achieve modest gains (§6).", "Presuppositions are implicit assumptions of utterances that interlocutors take for granted.", "For example, if I uttered the sentence I love my hedgehog, it is assumed that I, the speaker, do in fact own a hedgehog.", "If I do not own one (hence the presupposition fails), uttering this sentence would be inappropriate.", "Questions may also be inappropriate in the same way when they contain failed presuppositions, as in the question Which linguist invented the lightbulb?", "Presuppositions are often associated with specific words or syntactic constructions ('triggers').", "We compiled an initial list of presupposition triggers based on Levinson (1983: 181-184) and Van der Sandt (1992), and selected the following triggers based on their frequency in NQ ('>>' means 'presupposes'):", "Question words (what, where, who...): Who did Jane talk to?", ">> Jane talked to someone.", "Definite article (the): I saw the cat >> There exists some contextually salient, unique cat.", "Factive verbs (discover, find out, prove...): I found out that Emma lied.", ">> Emma lied.", "Possessive 's: She likes Fred's sister.", ">> Fred has a sister.", "Temporal adjuncts (when, during, while... 
): I was walking when the murderer escaped from prison.", ">> The murderer escaped from prison.", "Counterfactuals (if + past): I would have been happier if I had a dog.", ">> I don't have a dog.", "Our work focuses on presuppositions of questions.", "We assume presuppositions project from wh-questions; that is, presuppositions (other than the presupposition introduced by the interrogative form) remain constant under wh-questions as they do under negation (e.g., I don't like my sister has the same possessive presupposition as I like my sister).", "We note that it is a simplifying view to treat all triggers under the banner of presupposition; see Karttunen (2016).", "However, the projection problem is complex; for instance, when embedded under other operators, presuppositions can be overtly denied (Levinson 1983: 194).", "See also Schlenker (2008), Abrusán (2011), Schwarz and Simonenko (2018), Theiler (2020), i.a., for discussions regarding projection patterns under wh-questions.", "We adopt the view of Strawson (1950) that definite descriptions presuppose both existence and (contextual) uniqueness, but this view is under debate.", "See Coppock and Beaver (2012), for instance, for an analysis of the that does not presuppose existence and presupposes a weaker version of uniqueness.", "Furthermore, we currently do not distinguish predicative and argumental definites.", "Presuppositions and unanswerability.", "Questions containing failed presuppositions are often treated as unanswerable in QA datasets.", "An example is the question What is the stock symbol for Mars candy?", "from NQ.", "This question is not answerable with any description of a stock symbol (that is, an answer to the what question), because Mars is not a publicly traded company and thus does not have a stock symbol.", "A better response would be to point out the presupposition failure, as in There is no stock symbol for Mars candy.", "However, statements about negative factuality are rarely explicitly stated, possibly due to reporting bias (Gordon and Van Durme, 2013).", "Therefore, under an extractive QA setup as in NQ where the answers are spans from an answer source (e.g., a Wikipedia article), it is likely that such questions will be unanswerable.", "Our proposal is based on the observation that the denial of a failed presupposition (¬P) can be used to explain the unanswerability of a question (Q) containing the failed presupposition (P), as in (2).", "(2) Q: Who is the current monarch of France?", "P: There is a current monarch of France.", "¬P: There is no such thing as a current monarch of France.", "An answer that refers to the presupposition, such as ¬P, would be more informative compared to both Unanswerable (1-b) and an extractive answer from documents that are topically relevant but do not mention the false presupposition.", "First, to quantify the role of presupposition failure in QA, two of the authors analyzed 100 randomly selected unanswerable wh-questions in the NQ development set.", "The annotators labeled each question as presupposition failure or not presupposition failure, depending on whether its unanswerability could be explained by the presence of an unverifiable presupposition with respect to the associated document.", "If the unanswerability could not be explained in terms of presupposition failure, the annotators provided a reasoning.", "The Cohen's κ for inter-annotator agreement was 0.586.", "We found that 30% of the analyzed questions could be explained by the presence of an unverifiable presupposition 
in the question, considering only the cases where both annotators were in agreement (see Table 1).", "The NQ development set provides 5 answer annotations per question; we only looked at questions with 5/5 Null answers here.", "After adjudicating the reasoning about unanswerability for the non-presupposition failure cases, another 21% fell into cases where presupposition failure could be partially informative (see Table 1 and Appendix A for details).", "Since wh-questions constitute 69% of the NQ development set, we expect the actual portion of questions with presupposition failure-based explanations to be around 21%.", "The unverifiable presuppositions were", "Our hypothesis is that statements explicitly referring to failed presuppositions can better speak to the unanswerability of the corresponding questions (we define better as user preference in this study, but other dimensions could also be considered, such as trustworthiness). To test our hypothesis, we conducted a side-by-side comparison of the oracle output of our proposed system and the oracle output of existing (closed-book) QA systems for unanswerable questions. We included two additional systems for comparison; the four system outputs compared are described below (see Table 2 for examples):", "Simple unanswerable: A simple assertion that the question is unanswerable (i.e., This question is unanswerable). This is the oracle behavior of closed-book QA systems that allow Unanswerable as an answer.", "Presupposition failure-based explanation: A denial of the presupposition that is unverifiable from the answer source. This takes the form of either This question is unanswerable because we could not verify that... or ...because it is unclear that..., depending on the type of the failed presupposition. See Section 5.3 for more details.", "Extractive explanation: A random sentence from a Wikipedia article that is topically related to the question, prefixed by This question is unanswerable because.... This system is introduced as a control to ensure that length bias is not in play in the main comparison (e.g., users may a priori prefer longer, topically-related answers over short answers). That is, since our system, Presupposition failure-based explanation, yields strictly longer answers than Simple unanswerable, we want to ensure that our system is not preferred merely due to length rather than answer quality.", "Open-domain rewrite: A rewrite of the non-oracle output taken from the demo (http://qa.cs.washington.edu:2020/) of Dense Passage Retrieval (DPR; Karpukhin et al., 2020), a competitive open-domain QA system. This system is introduced to test whether presupposition failure can be easily addressed by expanding the answer source, since a single Wikipedia article was used to determine presupposition failure. If presupposition failure is a problem particular only to closed-book systems, a competitive open-domain system would suffice to address this issue. While the outputs compared are not oracle, this system has the advantage of being able to refer to all of Wikipedia. The raw output was rewritten to be well-formed, so that it was not unfairly disadvantaged (see Appendix B.2).", "Study. We conducted a side-by-side study with 100 unanswerable questions. These questions were unanswerable due to presupposition failure, as judged independently and with high confidence by two authors; hence, this set did not necessarily overlap with the randomly selected unanswerable questions from Section 3, as we wanted to specifically find a set of questions representative of the phenomena we address in this work. We presented an exhaustive binary comparison of the four different types of answers for each question (six binary comparisons per question). 
We recruited five participants on an internal crowdsourcing platform at Google, who were presented with all binary comparisons for all questions. All comparisons were presented in random order, and the sides that the comparisons appeared on were chosen at random. For each comparison, the raters were provided with an unanswerable question, and were asked to choose the system that yielded the answer they preferred (either System 1 or 2). They were also given the options Both answers are good/bad. See Appendix B.1 for additional details about the task setup.", "Results. Figure 2 shows the user preferences for the six binary comparisons, where blue and gray denote preferences for the two systems compared. We find that presupposition-based answers are preferred against all three answer types with which they were compared, and prominently so when compared to the oracle behavior of existing closed-book QA systems (4th chart, Presup. vs. No Explanation). This supports our hypothesis that presupposition failure-based answers would be more satisfactory to users, and suggests that building a QA system that approaches the oracle behavior of our proposed system is a worthwhile pursuit.", "Given that presupposition failure accounts for a substantial proportion of unanswerable questions (Section 3) and our proposed form of explanations is useful (Section 4), how can we build a QA system that offers such explanations? We decompose this task into three smaller sub-tasks: presupposition generation, presupposition verification, and explanation generation. Then, we present progress towards each subproblem using NQ. We use a templatic approach for the first and last steps. The second step involves verification of the generated presuppositions of the question against an answer source, for which we test four different strategies: zero-shot transfer from Natural Language Inference (NLI), an NLI model finetuned on verification, zero-shot transfer from fact verification, and a rule-based/NLI hybrid model. Since we used NQ, our models assume a closed-book setup with a single document as the source of verification.", "Linguistic triggers. Using the linguistic triggers discussed in Section 2, we implemented a rule-based generator to templatically generate presuppositions from questions. See Table 3 for examples, and Appendix C for a full list.", "Generation. The generator takes as input a constituency parse tree of a question string from the Berkeley Parser (Petrov et al., 2006) and applies trigger-specific transformations to generate the presupposition string (e.g., taking the sentential complement of a factive verb); a toy sketch is given below. If there are multiple triggers in a single question, all presuppositions corresponding to the triggers are generated. Thus, a single question may have multiple presuppositions. See Table 3 for examples of input questions and output presuppositions.", "How good is our generation? We analyzed 53 questions and 162 generated presuppositions to estimate the quality of our generated presuppositions. This set of questions contained at least 10 instances of presuppositions pertaining to each category. One of the authors manually validated the generated presuppositions. According to this analysis, 82.7% (134/162) of the presuppositions were valid presuppositions of the question. 
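The toy sketch referenced above covers only two triggers with regex approximations; the actual generator operates on constituency parses, and the function name and templates here are illustrative assumptions.

import re

def generate_presuppositions(question):
    presups = []
    q = question.rstrip("?").strip()
    # Possessive 's trigger: "X's Y" >> 'X' has 'Y' (the uniform template noted below).
    for m in re.finditer(r"(\w+)'s (\w+)", q):
        presups.append("'" + m.group(1) + "' has '" + m.group(2) + "'")
    # Question-word trigger: "who VP" >> someone VP.
    m = re.match(r"who (.+)", q, flags=re.IGNORECASE)
    if m:
        presups.append("someone " + m.group(1))
    return presups

# Example: generate_presuppositions("who is estella's mother")
#   -> ["'estella' has 'mother'", "someone is estella's mother"]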
The remaining cases fell into two broad categories of error: ungrammatical (11%, 18/162) or grammatical but not presupposed by the question (6.2%, 10/162). The latter category of errors is a limitation of our rule-based generator that does not take semantics into account, and suggests an avenue by which future work can yield improvements. For instance, we uniformly apply the template 'A' has 'B' for presuppositions triggered by 's (we used a template that puts the possessor and possessee NPs in quotes instead of using different templates depending on possessor/possessee plurality, e.g., A __ has a __ / A __ has __ / __ have a __ / __ have __). While this template works well for cases such as Elsa's sister >> 'Elsa' has 'sister', it generates invalid presuppositions such as Bachelor's degree # 'Bachelor' has 'degree'. Finally, the projection problem is another limitation. For example, who does pip believe is estella's mother has an embedded possessive under the nonfactive verb believe, but our generator would nevertheless generate 'estella' has 'mother'.", "The next step is to verify whether the presuppositions of a given question are verifiable from the answer source. The presuppositions were first generated using the generator described in Section 5.1, and then manually repaired to create a verification dataset with gold presuppositions. This was to ensure that verification performance is estimated without a propagation of error from the previous step. Generator outputs that were not presupposed by the questions were excluded.", "To obtain the verification labels, two of the authors annotated 462 presuppositions on their binary verifiability (verifiable/not verifiable) based on the Wikipedia page linked to each question (the links were provided in NQ). A presupposition was labeled verifiable if the page contained any statement that either asserted or implied the content of the presupposition. The Cohen's κ for inter-annotator agreement was 0.658. The annotators reconciled the disagreements based on a post-annotation discussion to finalize the labels to be used in the experiments. We divided the annotated presuppositions into development (n = 234) and test (n = 228) sets; the set sizes did not exactly match because we kept presuppositions of the same question within the same split, and each question had a varying number of presuppositions. We describe below four different strategies we tested.", "Zero-shot NLI. NLI is a classification task in which a model is given a premise-hypothesis pair and asked to infer whether the hypothesis is entailed by the premise. We formulate presupposition verification as NLI by treating the document as the premise and the presupposition to verify as the hypothesis. Since Wikipedia articles are often larger than the maximum premise length that NLI models can handle, we split the article into sentences and created n premise-hypothesis pairs for an article with n sentences. Then, we aggregated these predictions and labeled the hypothesis (the presupposition) as verifiable if there are at least k sentences from the document that support the presupposition. If we had a perfect verifier, k = 1 would suffice to perform verification. We used k = 1 for our experiments, but k could be treated as a hyperparameter. We used ALBERT-xxlarge (Lan et al., 2020) finetuned on MNLI (Williams et al., 2018) and QNLI (Wang et al., 2019) as our NLI model.", "Finer-tuned NLI. Existing NLI datasets such as QNLI contain a broad distribution of entailment pairs. 
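A minimal sketch of the per-sentence aggregation used by the zero-shot verifier above (and reused after the finer-tuning described next); nli_entails stands in for an assumed NLI model wrapper returning a boolean entailment decision.

def verify(presupposition, document_sentences, nli_entails, k=1):
    # Each document sentence is a premise; the presupposition is the hypothesis.
    # Verifiable iff at least k sentences entail the presupposition.
    support = sum(1 for sent in document_sentences
                  if nli_entails(premise=sent, hypothesis=presupposition))
    return support >= k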
We adapted the model further to the distribution of entailment pairs that are specific to our generated presuppositions (e.g., Hypothesis: NP is contextually unique) through additional finetuning (i.e., finer-tuning). Through crowdsourcing on an internal platform, we collected entailment labels for 15,929 (presupposition, sentence) pairs, generated from 1000 questions in NQ and 5 sentences sampled randomly from the corresponding Wikipedia pages. We continued training the model fine-tuned on QNLI on this additional dataset to yield a finer-tuned NLI model. Finally, we aggregated the per-sentence labels as before to get verifiability labels for (presupposition, document) pairs.", "Zero-shot FEVER. FEVER is a fact verification task proposed by Thorne et al. (2018). We formulate presupposition verification as a fact verification task by treating the Wikipedia article as the evidence source and the presupposition as the claim. While typical FEVER systems have a document retrieval component, we bypass this step and directly perform evidence retrieval on the article linked to the question. We used the Graph Neural Network-based model of Liu et al. (2020) (KGAT), which achieves competitive performance on FEVER. A key difference between KGAT and NLI models is that KGAT can consider pieces of evidence jointly, whereas with NLI, the pieces of evidence are verified independently and aggregated at the end. For presuppositions that require multihop reasoning, KGAT may succeed in cases where aggregated NLI fails, e.g., for uniqueness. That is, if there is no sentence in the document that bears the same uniqueness presupposition, one would need to reason over all sentences in the document.", "Rule-based/NLI hybrid. We consider a rule-based approach where we apply the same generation method described in Section 5 to the Wikipedia documents to extract the presuppositions of the evidence sentences. The intended effect is to extract content that is directly relevant to the task at hand; that is, we are making the presuppositions of the documents explicit so that they can be more easily compared to the presuppositions being verified. However, a naive string match between the presuppositions of the document and the questions would not work, due to stylistic differences (e.g., definite descriptions in Wikipedia pages tend to have more modifiers). Hence, we adopted a hybrid approach where the zero-shot QNLI model was used to verify (document presupposition, question presupposition) pairs.", "Results. Our results (Table 4) suggest that presupposition verification is challenging for existing models, partly due to class imbalance. Only the model that combines finer-tuning and rule-based document presuppositions makes a modest improvement over the majority class baseline (78% → 79%). Nevertheless, gains in F1 were substantial for all models (44% → 60% for the best model), showing that these strategies do impact verifiability, albeit with headroom for improvement. QNLI provided the most effective zero-shot transfer, possibly because of the domain match between our task and the QNLI dataset; they are both based on Wikipedia. 
"Results. Our results (Table 4) suggest that presupposition verification is challenging for existing models, partly due to class imbalance. Only the model that combines finer-tuning and rule-based document presuppositions makes a modest improvement over the majority-class baseline (78% to 79%). Nevertheless, gains in F1 were substantial for all models (44% to 60% for the best model), showing that these strategies do impact verifiability, albeit with headroom for improvement. QNLI provided the most effective zero-shot transfer, possibly because of the domain match between our task and the QNLI dataset; they are both based on Wikipedia. The FEVER model was unable to take advantage of multihop reasoning to improve over (Q)NLI, whereas using document presuppositions (Rule-based/NLI hybrid) led to gains over NLI alone.",
"We used a template-based approach to explanation generation: we prepended the templates This question is unanswerable because we could not verify that... or ...because it is unclear that... to the unverifiable presupposition (3). Note that we worded the template in terms of unverifiability of the presupposition, rather than asserting that it is false. Under a closed-book setup like NQ, the only ground truth available to the model is a single document, which leaves the possibility that the presupposition is verifiable outside of the document (except in the rare occasion that it is refuted by the document). Therefore, we believe that unverifiability, rather than failure, is a phrasing that reduces false negatives.",
"(3) Q: when does back to the future part 4 come out. Unverifiable presupposition: there is some point in time that back to the future part 4 comes out. Simple prefixing: This question is unanswerable because we could not verify that there is some point in time that back to the future part 4 comes out.",
"For the user study (Section 4), we used a manual, more fluent rewrite of the explanation generated by simple prefixing. In future work, fluency is a dimension that can be improved over templatic generation. For example, for (3), a fluent model could generate the response: This question is unanswerable because we could not verify that Back to the Future Part 4 will ever come out.",
"While the 3-step pipeline is designed to generate explanations for unanswerability, the generated presuppositions and their verifiability can also provide useful guidance even for a standard extractive QA system. They may prove useful both for unanswerable and answerable questions, for instance by indicating which tokens of a document a model should attend to. We test several approaches to augmenting the input of a competitive extractive QA system with presuppositions and verification labels.",
"Model and augmentation. We used Extended Transformer Construction (ETC) (Ainslie et al., 2020), a model that achieves competitive performance on NQ, as our base model. We adopted the configuration that yielded the best reported NQ performance among ETC-base models. (Footnote 10: The reported results in Ainslie et al. (2020) are obtained using a custom modification to the inference procedure that we do not incorporate into our pipeline, since we are only interested in the relative gains from presupposition verification.) We experiment with two approaches to encoding the presupposition information. First, in the flat model, we simply augment the input question representation (token IDs of the question) by concatenating the token IDs of the generated presuppositions and the verification labels (0 or 1) from the ALBERT QNLI model. Second, in the structured model (Figure 4), we take advantage of the global input layer of ETC that is used to encode the discourse units of large documents, such as paragraphs. Global tokens attend (via self-attention) to all tokens of their internal text, but for other text in the document, they only attend to the corresponding global tokens. We add one global token for each presupposition, and allow the presupposition tokens to only attend to each other and the global token. The value of the global token is set to the verification label (0 or 1).",
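To make the flat model's input construction concrete, a schematic sketch follows; the helper name and the exact packing of IDs and labels are illustrative assumptions, since the ETC implementation details are not given in this excerpt:

```python
def build_flat_question_input(question_ids, presuppositions, tokenizer, verify):
    """Flat-model sketch: extend the question's token IDs with each generated
    presupposition's token IDs plus its 0/1 verification label. Separators and
    padding used by the actual ETC pipeline are omitted here."""
    augmented = list(question_ids)
    for p in presuppositions:
        augmented += tokenizer.encode(p, add_special_tokens=False)
        # verification label (0 or 1) from the ALBERT QNLI verifier
        augmented.append(1 if verify(p) else 0)
    return augmented
```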
"Metrics. We evaluated our models on two sets of metrics: NQ performance (Long Answer, Short Answer, and Average F1) and Unanswerability Classification (Accuracy and F1).[11] We included the latter because our initial hypothesis was that sensitivity to the presuppositions of questions would lead to better handling of unanswerable questions. The ETC NQ model has a built-in answer-type classification step, a 5-way classification over { Unanswerable, Long Answer, Short Answer, Yes, No }. We mapped the classifier outputs to binary answerability labels by treating the predicted label as Unanswerable only if its logit was greater than the sum of all other options.",
"Results and Discussion. Table 5 shows that augmentations that use only the presuppositions or only the verification labels do not lead to gains in NQ performance over the baseline, but the presuppositions do lead to gains on Unanswerability Classification. When both presuppositions and their verifiability are provided, we see minor gains in Average F1 and Unanswerability Classification.[12] For Unanswerability Classification, the improved accuracy is different from the baseline at the 86% (flat) and 89% (structured) confidence level using McNemar's test. The main bottleneck of our model is the quality of the verification labels used for augmentation (Table 4): noisy labels limit the capacity of the QA model to attend to the augmentations.",
"We also analyzed how the added presuppositions modulate the prediction change in our best-performing model (structured) from the baseline ETC model. Looking at the cases where changes in model prediction (i.e., Unanswerable (U) ↔ Answerable (A)) lead to correct answers, we observe an asymmetry in the two possible directions of change. The number of correct A → U cases accounts for 11.9% of the total number of unanswerable questions, whereas correct U → A cases account for 6.7% of answerable questions. This asymmetry aligns with the expectation that the presupposition-augmented model should achieve gains through cases where unverified presuppositions render the question unanswerable. For example, given the question who played david brent's girlfriend in the office, which contains the false presupposition David Brent has a girlfriend, the structured model changed its prediction to Unanswerable from the base model's incorrect answer Julia Davis (an actress, not David Brent's girlfriend according to the document: . . . arrange a meeting with the second woman (voiced by Julia Davis)). On the other hand, such an asymmetry is not observed in cases where changes in model prediction result in incorrect answers: incorrect A → U and U → A account for 9.1% and 9.2%, respectively. More examples are shown in Appendix F.",
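The 5-way-to-binary mapping described above is simple enough to state directly; a small sketch (logit ordering is an assumption):

```python
import torch

# Assumed ordering of ETC's answer-type logits.
ANSWER_TYPES = ["Unanswerable", "Long Answer", "Short Answer", "Yes", "No"]

def to_binary_answerability(logits: torch.Tensor) -> str:
    """Predict Unanswerable only if its logit exceeds the sum of all other
    options, as described above; otherwise treat the question as answerable."""
    return "Unanswerable" if logits[0] > logits[1:].sum() else "Answerable"

# Example: print(to_binary_answerability(torch.tensor([3.0, 0.5, 0.4, 0.1, 0.2])))
```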
"While presuppositions are an active topic of research in theoretical and experimental linguistics (Beaver, 1997; Simons, 2013; Schwarz, 2016, i.a.), comparatively less attention has been given to presuppositions in NLP (but see Clausen and Manning (2009) and Tremper and Frank (2011)). More recently, Cianflone et al. (2018) discuss automatically detecting presuppositions, focusing on adverbial triggers (e.g., too, also...), which we excluded due to their infrequency in NQ. Jeretic et al. (2020) investigate whether inferences triggered by presuppositions and implicatures are captured well by NLI models, finding mixed results.",
"Regarding unanswerable questions, their importance in QA (and therefore their inclusion in benchmarks) has been argued by works such as Clark and Gardner (2018) and Zhu et al. (2019). The analysis portion of our work is similar in motivation to the unanswerability analyses in Yatskar (2019) and Asai and Choi (2020), namely to better understand the causes of unanswerability in QA. Hu et al. (2019); Zhang et al. (2020); Back et al. (2020) consider answerability detection as a core motivation of their modeling approaches and propose components such as independent no-answer losses, answer verification, and answerability scores for answer spans.",
"Our work is most similar to Geva et al. (2021) in proposing to consider the implicit assumptions of questions. Furthermore, our work is complementary to QA explanation efforts such as Lamm et al. (2020) that only consider answerable questions.",
"Finally, abstractive QA systems (e.g., Fan et al. 2019) were not considered in this work, but their application to presupposition-based explanation generation could be an avenue for future work.",
"Through an NQ dataset analysis and a user preference study, we demonstrated that a significant portion of unanswerable questions can be answered more effectively by calling out unverifiable presuppositions. To build models that provide such an answer, we proposed a novel framework that decomposes the task into subtasks that can be connected to existing problems in NLP: presupposition identification (parsing and text generation), presupposition verification (textual inference and fact verification), and explanation generation (text generation). We observed that presupposition verification, especially, is a challenging problem. A combination of a competitive NLI model, finer-tuning, and rule-based hybrid inference gave substantial gains over the baseline, but was still short of a fully satisfactory solution. As a by-product, we showed that verified presuppositions can modestly improve the performance of an end-to-end QA model.",
"In the future, we plan to build on this work by proposing QA systems that are more robust and cooperative. For instance, different types of presupposition failures could be addressed by more fluid answer strategies; e.g., a violation of a uniqueness presupposition may be better handled by providing all possible answers, rather than stating that the uniqueness presupposition was violated.",
"We thank Tom Kwiatkowski, Mike Collins, Tania Rojas-Esponda, Eunsol Choi, Annie Louis, Michael Tseng, Kyle Rawlins, Tania Bedrax-Weiss, and Elahe Rahimtoroghi for helpful discussions about this project. We also thank Lora Aroyo for help with the user study design, and Manzil Zaheer for pointers about replicating the ETC experiments." ]
[ "abstain", "result", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "method", "other", "other", "abstain", "other", "other", "abstain", "other", "abstain", "other", "method", "abstain", "other", "method", "method", "other", "method", "abstain", "abstain", "other", "other", "abstain", "other", "other", "abstain", "abstain", "abstain", "other", "abstain", "method", "method", "other", "other", "other", "other", "other", "abstain", "method", "other", "objective", "other" ]
[ "BERT (Bidirectional Encoder Representations from Transformers) and related pre-trained Transformers have provided large gains across many language understanding tasks, achieving a new state-of-the-art (SOTA).", "BERT is pre-trained on two auxiliary tasks: Masked Language Model and Next Sentence Prediction.", "In this paper we introduce a new pre-training task inspired by reading comprehension to better align the pre-training from memorization to understanding.", "Span Selection PreTraining (SSPT) poses cloze-like training instances, but rather than draw the answer from the model's parameters, it is selected from a relevant passage.", "We find significant and consistent improvements over both BERTBASE and BERTLARGE on multiple Machine Reading Comprehension (MRC) datasets.", "Specifically, our proposed model has strong empirical evidence as it obtains SOTA results on Natural Questions, a new benchmark MRC dataset, outperforming BERTLARGE by 3 F1 points on short answer prediction.", "We also show significant impact in HotpotQA, improving answer prediction F1 by 4 points and supporting fact prediction F1 by 1 point and outperforming the previous best system.", "Moreover, we show that our pre-training approach is particularly effective when training data is limited, improving the learning curve by a large amount.", "State-of-the-art approaches for NLP tasks are based on language models that are pre-trained on tasks which do not require labeled data (Peters et al., 2018; Howard and Ruder, 2018; Devlin et al., 2018; Yang et al., 2019; Liu et al., 2019; Sun et al., 2019).", "Fine tuning language models to downstream tasks, such as question answering or other natural language understanding tasks, has been shown to be a general and effective strategy.", "BERT is a recently introduced and highly successful model for language understanding.", "The general BERT adaptation approach is to alter the model used for pre-training while retaining the transformer encoder layers.", "The model discards the layers used for the final prediction in the pretraining tasks and adds layers to predict the target task.", "All parameters are then fine tuned on the target task.", "BERT is based on the transformer architecture (Vaswani et al., 2017), and trained on the following two unsupervised tasks: Masked Language Model (MLM): predicting masked word pieces from the surrounding context Next Sentence Prediction (NSP): predicting if the two provided sequences follow sequentially in text or not The masked LM or cloze task (Taylor, 1953) and next sentence prediction are auxiliary tasks (Ando and Zhang, 2005) requiring language understanding, and therefore train the model to acquire effective representations of language.", "However, the cloze pre-training task often poses instances that require only shallow prediction, or else require memorized knowledge.", "For many cloze instances the model simply requires syntactic or lexical understanding to answer.", "For example, in the cloze instances in Table 1 the first two rows require syntactic and lexical understanding respectively.", "Other cloze instances mainly require completing collocations, as in the third example.", "However, some cloze instances require memorized knowledge, as in the last instance, which essentially asks where Hadrian died.", "Other language models face the same challenge.", "In GPT-2 (Radford et al., 2019) the entities present in a language generation prompt are expanded with Type Cloze Syntactic In the 15th century, the blast furnace spread into what 
"For example, in a prompt about nuclear materials being stolen on a Cincinnati train, GPT-2 references Ohio news outlets, the U.S. Department of Energy, and the Federal Railroad Administration in ways consistent with their real-world relationships to the entities in the prompt.", "As the preceding examples illustrate, in many cloze and conventional language model prediction instances, the correct prediction depends on a specific, narrowly relevant bit of knowledge.", "Further, pre-trained transformer models do indeed encode a substantial number of specific facts in their parameter matrices, enabling them to answer questions directly from the model itself (Radford et al., 2019).", "However, because the computational cost of transformers scales at least linearly with the number of parameters, it is expensive to encode all the facts that would enable the correct predictions.", "Encoding a large amount of rarely useful information in parameters that are used for every instance is an inefficient use of model capacity if it is not needed for the downstream task.", "As the performance gains from GPT to GPT-2 and from BERTBASE to BERTLARGE show, increasing model capacity continues to provide gains.", "Previous work also found seemingly limitless improvements from increasing model capacity (Shazeer et al., 2017), made possible through sparse activation.", "Our hypothesis is that making more efficient use of a fixed number of parameters can provide analogous gains.", "In MRC tasks, the model does not need to generate an answer it has encoded in its parameters.", "Instead, the task is to use a retrieved passage, or passage set, to extract an answer to the question.", "To better align the pre-training with the needs of the MRC task, we use span selection as an additional auxiliary task.", "This task is similar to the cloze task, but is designed to have fewer simple instances requiring only syntactic or collocation understanding.", "For cloze instances that require specific knowledge, rather than training the model to encode this knowledge in its parameterization, we provide a relevant and answer-bearing passage paired with the cloze instance.", "We provide an extensive evaluation of the span selection pre-training method across four reading comprehension tasks: the Stanford Question Answering Dataset (SQuAD) in both version 1.1 and 2.0, followed by the Google Natural Questions dataset (Kwiatkowski et al., 2019) and a multihop Question Answering dataset, HotpotQA (Yang et al., 2018).", "We report consistent improvements over both BERTBASE and BERTLARGE models in all reading comprehension benchmarks.", "The rest of the paper is structured as follows.", "In Section 2, we describe earlier work on similar tasks and relate our extended pre-training to the broader research efforts on pre-training transformers.", "To provide context for our contribution, we review the most relevant parts of BERT in Section 3.", "Next, we describe and formalize our pre-training task and the architectural adjustments to BERT in Section 4.", "Finally, we provide an extensive empirical evaluation in MRC tasks, describing benchmarks in Section 5 and evaluating our approach in Section 6.", "Section 7 concludes the paper, highlighting interesting research directions for future work.", "Since the development of BERT there have been many efforts towards adding or modifying the pre-training tasks.",
"Joshi et al. (2019) introduced SpanBERT, trained with a task that predicts the tokens in a span from the boundary token representations.", "Note that, unlike span selection, there is no relevant passage used to select an answer span.", "ERNIE 2.0 (Sun et al., 2019) trained a transformer language model with seven different pre-training tasks, including a variant of masked language model and a generalization of next-sentence prediction.", "XLNet (Yang et al., 2019) introduced the permuted language model task, although it is not clear whether the success of the model is due to the innovative pre-training or the larger quantity of pre-training.", "In this paper we focus on a pre-training task that has been specifically designed to support QA applications.", "Previous related work has explored tasks similar to span selection pre-training.", "These are typically cast as approaches to augment the training data for question answering systems, rather than alleviating the pressure to encode specific facts in the pre-training of a language model.", "Hermann et al. (2015) introduce a reading comprehension task constructed automatically from news articles with summaries.", "In this view the constructed dataset is used both for training and test.", "Also, entities were replaced with anonymized markers to limit the influence of world knowledge.", "Unlike our span selection pre-training task, this requires summaries paired with articles and focuses only on entities.", "A similar approach was taken in Dhingra et al. (2018) to augment training data for question answering.", "Wikipedia articles were divided into introduction and body, with sentences from the introduction used to construct queries for the body passage.", "Phrases and entities are used as possible answer terms.", "Onishi et al. (2016) constructed a question answering dataset where answers are always people.", "Unlike other work, this did not use document structure but instead used a search index to retrieve a related passage for a given question.", "Because the answers are always people, and there are only a few different people in each passage, the task is multiple choice rather than span selection.", "Self-training (Sachan and Xing, 2018) has also been used to jointly train to construct questions and generate self-supervised training data.", "BERT was trained for one million batches, with 256 token sequences in each.", "Although this is already a considerable amount of pre-training, recent research has shown continued improvement from additional pre-training data.", "XLNet (Yang et al., 2019) used four times as much text, augmenting Wikipedia and BooksCorpus (Zhu et al., 2015) with text from web crawls; the number of instances trained over was also increased by a factor of four.", "RoBERTa (Liu et al., 2019) enlarged the text corpus by a factor of ten and trained over fifteen times as many instances.", "This, along with careful tuning of the MLM task, resulted in substantial gains.", "Unfortunately, these very large-scale pre-training approaches require significant hardware resources.", "We restrict our experiments to extended pre-training with less than half the steps of BERT (390k batches of 256).", "In this section, we give the reader a brief overview of the BERT (Devlin et al., 2018) pre-training strategy and some details which we modify for our novel span selection auxiliary task.", "BERT uses a transformer (Devlin et al., 2018) architecture with L layers; each block uses A self-attention heads with hidden dimension H.",
"The input to BERT is a concatenation of two segments $x_1, \ldots, x_M$ and $y_1, \ldots, y_N$, separated by special delimiter markers like so: $[\mathrm{CLS}], x_1, \ldots, x_M, [\mathrm{SEP}], y_1, \ldots, y_N, [\mathrm{SEP}]$, such that $M + N < S$, where $S$ is the maximum sequence length allowed during training. (Footnote 1: We follow standard notation here as in previous work.) This is first pre-trained on a large amount of unlabeled data and then fine-tuned on downstream tasks which have labeled data.", "BERT used two objective functions during pre-training: masked language modeling and next sentence prediction.", "We discuss them in brief.", "Masked Language Model (MLM): A random sample of the tokens in the input sequence is replaced with a special token called [MASK]. MLM computes a cross-entropy loss on predicting these masked tokens. In particular, BERT selects 15% of the input tokens uniformly to be replaced. 80% of these selected tokens are replaced with [MASK], while 10% are left unchanged and 10% are replaced with a random token from the vocabulary.", "Next Sentence Prediction (NSP): This is a binary classification loss that predicts whether two sentences follow each other in the original text. The examples are sampled with equal probability such that positive examples are consecutive sentences while negatives are artificially created by adding sentences from different documents.",
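A short sketch of the MLM corruption just described; for brevity this draws a per-token Bernoulli sample rather than selecting exactly 15% of the tokens, which is an approximation of BERT's uniform selection:

```python
import random

def mask_for_mlm(tokens, vocab, select_prob=0.15):
    """BERT-style MLM corruption: ~15% of tokens are selected as prediction
    targets; of those, 80% become [MASK], 10% a random vocabulary token,
    and 10% stay unchanged. Targets are None at unselected positions."""
    corrupted, targets = list(tokens), [None] * len(tokens)
    for i, token in enumerate(tokens):
        if random.random() < select_prob:   # approximates uniform 15% selection
            targets[i] = token              # cross-entropy target at this position
            r = random.random()
            if r < 0.8:
                corrupted[i] = "[MASK]"
            elif r < 0.9:
                corrupted[i] = random.choice(vocab)
            # else: keep the original token unchanged
    return corrupted, targets
```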
"In the previous section we briefly discussed the BERT framework along with its objective functions.", "In this section, we propose a novel pre-training task for bi-directional language models called span selection.", "Span selection is a pre-training task inspired both by the reading comprehension task and the limitations of cloze pre-training.", "Figure 1 illustrates an example of a span selection instance.", "The query is a sentence drawn from a corpus with a term replaced with a special token: [BLANK].", "The term replaced by the blank is the answer term.", "The passage is relevant as determined by a BM25 (Robertson et al., 1995) (k1=1.2, b=0.75) search, and answer-bearing (containing the answer term).", "[Figure 1: Example span selection instance. Query: In a station of the metro is an Imagist poem by [BLANK] published in 1913 in the literary magazine Poetry. Passage: . . . Ezra Pound's famous Imagist poem, In a station of the metro, was inspired by this station . . . Answer term: Ezra Pound.]", "Unlike BERT's cloze task, where the answer must be drawn from the model itself, the answer is found in a passage using language understanding.", "Figure 2 outlines the process of generating span selection pre-training data.", "The input is an unlabeled corpus, which is then split into passages and indexed.", "We used passages from Wikipedia (footnote 2) between 300 and 2000 characters long, split on paragraph boundaries, and Lucene (footnote 3) 7.4.0 as the search engine.", "In addition to the text of the passage, we store the document ID, so that we may filter passages that occur in the same document as the query.", "To gather queries, we iterate over the sentences in the corpus between 50 and 250 characters long.", "We used a set of simple heuristic criteria to identify answer terms that are likely to result in queries that require deep understanding to answer: the term should be between 4 and 30 characters and either a single token from an open-class part of speech (20%) or a noun phrase or entity (80%), as detected by a part-of-speech pattern and ClearNLP NER.", "To identify the passages, we use the generated query, with the answer term removed, as a bag-of-words query to search the passage index.", "The top ten results were searched for an answer-bearing passage; if none was found, the query was either discarded or sampled to maintain a 30% composition of impossible span selection instances.", "The impossible instances are those that do not have the answer term in the provided passage.", "We further required a minimum BM25 score of 25 (tuned manually to reflect high relevance).", "If the answer term was part of a longer sequence of tokens shared by the query and passage, we extended the answer term to be the longest such sequence.", "This avoids cases where the answer term can be found through trivial surface-level matching.", "Table 2 shows examples of span selection instances of different types.", "Rather than discrete types, these are best understood as a continuum.", "Comparing to the cloze types in Table 1, we see an analogy between the lexical cloze type and phrase multiple choice.", "These two types involve understanding what words (or phrases) are reasonable in the context from the set of wordpieces (or possible spans).", "The memorized knowledge cloze type contrasts with the suggestive or justified inference span selection types.", "Because a suggestive or justifying passage is present, the model is trained only to understand language, rather than memorize facts.", "Simple syntactic instances are largely eliminated because closed-class words are not possible answer terms.", "Also, since answer terms are expanded to the longest shared subsequence between query and passage, collocation instances are not a concern.", "Rather than training a transformer architecture from scratch, we initialize from the pre-trained BERT models (Devlin et al., 2018) and extend the pre-training with the span selection auxiliary task.", "We refer to the resulting models as BERTBASE+SSPT (Span Selection Pre-Training) and BERTLARGE+SSPT.", "We used batch sizes of 256 and a learning rate of 5e-5.", "All models were trained over 100 million span selection instances.", "We found continued improvement from 50 million to 100 million instances and have not yet tried larger pre-training runs.", "Unlike the efforts of XLNet or RoBERTa, which increased training by a factor of ten relative to BERT, the additional data in SSPT represents less than a 40% increase in the pre-training of the transformer.",
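The generation procedure outlined above (Figure 2) can be condensed into a short sketch; search stands in for an assumed handle to the BM25 passage index (e.g., Lucene), and the part-of-speech/NER heuristics for choosing the answer term are applied upstream:

```python
def make_instance(sentence, answer_term, search, query_doc_id,
                  min_bm25=25.0, top_k=10):
    """Sketch of one step of Figure 2. `search(text, k)` is an assumed helper
    yielding (passage, doc_id, score) tuples from the passage index."""
    if not (50 <= len(sentence) <= 250 and 4 <= len(answer_term) <= 30):
        return None
    query = sentence.replace(answer_term, "[BLANK]", 1)
    for passage, doc_id, score in search(query.replace("[BLANK]", ""), top_k):
        if doc_id == query_doc_id or score < min_bm25:
            continue                   # filter same-document and low-relevance hits
        if answer_term in passage:     # answer-bearing passage found
            return {"query": query, "passage": passage, "answer": answer_term}
    return None  # discard, or sample to keep ~30% impossible instances
```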
the transformer.", "This pre-training is also done over Wikipedia, adding no new text to the pre-training.", "Figure 3 illustrates the adaptation of BERT to SSPT.", "The query and passage are concatenated () () () () () () Linear Layer followed by Softmax 0 0 0 1 0 1 0 0 BERT True Labels for Start/ End Indexes Linear Layer followed by sigmoid True Label CLS Query SEP Passage () Is-possibleClassifier TODO: Use actual text in input to BERT and show the named ouput vectors , , Figure 3: BERT for QA with is-possible prediction in the standard two sequence representation, with a preceding [CLS] token and a separating [SEP] token, producing a sequence of tokens T .", "BERT produces output vectors for these tokens to obtain a sequence { v i } | T | i =1 of d dimensional vectors.", "In span selection extended pre-training, we alter the vocabulary of the tokenizer, introducing the new special token: [BLANK]'.", "We use the BertForQuestionAnswering 4 model, which uses a pointer network to find the answer location.", "The pointer network applies a simple fully connected network to predict the probability of start and end span pointers at each token position, using the output of the final transformer layer at that position.", "The loss in training is the cross entropy of these predictions with the true positions of the start and end.", "Formally, The start of the answer span is predicted as p ( i = (cid:104) start (cid:105) ) = softmax ( w (cid:62)(cid:104) start (cid:105) v + b (cid:104) start (cid:105) ) i , where w (cid:104) start (cid:105) R d , b (cid:104) start (cid:105) R are trainable parameters.", "Then end of the span is predicted the same way: p ( i = (cid:104) end (cid:105) ) = softmax ( w (cid:62)(cid:104) end (cid:105) v + b (cid:104) end (cid:105) ) i .", "Span selection pre-training may optionally include a classifier for answerability.", "If the answerability classifier is included in the pre-training then the presence of the answer span in the passage is predicted with probability given by: p ( possible ) = sigmoid ( w (cid:62) CLS v CLS + b CLS ) .", "If it is not included, for impossible instances the target prediction is for both start and end to be position zero, the [CLS] token.", "We train models for QA without the answerability classifier for 100 million instances.", "This took approximately seven days on 16 P100 GPUs.", "Training data and code to extend pre-training is available as open source 5 .", "We follow previous work and evaluate our SSPT architecture on several downstream tasks.", "Our primary motivation is to improve question answering by improving the pre-trained language model.", "Our QA benchmarks are the following:", "4 https://github.com/huggingface/ pytorch-transformers 5 https://github.com/IBM/ span-selection-pretraining", "As of Dec. 2019", "2. Natural Questions (NQ) (Kwiatkowski et al., 2019)", "3. 
"We follow previous work and evaluate our SSPT architecture on several downstream tasks.", "Our primary motivation is to improve question answering by improving the pre-trained language model.", "Our QA benchmarks are the following: 1. the Stanford Question Answering Dataset (SQuAD), versions 1.1 and 2.0; 2. Natural Questions (NQ) (Kwiatkowski et al., 2019); 3. HotpotQA (Yang et al., 2018).", "The three datasets provide different characteristics of question answering and machine reading comprehension tasks, as well as an opportunity to compare results with active leaderboards (as of Dec. 2019).", "Table 3 provides a summary comparison.", "We briefly discuss them here.", "5.1 SQuAD. SQuAD provides a paragraph of context and asks several questions about it.", "The task is extractive QA, where the system must find the span of the correct answer in the context.", "We evaluate on two versions of SQuAD: v1.1 and v2.0.", "In v1.1 the context always contains an answer.", "However, in v2.0 the task contains additional questions for which the given context does not contain the correct answer.", "Just as in Figure 3, the question and passage are concatenated with the separators ([CLS] and [SEP]) to form the input to the pre-trained BERT.", "The final token representations are then used to predict, for each token, the probability that it is the start or end of the answer span.", "The span with the highest predicted probability is then the predicted answer.", "NQ is a dataset of over 300,000 queries sampled from live users on the Google search engine for which a Wikipedia article is contained in the top-ranking search results.", "Crowd-sourced annotators are then tasked with highlighting a short answer span for each question (Footnote 6: Around 1% of the questions are answered as a simple Yes or No rather than a span of short answer text. Due to their small proportion, the models in this paper do not produce Yes/No answers.), if available, from the Wikipedia article, as well as a long answer span (which is generally the most immediate HTML paragraph, list, or table span containing the short answer span), if available.", "Similar to SQuAD 2.0, the NQ dataset forces models to make an attempt at knowing what they don't know in order to detect and avoid providing answers to unanswerable questions.", "In addition, the fact that the questions were encountered naturally from actual users removes some of the observational bias that appears in the artificially created SQuAD questions.", "Both these aspects, along with the recency of the task's publication, mean that this is still a challenging task with a lot of headroom between human performance and the best performing automated system.", "Experiments on the NQ dataset use the strategies and model described by Alberti et al. (2019b) to fine-tune a BERTLARGE model with a final layer for answerability prediction as well as sequence start/end prediction.", "Similar to their best performing systems, the model is first trained using the SQuAD v1.1 data set and then subsequently trained on the NQ task. (Footnote 7: Skipping the SQuAD v1.1 fine-tuning step for the NQ task leads to the same conclusions with respect to SSPT pre-training, but decreases the overall performance for both BERTLARGE and BERTLARGE+SSPT.)", "The hyperparameters follow Alberti et al. (2019b) with the exception of learning rate and batch size, which are chosen according to the approach outlined by Smith (2018) using a 20% sub-sample of the data for each experimental setting.",
"Recently, Yang et al. (2018) released a new dataset, called HotpotQA, for the task of reading-comprehension-style extractive QA.", "Each training instance in the distractor setting of this dataset comprises a question, a set of ten passages, an answer, and a binary label for each sentence in the passage set stating whether that sentence serves as a supporting fact (or not) to arrive at the correct answer.", "The task is to predict both the correct answer as well as the supporting facts for any given test instance.", "The signature characteristic of this dataset lies in the fact that each question requires a minimum of two supporting facts from two different passages in order to derive its correct answer.", "Thus, this dataset tests the cross-passage, multi-hop reasoning capability of a reading-comprehension-based question answering system.", "Our system for HotpotQA uses a three-phase approach.", "First, representations of the individual passages are built with a pre-trained transformer encoder.", "Second, interactions between these passages are attended to using a relatively shallow global transformer encoder.", "The supporting facts are predicted from the sentence representations produced by this global layer.", "Finally, the predicted supporting facts are merged into a pseudo-passage that is used by a slightly altered version of the model for SQuAD.", "The one addition is that this model also predicts an answer type ({yes, no, span}) from the [CLS] token vector.",
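A schematic sketch of phase 2 of this pipeline; per-sentence vectors from the phase-1 passage encoder are assumed to be computed upstream, and the layer sizes are illustrative:

```python
import torch
import torch.nn as nn

class GlobalSupportingFactScorer(nn.Module):
    """Shallow global transformer that contextualizes sentence representations
    across passages and scores each sentence as a supporting fact. Phase 3
    then builds a pseudo-passage from sentences scored above a threshold."""
    def __init__(self, hidden=768, layers=2, heads=8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=heads,
                                           batch_first=True)
        self.global_encoder = nn.TransformerEncoder(layer, num_layers=layers)
        self.scorer = nn.Linear(hidden, 1)

    def forward(self, sentence_vecs):  # (batch, num_sentences, hidden)
        contextualized = self.global_encoder(sentence_vecs)
        return self.scorer(contextualized).squeeze(-1)  # supporting-fact logits

# Usage: sentences with sigmoid(logit) > 0.5 form the pseudo-passage.
scorer = GlobalSupportingFactScorer()
logits = scorer(torch.randn(1, 40, 768))  # e.g., 40 sentences across 10 passages
supporting = (logits.sigmoid() > 0.5).nonzero()
```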
"Tables 4, 5, and 6 show our results on the development set with extended span selection pre-training for BERT relative to the pre-trained BERT.", "We use the same hyperparameters on these tasks as the original BERT.", "The best results for each dataset are in bold when significant relative to the BERT baseline.", "The four question answering datasets are improved substantially with span selection pre-training.", "Relative to BERTBASE we find a 3-point improvement in F1 for SQuAD 1.1 and a nearly 6-point improvement for SQuAD 2.0.", "In terms of error rate reduction the improvement is similar, 28% and 25% respectively.", "The error rate reduction for BERTLARGE is 20% and 19% for SQuAD 1.1 and 2.0, respectively.", "In reading comprehension tasks, the pointer network for answer selection is pre-trained through the span selection task.", "We measure how much of the improvement is due to this final-layer pre-training versus the extended pre-training of the transformer encoder layers by discarding the pre-trained pointer network and randomly initializing it.", "This configuration is indicated as BERTBASE+SSPT-PN.", "Surprisingly, the pre-training of the pointer network is not a significant factor in the improved performance on reading comprehension, indicating the improvement is instead coming through better language understanding in the transformer.", "Figure 4 shows the improvement from SSPT on SQuAD 1.1 and 2.0 as the amount of training data increases.", "While there is significant improvement at 100% training, the improvement is even more pronounced with less training data.", "We hypothesize that this is due to the close connection of span selection pre-training with reading comprehension.", "This effect is strongest for SQuAD 1.1, which, like span selection pre-training, always contains a correct answer span in the passage.", "The work of Alberti et al. (2019a), which achieves the BERTLARGE performance listed in Table 5, is the highest-ranking single-model submission with a published paper that does not use data augmentation.", "Our implementation of BERTLARGE+SSPT, therefore, provides a 1.5% improvement over the best BERT-for-QA model performance that we are aware of on the NQ data set.", "In future work, we intend to explore data augmentation on top of BERTLARGE+SSPT for further improvements.", "In HotpotQA, unlike the other QA datasets, multiple passages are provided.", "We use the BERT transformer in two places: for supporting-fact prediction, to build the representations of each passage, and in answer prediction, as in the other QA tasks.", "We find the most substantial gains of almost 4 F1 points for answer selection, the QA task most similar to span selection pre-training.", "Interestingly, we also find an improvement of almost one point F1 in supporting-fact prediction, demonstrating that the learned representations can generalize well to multiple QA sub-tasks.", "HotpotQA also comes with its own leaderboard (https://hotpotqa.github.io/).", "A good number of submissions on this leaderboard are based on BERTBASE or BERTLARGE.", "We made an initial submission to this leaderboard, called TAP, which occupied Rank-5 at the time of submission; the underlying architecture employed BERTBASE.", "Next, we replaced BERTBASE with BERTLARGE+SSPT, calling that model TAP-2.", "This change resulted in a 7.22% absolute gain in the Joint F1 score.", "An ensemble version of TAP-2 further offered a gain of 1.53%.", "The SSPT-augmented TAP-2 (ensemble) and TAP-2 (single model) achieved Rank-1 and Rank-2 on the leaderboard at the time of submission.", "In Section 4.1 we enumerated three types of span selection instances.", "The first type, Phrase Multiple Choice, is the least interesting, since the semantic correspondence between the query and the passage is not used.", "Instead, the instance is treated as a cloze with options provided as spans in the passage.", "Note that in this type of instance the relevance of the passage to the query is not important.", "To explore how frequent this case might be, we select 100 thousand new SSPT instances with a relevant passage and for each select an alternative, random, answer-bearing passage.", "The unrelated passage is from a document different both from the query's document and from the relevant passage's document.", "We then apply the SSPT-trained model to the instances with both the related and the unrelated passage and evaluate its performance in terms of token-level F1 and exact span match.",
Table 7: Comparison of performance of SSPT for related vs. unrelated passages.
Model            Passage    F1     Exact
BERTBASE+SSPT    Related    62.88  49.27
BERTBASE+SSPT    Unrelated  46.51  34.32
BERTLARGE+SSPT   Related    65.39  51.82
BERTLARGE+SSPT   Unrelated  50.98  38.97
"Table 7 shows the performance of our SSPT-trained models on the SSPT queries with related vs. unrelated passages.", "The large accuracy gains when using relevant passages imply that for many passages Phrase Multiple Choice is not the method used by the model.", "Instead, the semantic connection of the passage to the query is used to select the appropriate span.", "We also compare our span selection pre-training data with the data distributed by Dhingra et al.
(2018).", "This data consists of approximately 2 million instances constructed using the abstract and body structure of Wikipedia.", "In contrast, our approach to pre-training can generate data in unlimited quantity from any text source without assuming a particular document structure.", "When only one million training steps are used, both sources of pre-training are equally effective.", "But when moving to ten million steps of training, our data produces models that give over one percent better F1 on both SQuAD 1.1 and 2.0.", "This suggests the greater quantity of data possible through SSPT is a powerful advantage.", "Span selection pre-training is effective in improving reading comprehension across four diverse datasets, including both generated and natural questions, and with provided contexts of passages, documents and even passage sets.", "This style of pretraining focuses the model on finding semantic connections between two sequences, and supports a style of cloze that can train deep semantic understanding without demanding memorization of specific knowledge in the model.", "The span selection task is suitable for pre-training on any domain, since it makes no assumptions about document structure or availability of summary/article pairs.", "This allows pre-training of language understanding models in a very generalizable way.", "In future work, we will address end-to-end question answering with pre-training for both the answer selection and retrieval components.", "We hope to progress to a model of general purpose language modeling that uses an indexed long term memory to retrieve world knowledge, rather than holding it in the densely activated transformer encoder layers." ]
[ "abstain", "abstain", "objective", "abstain", "result", "objective", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "objective", "objective", "method", "abstain", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective" ]
[ "Supervised approaches to named entity recognition (NER) are largely developed based on the assumption that the training data is fully annotated with named entity information.", "However, in practice, annotated data can often be imperfect with one typical issue being the training data may contain incomplete annotations.", "We highlight several pitfalls associated with learning under such a setup in the context of NER and identify limitations associated with existing approaches, proposing a novel yet easy-to-implement approach for recognizing named entities with incomplete data annotations.", "We demonstrate the effectiveness of our approach through extensive experiments.", "1 1 Introduction Named entity recognition (NER) (Tjong Kim Sang, 2002; Tjong Kim Sang and De Meul-der, 2003) as one of the most fundamental tasks within natural language processing (NLP) has received significant attention.", "Most existing approaches to NER focused on a supervised setup, where fully annotated named entity information is assumed to be available during the training phase.", "However, in practice, obtaining high-quality annotations can be a very laborious and expensive process (Snow et al., 2008).", "One of the common issues with data annotations is there may be incomplete annotations.", "Figure 1 shows an example sentence with two named entities John Lloyd Jones and BBC radio of type PER (person) and ORG (organization), respectively.", "Following the standard BIOES tagging scheme (Ramshaw and Marcus, 1999; Ratinov and Roth, 2009), the corresponding gold label sequence is shown below the sentence.", "When the data annotations are incomplete, certain labels 1 Our code and data are available at http://statnlp.", "may be missing from the label sequence.", "Properly defining the task is important, and we argue there are two possible potential pitfalls associated with modeling incomplete annotations, especially for the NER task.", "Several previous approaches assume the incomplete annotations can be obtained by simply removing either word-level labels (Fernandes and Brefeld, 2011) or span-level labels (Carlson et al., 2009).", "As shown in Figure 1, under both assumptions (i.e., A.1 and A.2 ), there will be words annotated with O labels.", "The former approach may even lead to sub-entity level annotations (e.g., radio is annotated as part of an entity).", "However, we argue such assumptions can be largely unrealistic.", "In practice, annotators are typically instructed to annotate named entities for complete word spans only (Settles et al., 2008; Surdeanu et al., 2010).", "Thus, sub-entity level annotations or O labels 2 should not be assumed to be avail-2 Why should the O labels be assumed unavailable?", "This is because the annotators typically do not actively specify the O labels when working on annotations.", "If the annotator chooses not to annotate a word, it could either mean it is not part of any entity, or the word is actually part of an entity but the annotator neglected it in the annotation process (therefore we have incomplete annotations).", "However, we note that assigning the O label to a word would precisely indicate it is strictly not part of any entity, which is not desirable.", "able ( A.3 ).", "Therefore such approaches are making sub-optimal assumptions on the available labels .", "When the proper assumptions on the available labels are made, one can typically model the missing labels as latent variables and train a latent-variable conditional random fields model (Quat-toni et al., 
2005).", "One such approach is presented in (Bellare and McCallum, 2007).", "Their work focused on the citation parsing 3 (i.e., sequence labeling) task which does not suffer from the above issue as no O label is involved.", "However, though the approach was shown effective in the citation parsing task, we found its effectiveness does not transfer to the NER task even in the absence of the above available labels issue.", "As we would highlight later, the reason is related to the undesirable assumptions on the unavailable labels .", "In this work, we tackle the incomplete annotation problem when building an NER system, under a more realistic yet more challenging scenario.", "We present a novel, effective, yet easy-to-implement approach, and conduct extensive experiments on various datasets and show our approach signifi-cantly outperforms several previous approaches.", "Previous research efforts on partially annotated data are mostly based on the conditional random fields (CRF) (Lafferty et al., 2001), structured perceptron (Collins, 2002) and max-margin (Tsochantaridis et al., 2005) (e.g. structural support vector machine) models.", "Bellare and McCallum (2007) proposed a missing label linear-chain CRF 4 which is essentially a latent-3 The task is to tag the BibTex records with different labels (i.e., title, author, affiliation and so on).", "4 This model was also named as Partial CRF (Carlson et al., 2009) and EM Marginal CRF (Greenberg et al., 2018).", "variable CRF (Quattoni et al., 2005) on citation parsing (McCallum et al., 2000).", "This model had also been used in part-of-speeching tagging and segmentation task with incomplete annotations (Tsuboi et al., 2008; Liu et al., 2014; Yang and Vozila, 2014).", "Yang et al. (2018) showed the effectiveness of such a model on Chinese NER with incomplete annotations due to the fact that they required a certain number of fully annotated data to perform joint training.", "Greenberg et al. (2018) applied this model on a biomedical NER task and achieved promising performance with incomplete annotations.", "However, in their assumption for the incomplete annotations, the O labels are still considered, which we believe is not realistic.", "Carlson et al. (2009) modified the structured perceptron algorithm and defined features only on the tokens with annotated labels in partially labeled sequences.", "Fernandes and Brefeld (2011) and Lou et al. 
"Fernandes and Brefeld (2011) and Lou et al. (2012) proposed to use a large-margin learning framework similar to structured support vector machines with latent variables (Yu and Joachims, 2009).", "Given the input word sequence x, the NER task is to predict a label sequence y that encodes the NER information (e.g., in a form following the BIOES tagging scheme).", "Given a training set that consists of completely labeled data D, one can tackle this problem using a standard linear-chain conditional random field (CRF) (Lafferty et al., 2001) whose loss function is as follows (Footnote 5: In practice, we also have an $L_2$ regularization term, which we exclude from the formula for brevity.): $\mathcal{L}(w) = \sum_i \log p_w(y^{(i)} \mid x^{(i)})$ (1), where $(x^{(i)}, y^{(i)})$ is the i-th instance from D.", "Now, assume we have an incomplete label sequence $y_p^{(i)}$.", "From such a $y_p^{(i)}$ we should be able to derive a set of all possible complete label sequences that are compatible with (i.e., contain) the incomplete label sequence, and let us call this set $\mathcal{C}(y_p^{(i)})$.", "We can rewrite the above function as: $\mathcal{L}(w) = \sum_i \log \sum_{y \in \mathcal{C}(y_p^{(i)})} q_D(y \mid x^{(i)}) \, p_w(y \mid x^{(i)})$.", "We illustrate in Figure 3 several previous approaches as well as our approach.", "In this example, the entity BBC radio of type ORG is not annotated.", "Figure 3(a) shows a single path that corresponds to the gold label sequence.", "Figure 3(b) illustrates a naive approach, where we regard all the missing labels as O labels.", "This essentially assumes that the q distribution in the above equation puts all probability mass on this single label sequence, which is an incorrect assumption.", "Now let us look at what assumptions on q have been made by the existing approach of Bellare and McCallum (2007).", "The model regards the missing labels as latent variables and learns a latent-variable CRF using the following loss: $\sum_i \log \sum_{y \in \mathcal{C}(y_p^{(i)})} p_w(y \mid x^{(i)})$ (2).", "The resulting model is called the missing label linear-chain CRF (M-CRF). (Footnote 6: Similar assumptions have also been made by Carlson et al. (2009) and Fernandes and Brefeld (2011), but they used the structured perceptron (Collins, 2002) instead.)", "As we can see from the above function, this is essentially equivalent to saying q is a uniform distribution that assigns equal probabilities to all possible complete label sequences in $\mathcal{C}(y_p^{(i)})$.", "We believe such an assumption on q, which describes the unavailable labels, can be improved.", "As we can see from the above example in Figure 3(d), a more desirable assumption about q is to put more probability mass on a path that is close to the gold path.", "In practice, their approach worked for the task of citation parsing, where the q distribution may not deviate much from the uniform distribution (Figure 3(c)) in such a task.", "However, in the task of NER, we find such a simple treatment of the q distribution often leads to sub-optimal results (as we can see in the experiments later), as the q distribution is highly skewed due to the large amount of O labels.", "This observation motivates us to find a proper way to define q that can approximate the gold label distribution in this work.", "Inspired by the classifier stacking technique used in Nivre and McDonald (2008), we empirically found that a reasonable q distribution can be acquired in a k-fold cross-validation fashion.", "We first start with an initialization step where we assign specific labels to words without labels, forming complete label sequences (we will discuss our initialization strategy in the experiments).",
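To make the objective concrete, a brute-force sketch over a toy label space is shown below; real implementations replace the enumeration with a constrained forward-backward pass over the label lattice:

```python
import itertools
import math

def incomplete_log_likelihood(score, labels, partial, q=None):
    """Brute-force version of log sum_{y in C(y_p)} q(y|x) p_w(y|x).
    `score(y)` returns an unnormalized log-score for a complete sequence y;
    `partial` holds the annotated label at each position, or None if missing."""
    def compatible(y):
        return all(p is None or p == yi for p, yi in zip(partial, y))

    all_y = list(itertools.product(labels, repeat=len(partial)))
    log_z = math.log(sum(math.exp(score(y)) for y in all_y))  # partition function
    total = 0.0
    for y in filter(compatible, all_y):
        p_w = math.exp(score(y) - log_z)          # p_w(y | x)
        weight = q(y) if q is not None else 1.0   # q = None recovers the M-CRF sum (Eq. 2)
        total += weight * p_w
    return math.log(total)

# Example: a 3-token sentence with one missing label and a trivial scorer.
objective = incomplete_log_likelihood(lambda y: 0.0, ["O", "S-PER"],
                                      ["O", None, "O"])
```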
"Next, we perform k-fold cross-validation on the training set.", "Specifically, each time we train a model with (k − 1) folds of the data, and based on the learned model we define our q distribution.", "We describe two different ways of defining the q distribution, namely the hard approach and the soft approach.", "In the hard approach, the resulting q distribution is a collapsed distribution that assigns probability 1 to a single complete label sequence, whereas in the soft approach each possible label sequence gets a certain probability score.", "In the hard approach, after training a model from (k − 1) folds, we apply a constrained Viterbi procedure (Footnote 7: The algorithm will ensure the resulting complete label sequence is compatible with the incomplete label sequence.) to the sentences in the remaining fold.", "In the soft approach, we use a constrained version of the forward-backward procedure and calculate the marginal probabilities associated with each label at each unlabeled position.", "The score of each complete label sequence can then be calculated as a product of all such marginal probabilities.", "We note that in the above procedure the estimation of q depends on the initialization.", "Thus we iterate the above procedure, which allows us to converge to an improved q.",
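A sketch of the hard variant of this procedure, with train_crf, constrained_viterbi, and init as assumed helpers (CRF training, constrained decoding, and the initialization step described above):

```python
def estimate_q_hard(folds, train_crf, constrained_viterbi, init, n_iters=2):
    # completions[i][j]: current complete label sequence for instance j of fold i,
    # starting from the initialization step (e.g., the Simple model's output)
    completions = [[init(x, yp) for (x, yp) in fold] for fold in folds]
    for _ in range(n_iters):
        for i, held_out in enumerate(folds):
            # train on the other k-1 folds using their current complete sequences
            train_data = [(x, y)
                          for j, fold in enumerate(folds) if j != i
                          for (x, _), y in zip(fold, completions[j])]
            model = train_crf(train_data)
            # re-complete the held-out fold; constrained Viterbi keeps every
            # annotated label fixed and fills in only the missing positions
            completions[i] = [constrained_viterbi(model, x, yp)
                              for (x, yp) in held_out]
    return completions  # collapsed q: probability 1 on each completed sequence
```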
"We train our Chinese character embeddings on the Chinese Gigaword corpus (https://catalog.ldc.upenn.edu/LDC2003T09).", "The resulting implementation achieves 90.9 and 85.8 F-scores on the CoNLL-2003 English and CoNLL-2002 Spanish datasets, respectively.", "These benchmark results are comparable with the results reported in state-of-the-art NER systems (Lample et al., 2016; Ma and Hovy, 2016; Reimers and Gurevych, 2017).", "For initialization in our approaches, we run the Simple model on each fold and use the results to initialize our q distribution; similar to the EM procedure, a good initialization is crucial for our approach.", "We found that using random initialization can lead to substantially worse results, and that a better initialization can be used to further improve the results.", "Detailed descriptions of the experiment settings (e.g., the hidden dimension of the LSTM and the optimizer) and baseline systems are provided in the supplementary material.", "Main Results: Table 2 presents the comparisons among all approaches on the four datasets with an annotation ratio of 0.5 and k = 2.", "Our preliminary experiments show that a larger k value has a negligible effect on the results.", "A similar finding was also reported in Nivre and McDonald (2008).", "The Simple model has high precision and low recall, as it treats unknown labels as O.", "Previous models for incomplete annotations achieve a much lower F-score compared to the Simple model and our approaches.", "Due to their uniform assumption on q over the missing labels, these models typically can recall more entities.", "The partial perceptron (Carlson et al., 2009) among these three models yields a relatively lower recall, as features are not defined over the words with missing labels.", "The difference in F-score between these three models and the Simple model is more significant on the two CoNLL datasets than on Taobao and Youku.", "As shown in Table 1, the latter two datasets have more words labeled as parts of entities (i.e., a higher c).", "This means these industrial datasets have fewer O labels, making such baseline models suffer less from their assumptions on the unavailable labels.", "With a properly learned q distribution, our approaches improve the recall score over the Simple model while preserving a high precision.", "Our soft approach consistently achieves a better F-score compared with the hard approach on all datasets, with p < 0.001.", "Compared to the Complete upper bound, our soft approach is still more than 3% lower in F-score on the CoNLL-2002, Taobao and Youku datasets.", "However, we can see that the soft approach achieves much higher performance compared to this variant on the other datasets.", "We attribute this phenomenon to our approaches' ability to retrieve most of the entities in the training set.", "Empirically, we found our soft approach can recover 94% of the entities in the training set of the CoNLL-2003 dataset.", "The overall results show that the underlying scenario is challenging for commonly adopted models in handling incomplete annotations, and that our approaches can achieve better performance compared with them.", "Effect of the annotation ratio: We conduct experiments with ratios from 0.1 to 0.9 for our soft approach against the Simple and LSTM-M-CRF models.", "Figure 4 shows how the precision, recall and F-score on CoNLL-2003 change as we increase the ratio.", "The F-score of the Simple baseline increases progressively as the ratio increases.", "LSTM-M-CRF always maintains a low F-score that is not sensitive to different ratio values, because of its high recall and low precision, as we can see in Figure 4(a, b).",
"The improvement of our approach is attributable to the increase in recall, as the precision is constantly high and stable.", "We can see that our soft approach performs particularly well when the ratio is larger than 0.3, which indicates a modest amount of missing labels in practice.", "In this work, we identified several limitations associated with previous assumptions when performing sequence labeling with incomplete annotations, and focused on the named entity recognition task.", "We presented a novel and easy-to-implement solution that works under a realistic and challenging assumption on the incomplete annotations.", "Through extensive experiments and analysis, we demonstrated the effectiveness of our approach.", "Although we focused on the task of named entity recognition in this work, we believe the proposed approach may find applications in other sequence labeling tasks or other more general structured prediction problems where the issue of incomplete annotations is involved.", "We leave them as future work.", "We would like to thank the anonymous reviewers for their constructive comments on this work.", "This work was done under a collaborative agreement between SUTD and Alibaba on an Alibaba Innovative Research (AIR) Program funded by Alibaba, where Alibaba provided data and helped with experiments.", "We appreciate Alibaba's generosity in the agreement that makes it possible for us to make all data and code in this research publicly available upon acceptance of this paper.", "This work is also partially supported by SUTD project PIE-SGP-AI-2018-01." ]
[ "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "method", "objective", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "other", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "method", "objective", "objective", "objective", "abstain", "other", "other", "other", "other" ]
[ "We study the problem of generating data poisoning attacks against Knowledge Graph Embedding (KGE) models for the task of link prediction in knowledge graphs.", "To poison KGE models, we propose to exploit their inductive abilities, which are captured through relationship patterns like symmetry, inversion and composition in the knowledge graph.", "Specifically, to degrade the model's prediction confidence on target facts, we propose to improve the model's prediction confidence on a set of decoy facts.", "Thus, we craft adversarial additions that can improve the model's prediction confidence on decoy facts through different inference patterns.", "Our experiments demonstrate that the proposed poisoning attacks outperform state-of-art baselines on four KGE models for two publicly available datasets.", "We also find that the symmetry pattern based attacks generalize across all model-dataset combinations, which indicates the sensitivity of KGE models to this pattern.", "Knowledge graph embeddings (KGE) are increasingly deployed in domains with high-stakes decision making like healthcare and finance (Noy et al., 2019), where it is critical to identify the potential security vulnerabilities that might cause failure.", "But the research on adversarial vulnerabilities of KGE models has received little attention.", "We study the adversarial vulnerabilities of KGE models through data poisoning attacks.", "These attacks craft input perturbations at training time that aim to subvert the learned model's predictions at test time.", "Poisoning attacks have been proposed for models that learn from other graph modalities (Xu et al., 2020), but they cannot be applied directly to KGE models.", "This is because they rely on gradients of all possible entries in a dense adjacency matrix and thus do not scale to large knowledge graphs with multiple relations.", "Figure 1: Composition based adversarial attack on fraud detection.", "The knowledge graph consists of two types of entities, Person and BankAccount.", "The target triple to predict is (Karl, affiliated with, Joe the mobster).", "The original KGE model predicts this triple as True.", "But a malicious attacker adds adversarial triples (in purple) that connect Karl with a non-suspicious person Bob through the composition pattern.", "Now, the KGE model predicts the target triple as False.", "The main challenge in designing poisoning attacks for KGE models is the large combinatorial search space of candidate perturbations, which is of the order of millions for benchmark knowledge graphs with thousands of nodes.", "Two recent studies (Zhang et al., 2019a; Pezeshkpour et al., 2019) attempt to address this problem through random sampling of candidate perturbations (Zhang et al., 2019a) or through a vanilla auto-encoder that reconstructs discrete entities and relations from latent space (Pezeshkpour et al., 2019).", "However, random sampling depends on the number of candidates being sampled, and the auto-encoder proposed in Pezeshkpour et al. (2019) is only applicable to multiplicative KGE models.",
"In this work, we propose to exploit the inductive abilities of KGE models to craft poisoned examples against the model.", "The inductive abilities of KGE models are expressed through different connectivity patterns like symmetry, inversion and composition between relations in the knowledge graph.", "We refer to these as inference patterns.", "We focus on the task of link prediction using KGE models and consider the adversarial goal of degrading the predicted rank of target missing facts.", "To degrade the ranks of target facts, we propose to carefully select a set of decoy facts and exploit the inference patterns to improve performance on this decoy set.", "Figure 1 shows an example of the use of the composition pattern to degrade a KGE model's performance.", "We explore a collection of heuristic approaches to select the decoy triples and craft adversarial perturbations that use different inference patterns to improve the model's predictive performance on these decoy triples.", "Our solution addresses the challenge of the large candidate space by breaking down the search into smaller steps: (i) determining adversarial relations; (ii) determining the decoy entities that most likely violate an inference pattern; and (iii) determining the remaining adversarial entities in the inference pattern that are most likely to improve the rank of the decoy triples.", "We evaluate the proposed attacks on four state-of-art KGE models with varied inductive abilities (DistMult, ComplEx, ConvE and TransE).", "We use two publicly available benchmark datasets for link prediction, WN18RR and FB15k-237.", "Comparison against the state-of-art poisoning attacks for KGE models shows that our proposed attacks outperform them in all cases.", "We find that the attacks based on the symmetry pattern perform the best and generalize across all model-dataset combinations.", "Thus, the main contribution of our research is an effective method to generate data poisoning attacks, which is based on inference patterns captured by KGE models.", "Through a novel reformulation of the problem of poisoning KGE models, we overcome the existing challenge in the scalability of poisoning attacks for KGE models.", "Furthermore, the extent of effectiveness of an attack relying on an inference pattern indicates the KGE model's sensitivity to that pattern.", "Thus, our proposed poisoning attacks help in understanding the KGE models.", "For a set of entities E and a set of relations R, a knowledge graph is a collection of triples represented as KG = {(s, r, o) | s, o ∈ E and r ∈ R}, where s, r, o represent the subject, relation and object in a triple.", "A Knowledge Graph Embedding (KGE) model learns low-dimensional vector representations of the entities and relations and uses them to score triples.", "To do so, it uses a scoring function f : E × R × E → ℝ, which depends on the entity and relation embeddings to assign a score f_sro = f(e_s, e_r, e_o) to each triple.", "Table 1 shows the scoring functions f_sro of the state-of-art KGE models studied in this research: DistMult uses ⟨e_s, e_r, e_o⟩; ComplEx uses Re(⟨e_s, e_r, ē_o⟩); ConvE uses ⟨σ(vec(σ([ē_r; ē_s] ∗ ω)) W), e_o⟩; and TransE uses −‖e_s + e_r − e_o‖.", "The embeddings are learned such that the scores for true (existing) triples in the knowledge graph are higher than the scores for false (non-existing) triples.",
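As an illustration of these scoring functions, here is a minimal NumPy sketch. It is an illustrative reimplementation rather than the authors' code; ConvE is omitted because it additionally requires convolution filters and a projection matrix.

```python
import numpy as np

def distmult(e_s, e_r, e_o):
    # Tri-linear dot product <e_s, e_r, e_o>
    return np.sum(e_s * e_r * e_o)

def complex_score(e_s, e_r, e_o):
    # Re(<e_s, e_r, conj(e_o)>) with complex-valued embeddings
    return np.real(np.sum(e_s * e_r * np.conj(e_o)))

def transe(e_s, e_r, e_o):
    # Negative translation distance: higher score = more plausible
    return -np.linalg.norm(e_s + e_r - e_o)

# Toy usage with random embeddings of dimension 4
rng = np.random.default_rng(0)
e_s, e_r, e_o = rng.normal(size=(3, 4))
print(distmult(e_s, e_r, e_o), transe(e_s, e_r, e_o))
c_s, c_r, c_o = rng.normal(size=(3, 4)) + 1j * rng.normal(size=(3, 4))
print(complex_score(c_s, c_r, c_o))
```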
"Multiplicative vs Additive Interactions: The scoring functions of KGE models exhibit multiplicative or additive interactions (Chandrahas et al., 2018).", "The multiplicative models score triples through multiplicative interactions of the subject, relation and object embeddings.", "The scoring function for these models can be expressed as f_sro = e_r^⊤ F(e_s, e_o), where the function F measures the compatibility between the subject and object embeddings and varies across the different models within this family.", "DistMult, ComplEx and ConvE have such interactions.", "On the other hand, additive models score triples through additive interactions of the subject, relation and object embeddings.", "The scoring function for such models can be expressed as f_sro = −‖M_r^1(e_s) + e_r − M_r^2(e_o)‖, where e_s, e_o ∈ ℝ^{k_E}, e_r ∈ ℝ^{k_R}, and M_r ∈ ℝ^{k_E × k_R} is the projection matrix from the entity space ℝ^{k_E} to the relation space ℝ^{k_R}.", "TransE has additive interactions.", "Inductive Capacity of KGE models: The general intuition behind the design of the scoring functions of KGE models is to capture logical properties between relations from the observed facts in the knowledge graph.", "These logical properties, or inference patterns, can then be used to make downstream inferences about entities and relations.", "For example, the relation is owned by is the inverse of the relation owns, and when the fact (Account 42, is owned by, Karl) is true, then the fact (Karl, owns, Account 42) is also true and vice versa.", "A model that can capture the inversion pattern can thus predict missing facts about owns based on observed facts about is owned by.", "The most studied inference patterns in the current literature are symmetry, inversion and composition, since they occur very frequently in real-world knowledge graphs.", "In this work, we use these patterns to investigate the adversarial vulnerability of KGE models.", "Link Prediction: Since most existing knowledge graphs are incomplete, a standard use case of KGE models is to predict missing triples in the KG.", "This task is evaluated by an entity ranking procedure.", "Given a test triple (s, r, o), the subject entity is replaced by each entity from E in turn.", "These replacements are referred to as synthetic negatives.", "The KGE model's scoring function is used to predict scores for these negative triples.", "The scores are then sorted in descending order and the rank of the correct entity is determined.", "These steps are repeated for the object entity of the triple.", "The state-of-art evaluation metrics for this task are (i) MR, which is the mean of the predicted ranks, (ii) MRR, which is the mean of the reciprocals of the predicted ranks, and (iii) Hits@n, which counts the proportion of correct entities ranked in the top n.", "In the filtered setting (Bordes et al., 2013), negative triples that already exist in the training, validation or test set are filtered out.", "That is, their scores are ignored while computing the ranks.", "Depending on the domain of use, either the subject or object rank, or both ranks, of the test triple are used to determine the model's confidence in predicting a missing link (KGE models do not provide model uncertainty estimates).",
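To make the ranking protocol concrete, the sketch below implements filtered rank computation and the MRR/Hits@n summaries. It is illustrative only: `score_fn` is assumed to be a scoring function such as those sketched earlier, and `known_triples` a set of all train/valid/test triples.

```python
def filtered_rank(test_triple, entities, known_triples, score_fn, side="object"):
    """Rank the true entity of a test triple against synthetic negatives,
    ignoring corruptions that are themselves known true triples."""
    s, r, o = test_triple
    target = score_fn(s, r, o)
    rank = 1
    for e in entities:
        neg = (s, r, e) if side == "object" else (e, r, o)
        if neg == test_triple or neg in known_triples:
            continue  # filtered setting: skip known true triples
        if score_fn(*neg) > target:
            rank += 1
    return rank

def mrr_and_hits(ranks, n=10):
    mrr = sum(1.0 / r for r in ranks) / len(ranks)
    hits = sum(r <= n for r in ranks) / len(ranks)
    return mrr, hits
```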
"Poisoning Attacks on KGE models: We study poisoning attacks for the task of link prediction using KGE models.", "We focus on targeted attacks, where the attacker targets a specific set of missing triples instead of the overall model performance.", "We use the notation (s, r, o) for the target triple; in this case, s and o are the target entities and r is the target relation.", "The goal of an adversarial attacker is to degrade the ranks of missing triples which are predicted as highly plausible by the model.", "The rank of a highly plausible target triple can be degraded by improving the rank of less plausible decoy triples.", "For a target triple (s, r, o), the decoy triple for degrading the rank on the object side would be (s, r, o′), and the decoy triple for degrading the rank on the subject side would be (s′, r, o).", "Thus, the aim of the adversarial attacker is to select decoy triples from the set of valid synthetic negatives and craft adversarial edits to improve their ranks.", "The attacker does not add the decoy triple itself as an adversarial edit, but rather chooses the adversarial edits that would improve the rank of a missing decoy triple through an inference pattern.", "Threat Model: To ensure reliable vulnerability analysis, we use a white-box attack setting where the attacker has full knowledge of the target KGE model (Joseph et al., 2019).", "They cannot manipulate the model architecture or learned embeddings directly, but only influence the model through the addition of triples to the training data.", "We focus on adversarial additions, which are more challenging to design than adversarial deletions for sparse knowledge graphs: for every target triple, the possible number of adversarial additions in the neighbourhood of each entity is |E| × |R|, which for the benchmark dataset FB15k-237 is of the order of millions, whereas the maximum number of candidates for adversarial deletion is of the order of thousands.", "As in prior studies (Pezeshkpour et al., 2019; Zhang et al., 2019a), the attacker is restricted to making edits only in the neighbourhood of the target entities.", "They are also restricted to 1 decoy triple for each entity of the target triple.", "Furthermore, because of the use of the filtered setting for KGE evaluation, the attacker cannot add the decoy triple itself to the training data (which intuitively would be a way to improve the decoy triple's rank).", "Since the inference patterns on the knowledge graph specify a logical property between relations, they can be expressed as Horn clauses, which are a subset of FOL formulae.", "For example, a property represented in the form ∀x, y : (x, owns, y) ⟹ (y, is owned by, x) means that two entities linked by the relation owns are also likely to be linked by the inverse relation is owned by.", "In this expression, the right hand side of the implication is referred to as the head and the left hand side as the body of the clause.", "Using such expressions, we define the three inference patterns used in our research.", "Definition 3.1: The symmetry pattern P_s is expressed as ∀x, y : (x, r, y) ⟹ (y, r, x); here, the relation r is symmetric.", "Definition 3.2: The inversion pattern P_i is expressed as ∀x, y : (x, r_i, y) ⟹ (y, r, x).", "Here, the relations r_i and r are inverses of each other.", "Definition 3.3: The composition pattern P_c is expressed as ∀x, y, z : (x, r_1, z) ∧ (z, r_2, y) ⟹ (x, r, y).", "Here, the relation r is a composition of r_1 and r_2, and ∧ is the conjunction operator from relational logic.", "The mapping G : V → E of the variables V in the above expressions to the entities E is called a grounding.", "For example, we can map the logic expression ∀x, y : (x, owns, y) ⟹ (y, is owned by, x) to the grounding (Karl, owns, Account 42) ⟹ (Account 42, is owned by, Karl).", "Thus, a KGE model that captures the inversion pattern will assign a high prediction confidence to the head atom when the body of the clause exists in the graph.", "In the above expressions, the decoy triple becomes the head atom and the adversarial edits are the triples in the body of the expression.",
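The head/body structure of these patterns can be made concrete with a small helper. This sketch is not from the paper, and the entity and relation names in the usage example are toy assumptions; it simply returns, for a desired decoy head atom, the body triples whose addition should imply it under each pattern.

```python
def body_for_head(head, pattern, aux=None):
    """Given a desired head atom (the decoy triple), return the body
    triples whose addition should imply it under each inference pattern.
    For inversion, aux is the inverse relation r_i; for composition,
    aux is ((r1, r2), z) with z the bridge entity."""
    s, r, o = head
    if pattern == "symmetry":
        return [(o, r, s)]
    if pattern == "inversion":
        return [(o, aux, s)]
    if pattern == "composition":
        (r1, r2), z = aux
        return [(s, r1, z), (z, r2, o)]
    raise ValueError(pattern)

# Toy usage: adversarial additions implying the decoy triple (s, r, o')
decoy = ("Karl", "affiliated_with", "Bob")
print(body_for_head(decoy, "composition", aux=(("colleague_of", "friend_of"), "Eve")))
# -> [('Karl', 'colleague_of', 'Eve'), ('Eve', 'friend_of', 'Bob')]
```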
"Since the decoy triple is an object or subject side negative of the target triple, the attacker already knows the relation in the head atom.", "They now want to determine (i) the adversarial relations in the body of the expression; (ii) the decoy entities which will most likely violate the inference pattern for the chosen relations; and (iii) the remaining entities in the body of the expression which will improve the prediction on the chosen decoy triple.", "Notice that the attacker needs all three steps for the composition pattern only; for the inversion pattern, only the first two steps are needed, and for the symmetry pattern, only the second step is needed.", "Below we describe each step in detail; a computational complexity analysis of all the steps is available in Appendix A.", "3.1 Step 1: Determine Adversarial Relations.", "Expressing the relation patterns as logic expressions is based on relational logic and assumes that the relations are constants.", "Thus, we use an algebraic approach to determine the relations in the head and body of a clause.", "Given the target relation r, we determine the adversarial relations using an algebraic model of inference (Yang et al., 2015).", "Inversion: If an atom (x, r, y) holds true, then for the learned embeddings in multiplicative models we can assume e_x ∘ e_r ≈ e_y, where ∘ denotes the Hadamard (element-wise) product.", "If the atom (y, r_i, x) holds true as well, then we can also assume e_y ∘ e_{r_i} ≈ e_x.", "Thus, e_r ∘ e_{r_i} ≈ 1 for inverse relations r and r_i when the embeddings are learned from multiplicative models.", "We obtain a similar expression, e_r + e_{r_i} ≈ 0, when the embeddings are learned from additive models.", "Thus, to determine adversarial relations for the inversion pattern, we use the pre-trained embeddings to select the r_i that minimizes ‖e_{r_i} ∘ e_r − 1‖ for multiplicative models, and the r_i that minimizes ‖e_{r_i} + e_r‖ for additive models.", "Composition: If two atoms (x, r_1, y) and (y, r_2, z) hold true, then for multiplicative models e_x ∘ e_{r_1} ≈ e_y and e_y ∘ e_{r_2} ≈ e_z.", "Therefore, e_x ∘ (e_{r_1} ∘ e_{r_2}) ≈ e_z.", "Hence, relation r is a composition of r_1 and r_2 if e_{r_1} ∘ e_{r_2} ≈ e_r.", "Similarly, for embeddings from additive models, we can model composition as e_{r_1} + e_{r_2} ≈ e_r.", "Thus, to determine adversarial relations for the composition pattern, we use the pre-trained embeddings to obtain all possible compositions of pairs (r_1, r_2).", "For multiplicative models we use e_{r_1} ∘ e_{r_2}, and for additive models we use e_{r_1} + e_{r_2}.", "From these, we choose the relation pair for which the Euclidean distance between the composed relation embedding and the target relation embedding e_r is minimum.",
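A small NumPy sketch of this relation selection step follows. It is illustrative rather than the authors' implementation, and assumes a pre-trained relation embedding matrix `rel_emb` of shape (num_relations, dim).

```python
import numpy as np

def inverse_relation(r_idx, rel_emb, multiplicative=True):
    """Step 1 for inversion: pick the r_i closest to the algebraic
    inverse of the target relation r (Hadamard inverse for
    multiplicative models, additive inverse for additive models)."""
    r = rel_emb[r_idx]
    if multiplicative:
        residual = np.linalg.norm(rel_emb * r - 1.0, axis=1)
    else:
        residual = np.linalg.norm(rel_emb + r, axis=1)
    residual[r_idx] = np.inf  # exclude the target relation itself
    return int(np.argmin(residual))

def composition_pair(r_idx, rel_emb, multiplicative=True):
    """Step 1 for composition: pick (r1, r2) whose composed embedding
    is closest (Euclidean) to the target relation embedding."""
    r = rel_emb[r_idx]
    best, best_pair = np.inf, None
    for i, e1 in enumerate(rel_emb):
        composed = e1 * rel_emb if multiplicative else e1 + rel_emb
        dists = np.linalg.norm(composed - r, axis=1)
        j = int(np.argmin(dists))
        if dists[j] < best:
            best, best_pair = dists[j], (i, j)
    return best_pair
```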
"3.2 Step 2: Determine Decoy Entities.", "We consider three different heuristic approaches to select the decoy entity: the soft truth score, the ranks predicted by the KGE model, and the cosine distance.", "Table 2 summarizes the heuristic approaches used for the different steps of the adversarial attack with the symmetry (Sym), inversion (Inv) and composition (Com) patterns: adversarial relations are determined algebraically (Alg) for Inv and Com (n/a for Sym); decoy entities are determined by soft truth score (Sft), KGE ranks (Rnk) or cosine distance (Cos) for all three patterns; and the remaining adversarial entities are determined by the soft truth score (Sft) for Com only (n/a for Sym and Inv).", "Soft Logical Modelling of Inference Patterns: Once the adversarial relations are determined, we can express the groundings of the symmetry, inversion and composition patterns for the decoy triples.", "We discuss only the object side decoy triple for brevity: G_s : (o′, r, s) ⟹ (s, r, o′); G_i : (o′, r_i, s) ⟹ (s, r, o′); G_c : (s, r_1, o″) ∧ (o″, r_2, o′) ⟹ (s, r, o′).", "If the model captures P_s, P_i or P_c to assign a high rank to the target triple, then the head atom (s, r, o′) of a grounding that violates this pattern is a suitable decoy triple.", "Adding the body of this grounding to the knowledge graph would improve the model's performance on the decoy triple through P_s, P_i or P_c.", "To determine the decoy triple this way, we need a measure of the degree to which a grounding satisfies an inference pattern.", "We call this measure the soft truth score Φ : G → [0, 1]; it provides the truth value of a logic expression, indicating the degree to which the expression is true.", "We model the soft truth score of grounded patterns using t-norm based fuzzy logics (Hajek, 1998).", "The score f_sro of an individual atom (i.e., a triple) is computed using the KGE model's scoring function.", "We use the sigmoid function σ(x) = 1/(1 + exp(−x)) to map this score to a continuous truth value in the range (0, 1).", "Hence, the soft truth score for an individual atom is Φ(s, r, o) = σ(f_sro).", "The soft truth score for the grounding of a pattern can then be expressed through logical composition (e.g., ∧ and ∨) of the scores of the individual atoms in the grounding.", "We follow (Guo et al., 2016, 2018) and define the following compositions for logical conjunction (∧), disjunction (∨), and negation (¬): Φ(a ∧ b) = Φ(a) · Φ(b); Φ(a ∨ b) = Φ(a) + Φ(b) − Φ(a) · Φ(b); Φ(¬a) = 1 − Φ(a).", "Here, a and b are two logical expressions, which can either be single triples or be constructed by combining triples with logical connectives.", "If a is a single triple (s, r, o), we have Φ(a) = Φ(s, r, o).", "Given these compositions, the truth value of any logical expression can be calculated recursively (Guo et al., 2016, 2018).", "Thus, we obtain the following soft truth scores for the groundings of the symmetry, inversion and composition patterns G_s, G_i and G_c: Φ(G_s) = Φ(o′, r, s) · Φ(s, r, o′) − Φ(o′, r, s) + 1; Φ(G_i) = Φ(o′, r_i, s) · Φ(s, r, o′) − Φ(o′, r_i, s) + 1; Φ(G_c) = Φ(s, r_1, o″) · Φ(o″, r_2, o′) · Φ(s, r, o′) − Φ(s, r_1, o″) · Φ(o″, r_2, o′) + 1.", "To select the decoy triple (s, r, o′) for symmetry and inversion, we score all possible groundings using Φ(G_s) and Φ(G_i).", "The head atom of the grounding with the minimum score is chosen as the decoy triple.", "For the composition pattern, the soft truth score Φ(G_c) for candidate decoy triples (s, r, o′) contains two entities (o′, o″) to be identified.", "Thus, we use a greedy approach to select the decoy entity o′.", "We use the pre-trained embeddings to group the entities o″ into k clusters using K-means clustering and determine a decoy entity with the minimum soft truth score for each cluster.", "We then select the decoy entity o′ with the minimum score across the k clusters.", "KGE Ranks: We use the ranking protocol from KGE evaluation to rank the target triple against the valid subject and object side negatives (s′, r, o) and (s, r, o′).", "For each side, we select the negative triple that is ranked just below the target triple (that is, negative rank = target rank + 1).", "These are suitable as decoys because their predicted scores are likely not very different from the target triple's score.", "Thus, the model's prediction confidence for these triples might be effectively manipulated through adversarial additions.", "This is in contrast to using very low ranked triples as decoys, where the model has likely learnt a low score with high confidence.",
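A minimal sketch of the soft truth scores defined above (illustrative; `score_fn` is assumed to be the KGE scoring function):

```python
import math

def phi(triple, score_fn):
    # Soft truth value of a single atom: sigmoid of the KGE score
    return 1.0 / (1.0 + math.exp(-score_fn(*triple)))

def phi_implication(body_triples, head_triple, score_fn):
    """Soft truth of (b1 ^ ... ^ bn) => h under the product t-norm:
    phi(body) * phi(head) - phi(body) + 1, where phi(body) is the
    product of the atom scores. Covers G_s and G_i (one body atom)
    as well as G_c (two body atoms)."""
    body = 1.0
    for t in body_triples:
        body *= phi(t, score_fn)
    head = phi(head_triple, score_fn)
    return body * head - body + 1.0

# Example: symmetry grounding G_s for a candidate decoy (s, r, o')
# phi_implication([(o_prime, r, s)], (s, r, o_prime), score_fn)
```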
"Cosine Distance: A high rank for the target triple (s, r, o) against the queries (s, r, ?) and (?, r, o) indicates that e_s, e_o are similar to the embeddings of other subjects and objects related by r in the training data.", "Thus, a suitable heuristic for selecting the decoy entities s′ and o′ is to choose ones whose embeddings are dissimilar to e_s, e_o.", "Since these entities are not likely to occur in the neighbourhood of o and s, they will act adversarially to reduce the rank of the target triple.", "Thus, we select the decoy entities s′ and o′ that have the maximum cosine distance from the target entities s and o, respectively.", "3.3 Step 3: Determine Adversarial Entities.", "This step is only needed for the composition pattern, because the body of this pattern has two adversarial triples.", "Given the decoy triple in the head of the composition expression, we select the body of the expression that would maximize the rank of the decoy triple.", "We use the soft-logical model defined in Step 2 for this selection.", "The soft truth score for the composition grounding of the decoy triple is given by Φ(G_t) = Φ(s, r_1, o″) · Φ(o″, r_2, o′) · Φ(s, r, o′) − Φ(s, r_1, o″) · Φ(o″, r_2, o′) + 1.", "We select the entity o″ with the maximum score, because this entity satisfies the composition pattern for the decoy triple and is thus likely to improve the decoy triple's rank when added to the knowledge graph.", "The aim of our evaluation is to assess the effectiveness of the proposed attacks in degrading the predictive performance of KGE models on missing triples that are predicted true.", "We use the state-of-art evaluation protocol for data poisoning attacks (Xu et al., 2020).", "We train a clean model on the original data; we then generate the adversarial edits and add them to the dataset; and finally we retrain a new model on this poisoned data.", "All hyperparameters for training on the original and poisoned data remain the same.", "We evaluate four models with varying inductive abilities (DistMult, ComplEx, ConvE and TransE) on two publicly available benchmark datasets for link prediction, WN18RR and FB15k-237 (from https://github.com/TimDettmers/ConvE).", "We filter out triples from the validation and test sets that contain unseen entities.", "To assess the attack effectiveness in degrading performance on triples predicted as true, we need a set of triples that are predicted as true by the model.", "Thus, we select as target triples a subset of the original test set where each triple is ranked ≤ 10 by the original model.", "Table 3 provides an overview of the dataset statistics and the number of target triples selected.", "Baselines: We compare the proposed methods against the following baselines.", "Random_n: random edits in the neighbourhood of each entity of the target triple.", "Random_g1: global random edits in the knowledge graph, which are not restricted to the neighbourhood of entities in the target triple and have 1 edit per decoy triple (like symmetry and inversion).", "Random_g2: global random edits in the knowledge graph, which are not restricted to the neighbourhood of entities in the target triple and have 2 edits per decoy triple (like composition).",
"Zhang et al.: the poisoning attack from (Zhang et al., 2019a) for edits in the neighbourhood of the subject of the target triple.", "We extend it for both subject and object to match our evaluation protocol.", "Further implementation details are available in Appendix B.2.", "CRIAGE: the poisoning attack from (Pezeshkpour et al., 2019).", "We use the publicly available implementation (https://github.com/pouyapez/criage) and the default attack settings.", "The method was proposed for edits in the neighbourhood of the object of the target triple.", "We extend it for both entities to match our evaluation protocol and to ensure a fair evaluation.", "Implementation: For every attack, we filter out adversarial edit candidates that already exist in the graph.", "We also remove duplicate adversarial edits for different targets before adding them to the original dataset.", "For Step 2 of the composition attack with ground truth, we use the elbow method to determine the number of clusters for each model-data combination.", "Further details on KGE model training, computing resources and the number of clusters are available in Appendix B.", "The source code to reproduce our experiments is available on GitHub (https://github.com/PeruBhardwaj/InferenceAttack).", "Tables 4 and 5 show the reduction in MRR and Hits@1 due to different attacks on the WN18RR and FB15k-237 datasets.",

Table 4: Reduction in MRR and Hits@1 due to different attacks on the target split of WN18RR (each cell is MRR / Hits@1; parenthesized percentages are the relative MRR changes reported for selected rows):

| Attack | DistMult | ComplEx | ConvE | TransE |
|---|---|---|---|---|
| Original | 0.90 / 0.85 | 0.89 / 0.84 | 0.92 / 0.89 | 0.36 / 0.03 |
| Random_n | 0.86 (-4%) / 0.83 | 0.84 (-6%) / 0.80 | 0.90 (-2%) / 0.88 | 0.28 (-20%) / 0.01 |
| Random_g1 | 0.88 / 0.83 | 0.88 / 0.83 | 0.92 / 0.89 | 0.35 / 0.02 |
| Random_g2 | 0.88 / 0.83 | 0.88 / 0.83 | 0.91 / 0.89 | 0.34 / 0.02 |
| Zhang et al. | 0.82 (-8%) / 0.81 | 0.76 (-14%) / 0.74 | 0.90 (-2%) / 0.87 | 0.24 (-33%) / 0.01 |
| CRIAGE | 0.87 / 0.84 | - | 0.90 / 0.88 | - |
| Sym_truth | 0.66 / 0.40 | 0.56 (-33%) / 0.24 | 0.61 (-34%) / 0.28 | 0.57 / 0.36 |
| Sym_rank | 0.61 / 0.32 | 0.56 (-33%) / 0.24 | 0.62 / 0.31 | 0.25 / 0.02 |
| Sym_cos | 0.57 (-36%) / 0.32 | 0.62 / 0.43 | 0.67 / 0.44 | 0.24 (-33%) / 0.01 |
| Inv_truth | 0.87 / 0.83 | 0.86 / 0.80 | 0.90 / 0.87 | 0.34 / 0.03 |
| Inv_rank | 0.86 / 0.83 | 0.85 / 0.80 | 0.89 (-4%) / 0.85 | 0.25 / 0.02 |
| Inv_cos | 0.83 (-8%) / 0.82 | 0.80 (-10%) / 0.79 | 0.90 / 0.88 | 0.25 (-30%) / 0.01 |
| Com_truth | 0.86 / 0.83 | 0.86 / 0.81 | 0.89 / 0.86 | 0.53 (+49%) / 0.27 |
| Com_rank | 0.85 (-5%) / 0.80 | 0.83 / 0.77 | 0.89 / 0.84 | 0.57 / 0.32 |
| Com_cos | 0.86 / 0.77 | 0.82 (-8%) / 0.70 | 0.88 (-4%) / 0.83 | 0.53 (+49%) / 0.27 |

"We observe that the proposed adversarial attacks outperform the random baselines and the state-of-art poisoning attacks for all KGE models on both datasets.", "We see that the attacks based on the symmetry inference pattern perform the best across all model-dataset combinations.", "This indicates the sensitivity of KGE models to the symmetry pattern.", "For DistMult, ComplEx and ConvE, this sensitivity can be explained by the symmetric nature of the scoring functions of these models.", "That is, the models assign either equal or similar scores to triples that are symmetric opposites of each other.", "In the case of TransE, the model's sensitivity to the symmetry pattern is explained by the translation operation in the scoring function.", "The score of the target (s, r, o) is a translation from the subject to the object embedding through the relation embedding.", "The symmetry attack adds the adversarial triple (o′, r, s), where the relation is the same as the target relation, but the target subject is the object of the adversarial triple.",
"Now, the model learns the embedding of s as a translation from o′ through the relation r.", "This adversarially modifies the embedding of s and, in turn, the score of (s, r, o).", "We see that the inversion and composition attacks also perform better than the baselines in most cases, but not as well as symmetry.", "This is particularly true for FB15k-237, where the performance for these patterns is similar to the random baselines.", "For the composition pattern, it is likely that the model has a stronger bias for shorter and simpler patterns like symmetry and inversion than for composition.", "This makes it harder to deceive the model through composition than through symmetry or inversion.", "Furthermore, FB15k-237 has high connectivity (Dettmers et al., 2018), which means that a KGE model relies on a high number of triples to learn the target triples' ranks.", "Thus, poisoning KGE models for FB15k-237 will likely require more adversarial triples per target triple than considered in this research.", "The inversion pattern is likely ineffective on the benchmark datasets because these datasets do not have any inverse relations (Dettmers et al., 2018; Toutanova and Chen, 2015).", "This implies that our attacks cannot identify the inverse of the target triple's relation in Step 1.", "We investigate this hypothesis further in Appendix D, where we evaluate the attacks on the WN18 dataset, in which the inverse relations have not been filtered out.", "This means that the KGE model can learn the inversion pattern and the inversion attacks can identify the inverse of the target relation.", "In this setting, we find that the inversion attacks outperform other attacks against ComplEx on WN18, indicating the sensitivity of ComplEx to the inversion pattern when the dataset contains inverse relations.", "An exception in the results is the composition pattern on TransE, where the model performance improves instead of degrading on the target triples.", "This is likely due to the model's sensitivity to the composition pattern, such that adding this pattern improves the performance on all triples, including the target triples.", "To verify this, we checked the change in the ranks of the decoy triples and found that the composition attacks on TransE improve these ranks too.", "Results for this experiment are available in Appendix C.",
"This behaviour of composition also indicates that the selection of adversarial entities in Step 3 of the composition attacks can be improved.", "It also explains why the increase is more significant for WN18RR than for FB15k-237: WN18RR does not have any composition relations but FB15k-237 does, so adding these to WN18RR shows a significant improvement in performance.", "We aim to investigate these and more hypotheses about the proposed attacks in future work.", "KGE models can be categorized into tensor factorization models like DistMult (Yang et al., 2015) and ComplEx (Trouillon et al., 2016), neural architectures like ConvE (Dettmers et al., 2018) and translational models like TransE (Bordes et al., 2013).", "We refer the reader to (Cai et al., 2018) for a comprehensive survey.", "Due to the black-box nature of KGE models, there is an emerging literature on understanding these models.", "Pezeshkpour et al. (2019) and Zhang et al. (2019a) are most closely related to our work, as they propose other data poisoning attacks for KGE models.", "Minervini et al. (2017) and Cai and Wang (2018) use adversarial regularization in the latent space and adversarial training to improve predictive performance on link prediction.", "But these adversarial samples are not in the input domain and aim to improve instead of degrade model performance.", "Poisoning attacks have also been proposed for models for undirected and single-relational graph data, like Graph Neural Networks (Zugner et al., 2018; Dai et al., 2018) and Network Embedding models (Bojchevski and Gunnemann, 2019).", "A survey of poisoning attacks for graph data is available in (Xu et al., 2020).", "But the attacks for these models cannot be applied directly to KGE models because they require gradients of a dense adjacency matrix.", "In the literature besides adversarial attacks, Lawrence et al. (2020), Nandwani et al. (2020) and Zhang et al. (2019b) generate post-hoc explanations to understand KGE model predictions.", "Trouillon et al. (2019) study the inductive abilities of KGE models as binary relation properties for controlled inference tasks with synthetic datasets.", "Allen et al. (2021) interpret the structure of knowledge graph embeddings by comparison with word embeddings.", "On the theoretical side, Wang et al. (2018) study the expressiveness of various bilinear KGE models, and Gutierrez-Basulto and Schockaert (2018) study the ability of KGE models to learn hard rules expressed as ontological knowledge.", "The soft-logical model of inference patterns in this work is inspired by the literature on injecting logical rules into KGE models.", "Guo et al. (2016) and Guo et al. (2018) enforce soft logical rules by modelling the triples and rules in a unified framework and jointly learning embeddings from them.", "Additionally, our algebraic model of inference patterns, which is used to select adversarial relations, is related to approaches for graph traversal in latent vector space discussed in Yang et al. (2015); Guu et al. (2015); Arakelyan et al. (2021).",
"We propose data poisoning attacks against KGE models based on inference patterns like symmetry, inversion and composition.", "Our experiments show that the proposed attacks outperform the state-of-art attacks.", "Since the attacks rely on relation inference patterns, they can also be used to understand the KGE models.", "This is because, if a KGE model is sensitive to a relation inference pattern, then that pattern should be an effective adversarial attack.", "We observe that the attacks based on the symmetry pattern generalize across all KGE models, which indicates their sensitivity to this pattern.", "In the future, we aim to investigate hypotheses about the effect of input graph connectivity and the existence of specific inference patterns in datasets.", "We note that such an investigation of inference pattern attacks will likely be influenced by the choice of datasets.", "In this paper, we have used benchmark datasets for link prediction.", "While there are intuitive assumptions about the inference patterns on these datasets, there is no study that formally measures and characterizes the existence of these patterns.", "This makes it challenging to verify the claims made about the inductive abilities of KGE models, not only by our proposed attacks but also by new KGE models proposed in the literature.", "Thus, a promising step in understanding knowledge graph embeddings is to propose datasets and evaluation tasks that test varying degrees of specific inductive abilities.", "These will help evaluate new models and serve as a testbed for poisoning attacks.", "Furthermore, specifications of model performance on datasets with different inference patterns will improve the usability of KGE models in high-stakes domains like healthcare and finance.", "In addition to understanding model behaviour, the sensitivity of state-of-art KGE models to simple inference patterns indicates that these models can introduce security vulnerabilities into pipelines that use knowledge graph embeddings.", "Thus, another promising direction for future work is mitigating the security vulnerabilities of KGE models.", "Some preliminary ideas for this research could look into adversarial training, training an ensemble of different KGE scoring functions, or training an ensemble from subsets of the training dataset.", "Since our experiments show that state-of-art KGE models are sensitive to the symmetry pattern, we call for future research to investigate neural architectures that generalize beyond symmetry, even though their predictive performance for link prediction on benchmark datasets might not be the best.", "This research was conducted with the financial support of Accenture Labs and Science Foundation Ireland (SFI) at the ADAPT SFI Research Centre at Trinity College Dublin.", "The ADAPT SFI Centre for Digital Content Technology is funded by Science Foundation Ireland through the SFI Research Centres Programme and is co-funded under the European Regional Development Fund (ERDF) through Grant No. 13/RC/2106 P2.",
"We study the problem of generating data poisoning attacks on KGE models.", "Data poisoning attacks identify the vulnerabilities in learning algorithms that could be exploited by an adversary to manipulate the model's behaviour (Joseph et al., 2019; Biggio and Roli, 2018).", "Such manipulation can lead to unintended model behaviour and failure.", "Identifying these vulnerabilities for KGE models is critical because of their increasing use in domains that need high-stakes decision making, like healthcare (Bendtsen and Petrovski, 2019) and finance (Hogan et al., 2020; Noy et al., 2019).", "In this way, our research is directed towards minimizing the negative consequences of deploying state-of-art KGE models in our society.", "This honours the ACM Code of Ethics of contributing to societal well-being and acknowledging that all people are stakeholders in computing.", "At the same time, we aim to safeguard the KGE models against potential harm from adversaries and thus honour the ACM code of avoiding harm due to computing systems.", "Arguably, because we study vulnerabilities by attacking the KGE models, the proposed attacks could be used by an actual adversary to manipulate the behaviour of deployed systems.", "This paradox of an arms race is universal across security research (Biggio and Roli, 2018).", "For our research, we have followed the principle of proactive security, as recommended by Joseph et al. (2019) and Biggio and Roli (2018).", "As opposed to reactive security measures, where learning system designers develop countermeasures after the system is attacked, a proactive approach anticipates such attacks, simulates them and designs countermeasures before the systems are deployed.", "Thus, by revealing the vulnerabilities of KGE models, our research provides an opportunity to fix them.", "Besides the use case of security, our research can be used in understanding the inductive abilities of KGE models, which are black-box and hard to interpret.", "We design attacks that rely on the inductive assumptions of a model in order to deceive that model.", "Thus, theoretically, the effectiveness of attacks based on one inference pattern over another indicates the model's reliance on one inference pattern over another.", "However, as we discussed in our paper, realistically it is challenging to make such claims about the inductive abilities of KGE models, because the inference patterns in benchmark datasets are not well defined.", "Thus, we would encourage further work to evaluate our proposed attacks by designing benchmark tasks and datasets that measure specific inductive abilities of models.", "This will not only be useful for evaluating the attacks proposed here, but also for understanding the inductive abilities of existing KGE models.", "This, in turn, can guide the community to design better models.", "In this direction, we encourage researchers proposing new KGE models to evaluate not only the predictive performance on benchmark datasets, but also the claims made about the inductive abilities of these models and their robustness to violations of these implicit assumptions." ]
[ "method", "objective", "objective", "result", "objective", "result", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "method", "objective", "result", "objective", "objective", "abstain", "objective", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "other", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "other", "abstain", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "method", "objective", "objective", "abstain", "abstain", "result", "objective", "abstain", "method", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain" ]
[ "We present deep communicating agents in an encoder-decoder architecture to address the challenges of representing a long document for abstractive summarization.", "With deep communicating agents, the task of encoding a long text is divided across multiple collaborating agents, each in charge of a subsection of the input text.", "These encoders are connected to a single decoder, trained end-to-end using reinforcement learning to generate a focused and coherent summary.", "Empirical results demonstrate that multiple communicating encoders lead to a higher quality summary compared to several strong baselines, including those based on a single encoder or multiple non-communicating encoders.", "We focus on the task of abstractive summarization of a long document.", "In contrast to extractive summarization, where a summary is composed of a subset of sentences or words lifted from the input text as is, abstractive summarization requires the generative ability to rephrase and restructure sentences to compose a coherent and concise summary.", "As recurrent neural networks (RNNs) are capable of generating fluent language, variants of encoder-decoder RNNs (Sutskever et al., 2014; Bahdanau et al., 2015) have shown promising results on the abstractive summarization task (Rush et al., 2015; Nallapati et al., 2017).", "The fundamental challenge, however, is that the strong performance of neural models at encoding short text does not generalize well to long text.", "The motivation behind our approach is to be able to dynamically attend to different parts of the input to capture salient facts.", "Figure 1: Each agent a and b encodes one paragraph in multiple layers; by passing new messages through multiple layers, the agents are able to coordinate and focus on the important aspects of the input text.", "While recent work in summarization addresses these issues using improved attention models (Chopra et al., 2016), pointer networks with coverage mechanisms (See et al., 2017), and coherence-focused training objectives (Paulus et al., 2018; Jaques et al., 2017), an effective mechanism for representing a long document remains a challenge.", "Simultaneous work has investigated the use of deep communicating agents (Sukhbaatar et al., 2016) for collaborative tasks such as logic puzzles (Foerster et al., 2016), visual dialog (Das et al., 2017), and reference games (Lazaridou et al., 2016).", "Our work builds on these approaches to propose the first study on using communicating agents to encode long text for summarization.", "The key idea of our model is to divide the hard task of encoding a long text across multiple collaborating encoder agents, each in charge of a different subsection of the text (Figure 1).", "Each of these agents encodes their assigned text independently, and broadcasts their encoding to others, allowing agents to share global context information with one another about different sections of the document.", "All agents then adapt the encoding of their assigned text in light of the global context and repeat the process across multiple layers, generating new messages at each layer.", "Figure 2: Multi-agent encoder-decoder overview.",
"Once each agent completes encoding, they deliver their information to the decoder with a novel contextual agent attention (Figure 2).", "Contextual agent attention enables the decoder to integrate information from multiple agents smoothly at each decoding step.", "The network is trained end-to-end using self-critical reinforcement learning (Rennie et al., 2016) to generate focused and coherent summaries.", "Empirical results on the CNN/DailyMail and New York Times datasets demonstrate that multiple communicating encoders lead to higher quality summaries compared to strong baselines, including those based on a single encoder or multiple non-communicating encoders.", "Human evaluations indicate that our model is able to produce more focused summaries.", "The agents gather salient information from multiple areas of the document, and communicate their information with one another, thus reducing common mistakes such as missing key facts, repeating the same content, or including unnecessary details.", "Further analysis reveals that our model attains better performance when the decoder interacts with multiple agents in a more balanced way, confirming the benefit of representing a long document with multiple encoding agents.", "Notation: Each document d is a sequence of paragraphs x_a, which are split across multiple encoding agents a = 1, ..., M (e.g., agent 1 encodes the first paragraph x_1, agent 2 the second paragraph x_2, and so on).", "Each paragraph x_a = {w_{a,i}}_I is a sequence of I words.", "We construct a V-sized vocabulary from the training documents from the most frequently appearing words.", "Each word w_{a,i} is embedded into an n-dimensional vector e_{a,i}.", "All W variables are linear projection matrices.", "Each agent encodes its assigned paragraph with the following two stacked encoders.", "Local Encoder: The first layer is the local encoder of each agent a, where the tokens of the corresponding paragraph x_a are fed into a single-layer bidirectional LSTM (bLSTM), producing the local encoder hidden states h_i^{(1)} ∈ ℝ^H: →h_i^{(1)}, ←h_i^{(1)} = bLSTM(e_i, →h_{i−1}^{(1)}, ←h_{i+1}^{(1)}) (1) and h_i^{(1)} = W_1[→h_i^{(1)}, ←h_i^{(1)}] (2), where →h and ←h denote the forward and backward bLSTM states and H is the hidden state dimensionality.", "The output of the local encoder layer is fed into the contextual encoder.", "Contextual Encoder: Our framework enables agent communication cycles across multiple encoding layers.", "The output of each contextual encoder is an adapted representation of the agent's encoded information, conditioned on the information received from the other agents.", "At each layer k = 1, ..., K, each agent a jointly encodes the information received from the previous layer (see Figure 3).", "Each cell of the (k+1)-th contextual layer is a bLSTM that takes three inputs: the hidden states from the adjacent LSTM cells, →h_{i−1}^{(k+1)} ∈ ℝ^H or ←h_{i+1}^{(k+1)} ∈ ℝ^H, the hidden state from the previous layer h_i^{(k)}, and the message vector from the other agents z^{(k)} ∈ ℝ^H, and it outputs h_i^{(k+1)} ∈ ℝ^H: →h_i^{(k+1)}, ←h_i^{(k+1)} = bLSTM(f(h_i^{(k)}, z^{(k)}), →h_{i−1}^{(k+1)}, ←h_{i+1}^{(k+1)}) (3-4) and h_i^{(k+1)} = W_2[→h_i^{(k+1)}, ←h_i^{(k+1)}] (5), where i = 1, ..., I indicates the index of each token in the sequence.", "The message z^{(k)} received by any agent a in layer k is the average of the outputs of the other agents from layer k: z^{(k)} = (1/(M−1)) Σ_{m≠a} h_{m,I}^{(k)} (6), where h_{m,I}^{(k)} is the last hidden state output from the k-th contextual layer of each agent m ≠ a.",
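A minimal NumPy sketch of the message computation in Equation (6) (illustrative, not the authors' code; the bLSTM itself is left out so the communication step stands alone):

```python
import numpy as np

def agent_messages(last_states):
    """Eq. (6): the message received by agent a is the average of the
    other agents' last contextual hidden states.
    last_states: array of shape (M, H), one vector h_{m,I}^{(k)} per agent.
    Returns an (M, H) array of messages z^{(k)}, one per receiving agent."""
    M = last_states.shape[0]
    totals = last_states.sum(axis=0)           # sum over all agents
    return (totals - last_states) / (M - 1)    # exclude each agent's own state

# Toy usage: 3 agents, hidden size 4
z = agent_messages(np.arange(12, dtype=float).reshape(3, 4))
```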
"Here, we take the average of the messages received from the other encoder agents, but a parametric function such as a feed-forward model or an attention over the messages could also be used.", "The message z^{(k)} is projected with the agent's previous encoding of its document: f(h_i^{(k)}, z^{(k)}) = v_1^⊤ tanh(W_3 h_i^{(k)} + W_4 z^{(k)}) (7), where v_1, W_3, and W_4 are learned parameters shared by every agent.", "Equation (7) combines the information sent by the other agents with the context of the current token from this paragraph.", "This yields different features about the current context in relation to other topics in the source document.", "At each layer, the agent modifies its representation of its own context relative to the information received from the other agents, and updates the information it sends to other agents accordingly.", "The output from the last contextual encoder layer of each agent, {h_{a,i}^{(K)}}_I, which is a sequence of hidden state vectors for each token i, is sent to the decoder to calculate word-attention distributions.", "We use a single-layer LSTM for the decoder and feed the last hidden state of the first agent, s_0 = h_{1,I}^{(K)}, as the initial state.", "At each time step t, the decoder predicts a new word in the summary, w_t, and computes a new state s_t by attending to relevant input context provided by the agents.", "The decoder uses a new hierarchical attention mechanism over the agents.", "First, a word attention distribution l_a^t (Bahdanau et al., 2015) is computed over every token {h_{a,i}^{(K)}}_I for each agent a: l_a^t = softmax(v_2^⊤ tanh(W_5 h_a^{(K)} + W_6 s_t + b_1)) (8), where l_a^t ∈ [0, 1]^I is the attention over all tokens in the paragraph x_a, and v_2, W_5, W_6, b_1 are learned parameters.", "For each decoding step t, a new decoder context is calculated for each agent: c_a^t = Σ_i l_{a,i}^t h_{a,i}^{(K)} (9), which is the weighted sum of the encoder hidden states of agent a.", "Each word context vector represents the information extracted by the agent from the paragraph it has read.", "Here the decoder has to decide which information is more relevant to the current decoding step t.", "This is done by weighting each context vector by an agent attention, yielding the document-global agent attention distribution g^t (see Figure 2): g^t = softmax(v_3^⊤ tanh(W_7 c_a^t + W_8 s_t + b_2)) (10), where v_3, W_7, W_8, and b_2 are learned, and g^t ∈ [0, 1]^M is a soft selection over the M agents.", "Then, we compute the agent context vector c*_t: c*_t = Σ_a g_a^t c_a^t (11).", "The agent context c*_t ∈ ℝ^H is a fixed-length vector encoding salient information from the entire document provided by the agents.", "It is then concatenated with the decoder state s_t and fed through a multi-layer perceptron to produce a vocabulary distribution (over all vocabulary words) at time t: P_voc(w_t | s_t, w_{t−1}) = softmax(MLP([s_t, c*_t])) (12).", "To keep the topics of the generated sentences intact, it is reasonable for the decoder to utilize the same agents over the course of short sequences (e.g., within a sentence).", "Because the decoder is designed to select which agent to attend to at each time step, we introduce contextual agent attention (caa) to prevent it from frequently switching between agents.",
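The hierarchical attention of Equations (8)-(11) can be sketched as follows. This is an illustrative NumPy rendering, not the paper's implementation; the learned parameters are assumed to be given and the biases b_1, b_2 are omitted for brevity.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def hierarchical_attention(H_agents, s_t, params):
    """H_agents: (M, I, H) encoder states, one (I, H) block per agent.
    s_t: (H,) decoder state. params: dict of learned weights.
    Returns per-agent word attentions l, agent attention g, and c*_t."""
    W5, W6, v2 = params["W5"], params["W6"], params["v2"]
    W7, W8, v3 = params["W7"], params["W8"], params["v3"]
    # Eq. (8)-(9): word attention and word context per agent
    scores = np.tanh(H_agents @ W5 + s_t @ W6) @ v2      # (M, I)
    l = softmax(scores, axis=1)
    c = np.einsum("mi,mih->mh", l, H_agents)             # (M, H)
    # Eq. (10)-(11): agent attention and agent context c*_t
    g = softmax(np.tanh(c @ W7 + s_t @ W8) @ v3)         # (M,)
    c_star = g @ c                                       # (H,)
    return l, g, c_star
```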
"2.3 Multi-Agent Pointer Network Similar to See et al. (2017), we allow for copying candidate words from different paragraphs of the document by computing a generation probability value for each agent, $p^t_a \in [0,1]$, at each timestep t, using the context vector $c^t_a$, the decoder state $s_t$ and the decoder input $y_t$: $p^t_a = \sigma(v_5^{\top} c^t_a + v_6^{\top} s_t + v_7^{\top} y_t + b)$ (14), where b is a learned scalar and $y_t$ is the ground-truth/predicted output (depending on training/testing time).", "The generation probability determines whether to generate a word from the vocabulary by sampling from $P_{voc}(w|\cdot)$, or to copy a word from the corresponding agent's input paragraph $x_a$ by sampling from its attention distribution $l^t_a$.", "This produces an extended vocabulary that includes words in the document that are considered out-of-vocabulary (OOV).", "A probability distribution over the extended vocabulary is computed for each agent: $P^a(w_t|\cdot) = p^t_a P_{voc}(w_t|\cdot) + (1 - p^t_a) u^t_{a,w}$ (15), where $u^t_{a,w}$ is the sum of the attention for all instances where w appears in the source document.", "The final distribution over the extended vocabulary, from which we sample, is obtained by weighting each agent by its corresponding agent attention value $g^t_a$: $P(w_t | s_t, w_{t-1}) = \sum_a g^t_a P^a(w_t|\cdot)$ (16).", "In contrast to a single-agent baseline (See et al., 2017), our model allows each agent to vote for different OOV words at time t (Equation (16)).", "In such a case, only the word that is relevant to the generated summary up to time t is collaboratively voted for, as a result of the agent attention probabilities $g^t_a$.", "To train the deep communicating agents, we use a mixed training objective that jointly optimizes multiple losses, which we describe below.", "MLE Our baseline multi-agent model uses maximum likelihood training for sequence generation.", "Given $y^* = \{y^*_1, y^*_2, \ldots, y^*_T\}$ as the ground-truth output sequence (human summary word sequences) for a given input document d, we minimize the negative log-likelihood of the target word sequence: $L_{MLE} = -\sum_{t=1}^{N} \log p(y^*_t | y^*_1 \ldots y^*_{t-1}, d)$ (17).",
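The per-agent copy mixture of Equations (14)-(16) can be sketched as below; this is an assumed toy setup (random tensors, a small extended vocabulary), not the paper's code, and the attention and agent weights are faked rather than computed from a real decoder.

```python
import torch
import torch.nn.functional as F

# Minimal sketch of the multi-agent copy mechanism (Eqs. 14-16): p_a^t mixes
# generating from the vocabulary with copying from agent a's paragraph, and
# the agent attention g^t combines the per-agent distributions.
torch.manual_seed(0)
M, I, H, V_ext = 3, 5, 8, 26             # extended vocab: in-vocab + OOV slots
c_t = torch.randn(M, H)                  # per-agent contexts c_a^t
s_t, y_t = torch.randn(H), torch.randn(H)
v5, v6, v7, b = torch.randn(H), torch.randn(H), torch.randn(H), torch.randn(1)
p_voc = F.softmax(torch.randn(V_ext), dim=-1)      # P_voc padded to ext. vocab
l_t = F.softmax(torch.randn(M, I), dim=-1)         # word attention per agent
src_ids = torch.randint(0, V_ext, (M, I))          # token ids of each paragraph

p_gen = torch.sigmoid(c_t @ v5 + s_t @ v6 + y_t @ v7 + b)     # (M,)  Eq. 14
copy = torch.zeros(M, V_ext).scatter_add_(1, src_ids, l_t)    # u_{a,w}^t
P_a = p_gen.unsqueeze(1) * p_voc + (1 - p_gen).unsqueeze(1) * copy  # Eq. 15

g_t = F.softmax(torch.randn(M), dim=0)             # agent attention (Eq. 10)
P_final = (g_t.unsqueeze(1) * P_a).sum(dim=0)      # Eq. 16
print(float(P_final.sum()))  # ~1.0: a proper distribution over the ext. vocab
```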
"Semantic Cohesion To encourage sentences in the summary to be informative without repetition, we include a semantic cohesion loss to integrate sentence-level semantics into the learning objective.", "As the decoder generates the output word sequence $\{y_1, y_2, \ldots, y_T\}$, it keeps track of the end-of-sentence delimiter token ('.') indices.", "The hidden state vectors at the end of each sentence, $s'_q$, $q = 1, \ldots, Q$, where $s'_q \in \{s_t : y_t = \text{'.'}, 1 \le t \le T\}$, are used to compute the cosine similarity between two consecutively generated sentences.", "To minimize the similarity between end-of-sentence hidden states, we define a semantic cohesion loss: $L_{SEM} = \sum_{q=2}^{Q} \cos(s'_q, s'_{q-1})$ (18).", "The final training objective is then: $L_{MLE\text{-}SEM} = L_{MLE} + \lambda L_{SEM}$ (19), where $\lambda$ is a tunable hyperparameter.", "Reinforcement Learning (RL) Loss Policy gradient methods can directly optimize discrete target evaluation metrics, such as ROUGE, that are non-differentiable (Paulus et al., 2018; Jaques et al., 2017; Pasunuru and Bansal, 2017; Wu et al., 2016).", "At each time step, the word generated by the model can be viewed as an action taken by an RL agent.", "Once the full sequence $y^s$ is generated, it is compared against the ground truth sequence $y^*$ to compute the reward $r(y^s)$.", "Our model learns using a self-critical training approach (Rennie et al., 2016), which learns by exploring new sequences and comparing them to the best greedily decoded sequence.", "For each training example d, two output sequences are generated: $y^s$, which is sampled from the probability distribution at each time step, $p(y^s_t | y^s_1 \ldots y^s_{t-1}, d)$, and $\hat{y}$, the baseline output, which is greedily generated by argmax decoding from $p(\hat{y}_t | \hat{y}_1 \ldots \hat{y}_{t-1}, d)$.", "The training objective is then to minimize: $L_{RL} = (r(\hat{y}) - r(y^s)) \sum_{t=1}^{N} \log p(y^s_t | y^s_1 \ldots y^s_{t-1}, d)$ (20).", "This loss ensures that, with better exploration, the model learns to generate sequences $y^s$ that receive higher rewards compared to the baseline $\hat{y}$, increasing the overall reward expectation of the model.", "Mixed Loss While training with only the MLE loss will learn a better language model, this may not guarantee better results on global performance measures.", "Similarly, optimizing with only the RL loss may increase the reward gathered at the expense of diminished readability and fluency of the generated summary (Paulus et al., 2018).", "A combination of the two objectives can yield improved task-specific scores while maintaining fluency: $L_{MIXED} = \gamma L_{RL} + (1 - \gamma) L_{MLE}$ (21), where $\gamma$ is a tunable hyperparameter used to balance the two objective functions.", "We pre-train our models with the MLE loss, and then switch to the mixed loss.", "We can also add the semantic cohesion loss term, $L_{MIXED\text{-}SEM} = \gamma L_{RL} + (1 - \gamma) L_{MLE\text{-}SEM}$, to analyze its impact in RL training.", "Intermediate Rewards We introduce sentence-based rewards, as opposed to end-of-summary rewards, using differential ROUGE metrics to promote generating diverse sentences.", "Rather than rewarding sentences based on the scores obtained at the end of the generated summary, we compute incremental ROUGE scores of a generated sentence $o_q$: $r(o_q) = r([o_1, \ldots, o_q]) - r([o_1, \ldots, o_{q-1}])$ (22).", "Sentences are rewarded for the increase in ROUGE they contribute to the full sequence, ensuring that the current sentence contributes novel information to the overall summary.",
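The self-critical mixed objective of Equations (20)-(22) reduces to a few lines; the sketch below uses assumed toy values (the reward function, log-probabilities, and MLE loss are stand-ins for real ROUGE scores and model outputs).

```python
import torch

# Hedged sketch of Eqs. (20)-(22): y_s is the sampled sequence, y_hat the
# greedy baseline, and r(.) stands for a (sentence-level) ROUGE reward.
def mixed_loss(logp_sampled: torch.Tensor,   # sum_t log p(y_s_t | ...), scalar
               r_greedy: float, r_sampled: float,
               loss_mle: torch.Tensor, gamma: float = 0.97) -> torch.Tensor:
    # Eq. (20): if the sample beats the greedy baseline, minimizing this term
    # raises the sample's log-probability (self-critical policy gradient).
    loss_rl = (r_greedy - r_sampled) * logp_sampled
    # Eq. (21): interpolate RL and MLE terms (the paper fixes gamma = 0.97).
    return gamma * loss_rl + (1 - gamma) * loss_mle

def incremental_reward(prefix_rouge: list, q: int) -> float:
    # Eq. (22): reward sentence o_q by the ROUGE it adds over the prefix.
    return prefix_rouge[q] - (prefix_rouge[q - 1] if q > 0 else 0.0)

print(mixed_loss(torch.tensor(-12.3), 0.31, 0.35, torch.tensor(45.6)))
print(incremental_reward([0.20, 0.28, 0.31], 2))  # 0.03 gained by sentence 3
```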
"Datasets We conducted experiments on two summarization datasets: CNN/DailyMail (Nallapati et al., 2017; Hermann et al., 2015) and New York Times (NYT) (Sandhaus, 2008).", "We replicate the preprocessing steps of Paulus et al. (2018) to obtain the same data splits, except that we do not anonymize named entities.", "For our DCA models, we initialize the number of agents before training and partition the document among the agents (e.g., three agents receive three paragraphs).", "Additional details can be found in Appendix A.1.", "Training Details During training and testing we truncate the article to 800 tokens and limit the length of the summary to 100 tokens for training and 110 tokens at test time.", "We distribute the truncated articles among agents for multi-agent models, preserving paragraph and sentence boundaries where possible.", "For both datasets, we limit the input and output vocabulary size to the 50,000 most frequent tokens in the training set.", "We train with up to two contextual layers in all the DCA models, as more layers did not provide additional performance gains.", "We fix $\gamma = 0.97$ for the RL term in Equation (21) and $\lambda = 0.1$ for the SEM term in MLE and MIXED training.", "Additional details are provided in Appendix A.2.", "Evaluation We evaluate our system using ROUGE-1 (unigram recall), ROUGE-2 (bigram recall) and ROUGE-L (longest common subsequence), computed with pyrouge (pypi.python.org/pypi/pyrouge/0.1.3).", "We select the MLE models with the lowest negative log-likelihood and the MLE+RL models with the highest ROUGE-L scores on a sample of validation data to evaluate on the test set.", "At test time, we use beam search of width 5 on all our models to generate final predictions.", "Baselines We compare our DCA models against previously published models: SummaRuNNer (Nallapati et al., 2017); a graph-based attentional neural model (Tan et al., 2017), an RNN-based extractive summarizer that combines abstractive features during training; pointer networks with and without coverage (See et al., 2017); RL-based training for summarization with intra-decoder attention (Paulus et al., 2018); and Controllable Abstractive Summarization (Fan et al., 2017), which allows users to define attributes of generated summaries and also uses a copy mechanism for source entities and decoder attention to reduce repetition.", "Ablations We investigate each new component of our model with a different ablation, producing seven different models.", "Our first three ablations are: a single-agent model with the same local encoder, context encoder, and pointer network architectures as the DCA encoders, trained with the MLE loss (m1); the same model trained with an additional semantic cohesion SEM loss (m2); and the same model as (m1) but trained with a mixed loss and end-of-summary rewards (m3).", "The rest of our models use 3 agents and incrementally add one component.", "First, we add the semantic cohesion loss (m4).", "Then, we add multi-agent pointer networks (mpgen) and agent communication (m5).", "Finally, we add contextual agent attention (caa) (m6), and train with the mixed MLE+RL+SEM loss (m7).", "All DCA models use pointer networks.", "We show our results on the CNN/DailyMail and NYT datasets in Tables 1 and 2, respectively.", "Overall, our (m6) and (m7) models with multi-agent encoders, pointer generation, and communication are the strongest models on ROUGE-1 and ROUGE-2.",
"While weaker on ROUGE-L than the RL model from Paulus et al. (2018), the human evaluations in that work showed that their model received lower readability and relevance scores than a model trained with MLE, indicating the additional boost in ROUGE-L was not correlated with summary quality.", "This result can also account for our best models being more abstractive.", "Our models use the mixed loss not just to optimize for sentence-level structural similarity with the reference summary (to get higher ROUGE as reward), but also to learn parameters that improve semantic coherence, promoting higher abstraction (see Table 4 and Appendix B for generated summary examples).", "Table 3: Comparison of multi-agent models varying the number of agents, using ROUGE results of model (m7) from Table 1 on the CNN/DailyMail dataset: 2-agent: ROUGE-1 40.94, ROUGE-2 19.16, ROUGE-L 37.54; 3-agent: ROUGE-1 41.69, ROUGE-2 19.47, ROUGE-L 37.92; 5-agent: ROUGE-1 40.99, ROUGE-2 19.02, ROUGE-L 38.21.", "Single vs. Multi-Agents All multi-agent models show improvements over the single-agent baselines.", "On the CNN/DailyMail dataset, compared to published MLE baselines, we improve across all ROUGE scores.", "We found that the 3-agent models generally outperformed both 2- and 5-agent models (see Table 3).", "This is in part because we truncate documents before training, and a larger number of agents might be more efficient for multi-document summarization.", "Independent vs. Communicating Agents When trained on multiple agents with no communication (m4), the performance of our DCA models is similar to the single-agent baselines (m1) and (m3).", "With communication, the biggest jump in ROUGE is seen on the CNN/DailyMail data, indicating that the encoders can better identify the key facts in the input, thereby avoiding unnecessary details.", "Contextual Agent Attention (caa) Compared to the model with no contextualized agent attention (m5), the (m6) model yields better ROUGE scores.", "The stability provided by the caa helps the decoder avoid frequent switches between agents that would dilute the topical signal captured by each encoder.", "Repetition Penalty As neurally generated summaries can be redundant, we introduced the semantic cohesion penalty and incremental rewards for RL to generate semantically diverse summaries.", "Our baseline model optimized together with the SEM loss (m2) improves on all ROUGE scores over the baseline (m1).", "Similarly, our model trained with reinforcement learning uses sentence-based intermediate rewards, which also improve ROUGE scores across both datasets.", "We perform human evaluations to establish that our model's ROUGE improvements are correlated with human judgments.", "We measure the communicative multi-agent network with contextual agent attention in comparison to a single-agent network with no communication.", "We use the following as evaluation criteria for generated summaries: (1) non-redundancy, fewer of the same ideas are repeated; (2) coherence, ideas are expressed clearly; (3) focus, the main ideas of the document are shared while avoiding superfluous details; and (4) overall, the summary effectively communicates the article's content.", "The focus and non-redundancy dimensions help quantify the impact of multi-agent communication in our model, while coherence helps to evaluate the impact of the reward-based learning and repetition penalty of the proposed models.", "Evaluation Procedure We randomly selected 100 samples from the CNN/DailyMail test set and use workers from Amazon Mechanical Turk as judges to evaluate them on the four criteria defined above.", "Judges are shown the original document, the ground truth summary, and two model summaries, and are asked to evaluate each summary on the four criteria using a Likert scale from 1 (worst) to 5 (best).",
"The ground truth and model summaries are presented to the judges in random order.", "Each summary is rated by 5 judges, and the results are averaged across all examples and judges.", "We also performed a head-to-head evaluation (more common in DUC-style evaluations) and randomly show two model-generated summaries.", "We ask the human annotators to rate each summary on the same metrics as before, without seeing the source document or ground truth summaries.", "Results Human evaluators significantly prefer summaries generated by the communicating encoders.", "In the rating task, evaluators preferred the multi-agent summaries to the single-agent cases for all metrics.", "In the head-to-head evaluation, humans consistently preferred the DCA summaries to those generated by a single agent.", "In both the head-to-head and the rating evaluation, the largest improvement for the DCA model was on the focus question, indicating that the model learns to generate summaries with more pertinent details by capturing salient information from later portions of the document.", "[Table 4 contents omitted; only fragments of the example summaries survive extraction, e.g. 'flo dron & other hair collection' and 'She was still commanding 1,000 a day for her work.'] Table 4: Comparison of a human summary to the best single- and multi-agent model summaries, (m3) and (m7), from the CNN/DailyMail dataset.", "Although the single-agent model generates a coherent summary, it is less focused, contains more unnecessary details (highlighted red), and misses key facts that the multi-agent model successfully captures (bolded).", "[Table 5 is only partially recoverable; its overall row reads 102 / 158 / 40, with scores 3.558 v.s. 3.682.] Table 5: Head-to-head and score-based comparison of human evaluations on a random subset of the CNN/DM dataset.", "SA = single-agent, MA = multi-agent.", "The significance marker indicates statistical significance at p < 0.001 for focus and p < 0.03 for overall.", "To investigate how much the multi-agent models discover salient concepts in comparison to single-agent models, we analyze ROUGE-L scores based on the average attention received by each agent.", "We compute the average attention received by each agent per decoding time step for every generated summary in the CNN/DailyMail test corpus, bin the document-summary pairs by the attention received by each agent, and average the ROUGE-L scores for the summaries in each bin.", "Figure 4 outlines two interesting results.", "First, summaries generated with a more distributed attention over the agents yield higher ROUGE-L scores, indicating that attending to multiple areas of the document allows the discovery of salient concepts in the later sections of the text.", "Second, if we use the same bins and generate summaries for the documents in each bin using the single-agent model, the average ROUGE-L scores for the single-agent summaries are lower than for the corresponding multi-agent summaries, indicating that even in cases where one agent dominates the attention, communication between agents allows the model to generate more focused summaries.", "[Figure 4 plot data omitted; the panels compare each multi-agent encoder against the single-agent model across attention bins from 20-0% up to 70-60%.] Figure 4: The average ROUGE-L scores for summaries that are binned by each agent's average attention when generating the summary (see Section 5.2).",
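The binned-attention analysis described above is straightforward to reproduce in outline; the sketch below uses random stand-in arrays (not real test-set statistics) just to show the bookkeeping.

```python
import numpy as np

# Hedged sketch of the Section 5.2 analysis: bin document-summary pairs by the
# average attention one agent received while decoding, then average ROUGE-L
# per bin. avg_attn and rouge_l are assumed stand-ins for real measurements.
rng = np.random.default_rng(0)
avg_attn = rng.uniform(0.0, 0.7, size=500)   # per-summary mean g_a^t for one agent
rouge_l = rng.uniform(0.1, 0.4, size=500)    # ROUGE-L of the same summaries

bins = [0.0, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7]   # mirrors the 20-0% ... 70-60% bins
idx = np.digitize(avg_attn, bins) - 1
for b in range(len(bins) - 1):
    mask = idx == b
    if mask.any():
        print(f"attn {bins[b]:.1f}-{bins[b+1]:.1f}: "
              f"mean ROUGE-L {rouge_l[mask].mean():.3f} (n={mask.sum()})")
```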
"Qualitatively, we see this effect in Table 4, where we compare the human-generated summaries against our best single-agent model (m3) and our best multi-agent model (m7).", "Model (m3) generates good summaries but does not capture all the facts in the human summary, while (m7) is able to include all the facts with few extra details, generating more relevant and diverse summaries.", "Several recent works investigate attention mechanisms for encoder-decoder models to sharpen the context that the decoder should focus on within the input encoding (Luong et al., 2015; Vinyals et al., 2015b; Bahdanau et al., 2015).", "For example, Luong et al. (2015) propose global and local attention networks for machine translation, while others investigate hierarchical attention networks for document classification (Yang et al., 2016), sentiment classification (Chen et al., 2016), and dialog response selection (Zhou et al., 2016).", "Attention mechanisms have been shown to be crucial for summarization as well (Rush et al., 2015; Zeng et al., 2016; Nallapati et al., 2017), and pointer networks (Vinyals et al., 2015a), in particular, help address redundancy and saliency in generated summaries (Cheng and Lapata, 2016; See et al., 2017; Paulus et al., 2018; Fan et al., 2017).", "While we share the same motivation as these works, our work uniquely presents an approach based on CommNet, the deep communicating agent framework (Sukhbaatar et al., 2016).", "Compared to prior multi-agent works on logic puzzles (Foerster et al., 2017), language learning (Lazaridou et al., 2016; Mordatch and Abbeel, 2017) and StarCraft games (Vinyals et al., 2017), we present the first study using this framework for long text generation.", "Finally, our model is related to prior works that address repetition in generating long text.", "See et al. (2017) introduce a post-trained coverage network to penalize repeated attention over the same regions in the input, while Paulus et al. (2018) use intra-decoder attention to punish generating the same words.", "In contrast, we propose a new semantic coherence loss and intermediate sentence-based rewards for reinforcement learning to discourage semantically similar generations (Section 3).", "We investigated the problem of encoding long text to generate abstractive summaries and demonstrated that the use of deep communicating agents can improve summarization by both automatic and manual evaluation.", "Analysis demonstrates that this improvement is due to the improved ability of covering all and only the salient concepts and of maintaining semantic coherence in summaries." ]
[ "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other", "method", "other", "method", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "method", "method", "method", "other", "method", "method", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "other", "other", "other", "other", "abstain", "abstain", "method", "other", "objective", "objective", "abstain" ]
[ "Cross-domain NER is a challenging yet practical problem.", "Entity mentions can be highly different across domains.", "However, the correlations between entity types can be relatively more stable across domains.", "We investigate a multi-cell compositional LSTM structure for multi-task learning, modeling each entity type using a separate cell state.", "With the help of entity typed units, cross-domain knowledge transfer can be made in an entity type level.", "Theoretically, the resulting distinct feature distributions for each entity type make it more powerful for cross-domain transfer.", "Empirically, experiments on four few-shot and zero-shot datasets show our method significantly outperforms a series of multi-task learning methods and achieves the best results.", "Named entity recognition (NER) is a fundamental task in information extraction, providing necessary information for relation classification (Mooney and Bunescu, 2006), event detection (Popescu et al., 2011), sentiment classification (Mitchell et al., 2013), etc.", "NER is challenging because entity mentions are an open set and can be ambiguous in the context of a sentence.", "Due to relatively high cost in manual labeling, cross-domain NER has received increasing research attention.", "Recently, multi-task learning methods (Yang et al., 2017; Wang et al., 2018, 2019; Zhou et al., 2019; Jia et al., 2019) have achieved great success for cross-domain NER.", "Other methods such as fine-tuning (Rodriguez et al., 2018), share-private (Cao et al., 2018; Lin and Lu, 2018) and knowledge distill (Yang et al., 2019) also show effectivenesses for cross-domain NER.", "There are three main source of challenges in cross-domain NER.", "First, instances of the same type entities can be different across domains.", "For example, typical person names can include Trump and Clinton in the political news domain, but James and Trout in the sports domain.", "Second, different types of entities can exhibit different degrees of dissimilarities across domains.", "For example, a large number of location names are shared in the political news domain and the sports domain, such as Barcelona and Los Angeles, but the case is very different for organization names across these domains.", "Third, even types of entities can be different across domains.", "For example, while disease names are a type of entities in the medical domain, it is not so in the biochemistry domain.", "We investigate a multi-cell compositional LSTM structure to deal with the above challenges by separately and simultaneously considering the possibilities of all entity types for each word when processing a sentence.", "As shown in Figure 1, the main idea is to extend a standard LSTM structure by using a separate LSTM cell to model the state for each entity type in a recurrent step.", "Intuitively, the model differs from the baseline LSTM by simultaneously considering all possible entity types.", "A compositional cell (C cell) combines the entity typed cells (ET cells) for the next recurrent state transition by calculating a weighted sum of each ET cell, where the weight of each ET cell corresponds to the probability of its corresponding entity type.", "Different from naive parameter sharing on LSTM (Yang et al., 2017), source domain and target domain in our multi-task learning framework share only the ET cells corresponding to the same entity types and the same C cell, but not for the domain-specific ET cells.", "In this way, our model learns domain-invariant in the entity level.", 
"Intuitively, our model addresses the above challenges by modeling entity type sequences more explicity, which are relatively more robust across domains compared with entity instances.", "For example, the pattern P ERO PERO LOC can exist in both the political and sports domains, despite ) ( ~ t c ) ( t c ) ( t h ) ( t h ) ( t c ) ( t h )1 ( t c )1 ( t h )( t o )( t i )( t f ) ( t w ) ( t w ) ( ~ t c ) ( t c ) ( t h )1 ( t c )1 ( t h ) ( t c ) ( t h ) ( t h ) ( tl c ) ( ~ t l c )( t o )( t i )( 1 t i )( tl i )( 1 tl i ) ( t c", "that the specific PER instances can be different.", "In addition, thanks to the merging operation at each step, our method effectively encodes multiple entity type sequences in linear time by having a sausage shaped multi-cell LSTM.", "Thus it allows us to learn distributional differences between entity type chains across domains.", "This effectively reduces the confusions of different entities when source domain and target domain have different entity types in few-shot transfer, where the target domain has a few training data.", "In zero-shot transfer where the target domain has no training data, a target-domain LM transfers source-domain knowledge.", "This knowledge transfer is also in the entity level thanks to the compositional weights which are supervised by gold-standard entity type knowledge in source-domain training.", "Theoretically, our method creates distinct feature distributions for each entity type across domains, which can give better transfer learning power compared to representation networks that do not explicitly differentiate entity types ( 3.4).", "Empirically, experiments on four few-shot and zero-shot datasets show that our method gives significantly better results compared to standard BiLSTM baselines with the same num-bers of parameters.", "In addition, we obtain the best resutls on four cross-domain NER datasets.", "The code is released at https://github.com/ jiachenwestlake/Multi-Cell_LSTM .", "Given a sentence x = [ x 1 , . . . , x m ] , the vector representation w t for each word x t is the concatenation of its word embedding and the output of a character level CNN, following Yang et al. (2018).", "A bi-directional LSTM encoder is used to obtain sequence level features h = [ h 1 , . . . , h m ] .", "We use the forward LSTM component to explain the details in the following subsections.", "Finally, a CRF layer outputs the label sequence y = l 1 , . . . 
"We adopt the standard LSTM (Graves and Schmidhuber, 2005) for the baseline.", "At each time step t (t ∈ [1, ..., m]), the baseline calculates a current hidden vector $h^{(t)}$ based on a memory cell $c^{(t)}$.", "In particular, a set of input gate $i^{(t)}$, output gate $o^{(t)}$ and forget gate $f^{(t)}$ are calculated as follows: $[i^{(t)}; o^{(t)}; f^{(t)}; \tilde{c}^{(t)}] = [\sigma; \sigma; \sigma; \tanh](W[h^{(t-1)}; w^{(t)}] + b)$, $c^{(t)} = i^{(t)} \odot \tilde{c}^{(t)} + f^{(t)} \odot c^{(t-1)}$, $h^{(t)} = o^{(t)} \odot \tanh(c^{(t)})$ (1), where $[W; b]$ are trainable parameters.", "As shown in Figure 1 (b), we split the cell computation in the baseline LSTM unit into l copies, each corresponding to one entity type.", "These cells are shown in black.", "A compositional cell (shown in red) is used to merge the entity typed LSTM cells into one cell state for calculating the final hidden vector.", "In this process, a weight is assigned to each entity type according to the local context.", "Entity typed LSTM cells (ET cells).", "Given $w^{(t)}$ and $h^{(t-1)}$, the input gate $i^{(t)}_k$ and the temporary memory cell state $\tilde{c}^{(t)}_k$ of the k-th (k ∈ [1, ..., l]) entity typed cell (ET cell) are computed as: $[i^{(t)}_k; \tilde{c}^{(t)}_k] = [\sigma; \tanh](W_k[h^{(t-1)}; w^{(t)}] + b_k)$ (2), where $[W_k; b_k]$ represent the trainable parameters specific to the k-th ET cell.", "Then a copy of the compositional memory cell state $c^{(t-1)}$ of the previous time step (t-1) is used to update the temporary memory cell state: $c^{(t)}_k = i^{(t)}_k \odot \tilde{c}^{(t)}_k + (1 - i^{(t)}_k) \odot c^{(t-1)}$ (3).", "The above operations are repeated for l ET cells with the same $c^{(t-1)}$.", "We finally acquire a list of ET cell states $[c^{(t)}_1, \ldots, c^{(t)}_l]$.", "Compositional LSTM cell (C cell).", "For facilitating the integration of ET cells, an input gate $i^{(t)}$ and a temporary cell state $\tilde{c}^{(t)}$ of the compositional cell (C cell) are computed similarly to those of the ET cells, but another output gate $o^{(t)}$ is added: $[i^{(t)}; o^{(t)}; \tilde{c}^{(t)}] = [\sigma; \sigma; \tanh](W[h^{(t-1)}; w^{(t)}] + b)$ (4), where $[W; b]$ are trainable parameters of the C cell.", "Merging.", "We use the temporary cell state of the C cell, $\tilde{c}^{(t)}$, to weigh the internal representations of the ET cells $[c^{(t)}_1, \ldots, c^{(t)}_l]$ to obtain a compositional representation.", "To this end, additive attention (Bahdanau et al., 2015) is used, which achieves better results in our development experiments compared with other attention mechanisms (Vaswani et al., 2017).", "The merged memory cell state of the C cell, $\hat{c}^{(t)}$, is a weighted sum of $[c^{(t)}_1, \ldots, c^{(t)}_l]$: $\hat{c}^{(t)} = \sum_{k=1}^{l} \alpha^{(t)}_k c^{(t)}_k$ s.t. $\sum_{k=1}^{l} \alpha^{(t)}_k = 1$ (5).", "The weight $\alpha^{(t)}_k$ reflects the similarity between $\tilde{c}^{(t)}$ and the k-th ET cell state $c^{(t)}_k$.", "$\alpha^{(t)}_k$ is computed as: $I^{(t)}_k = v^{\top}\tanh(P\tilde{c}^{(t)} + Qc^{(t)}_k)$, $\alpha^{(t)}_k = \frac{\exp(I^{(t)}_k)}{\sum_{j=1}^{l}\exp(I^{(t)}_j)}$ (6), where $[P; Q; v]$ are trainable parameters.", "The memory cell state of the C cell is updated as: $c^{(t)} = i^{(t)} \odot \hat{c}^{(t)} + (1 - i^{(t)}) \odot c^{(t-1)}$ (7).", "Finally, we obtain the hidden state $h^{(t)}$: $h^{(t)} = o^{(t)} \odot \tanh(c^{(t)})$ (8).",
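One forward step of the multi-cell unit (Equations (2)-(8)) can be sketched as follows; this is an assumption-laden toy version (random parameters, biases omitted for brevity), not the released code.

```python
import torch

# Hedged sketch of one multi-cell step: l entity-typed cells share the previous
# compositional state c^(t-1); additive attention over their new states gives
# the merged state that updates the C cell. Biases are omitted for brevity.
torch.manual_seed(0)
l, H, D = 5, 16, 32                      # entity types, hidden size, input size
W_et = torch.randn(l, 2 * H, H + D)      # per-type params W_k for (i_k, c~_k)
W_c = torch.randn(3 * H, H + D)          # C-cell params for (i, o, c~)
P, Q, v = torch.randn(H, H), torch.randn(H, H), torch.randn(H)

def step(w_t, h_prev, c_prev):
    x = torch.cat([h_prev, w_t])                        # [h^(t-1); w^(t)]
    z = W_et @ x                                        # (l, 2H)
    i_k, c_tilde_k = torch.sigmoid(z[:, :H]), torch.tanh(z[:, H:])   # Eq. (2)
    c_k = i_k * c_tilde_k + (1 - i_k) * c_prev          # Eq. (3), per ET cell

    zc = W_c @ x                                        # C cell, Eq. (4)
    i, o = torch.sigmoid(zc[:H]), torch.sigmoid(zc[H:2 * H])
    c_tilde = torch.tanh(zc[2 * H:])

    scores = torch.tanh(c_tilde @ P.T + c_k @ Q.T) @ v  # I_k^(t), Eq. (6)
    alpha = torch.softmax(scores, dim=0)                # sums to 1, Eq. (5)
    c_hat = (alpha.unsqueeze(1) * c_k).sum(dim=0)       # weighted merge
    c_new = i * c_hat + (1 - i) * c_prev                # Eq. (7)
    return o * torch.tanh(c_new), c_new, alpha          # h^(t) via Eq. (8)

h, c, alpha = step(torch.randn(D), torch.zeros(H), torch.zeros(H))
print(h.shape, alpha)  # alpha is the per-entity-type weight, later supervised
```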
"2.3 Training Tasks Below we discuss the two auxiliary tasks before introducing the main NER task.", "The auxiliary tasks are designed in addition to the main NER task in order to better extract entity type knowledge from a set of labeled training data for training the ET cells and the C cell.", "Formally, denote a training set as $\mathcal{D}_{ent} = \{(x^n, e^n)\}_{n=1}^{N}$, where each training instance consists of a word sequence x = [x_1, ..., x_m] and its corresponding entity types e = [e_1, ..., e_m].", "Here each entity type $e_t$ is a label such as [PER, O, LOC, ...] without segmentation tags (e.g., B/I/E).", "Entity type prediction.", "Given the bidirectional ET cell states of $x_t$, $\overleftrightarrow{c}^{(t)} = [\overrightarrow{c}^{(t)}_1 \oplus \overleftarrow{c}^{(t)}_1, \ldots, \overrightarrow{c}^{(t)}_l \oplus \overleftarrow{c}^{(t)}_l]$, we define the aligned entity distribution for $x_t$: $p(e_k | x_t) = \frac{\exp\{w_k^{\top}\overleftrightarrow{c}^{(t)}_k + b_k\}}{\sum_{j=1}^{l}\exp\{w_j^{\top}\overleftrightarrow{c}^{(t)}_j + b_j\}}$ (9), where $[w_k; b_k]$ are parameters specific to the k-th entity type $e_k$.", "The corresponding loss (Equation (10), which is lost in extraction but implied by Equation (12)) is the negative log-likelihood over $\mathcal{D}_{ent}$: $L_{ent} = -\frac{1}{|\mathcal{D}_{ent}|}\sum_{n=1}^{N}\sum_{t=1}^{m}\log(p(e^n_t | x^n_t))$ (10).", "Attention scoring.", "Similar to the entity type prediction task, given the attention scores between the temporary C cell and the ET cells in Equation (6), $\overleftrightarrow{I}^{(t)} = [(\overrightarrow{I}^{(t)}_1 + \overleftarrow{I}^{(t)}_1)/2, \ldots, (\overrightarrow{I}^{(t)}_l + \overleftarrow{I}^{(t)}_l)/2]$, we convert the attention scores to entity-aligned distributions for $x_t$ using softmax: $\hat{p}(e_k | x_t) = \frac{\exp(\overleftrightarrow{I}^{(t)}_k)}{\sum_{j=1}^{l}\exp(\overleftrightarrow{I}^{(t)}_j)}$ (11).", "Similar to the loss of entity type prediction: $L_{atten} = -\frac{1}{|\mathcal{D}_{ent}|}\sum_{n=1}^{N}\sum_{t=1}^{m}\log(\hat{p}(e^n_t | x^n_t))$ (12).", "While entity type prediction brings supervised information to guide the ET cells, attention scoring introduces supervision to guide the C cell.", "NER.", "This is the main task across domains.", "Standard CRFs (Ma and Hovy, 2016) are used.", "Given $h = [\overrightarrow{h}_1 \oplus \overleftarrow{h}_1, \ldots, \overrightarrow{h}_m \oplus \overleftarrow{h}_m]$, the output probability p(y|x) over label sequences y = l_1, ..., l_m is: $p(y|x) = \frac{\exp\{\sum_t (w^{l_t}_{CRF}\cdot h_t + b^{(l_{t-1}, l_t)}_{CRF})\}}{\sum_{y'}\exp\{\sum_t (w^{l'_t}_{CRF}\cdot h_t + b^{(l'_{t-1}, l'_t)}_{CRF})\}}$ (13), where y' represents an arbitrary label sequence, $w^{l_t}_{CRF}$ is a model parameter specific to $l_t$, and $b^{(l_{t-1}, l_t)}_{CRF}$ is a bias specific to $l_{t-1}$ and $l_t$.", "The multi-cell LSTM structure above is domain-agnostic, and can therefore be used for in-domain NER too.", "However, the main goal of the model is to transfer entity sequence knowledge across domains, and therefore the ET cells and the C cell play more significant roles in the transfer learning setting.", "Below we introduce the specific roles each cell is assigned in cross-domain settings.", "Following the common cross-domain setting, we use a source-domain NER dataset $S_{ner}$ and a target-domain NER dataset $T_{ner}$ or raw data $T_{lm}$.", "The entity type sets of the source and target domains are represented as $E_d$, where d ∈ {S, T}, respectively.", "As shown in Figure 1 (c), our multi-task learning structure follows Yang et al. (2017), consisting of a shared embedding layer and a shared BiLSTM layer, as well as domain-specific CRF layers.",
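The two auxiliary objectives reduce to per-token cross-entropies over the unsegmented entity types; the sketch below assumes toy tensors standing in for the bidirectional ET cell states and attention scores produced by the multi-cell step above.

```python
import torch
import torch.nn.functional as F

# Hedged sketch of the auxiliary losses: "gold" is the unsegmented entity type
# id (O/PER/LOC/...) for each token; c_k and I_t are assumed stand-ins.
torch.manual_seed(0)
l, H, T = 5, 16, 7
c_k = torch.randn(T, l, H)               # bidirectional ET cell states per token
I_t = torch.randn(T, l)                  # averaged attention scores I^(t)
w_ent, b_ent = torch.randn(l, H), torch.randn(l)
gold = torch.randint(0, l, (T,))

# Entity type prediction (Eqs. 9-10): score each token with its own typed cell.
logits_ent = (c_k * w_ent.unsqueeze(0)).sum(-1) + b_ent   # (T, l)
loss_ent = F.cross_entropy(logits_ent, gold)

# Attention scoring (Eqs. 11-12): supervise the C cell's mixing weights.
loss_atten = F.cross_entropy(I_t, gold)

print(float(loss_ent), float(loss_atten))
```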
"Our method replaces the LSTM with the multi-cell LSTM; in the following we introduce the multi-task parameter sharing mechanism in the multi-cell LSTM.", "ET cells.", "All ET cells $\{C_k\}_{k \in E_S \cup E_T}$ in the multi-cell LSTM are a composition of entity-specific cells from both the source and target domains.", "For each domain d ∈ {S, T}, the actually used ET cells are the domain-specific subset $\{C_k\}_{k \in E_d}$, aiming to conserve domain-specific features.", "C cell.", "In order to make the source and target domains share the same feature space at the word level, we use a shared C cell $C$ across domains.", "To better leverage target-domain knowledge without target-domain NER labeled data, we conduct the auxiliary dictionary matching and language modeling tasks on the target-domain raw data $T_{lm} = \{(x^n)\}$.", "Auxiliary tasks.", "To better extract entity knowledge from raw data, we use a pre-collected named entity dictionary $D_e$ by Peng et al. (2019) to label $T_{lm}$ and obtain a set of entity words $D^+_{ent}$, which are used to train the entity prediction task and the attention scoring task jointly.", "Language modeling.", "Following Jia et al. (2019), we use sampled softmax to compute the forward LM probability $p_f(x_t | x_{<t})$ and the backward LM probability $p_b(x_t | x_{>t})$, respectively: $p_f(x_t | x_{<t}) = \frac{1}{Z}\exp(w^{\top}_{x_t}\overrightarrow{h}_{t-1} + b_{x_t})$, $p_b(x_t | x_{>t}) = \frac{1}{Z}\exp(w^{\top}_{x_t}\overleftarrow{h}_{t+1} + b_{x_t})$ (15), where $w_x$ and $b_x$ are the target word vector and bias, respectively.", "Z is the normalization term computed over the target word and the negative samples.", "The LM loss function on $T_{lm}$ is: $L^T_{lm} = -\frac{1}{2|T_{lm}|}\sum_{n,t=1}^{N,m}\{\log(p_f(x^n_t | x^n_{<t})) + \log(p_b(x^n_t | x^n_{>t}))\}$ (16).", "3.3 Training Objective Algorithm 1 is the transfer learning algorithm under both supervised and unsupervised domain adaptation settings.", "Both source- and target-domain training instances undertake the auxiliary tasks and obtain the loss $L_a$, which is a combination of $L_{ent}$ and $L_{atten}$ weighted by $\lambda_{ent}$ and $\lambda_{atten}$, respectively (line 6).", "$L_{SDA} = \sum_{d \in \{S,T\}}\{\gamma_d L^d_{ner} + L^d_a\} + \frac{\lambda}{2}\|\Theta\|^2$ (17), where $\gamma_d$ are the weights of the source- and target-domain NER tasks, $\lambda$ is the L2 regularization parameter and $\Theta$ represents the parameter set.", "Unsupervised domain adaptation.", "The training objective for UDA is similar to that of SDA, except for using the target-domain LM task (line 13) instead of the target-domain NER task: $L_{UDA} = L^S_{ner} + L^T_{lm} + L^S_a + L^T_a + \frac{\lambda}{2}\|\Theta\|^2$ (18).", "3.4 Theoretical Discussion Below we show theoretically that our method in Section 2.2 is stronger than the baseline method in Section 2.1 for domain adaptation.", "Following Ben-David et al. (2010), a domain is defined as a pair of an input distribution $\mathcal{D}$ on $\mathcal{X}$ and a labeling function $y: \mathcal{X} \to \mathcal{Y}$, where $\mathcal{Y}$ is an (l-1)-simplex (l is the total number of entity types in the source and target domains, such as {O, PER, LOC, ORG, MISC}).", "According to this definition, $\langle\mathcal{D}_S, y_S\rangle$ and $\langle\mathcal{D}_T, y_T\rangle$ represent the source and target domains, respectively.", "A hypothesis is a function $h: \mathcal{X} \to \{1, \ldots, l\}$, which can be a classification model.", "The target-domain error is defined as the probability that $h_T$ disagrees with $y_T$: $\epsilon(h_T) = \epsilon(h_T, y_T) = \mathbb{E}_{x \sim \mathcal{D}_T}[|y_T - h_T(x)|]$.", "The training target for h is to minimize a convex weighted combination of source and target errors, $\epsilon_{\alpha}(h) = \alpha\epsilon_T(h) + (1 - \alpha)\epsilon_S(h)$, where $\alpha \in [0, 1)$ is the domain weight; when $\alpha = 0$, it is the setting of UDA.",
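Assembling the SDA objective of Equation (17) from the pieces above is mechanical; the sketch below is an assumed combination routine (the loss values, weight defaults and parameter list are placeholders, not the paper's settings apart from the general form).

```python
import torch

# Hedged sketch of Eq. (17): per-domain NER loss plus the weighted auxiliary
# loss L_a^d = lam_ent*L_ent + lam_atten*L_atten, with L2 regularization.
def sda_loss(loss_ner, loss_ent, loss_atten, params,
             gamma=(1.0, 1.0), lam_ent=1.0, lam_atten=1.0, lam_l2=1e-8):
    total = torch.tensor(0.0)
    for d, (ner, ent, att) in enumerate(zip(loss_ner, loss_ent, loss_atten)):
        aux = lam_ent * ent + lam_atten * att     # L_a^d (line 6 of Alg. 1)
        total = total + gamma[d] * ner + aux      # source (d=0), target (d=1)
    l2 = sum((p ** 2).sum() for p in params)
    return total + 0.5 * lam_l2 * l2

src = (torch.tensor(1.2), torch.tensor(0.4), torch.tensor(0.6))  # S losses
tgt = (torch.tensor(0.9), torch.tensor(0.3), torch.tensor(0.5))  # T losses
params = [torch.randn(4, 4)]
print(float(sda_loss([src[0], tgt[0]], [src[1], tgt[1]], [src[2], tgt[2]], params)))
# For UDA (Eq. 18), the target NER term would be replaced by the LM loss.
```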
"Theorem 1 Let h be a hypothesis in class H; then: $\epsilon_T(h) \le \epsilon_{\alpha}(h) + (1 - \alpha)\left(\frac{1}{2}d_{H\Delta H}(\mathcal{D}_S, \mathcal{D}_T) + \epsilon\right)$, where $d_{H\Delta H}(\mathcal{D}_S, \mathcal{D}_T) = 2\sup_{h', h'' \in H}\left|\Pr_{x \sim \mathcal{D}_S}[h'(x) \neq h''(x)] - \Pr_{x \sim \mathcal{D}_T}[h'(x) \neq h''(x)]\right|$.", "Here $\epsilon$ is a constant that values the shared error of the ideal joint hypothesis.", "In $d_{H\Delta H}(\mathcal{D}_S, \mathcal{D}_T)$, sup denotes the supremum of the right term over $h', h'' \in H$.", "$\Pr_{x \sim \mathcal{D}_S}[h'(x) \neq h''(x)]$ denotes the probability, according to the distribution $\mathcal{D}_S$, that h' disagrees with h'', and $\Pr_{x \sim \mathcal{D}_T}[h'(x) \neq h''(x)]$ is similar.", "Intuitively, the theorem states the upper bound of $\epsilon_T(h)$ based on $\epsilon_{\alpha}(h)$ and the distance between $\mathcal{D}_S$ and $\mathcal{D}_T$ in the $H\Delta H$ space, which is measured as the discrepancy between the two classifiers h' and h''.", "Our discussion also makes sense in the case that the source domain and target domain have different entity types.", "The original theorem, however, concerns only one model h for transfer learning.", "In our supervised settings, in contrast, the CRF layers are specific to the source and target domains, respectively.", "Below we use h to denote our overall model with the shared multi-cell LSTM model and domain-specific CRF layers.", "Further, we use $h_1$ to denote the target-domain subsystem that consists of the shared multi-cell LSTM model and the target-specific CRF layer, and $h_2$ to denote its source counterpart.", "Theorem 1 can be extended to our settings as follows: Lemma 1 If $\epsilon_{\alpha}(h) = \alpha\epsilon_T(h_1) + (1 - \alpha)\epsilon_S(h_2)$, then: $\epsilon_T(h_1) \le 2\epsilon_{\alpha}(h) + (1 - \alpha)\left(\frac{3}{2}d_{H\Delta H}(\mathcal{D}_S, \mathcal{D}_T) + \epsilon\right)$.", "Proof.", "The result follows from triangle inequalities; see Appendix A for details. □", "Considering that the upper bounds of $\epsilon_T(h)$ ($\epsilon_T(h_1)$), $\epsilon_{\alpha}(h)$ and $\epsilon$ are small when training converges, our goal is to reduce $d_{H\Delta H}(\mathcal{D}_S, \mathcal{D}_T)$.", "In particular, we define a model h as a composition function $h = g \circ f$, where f represents the multi-cell LSTM model, g represents the CRF layer, and $\circ$ denotes function composition.", "We assume h' and h'' share the same multi-cell LSTM model, namely $h' = g' \circ f$ and $h'' = g'' \circ f$; we have $d_{H\Delta H}(\mathcal{D}_S, \mathcal{D}_T) = 2\sup_{g', g'' \in G}\left|\Pr_{x \sim \mathcal{D}_S}[g' \circ f(x) \neq g'' \circ f(x)] - \Pr_{x \sim \mathcal{D}_T}[g' \circ f(x) \neq g'' \circ f(x)]\right|$.", "To obtain the supremum of the right term, we may assume that both g' and g'' classify correctly in the source domain; then $d_{H\Delta H}(\mathcal{D}_S, \mathcal{D}_T) \le 2\sup_{g', g'' \in G}\left|\Pr_{x \sim \mathcal{D}_T}[g' \circ f(x) \neq g'' \circ f(x)]\right|$.", "The optimization objective is as follows: $\min_{f \in F}\sup_{g', g'' \in G}\left|\Pr_{x \sim \mathcal{D}_T}[g' \circ f(x) \neq g'' \circ f(x)]\right|$.", "Aiming to $\min_{f \in F} d_{H\Delta H}(\mathcal{D}_S, \mathcal{D}_T)$, we decompose the unified feature space into several entity typed distributions using the multi-cell LSTM, so that source- and target-domain features belonging to the same entity type are clustered together.",
"The argument is mainly based on the cluster assumption (Chapelle and Zien, 2005), which is equivalent to the low-density separation assumption and states that the decision boundary should lie in a low-density region.", "According to the cluster assumption, both g' and g'' tend to cross the low-density regions in the shared feature space of both the source and target domains.", "This results in $\Pr_{x \sim \mathcal{D}_T}[g' \circ f(x) \neq g'' \circ f(x)] - \Pr_{x \sim \mathcal{D}_S}[g' \circ f(x) \neq g'' \circ f(x)] \to 0$, which well meets the above optimization objective.", "Table 1: Statistics of datasets (the rows for BioNLP13CG and CBS SciTech News are lost in extraction); train/dev/test: CoNLL-2003 (PER, LOC, ORG, MISC): 15.0K/3.5K/3.7K sentences, 23.5K/5.9K/5.6K entities; Broad Twitter (PER, LOC, ORG): 6.3K/1.0K/2.0K sentences, 8.8K/1.7K/4.4K entities; Twitter (PER, LOC, ORG, MISC): 4.3K/1.4K/1.5K sentences, 7.5K/2.5K/2.5K entities; BioNLP13PC (CHEM, CC, GGP, etc.): 2.5K/0.9K/1.7K sentences.", "Datasets.", "We take six publicly available datasets for experiments, including BioNLP13PC and BioNLP13CG (Nedellec et al., 2013), the CoNLL-2003 English dataset (Sang and Meulder, 2003), the Broad Twitter dataset (Derczynski et al., 2016), the Twitter dataset (Lu et al., 2018) and the CBS SciTech News dataset (Jia et al., 2019).", "Statistics of the datasets are shown in Table 1.", "In the unsupervised domain adaptation experiments, 398,990 unlabeled sentences from CBS SciTech News collected by Jia et al. (2019) are used for target-domain LM training, and a named entity dictionary from Web resources collected by Peng et al. (2019) is used for target-domain auxiliary task training.", "The CoNLL-2003, Twitter and CBS News datasets have the same four types of entities, namely PER (person), LOC (location), ORG (organization) and MISC (miscellaneous).", "The Broad Twitter dataset consists of three types: PER, LOC and ORG.", "BioNLP13CG mainly consists of five types, namely CHEM (simple chemical), CC (cellular component), GGP (gene and gene product), SPE (species) and CELL (cell); BioNLP13PC mainly consists of three types: CHEM, CC and GGP.", "Hyperparameters.", "We choose NCRF++ (Yang and Zhang, 2018) for developing the models.", "The multi-task baselines are based on Jia et al. (2019).", "Our hyperparameter settings largely follow Yang et al. (2018); word embeddings for all models are initialized with PubMed 200-dimension vectors (Chiu et al., 2016) in the BioNLP experiments and with GloVe 100-dimension vectors (Pennington et al., 2014) in the other experiments.",
"[Figure 2 plot data omitted; it tracks the NER F-score (left axis, roughly 0.55-0.80) and the accuracy of the entity prediction and attention scoring tasks (right axis, roughly 0.86-0.98) over 20-100 training iterations.]", "All word embeddings are fine-tuned during training.", "Character embeddings are randomly initialized.", "Figure 2 shows the performances of the main target-domain NER task and the auxiliary entity prediction and attention scoring tasks on the development sets of BioNLP13CG and Twitter as the number of training iterations increases.", "As can be seen from the figure, all three tasks have the same trend of improvement without potential conflicts between tasks, which shows that all three tasks use a feature space of the same form.", "We conduct supervised domain adaptation on the BioNLP dataset, the Broad Twitter dataset and the Twitter dataset, respectively.", "In particular, for the BioNLP dataset, BioNLP13CG is used as the target-domain NER dataset and BioNLP13PC as the source-domain dataset.", "These two datasets have some different entity types.", "In the Broad Twitter setting, Broad Twitter is used as the target-domain dataset and CoNLL-2003 as the source-domain dataset.", "These two datasets differ in the entity type MISC.", "In the Twitter setting, Twitter is used as the target-domain dataset and CoNLL-2003 as the source-domain dataset.", "These two datasets have the same entity types.", "The overall results are listed in Table 2.", "Target-domain only settings.", "Compared with the target-domain only baselines BILSTM and MULTI-CELL LSTM, all of the multi-task models obtain significantly better results on all of the three datasets.", "This shows the effectiveness of multi-task learning in few-shot transfer.", "Cross-domain settings.", "We make comparisons with the traditional parameter sharing mechanism MULTI-TASK (LSTM) (Yang et al., 2017) together with two improved methods: MULTI-TASK+PGN (Jia et al., 2019), which adds a parameter generation network (PGN) to generate parameters for the source- and target-domain LSTMs, and MULTI-TASK+GRAD (Zhou et al., 2019), which adds a generalized resource-adversarial discriminator (GRAD) and leverages adversarial training.", "The results show that our method significantly outperforms these multi-task methods on the same datasets, which shows the effectiveness of our multi-cell structure in cross-domain settings.", "Comparison with the state-of-the-art models.", "Results show that our model outperforms the cross-domain method of Jia et al. (2019), the cross-type method of Wang et al. (2019) and methods using additional features (Crichton et al., 2017; Lu et al., 2018).", "Recently, LM pre-training based methods such as ELMO/BIOELMO (Peters et al., 2018), BERT (Devlin et al., 2019) and BIOBERT (Lee et al., 2020) achieve state-of-the-art results on NER.", "However, these methods use additional large-scale language resources, so it is unfair to make direct comparisons with our method.",
"Thus we leverage the outputs of the LM pre-training methods as contextualized word embeddings.", "Table 3: Results on the CBS News dataset (F1 / #Params / #Raw): Jia et al. (2019): 73.59 / 12,916K / 18,474K; BERT-BASE (Devlin et al., 2019): 74.23 / 108M / 3,700M; BILSTM: 70.73 / 211K / -; MULTI-CELL LSTM: 70.03 / 743K / -; BILSTM+LM: 71.30 / 211K / 1,931K; BILSTM+LM+DICT: 72.49 / 212K / 1,931K; MULTI-CELL LSTM+LM: 72.81 / 743K / 1,931K; MULTI-CELL LSTM+LM (ALL): 73.56 / 743K / 8,664K; MULTI-CELL LSTM+LM+DICT: 75.19 / 743K / 1,931K.", "In particular, we use the same batch size as our method and the Adam optimizer with an initial learning rate of 3e-5 in the BERT fine-tuning baselines.", "Results show that our method benefits from these LM pre-training output features and outperforms these LM pre-training based methods.", "We conduct unsupervised domain adaptation on the CBS SciTech News test set, using CoNLL-2003 as the source-domain dataset.", "The overall results are listed in Table 3.", "Adding target-domain LM training.", "Only using the source-domain NER data, BILSTM and MULTI-CELL LSTM give comparable results, 70.73% F1 and 70.03% F1, respectively.", "In comparison with the source-domain only models, all of the models using LM obtain significantly better results, which shows the effectiveness of using a target-domain LM in zero-shot transfer.", "[Figure 3 plot data omitted.] Figure 3: t-SNE visualization of ET cell states $\{c_k\}_{k=1}^{l}$ on the CoNLL-2003 test set and Broad Twitter test set, differentiated by star and dot markers, respectively.", "When using the same amount of target-domain raw data as Jia et al. (2019), the result of MULTI-CELL LSTM+LM (ALL) is comparable to the state of the art (Jia et al., 2019) (73.56% F1 v.s. 73.59% F1), which uses both a source-domain LM and a target-domain LM.", "This shows the effectiveness of the multi-cell structure for zero-shot transfer.", "Adding a named entity dictionary.", "With the named entity dictionary collected by Peng et al. (2019), the results show a significant improvement (75.19% F1 v.s. 72.81% F1).", "To make a fair comparison, we add the entity dictionary information to BILSTM+LM by performing an entity type prediction task together with the target-domain LM.", "BILSTM+LM+DICT achieves a better result than BILSTM+LM (72.49% F1 v.s. 71.30% F1), but it still cannot match our results.",
"This shows that the auxiliary tasks can help learn entity knowledge from raw data, even if the named entity dictionary cannot label all entities in a sentence.", "Visualization.", "In the proposed multi-cell LSTM, both the ET cells and the C cell play important roles in constructing a shared feature space across domains.", "We visualize the feature spaces of the ET cells and the C cell in the Broad Twitter experiments.", "Figure 3 uses t-SNE (Maaten and Hinton, 2008) to visualize the ET cell states $\{c_k\}_{k=1}^{l}$.", "From the figure we can see that different ET cells generate different feature distributions (gathering in different clusters of different colours), and that states of the same ET cell gather together across domains.", "[Table 4 header residue omitted; the table reports per-entity-group results for CHEM, CC, GGP, CELL, SPE and All, with an indication of whether each group is in the source dataset.]", "This indicates that our model can learn cross-domain entity typed knowledge with the help of the ET cells, which are more robust across domains.", "Figure 4 visualizes the hidden vectors of the target-domain only baseline, the multi-task baseline and the proposed model.", "From the figure, we can see that both the multi-task baseline and ours obtain similar feature distributions across domains compared with the target-domain only baseline.", "In comparison with the multi-task baseline, our model also shows strong matches across domains at the entity type level, which can better narrow the gap between source and target domains, as discussed in Section 3.4.", "Fine-grained comparison.", "We make fine-grained comparisons between our model and the multi-task baseline on the BioNLP dataset, aiming to show how our model achieves better results at the entity type level.", "Following Crichton et al. (2017) and Jia et al. (2019), we study five well-studied entity groups (not including all entity types) in BioNLP13CG.", "As shown in Table 4, both MULTI (the multi-task baseline) and Ours achieve significant F1 improvements over the target-domain only baseline LSTM on the biochemistry entity groups that appear in both the target and the source datasets, such as CHEM, CC and GGP, which is consistent with intuition.", "But for biology entity groups not appearing in the source dataset, such as CELL and SPE, MULTI using traditional parameter sharing hardly improves the performances (+0.14% F1 for CELL and +0.39% F1 for SPE v.s. +1.82% F1 for All).",
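The t-SNE inspection behind Figure 3 can be reproduced in outline as below; the cell states here are random stand-ins (in practice one would collect $c_k^{(t)}$ from both domains per entity type and plot the projection grouped by type and domain).

```python
import numpy as np
from sklearn.manifold import TSNE

# Hedged sketch of the Figure 3 analysis: project ET cell states from two
# domains with t-SNE and check whether same-type states cluster together.
rng = np.random.default_rng(0)
states = rng.normal(size=(400, 32))          # stand-in for collected c_k^(t)
domains = np.repeat(["source", "target"], 200)
proj = TSNE(n_components=2, random_state=0, perplexity=30).fit_transform(states)
for dom in ("source", "target"):
    pts = proj[domains == dom]
    print(dom, pts.mean(axis=0).round(2))    # plot pts per (domain, type) instead
```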
"In contrast, Ours achieves relatively strong improvements (+2.10% F1 for CELL and +2.84% F1 for SPE).", "This benefits from the distinct feature distributions across entity types given by the multi-cell LSTM structure, which can effectively prevent the confusions drawn in a unified feature space.", "Ablation study.", "We conduct ablation studies on the auxiliary tasks and the model parameters.", "The results are discussed below.", "[Figure 4 plot data omitted; it shows projections of hidden vectors with source (S) and target (T) points for O, PER, LOC, ORG and MISC.]", "Auxiliary tasks.", "When we only ablate $L_{ent}$, the results on all of the three datasets suffer a significant decline (-0.44% F1 on the BioNLP dataset, -0.85% F1 on the Broad Twitter dataset and -0.24% F1 on the CBS News dataset, respectively).", "When we only ablate $L_{atten}$, the results on all of the three datasets suffer a significant decline (over -1.5% F1 on all of the three datasets).", "When we ablate both $L_{ent}$ and $L_{atten}$, our model achieves similar results to the BILSTM-BASED baseline.", "This indicates that the domain transfer of our model depends heavily on both auxiliary tasks.", "Number of parameters.", "We use two strategies to make the number of parameters of the BILSTM-BASED baseline comparable to that of our model: (i) STACKEDBILSTMS, stacking multi-layer BiLSTMs and enlarging the hidden size; (ii) HIDDENEXPANSION, keeping a similar model structure and just enlarging the hidden size.", "Our model still significantly outperforms these baselines, which shows that the effects of our model do not arise from a larger number of parameters.", "Case study.", "Table 6 shows a case study; WHO is an organization and Nipah is a virus.", "The example sentence reads: The World Health Organization (WHO) describes Nipah infection as a newly emerging zoonosis that causes severe disease in both animals and humans.", "Without using target-domain raw data, the BI-LSTM baseline misclassifies Nipah as ORG.", "Both Ours and BILSTM+LM give the correct results, because this entity is mentioned in the raw data.", "Using the multi-cell structure, our method learns the pattern ORG, O, ORG, O from source data without confusion by target-domain specific entities; thus Ours recognizes WHO correctly.", "We have investigated a multi-cell compositional LSTM structure for cross-domain NER under the multi-task learning strategy.", "Theoretically, our method benefits from the distinct feature distributions for each entity type across domains.", "Results on a range of cross-domain datasets show that the multi-cell compositional LSTM outperforms BiLSTM under the multi-task learning strategy.", "We thank the anonymous reviewers for their helpful comments and suggestions.", "We gratefully acknowledge funding from the National Natural Science Foundation of China (NSFC No.61976180) and the Westlake University and Bright Dream Joint Institute for Intelligent Robotics.", "Yue Zhang is the corresponding author." ]
[ "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "result", "result", "other", "abstain", "abstain", "method", "abstain", "method", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "abstain", "abstain", "abstain", "method", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "abstain", "other", "other", "other" ]
[ "Conversational agents have come increasingly closer to human competence in open-domain dialogue settings; however, such models can reflect insensitive, hurtful, or entirely incoherent viewpoints that erode a user's trust in the moral integrity of the system.", "Moral deviations are difficult to mitigate because moral judgments are not universal, and there may be multiple competing judgments that apply to a situation simultaneously.", "In this work, we introduce a new resource, not to authoritatively resolve moral ambiguities, but instead to facilitate systematic understanding of the intuitions, values and moral judgments reflected in the utterances of dialogue systems.", "The MORALINTEGRITYCORPUS , MIC ` , is such a resource, which captures the moral assumptions of 38k prompt-reply pairs, using 99k distinct Rules of Thumb (RoTs).", "Each RoT reflects a particular moral conviction that can explain why a chatbot's reply may appear acceptable or problematic.", "We further organize RoTs with a set of 9 moral and social attributes and benchmark performance for attribute classification.", "Most importantly, we show that current neural language models can automatically generate new RoTs that reasonably describe previously unseen interactions, but they still struggle with certain scenarios.", "Our findings suggest that MIC ` will be a useful resource for understanding and language models' implicit moral assumptions and flexibly benchmarking the integrity of conversational agents.", "To download the data, see https://github.com/GT-SALT/mic 1 Introduction Chatbots are a promising technology for providing humans with social support in open-ended, (cid:63) Work done at Meta AI Research Figure 1: A representative MIC ` annotation .", "chit chat settings (Brandtzaeg and Flstad, 2017; Huang et al., 2020; Liu et al., 2021b) and in many other more structured domains (Gao et al., 2018; Chattaraman et al., 2019).", "For example, socially competent dialogue systems have the potential to transform education (Molnr and Szts, 2018; Yang and Evans, 2019), healthcare (Laranjo et al., 2018; Vaidyam et al., 2019), and business (Bavaresco et al., 2020), with personalized instruction (Grossman et al., 2019), e-health coaching (Balloccu et al., 2021), disease diagnosis (Laumer et al., 2019), and customer service.", "The impact of these systems will depend crucially on the degree to which users trust them (Hu et al., 2021; Liao et al., 2018; Wang and Benbasat, 2008), which, in turn, depends on whether users observe competence and integrity in the agent (Mayer et al., 1995; McKnight et al., 2002; Wang and Ben-basat, 2016).", "Integrity often manifests itself in the degree to which an agent aligns with the user's own commonsense reasoning about social and moral values (Wang and Benbasat, 2016; Xiao and Ben-3755 basat, 2007).", "These dimensions of reasoning are critical for anthropomorphic systems (Seeger et al., 2017; Abercrombie et al., 2021) and in particular for chatbots built on neural architectures, since these rely on large pre-trained language models that have learned demonstrably problematic behaviors from the web (Gehman et al., 2020; Wallace et al., 2019; Lee; Luccioni and Viviano, 2021; Dinan et al., 2021; Bender et al., 2021).", "Current approaches that address the issue of integrity include avoiding the most overtly toxic language by filtering the training data (Gururangan et al., 2020), adjusting the decoding algorithm at the token-level with word blocklists (Schick et al., 2021), or using controllable 
generation (Dathathri et al., 2020; Keskar et al., 2019).", "These solutions are limited because dialogue is context-dependent, and norm violations can arise not only in isolated utterances but also in the way a reply is framed relative to a prompt (e.g., a bot fails to condemn a problematic assumption implicit in a leading question; Dinan et al. 2021).", "Another line of work employs methods like safety classifiers (Xu et al., 2021) or reinforcement learning techniques (Peng et al., 2020; Liu et al., 2021a; Ziegler et al., 2019; Luketina et al., 2019) that reward good and punish bad replies relative to the conversation history.", "However, gold-standard judgments with which to teach and train these systems are still lacking, regardless of the specific approach used.", "Furthermore, there is a need for a systematic framework for capturing the cultural and personal differences in human reasoning about chatbot morality and social commonsense.", "To fill these gaps, we introduce the MORALINTEGRITYCORPUS (MIC), a new dataset for benchmarking open-domain dialogue systems based on the Rules of Thumb (RoTs) paradigm (Forbes et al., 2020).", "MIC covers a topically diverse range of human-authored opinion questions, and, as illustrated in Figure 1, these prompt real answers from some of the leading social chatbots (e.g., BlenderBot; Roller et al.).", "MIC focuses on the minimal exchange between human and AI, a prompt and a follow-up reply, and it includes 38k unique query-response pairs, 99k distinct RoTs, and 114k sets of structured annotations.", "By representing interpretable and varied RoT judgments, MIC thus provides a flexible basis for moral dialogue generation, with interpretable explanations of why certain chatbot behaviors could be seen as acceptable or problematic.", "Developing the dataset requires addressing the following challenges.", "First, it is difficult to capture high-quality dialogues from current chatbots, since they often generate repetitive and uninteresting generalities (Sordoni et al., 2015; Li et al., 2016; Holtzman et al., 2020) or hallucinations (Zellers et al., 2019).", "Assuming responses are reasonable, we still need to ensure that the content contains either explicit or implicit assumptions about morality and social commonsense.", "We introduce filtering techniques to ensure that over 90% of our data reflects reasonable as well as interesting normative content.", "The second challenge is that human values are difficult to measure consistently because social norms can vary by culture (Haidt et al., 1993; Shweder, 1990; Bicchieri, 2005) and individual preference, just as notions of conversational etiquette can vary (Culley and Madhavan, 2013).", "For this reason, we develop an annotation scheme inspired by applied ethics (Gert and Gert, 2002; Hare et al., 1981) in which annotators provide free-text descriptions of moral commonsense rules, and we account for ideological variation by measuring workers' political and moral foundations.", "We describe a set of experiments that show that our dataset can be used to create new Rules of Thumb.", "Specifically, we use language models as baselines for moral commonsense reasoning, and show that these models learn to generalize from our data and generatively describe new Rules of Thumb that apply to previously unseen dialogue interactions.", "Our best-performing T5 model achieves a ROUGE-L score of 53 and it closely approximates or matches human levels of well-formedness, relevance, and fluency.", "Despite the promising model
performances, our experiments demonstrate that state-of-the-art neural models struggle to generate moral viewpoints in certain scenarios, suggesting that our dataset can serve as a useful benchmark for computationally modeling and describing the moral and social norms that structure everyday conversations between humans and AI.", "There is a long-standing interest in the moral responsibility of AI (Dehghani et al., 2008; Alaieri and Vellino, 2016; Stephanidis et al., 2019; Zoshak and Dew, 2021; Prabhumoye et al., 2021;", "Schramowski et al., 2021).", "Work in Human-Computer Interaction (HCI) reveals that, before users feel they can trust a Conversational Agent, they will often probe it to identify the limitations which bound its abilities, competence (Luger and Sellen, 2016), and apparent integrity (Mayer et al., 1995; McKnight et al., 2002; Wang and Benbasat, 2016).", "It is reasonable to expect adversarial probes and strategically-chosen questions (Wolf et al., 2017), which can prompt toxic or immoral behaviors, even in detoxified models that were trained on carefully sanitized inputs (Gehman et al., 2020; Cercas Curry and Rieser, 2018).", "There are a number of promising methods for keeping chatbots safe, including attribute conditioning (Ficler and Goldberg, 2017; Gehman et al., 2020), safety classifiers (Xu et al., 2021), controlled language generation (Keskar et al., 2019; Ziegler et al., 2019; Luketina et al., 2019), and reinforcement learning (Peng et al., 2020; Liu et al., 2021a; Ziegler et al., 2019; Luketina et al., 2019).", "The MORALINTEGRITYCORPUS can help facilitate each of these efforts.", "Specifically, our data can help train safety classifiers, provide alternative responses (via the Revised Response), fit the steering distribution in controlled generation, or train penalty models in a policy gradient RL approach.", "Because our dataset makes moral judgments explicit via interpretable Rules of Thumb (RoT), this resource can guide more flexible solutions that can accommodate different moral viewpoints.", "Our present formalism builds on SOCIAL-CHEM-101 (Forbes et al., 2020), which has 292k Rules of Thumb, targeting the morality of narrative situations and the specific actions of characters in a story (e.g., ROCStories; Mostafazadeh et al.).", "Other recent collections of moral judgments are also based on narrative text, such as MORALSTORIES (Emelin et al., 2021) and ETHICS (Hendrycks et al., 2020).", "We, on the other hand, focus on minimal chit-chat-style conversations, with a social chatbot's reply to an open-ended prompt.", "Related efforts focus more on classification tasks, like choosing between two moral alternatives (Tay et al., 2020), reflecting value judgments, or parsing stories about conflict and trying to identify the character in each story who is most worthy of blame (SCRUPLES; Lourie et al.).", "Most recently, Jiang et al. (2021) combined the SOCIAL-CHEM-101, MORALSTORIES, ETHICS, and SCRUPLES datasets, together with the SOCIALBIASINFERENCECORPUS (Sap et al., 2020), to train a single commonsense moral model, known as Delphi.", "Delphi is designed to produce universal moral judgments (e.g., it is bad) concerning hypothetical narrative situations (e.g., killing a bear to save your child).", "Talat et al.
(2021) and others have criticized this approach as overly reductive and misleading, assigning global authority to the prescriptive normative judgments of a single AI.", "Our approach differs in important ways.", "Firstly, our approach carries different ethical assumptions from those of Delphi (see also Section 7).", "The MORALINTEGRITYCORPUS is a collection of RoTs designed not to support authoritative moral judgments, but rather to facilitate descriptive explanations of the moral assumptions that already exist implicitly in foundation models.", "In future work, these explanations may be used to guide chatbot moderation systems that are sensitive to ideological and political difference.", "Secondly, our contributions focus on the dialogue setting, which presents unique challenges (Section 6.2) and has previously been overlooked.", "The primary goal of this work is to provide a resource that allows researchers to systematically observe the moral assumptions of open-domain dialogue systems.", "A dialogue trajectory may be long and complex (Smith et al., 2020); thus, here we focus on a minimal dialogue unit: a simple tuple with an opinion question for a prompt, and the chatbot's response to that prompt.", "In order to model the inferences that humans would make about right and wrong in previously unseen conversations, we gather a large and foundationally diverse collection of moral judgments about the chatbot's responses.", "We use the Rule of Thumb (RoT) formalism introduced in Forbes et al. (2020) to describe the moral content of a chatbot's response and further categorize RoTs according to their underlying moral foundations (Graham et al., 2009; Haidt and Graham, 2007), their global consensus, and violation severity.", "In so doing, we extend the SOCIAL-CHEM-101 (Forbes et al., 2020) framework to a conversational setting.", "A Rule of Thumb (RoT) is a fundamental judgment about right and wrong behavior.", "Simplifying Forbes et al. (2020), we say an RoT should: (1) explain the basics of good and bad behavior; (2) contain a judgment (e.g. you should) and an action (e.g. give money to the poor); and (3) make a general rule but still provide enough detail that it is understandable even out of context.", "In this framework, RoTs describe what could be seen as morally right or wrong about an AI's answer to an open-ended prompt.", "Multiple conflicting RoTs can apply to a single prompt-reply pair, as in the following example: Prompt: Do you ever smoke marijuana illegally?", "Reply: I smoke it to relax.", "It helps my anxiety.", "RoT 1: It is bad to use harmful substances.", "RoT 2: It's okay to try recreational drugs.", "RoT 3: Breaking the law is wrong.", "RoT 4: You should be able to use any kind of medical treatment that works.", "RoT Attributes.", "We collect attributes to categorize the different motives behind RoTs.", "In the example above, we see that the Reply violates RoTs 1 and 3, but it aligns with RoTs 2 and 4.
We describe this as Reply Alignment: the chatbot's Reply either agrees with the RoT, disagrees with it, or neither.", "Different people can be more or less inclined to agree with a given Rule of Thumb, and breaking certain rules may be more severe than breaking others.", "We formalize these as Global Consensus and Violation Severity, respectively.", "Lastly, RoTs can highlight different aspects of morality, better known as Moral Foundations: RoTs 1 and 4 highlight possible harms; RoTs 2 and 4 highlight liberty; and RoT 3 makes an appeal to authority.", "We use the 6-foundation theory of morality of Graham et al. (2013), which includes care, fairness, liberty, loyalty, authority, and sanctity.", "For more detailed discussion, see Appendix C. 4 The MORALINTEGRITYCORPUS. The MORALINTEGRITYCORPUS is designed for benchmarking the integrity of chatbot responses to both natural and adversarial prompts.", "We train MTurk workers to annotate prompt-reply tuples: an open-ended query and an AI-generated response to that query.", "In the following sections, we detail the data collection process.", "First, we compiled and strategically filtered a set of open-domain prompt-reply pairs, drawn from a collection of nearly 5 million prompts from a pre-existing public collection of r/AskReddit posts (Fionn Delahunty, 2018), a dataset which the", "authors and Meta were not involved in creating or collecting.", "AskReddit is a place to ask and answer thought-provoking questions, and with over 33 million users, it is also tightly moderated.", "Questions must be clear, direct, and, most importantly, open-ended.", "Since we are interested in morally subjective questions, we ensured that both the question and the top Reddit answer contained at least one word from the Expanded Moral Foundations Dictionary (EMFD) of Rezapour et al. (2019) and one strongly subjective word from Wilson et al.
(2005).", "Keyword filtering left us with 217,700 prompts.", "We fed each prompt to three separate chatbot systems: BlenderBot (Roller et al., 2021), DialoGPT (Zhang et al., 2020b), and GPT-Neo (Black et al., 2021).", "BlenderBot and DialoGPT were the leading architectures at the time of investigation.", "2 GPT-Neo was the latest open-source implementation of the powerful GPT-3 architecture (Brown et al., 2020).", "For all models, we used a greedy decoding strategy.", "3 This left us with 217 , 700 3 = 653 , 100 conversational pairs.", "Next, we filtered the conversational pairs to ensure that the chatbot replies contained a word in the EMFD.", "Finally, we trained and used a BERT-based classifier to keep replies that contained moral or immoral content and were understandable, specific, and relevant to the prompt.", "See Appendix B for more details on ground truth and model training.", "After this final filtering step, we had a set of morally-dense and high-quality dialogue tuples: 30,880 from BlenderBot, 11,521 from DialoGPT, and 51,141 from GPT-Neo, and we annotate a random subset of this data.", "Following ethical crowdsourcing guidelines outlined in Sheehan (2018), we trained Amazon Mechanical Turk (MTurk) workers to complete all annotations described in this study.", "We provided definitions and detailed examples for each construct, and since the Rule of Thumb was critical, we also provided annotators with an interactive search bar to query and view example RoTs from the SOCIALCHEM -101 dataset (Forbes et al., 2020).", "To access a preliminary staging round, workers had to be lo-2 Specifically, we used the 2.7B parameter BlenderBot model, which excelled in engagingness in the human evaluation, and DialoGPT Medium, which performed best in Zhang et al. (2020b).", "3 We chose this for consistency and because greedy decoding produced more coherent responses by manual inspection.", "cated in the United States and pass a qualifying test, correctly answering at least 6 out of 7 questions about Moral Foundations and the Rule of Thumb criteria (Section 3.1).", "In the staging round, we gave direct feedback on workers' submissions and manually ensured that they met our criteria before we granted them access to the main task, Drafting Rules of Thumb .", "Additionally, we implemented a suite of additional quality checks, both automatic and manual, detailed in Appendix E. 
"A total of 186 annotators worked on the following task.", "Drafting Rules of Thumb.", "Three different annotators independently considered a prompt-reply pair, and each wrote a single Rule of Thumb to describe it.", "Then, annotators answered a series of questions about the attributes for their own RoTs: A1 Reply Alignment: Does the Reply agree with your RoT, disagree, or neither?", "A2 Global Consensus: What percent of people (globally) do you think agree with your RoT?", "(1) nobody (<1%); (2) rare (5%-25%); (3) controversial (~50%); (4) most (75%-90%); (5) all (>99%).", "A3 Violation Severity: On a scale of 1-5, how severe or serious is it when someone does not follow the RoT?", "(1) fine; (2) unwise; (3) bad; (4) horrible; (5) worst.", "A4 Moral Foundations: Classify your RoT using any of the Moral Foundations.", "You can use more than one, or even none.", "A5 Revised Response: Write a new Reply to the Prompt that is either neutral or aligns with your RoT.", "This is to ensure that future moderation systems trained on the data not only have the capacity to censor immoral content, but also the capacity to suggest better alternatives.", "Worker Diversity.", "Moral judgments are not universal, so it is important to understand the ideological and political perspectives that inform our workers' decisions.", "For this reason, we explicitly asked workers to self-report their political leaning and complete a moral questionnaire.", "Such metadata is not present in other popular moral datasets (Hendrycks et al., 2020; Lourie et al., 2021; Forbes et al., 2020; Emelin et al., 2021), but this metadata is critical for understanding the variability of moral intuitions (Talat et al., 2021).", "Figure 3 shows a political distribution for workers (Left) and annotations (Right).", "We see that 16 + 9 = 25% of workers are conservative-leaning and 16 + 6 = 22% of all annotations are written by conservative-leaning workers.", "Our worker pool is primarily liberal.", "Next, we administered an abbreviated form of the Moral Foundations Questionnaire (Graham et al., 2008) which measures the degree to which the five core foundations shape each worker's sense of right and wrong.", "As predicted by Graham et al.
(2009), liberal-leaning workers emphasized Care and Fairness more than the other three foundations, while conservative-leaning workers valued them more evenly (Figure 4).", "Data Quality.", "In a secondary task, we asked new annotators to consider each RoT out of context and provide attribute annotations, with three annotations per RoT.", "In Figure 2, we observe that the Intraclass Correlation agreements on A1-A4 between k = 186 raters", "are fair to moderate among these attribute categories (min 0.42; max 0.72).", "Consensus and Severity have lower Krippendorff's α, but this is expected since annotators may calibrate their scores differently on these 5-point Likert scales.", "The MORALINTEGRITYCORPUS allows us to build models that automatically describe a chatbot's moral assumptions.", "If we can generate normative rules and also categorize those rules by severity, consensus, and moral foundations, future studies can combine these skills to build a moral reasoning and moderation system that is sensitive to ideological and political difference.", "Let $(q, a, r, \vec{b}_r)$ be a single annotation tuple in MIC for prompt $q$ and chatbot reply $a$, with an RoT annotation $r$ and an attribute breakdown $\vec{b}_r$.", "Using the question and answer, we fine-tune language models to generate a relevant RoT (Section 5.1).", "Then we train separate transformer-based classifiers to predict the attributes $\vec{b}_r$ for a given RoT $r$ (Section 5.2).", "We use the same 80-10-10 split for train-dev-test in all experiments and ensure that no prompt-reply pair is contained in multiple splits.", "We model $p(r \mid q, a)$ by training a MORALTRANSFORMER $p_{MT}$ to maximize the standard language modeling objective, i.e., $\log p_{MT}(r \mid q, a) = \sum_{i} \log p_{MT}(r_i \mid r_{<i}, q, a)$.", "We experiment with GPT-2 (Radford et al., 2019), BART (Lewis et al., 2020) and T5 (Raffel et al., 2020).", "BART and T5 are both encoder-decoder models, but since GPT-2 is a causal language model, we instead maximize this language modeling objective over the entire sequence $[q; a; r]$, as depicted in Figure 5. We train for $e \in \{1, 2, 3, 5\}$ epochs using a batch size of 16 and a learning rate of 3e-5.", "We tune $e$ on the dev set and choose the model with the best BLEU score to evaluate on the test set.",
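As a concrete illustration of this fine-tuning setup, a minimal sketch using Hugging Face Transformers might look as follows. This is not the authors' released code; in particular, the "prompt: ... reply: ..." input format is an assumption.

```python
# Minimal sketch of fine-tuning T5 to model p(r | q, a): prompt q and reply a
# form the input sequence, and the Rule of Thumb r is the target.
import torch
from torch.optim import AdamW
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")
optimizer = AdamW(model.parameters(), lr=3e-5)  # learning rate from the text

def training_step(batch):
    """One gradient step on a batch of (prompt, reply, rot) triples."""
    sources = [f"prompt: {q} reply: {a}" for q, a, _ in batch]  # format is an assumption
    targets = [r for _, _, r in batch]
    enc = tokenizer(sources, padding=True, truncation=True, return_tensors="pt")
    labels = tokenizer(targets, padding=True, truncation=True, return_tensors="pt").input_ids
    labels[labels == tokenizer.pad_token_id] = -100  # ignore padding in the loss
    loss = model(input_ids=enc.input_ids, attention_mask=enc.attention_mask, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```

At inference time, the decoding strategies mentioned next map directly onto `model.generate(...)` with `num_beams=3` for beam search or `do_sample=True, top_p=0.9` for nucleus sampling.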
9 ).", "We generate one RoT for greedy decoding.", "For both beam search and nucleus sampling, we generate three hypotheses and choose the highest scoring hypothesis.", "We also test two simple retrieval methods: Random RoT (select a Random RoT from the training set), and SBERT (Reimers and Gurevych, 2019) (sample a ground truth RoT from the training prompt-reply pair whose embedding is most similar to the testing prompt-reply embedding).", "For all attribute classification tasks, we experiment with two transformer-based models, BERT (Devlin et al., 2019) and ALBERT (Lan et al., 2020).", "We tune with the learning rate in {2e-5, 3e-5, 5e-5} 3760 Model Decoding R-1 R-2 R-L BLEU BScore Avg.", "and the number of epochs in { 1", "..", "8 } , with (cid:15) = 1 e-8 and the batch size fixed at 16.", "The RoT attribute categories (A1-A4, Section 3.1) differ notably: some labels are mutually exclusive, some fall on an ordered scale, and others are categorical, mutually inclusive .", "For this reason, we opt to train a separate baseline classifier for each category.", "We frame Answer Alignment as sentence pair classification, with input given by both the RoT and the prompt-reply text, and we decide a 3-way classification: agree , disagree , or neither .", "For all other tasks, we give only the RoT as input.", "Since Severity of Violation and Global Consensus are on Likert scales, we model these as ordinal regression and use MSE loss.", "We also collapse the extreme minority Consensus labels ( nobody , rare , and controversial ) under the controversial class.", "Finally, we treat Moral Foundations as multi-label classification and use Binary Cross Entropy Loss.", "We use both automatic and human metrics to benchmark the performance of our MORALTRANSFORMER s.", "Quantitatively, we report standard ROUGE (Lin and Hovy, 2003) including ROUGE-1 (R-1), ROUGE-2 (R2) and ROUGE-L (R-L), BLEU (Papineni et al., 2002), BERTScore (Zhang et al., 2020a) (BScore), and the average length (Avg. Len).", "Since there are three ground truth RoTs for each prompt-reply pair, we first take the maximum score out of these three so that models will not be unfairly punished for any stylistic differences.", "Qualitatively, we run a human evaluation for the following constructs: well-formedness (yes or no, does the RoT explain the basics of good and bad behavior with a single judgment and action? ); fluency (Adiwardana et al., 2020) (on a scale of 1-5, how much does the RoT align with what an English speaker might naturally say? ); and most importantly, relevance ( if we assume the RoT is true, then on a scale of 1-5, how well does the RoT apply to the Answer for this specific Question? ).", "Three workers annotate each generation, and we evaluate on 200 generations per model type, including a Human gold-standard answer, where we show workers a ground truth RoT.", "The scores listed in Table 1 are averaged scores.", "The results are shown on Table 1. We observe that, retrieval based approaches like SBERT are inferior to these generative models.", "Using beam search, T-5 outperforms all other RoT generation models significantly, and the success of the same model with nucleus sampling is consistent with Forbes et al. (2020).", "Furthermore, from a qualitative perspective, the GPT-2 and T-5 models perform exceptionally well with beam search, matching human levels of relevance (4.03) and even exceeding gold standard fluency (4.67 vs. 4.55) and well-3761 formedness (0.88 vs. 
"We use both automatic and human metrics to benchmark the performance of our MORALTRANSFORMERs.", "Quantitatively, we report standard ROUGE (Lin and Hovy, 2003), including ROUGE-1 (R-1), ROUGE-2 (R-2) and ROUGE-L (R-L), BLEU (Papineni et al., 2002), BERTScore (Zhang et al., 2020a) (BScore), and the average length (Avg. Len).", "Since there are three ground-truth RoTs for each prompt-reply pair, we first take the maximum score out of these three so that models will not be unfairly punished for any stylistic differences.", "Qualitatively, we run a human evaluation for the following constructs: well-formedness (yes or no, does the RoT explain the basics of good and bad behavior with a single judgment and action?); fluency (Adiwardana et al., 2020) (on a scale of 1-5, how much does the RoT align with what an English speaker might naturally say?); and most importantly, relevance (if we assume the RoT is true, then on a scale of 1-5, how well does the RoT apply to the Answer for this specific Question?).", "Three workers annotate each generation, and we evaluate on 200 generations per model type, including a Human gold-standard answer, where we show workers a ground-truth RoT.", "The scores listed in Table 1 are averaged scores.", "The results are shown in Table 1. We observe that retrieval-based approaches like SBERT are inferior to the generative models.", "Using beam search, T5 outperforms all other RoT generation models significantly, and the success of the same model with nucleus sampling is consistent with Forbes et al. (2020).", "Furthermore, from a qualitative perspective, the GPT-2 and T5 models perform exceptionally well with beam search, matching human levels of relevance (4.03) and even exceeding gold-standard fluency (4.67 vs. 4.55) and well-formedness (0.88 vs. 0.83) in the generated RoTs on average.", "We suspect the reason these models sometimes outperform the ground truth is that the generative models were first pre-trained on a large corpus and then fine-tuned to convey a more neutral style that appeals to a slightly broader set of human evaluators.", "However, this promising performance does not mean the task is solved.", "Even the best-performing T5 model generates irrelevant RoTs (relevance < 2) nearly 28% of the time.", "To further examine the rich scenarios related to moral viewpoints in our MORALINTEGRITYCORPUS, we introduce a transfer learning setting, where we use models trained with situations in SOCIAL-CHEM-101 to generate RoTs on our corpus.", "In Table 2, we report the performance of the pre-trained GPT-2 model using the same set of parameters from Forbes et al. (2020) in this transfer learning setting.", "We find that such models did not outperform our RoT generation baselines for this chit-chat corpus, demonstrating key differences and unique challenges introduced in MIC.", "We argue that the MORALINTEGRITYCORPUS setting differs in important ways from Forbes et al. (2020), where they assign RoTs to brief situations extracted from narrative text.", "Here we summarize four challenging differences, each with a motivating example from our data.", "(1) Conversations can reflect nuanced and multifaceted moral viewpoints, which occur frequently in MIC but are less common in narrative text.", "Prompt: Do you support your country's troops?", "Reply: I do support our troops, but I don't support the war in Iraq.", "(2) Unlike human conversants, chatbots arbitrarily break the cooperative principle (Grice, 1975), producing spontaneous and unexpected moral violations, well represented in MIC.", "Prompt: Do you believe all life is of equal value?", "Reply: I believe I would enjoy seeing your dog lose its life in a fight with my dog.", "(3) Even when there is nothing immoral about the answer in isolation, it can still be morally wrong according to the pragmatics of the dialogue (Dinan et al., 2021), making it hard to assess viewpoints.", "(4) Strategic or adversarial questions can elicit moral viewpoints that would not otherwise arise in conversation (e.g. presupposing a problematic viewpoint or assumption where any complete answer will necessarily break a moral rule).", "This section further examines how to categorize these generated normative rules by severity, consensus, and moral foundations.", "The performance of our attribute classifiers is given in Table 3. Results indicate a moderate to high degree of correlation between the ground truth and the ALBERT model's severity and consensus judgments (r = 0.59 and r = 0.44, respectively).", "We also observe moderate reliability in the binary alignment classification (F1 = 76.0) and the 6-way moral foundations, excluding the Sanctity foundation, which is in the minority (F1 = 40.8).",
8 ).", "Though performance is not perfect, the models match or exceed human performance, and these results indicate the internal consistency and utility of our attribute taxonomy.", "Note that, although the main focus of this work is to generate RoTs, this attribute classification can serve as a novel NLP application on its own, i.e., detecting moral and social dimensions towards building moral reasoning systems that are sensitive to ideological and political difference.", "This work introduces MIC ` , the MORALINTEGRITYCORPUS , which is a large-scale resource for understanding the moral assumptions and bench-marking the normative social commonsense reasoning of conversational agents, particularly in open-domain chit chat settings.", "MIC ` contains 38k chatbot replies to human-authored prompts, and these replies are annotated with a total of 99k Rules of Thumb (RoTs) that determine what may be seen as right or wrong about the reply.", "With 114k total prompt-reply pairs, we have only 15k duplicate RoTs (or 13%), suggesting that this is a rich and challenging task.", "We train MORALTRANSFORMER s to automatically generate new RoTs that describe previously unseen human-chatbot interactions, and we find that our best models make judgments that can be nearly indistinguishable from human annotations in terms of quality, fluency, and relevance.", "However, even 3762 Severity Consensus Alignment Moral Foundations (F1-Score) r MSE r MSE F1 Care Fairness Liberty Loyalty Authority Sanctity BERT 0.53 1.13 0.41 47.7 76.0 73.4 56.2 54.1 59.9 52.1 37.0 ALBERT 0.59 1.01 0.44 45.2 76.0 75.3 59.6 58.0 62.7 54.3 40.8 Human 0.30 2.32 0.17 1.18 82.9 57.3 35.1 32.1 48.2 37.8 30.8 Table 3: RoT attribute classification.", "the best-performing model still generates irrelevant RoTs nearly 28% of the time.", "This suggests that the proposed task is not yet solved and that MIC ` will be a useful resource for training moral conversational agents.", "In future work, we will use the MORALINTEGRITYCORPUS to train penalty models in a policy gradient reinforcement learning approach for demoting immoral generations.", "Other work can also use MIC ` to train safety classifiers and guide controllable language generation systems towards ethical behaviors.", "These models can then guide a moderation system that is sensitive to ideological and political differences.", "Limitations Any collection of moral judgments will reflect the annotators' worldviews.", "MTurk workers generally tend to be less religious, more educated, and more likely to be unemployed than the general population (Goodman et al., 2013).", "We limited our collection to English-speaking workers living in the 21st century United States, and at this time, these U.S. 
workers were most likely male, in their early 20s or 30s, and married, with at least one child (Difallah et al., 2018).", "Future studies can extend our framework to other cultures and geographic regions.", "Additionally, our human prompts come from Reddit, which is skewed towards younger or middle-aged males (Amaya et al., 2021).", "Furthermore, we recognize that even regionally-localized judgments may shift with context over time, and a potentially shifting target demands adaptable moral agents.", "Despite this limitation, it is clear that plausible moral judgments are bounded by the data available in the conversation, and we argue that, with respect to Moral Foundations Theory, our data is representative.", "If we consider the marijuana example from Section 3.1, we see an appeal to Care/Harm regarding substances, a judgment on Liberty or free personal choice, and appeals to Authority or civil law.", "Although the relative weights assigned to each consideration may shift, we would not expect time to drastically change the elemental factors or available data involved in reasoning about the decision to smoke.", "We would like to thank the anonymous reviewers for providing insightful feedback.", "CZ is supported by the NSF Graduate Research Fellowship under Grant No.", "DGE-2039655.", "DY is supported by the Microsoft Research Faculty Fellowship.", "Ethical Assumptions.", "To set proper boundaries on this resource and the tasks it can facilitate, we first outline the ethical assumptions of this work and address some potential misconceptions.", "First, we recognize that the automatic generation of ethical judgments could be seen as normative and authoritative (Talat et al., 2021).", "We want to stress that MIC represents a collection of social and moral Rules of Thumb (RoTs).", "We do not treat RoTs as global or universally binding, but instead explicitly model the subjectivity of the domain using Global Consensus and Violation Severity.", "Thus RoTs are not designed to form a cohesive and universal ethical system, but rather to provide a set of discrete intuitions and principles to help differentially explain the underlying assumptions that already exist latently in large language models.", "These assumptions can surface in chatbots as morally questionable or inconsistent utterances (Gehman et al., 2020; Wallace et al., 2019; Lee; Luccioni and Viviano, 2021; Dinan et al., 2021; Bender et al., 2021).", "The present work can support an explainable system that explicitly interprets dialogue systems in the language of RoTs, which represent different human viewpoints.", "Moderation efforts can appear at a later stage, handled by domain experts who may interface with this flexible system.", "Finally, we emphasize that normative judgments can vary across different time periods and cultures (Haidt et al., 1993; Shweder, 1990; Bicchieri, 2005; Culley and Madhavan, 2013; Amaya et al., 2021), and thus dialogue integrity is a target that demands dynamic solutions and sustained effort.", "Risks in deployment.", "The resources and findings presented in this work are intended for research purposes only.", "The judgments from Moral Transformers should not be taken as moral advice, but rather as explanations for how some people could interpret and judge chatbot utterances.", "To help mitigate risks in deployment from misunderstandings about the ethical assumptions above, we require users of this data to complete a Data Use Agreement linked in the project repository.", "Risks in annotation.",
"Before starting any annotation, this study was thoroughly reviewed and approved by an internal review board.", "Our task can contain non-normative or even profane and racist examples, and we recognize the emotional burden that this presents to annotators (Roberts, 2016).", "For this reason, we added the following content warning in bold red text in the header of each task: This HIT may contain text that disturbs some workers.", "If at any point you do not feel comfortable, please feel free to skip the HIT or take a break." ]
[ "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "abstain", "other", "abstain", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "objective", "objective", "result", "objective", "other", "other", "other", "other", "other", "other", "method", "abstain", "method", "other", "method", "other", "other", "other", "other", "objective", "abstain", "other", "other", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "other", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "abstain", "abstain", "result", "abstain", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain" ]
[ "Document-level relation extraction requires integrating information within and across multiple sentences of a document and capturing complex interactions between inter-sentence entities.", "However, effective aggregation of relevant information in the document remains a challenging research question.", "Existing approaches construct static document-level graphs based on syntactic trees, co-references or heuristics from the unstructured text to model the dependencies.", "Unlike previous methods that may not be able to capture rich non-local interactions for inference, we propose a novel model that empowers the relational reasoning across sentences by automatically inducing the latent document-level graph.", "We further develop a refinement strategy, which enables the model to incrementally aggregate relevant information for multi-hop reasoning.", "Specifically, our model achieves an F 1 score of 59.05 on a large-scale document-level dataset (DocRED), significantly improving over the previous results, and also yields new state-of-the-art results on the CDR and GDA dataset.", "Furthermore, extensive analyses show that the model is able to discover more accurate inter-sentence relations.", "Relation extraction aims to detect relations among entities in the text and plays a significant role in a variety of natural language processing applications.", "Early research efforts focus on predicting relations between entities within the sentence (Zeng et al., 2014; Xu et al., 2015a,b).", "However, valuable relational information between entities, such as biomedical findings, is expressed by multiple mentions across sentence boundaries in real-world scenarios (Peng et al., 2017).", "Therefore, the scope Equally Contributed.", "Lutsenko is a former minister of internal affairs.", "He occupied this post in the cabinets of Yulia Tymoshenko.", "The ministry of internal affairs is the Ukrainian police authority.", "Subject: Yulia Tymoshenko Object: Ukrainian Relation: country of citizenship Figure 1: An example adapted from the DocRED dataset.", "Poon, 2017; Gupta et al., 2018; Song et al., 2019).", "A more challenging, yet practical extension, is the document-level relation extraction, where a system needs to comprehend multiple sentences to infer the relations among entities by synthesizing relevant information from the entire document (Jia et al., 2019; Yao et al., 2019).", "Figure 1 shows an example adapted from the recently proposed document-level dataset DocRED (Yao et al., 2019).", "In order to infer the inter-sentence relation (i.e., country of citizenship) between Yulia Tymoshenko and Ukrainian , one first has to identify the fact that Lutsenko works with Yulia Tymoshenko .", "Next we identify that Lutsenko manages internal affairs , which is a Ukrainian authority.", "After incrementally connecting the evidence in the document and performing the step-by-step reasoning, we are able to infer that Yulia Tymoshenko is also a Ukrainian .", "Prior efforts show that interactions between mentions of entities facilitate the reasoning process in the document-level relation extraction.", "Thus, Verga et al. (2018) and Jia et al. (2019) leverage Multi-Instance Learning (Riedel et al., 2010; Surdeanu et al., 2012).", "On the other hand, structural information has been used to perform better reasoning since it models the non-local dependencies that are obscure from the surface form alone.", "Peng et al. 
(2017) construct a dependency graph to capture interactions among n-ary entities for cross-sentence extraction.", "Sahu et al. (2019) extend this approach by using co-reference links to connect dependency trees of sentences to construct the document-level graph.", "Instead, Christopoulou et al. (2019) construct a heterogeneous graph based on a set of heuristics, and then apply an edge-oriented model (Christopoulou et al., 2018) to perform inference.", "Unlike previous methods, where a document-level structure is constructed by co-references and rules, our proposed model treats the graph structure as a latent variable and induces it in an end-to-end fashion.", "Our model is built on structured attention (Kim et al., 2017; Liu and Lapata, 2018).", "Using a variant of the Matrix-Tree Theorem (Tutte, 1984; Koo et al., 2007), our model is able to generate task-specific dependency structures for capturing non-local interactions between entities.", "We further develop an iterative refinement strategy, which enables our model to dynamically build the latent structure based on the last iteration, allowing the model to incrementally capture the complex interactions for better multi-hop reasoning (Welbl et al., 2018).", "Experiments show that our model significantly outperforms the existing approaches on DocRED, a large-scale document-level relation extraction dataset with a large number of entities and relations, and also yields new state-of-the-art results on two popular document-level relation extraction datasets in the biomedical domain.", "The code and pretrained model are available at https://github.com/nanguoshun/LSR.", "We construct a document-level graph for inference in an end-to-end fashion without relying on co-references or rules, which may not always yield optimal structures.", "With the iterative refinement strategy, our model is able to dynamically construct a latent structure for improved information aggregation in the entire document.", "We perform quantitative and qualitative analyses to compare with the state-of-the-art models in various settings; our model is implemented in PyTorch (Paszke et al., 2017).", "We demonstrate that our model is capable of discovering more accurate inter-sentence relations by utilizing a multi-hop reasoning module.", "In this section, we present our proposed Latent Structure Refinement (LSR) model for the document-level relation extraction task.", "Our LSR model consists of three components: node constructor, dynamic reasoner, and classifier.", "The node constructor first encodes each sentence of an input document and outputs contextual representations.", "Representations that correspond to mentions and tokens on the shortest dependency path in a sentence are extracted as nodes.", "The dynamic reasoner is then applied to induce a document-level structure based on the extracted nodes.", "Representations of nodes are updated based on information propagation on the latent structure, which is iteratively refined.", "Final representations of nodes are used to calculate classification scores by the classifier.", "Node constructor encodes sentences in a document into contextual representations and constructs representations of mention nodes, entity nodes and meta dependency paths (MDP) nodes, as shown in Figure 2.",
"Here MDP indicates a set of shortest dependency paths for all mentions in a sentence, and tokens in the MDP are extracted as MDP nodes.", "Given a document $d$, each sentence $d_i$ in it is fed to the context encoder, which outputs the contextualized representations of each word in $d_i$.", "The context encoder can be a bidirectional LSTM (BiLSTM) (Schuster and Paliwal, 1997) or BERT (Devlin et al., 2019).", "Here we use the BiLSTM as an example: $\overleftarrow{h}_{ij} = \mathrm{LSTM}_l(\overleftarrow{h}_{i,j+1}, \gamma_{ij})$ (1) and $\overrightarrow{h}_{ij} = \mathrm{LSTM}_r(\overrightarrow{h}_{i,j-1}, \gamma_{ij})$ (2), where $\overleftarrow{h}_{ij}$, $\overleftarrow{h}_{i,j+1}$, $\overrightarrow{h}_{ij}$ and $\overrightarrow{h}_{i,j-1}$ represent the hidden representations of the $j$-th, $(j+1)$-th and $(j-1)$-th tokens in the sentence $d_i$ in the two directions, and $\gamma_{ij}$ denotes the word embedding of the $j$-th token.", "Contextual representation of each token in the sentence is represented as $h_{ij} = [\overleftarrow{h}_{ij}; \overrightarrow{h}_{ij}]$ by concatenating hidden states of the two directions, where $h_{ij} \in \mathbb{R}^d$ and $d$ is the dimension.", "We construct three types of nodes for a document-level graph: mention nodes, entity nodes and meta dependency paths (MDP) nodes, as shown in Figure 2.", "Mention nodes correspond to different mentions of entities in each sentence.", "The representation of an entity node is computed as the average of its mentions.", "To build a document-level graph, existing approaches use all nodes in the dependency tree of a sentence (Sahu et al., 2019) or one sentence-level node by averaging all token representations of the sentence (Christopoulou et al., 2019).", "Alternatively, we use tokens on the shortest dependency path between mentions in the sentence.", "The shortest dependency path has been widely used in sentence-level relation extraction as it is able to effectively make use of relevant information while ignoring irrelevant information (Bunescu and Mooney, 2005; Xu et al., 2015a,b).", "Unlike sentence-level extraction, where each sentence only has two entities, each sentence here may involve multiple mentions.", "The dynamic reasoner has two modules, structure induction and multi-hop reasoning, as shown in Figure 3.",
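The BiLSTM context encoder of Equations (1)-(2) can be sketched as follows. This is a minimal illustration with illustrative dimensions, not the released LSR code; it relies on PyTorch's bidirectional LSTM already concatenating the two directions, matching $h_{ij} = [\overleftarrow{h}_{ij}; \overrightarrow{h}_{ij}]$.

```python
# Minimal sketch of the BiLSTM context encoder, assuming pre-looked-up
# word embeddings; dimensions are illustrative.
import torch
import torch.nn as nn

class ContextEncoder(nn.Module):
    def __init__(self, emb_dim=100, hidden=128):
        super().__init__()
        # bidirectional=True concatenates the forward and backward states,
        # giving the per-token representation h_ij = [h_left; h_right]
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)

    def forward(self, word_embs):
        """word_embs: (batch, seq_len, emb_dim) word embeddings of one sentence.
        Returns contextual token representations of size 2 * hidden."""
        out, _ = self.lstm(word_embs)
        return out  # (batch, seq_len, 2 * hidden)
```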
"The structure induction module is used to learn a latent structure of a document-level graph.", "The multi-hop reasoning module is used to perform inference on the induced latent structure, where representations of each node will be updated based on the information aggregation scheme.", "We stack N blocks in order to iteratively refine the latent document-level graph for better reasoning.", "Unlike existing models that use co-reference links (Sahu et al., 2019) or heuristics (Christopoulou et al., 2019) to construct a document-level graph", "for reasoning, our model treats the graph as a latent variable and induces it in an end-to-end fashion.", "The structure induction module is built based on the structured attention (Kim et al., 2017; Liu and Lapata, 2018).", "Inspired by Liu and Lapata (2018), we use a variant of Kirchhoff's Matrix-Tree Theorem (Tutte, 1984; Koo et al., 2007) to induce the latent dependency structure.", "Let $u_i$ denote the contextual representation of the $i$-th node, where $u_i \in \mathbb{R}^d$; we first calculate the pair-wise unnormalized attention score $s_{ij}$ between the $i$-th and the $j$-th node with the node representations $u_i$ and $u_j$.", "The score $s_{ij}$ is calculated by two feed-forward neural networks and a bilinear transformation: $s_{ij} = (\tanh(W_p u_i))^{\top} W_b (\tanh(W_c u_j))$ (3), where $W_p \in \mathbb{R}^{d \times d}$ and $W_c \in \mathbb{R}^{d \times d}$ are weights for two feed-forward neural networks, $d$ is the dimension of the node representations, and tanh is applied as the activation function.", "$W_b \in \mathbb{R}^{d \times d}$ are the weights for the bilinear transformation.", "Next we compute the root score $s_i^r$, which represents the unnormalized probability of the $i$-th node to be selected as the root node of the structure: $s_i^r = W_r u_i$ (4), where $W_r \in \mathbb{R}^{1 \times d}$ is the weight for the linear transformation.",
"Following Koo et al. (2007), we calculate the marginal probability of each dependency edge of the document-level graph.", "For a graph $G$ with $n$ nodes, we first assign non-negative weights $P \in \mathbb{R}^{n \times n}$ to the edges of the graph: $P_{ij} = 0$ if $i = j$, and $P_{ij} = \exp(s_{ij})$ otherwise (5), where $P_{ij}$ is the weight of the edge between the $i$-th and the $j$-th node.", "We then define the Laplacian matrix $L \in \mathbb{R}^{n \times n}$ of $G$ in Equation (6), and its variant $\hat{L} \in \mathbb{R}^{n \times n}$ in Equation (7) for further computations (Koo et al., 2007): $L_{ij} = \sum_{i'=1}^{n} P_{i'j}$ if $i = j$, and $L_{ij} = -P_{ij}$ otherwise (6); $\hat{L}_{ij} = \exp(s_j^r)$ if $i = 1$, and $\hat{L}_{ij} = L_{ij}$ if $i > 1$ (7).", "We use $A_{ij}$ to denote the marginal probability of the dependency edge between the $i$-th and the $j$-th node.", "Then, $A_{ij}$ can be derived based on Equation (8), where $\delta$ is the Kronecker delta (Koo et al., 2007): $A_{ij} = (1 - \delta_{1,j}) P_{ij} [\hat{L}^{-1}]_{jj} - (1 - \delta_{i,1}) P_{ij} [\hat{L}^{-1}]_{ji}$ (8).", "Here, $A \in \mathbb{R}^{n \times n}$ can be interpreted as a weighted adjacency matrix of the document-level entity graph.", "Finally, we can feed $A \in \mathbb{R}^{n \times n}$ into the multi-hop reasoning module to update the representations of nodes in the latent structure.", "Graph neural networks have been widely used in different tasks to perform multi-hop reasoning (Song et al., 2018a; Yang et al., 2019; Tu et al., 2019; Lin et al., 2019), as they are able to effectively collect relevant evidence based on an information aggregation scheme.", "Specifically, our model is based on graph convolutional networks (GCNs) (Kipf and Welling, 2017) to perform reasoning.", "Formally, given a graph $G$ with $n$ nodes, which can be represented with an $n \times n$ adjacency matrix $A$ induced by the previous structure induction module, the convolution computation for the node $i$ at the $l$-th layer, which takes the representation $u_i^{l-1}$ from the previous layer as input and outputs the updated representation $u_i^l$, can be defined as: $u_i^l = \sigma\big(\sum_{j=1}^{n} A_{ij} W^l u_j^{l-1} + b^l\big)$ (9), where $W^l$ and $b^l$ are the weight matrix and bias vector for the $l$-th layer, respectively.", "$\sigma$ is the ReLU (Nair and Hinton, 2010) activation function.", "$u_i^0 \in \mathbb{R}^d$ is the initial contextual representation of the $i$-th node constructed by the node constructor.", "Following Guo et al. (2019b), we use dense connections in the GCNs in order to capture more structural information on a large document-level graph.", "With the help of dense connections, we are able to train a deeper model, allowing richer local and non-local information to be captured for learning a better graph representation.", "The computations at each graph convolution layer are similar to Equation (9).", "Though structured attention (Kim et al., 2017; Liu and Lapata, 2018) is able to automatically induce a latent structure, recent research efforts show that the induced structure is relatively shallow and may not be able to model the complex dependencies for document-level input (Liu et al., 2019b; Ferracane et al., 2019).", "Unlike previous work (Liu and Lapata, 2018) that only induces the latent structure once, we repeatedly refine the document-level graph based on the updated representations, allowing the model to infer a more informative structure that goes beyond simple parent-child relations.", "As shown in Figure 3, we stack N blocks of the dynamic reasoner in order to induce the document-level structure N times.", "Intuitively, the reasoner induces a shallow structure at early iterations since the information propagates mostly between neighboring nodes.", "As the structure gets more refined by interactions with richer non-local information, the induction module is able to generate a more informative structure.", "After N times of refinement, we obtain representations of all the nodes.",
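Putting Equations (3)-(9) together, a minimal sketch of one dynamic-reasoner block might look as follows. This is not the authors' released implementation: sizes and initialization are illustrative, a single plain GCN layer stands in for the densely connected GCNs used in the paper, and Equations (6)-(8) follow the Koo et al. (2007) formulation reconstructed above.

```python
# Sketch of structure induction (Matrix-Tree marginals) plus one GCN layer.
import torch
import torch.nn as nn

d = 256  # node representation size (illustrative)
W_p, W_c = nn.Linear(d, d), nn.Linear(d, d)
W_b = nn.Parameter(torch.randn(d, d) * 0.01)  # bilinear weights, Eq. (3)
W_r = nn.Linear(d, 1)                         # root scorer, Eq. (4)
W_g = nn.Linear(d, d)                         # one GCN layer, Eq. (9)

def induce_structure(u):
    """u: (n, d) node representations -> (n, n) marginal edge probabilities A."""
    n = u.size(0)
    s = torch.tanh(W_p(u)) @ W_b @ torch.tanh(W_c(u)).t()  # pair-wise scores, Eq. (3)
    s_root = W_r(u).squeeze(-1)                            # root scores, Eq. (4)

    P = torch.exp(s) * (1.0 - torch.eye(n))  # Eq. (5): zero weight on the diagonal
    L = torch.diag(P.sum(dim=0)) - P         # Laplacian, Eq. (6)
    L_hat = torch.cat([torch.exp(s_root).unsqueeze(0), L[1:]], dim=0)  # Eq. (7)
    inv = torch.inverse(L_hat)

    # Marginal edge probabilities, Eq. (8); the masks realize the Kronecker deltas
    not_first_col = torch.ones(n); not_first_col[0] = 0.0  # (1 - delta_{1,j})
    not_first_row = torch.ones(n); not_first_row[0] = 0.0  # (1 - delta_{i,1})
    A = P * inv.diagonal().unsqueeze(0) * not_first_col.unsqueeze(0)
    A = A - P * inv.t() * not_first_row.unsqueeze(1)
    return A

def reason_block(u):
    """One refinement block: induce A from u, then propagate with a GCN layer."""
    A = induce_structure(u)
    return torch.relu(A @ W_g(u))  # updated node representations, Eq. (9)
```

Applying `u = reason_block(u)` N times corresponds to the iterative refinement of the latent structure described above.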
"Following Yao et al. (2019), for each entity pair $(e_i, e_j)$, we use a bilinear function to compute the probability for each relation type $r$ as: $P(r \mid e_i, e_j) = \sigma(e_i^{\top} W_e e_j + b_e)_r$ (10), where $W_e \in \mathbb{R}^{d \times k \times d}$ and $b_e \in \mathbb{R}^k$ are trainable weights and bias, with $k$ being the number of relation categories, $\sigma$ is the sigmoid function, and the subscript $r$ on the right side of the equation refers to the relation type.", "We evaluate our model on DocRED (Yao et al., 2019), the largest human-annotated dataset for document-level relation extraction, and another two popular document-level relation extraction datasets in the biomedical domain, including Chemical-Disease Reactions (CDR) (Li et al., 2016a) and Gene-Disease Associations (GDA) (Wu et al., 2019).", "DocRED contains 3,053 documents for training, 1,000 for development and 1,000 for test, in total with 132,375 entities and 56,354 relational facts.", "CDR consists of 500 training instances, 500 development instances, and 500 testing instances.", "GDA contains 29,192 documents for training and 1,000 for test.", "We follow Christopoulou et al. (2019) in splitting the training set of GDA into an 80/20 split for training and development.", "With more than 40% of the relational facts requiring reading and reasoning over multiple sentences, DocRED significantly differs from previous sentence-level datasets (Doddington et al., 2004; Hendrickx et al., 2009; Zhang et al., 2018).", "Unlike existing document-level datasets (Li et al., 2016a; Quirk and Poon, 2017; Peng et al., 2017; Verga et al., 2018; Jia et al., 2019) that are in the specific biomedical domain considering only the drug-gene-disease relation, DocRED covers a broad range of categories with 96 relation types.", "We use spaCy (https://spacy.io/) to get the meta dependency paths of sentences in a document.", "Following Yao et al. (2019) and Wang et al. (2019), we use the GloVe (Pennington et al., 2014) embedding with BiLSTM, and Uncased BERT-Base (Devlin et al., 2019) as the context encoder.", "All hyper-parameters are tuned based on the development set.", "We list some of the important hyper-parameters in Table 1.", "Following Yao et al. (2019), we use F1 and Ign F1 as the evaluation metrics.", "Ign F1 denotes F1 scores excluding relational facts shared by the training and dev/test sets.", "F1 scores for intra- and inter-sentence entity pairs are also reported.", "Evaluation on the test set is done through CodaLab (https://competitions.codalab.org/competitions/20717).", "We compare our proposed LSR with the following three types of competitive models on the DocRED dataset, and show the main results in Table 2.",
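A minimal sketch of the bilinear classifier of Equation (10) is shown below; the sizes are illustrative (DocRED defines 96 relation types). Because each relation gets an independent sigmoid probability, several relations can be predicted for the same entity pair.

```python
# Sketch of the bilinear relation classifier of Eq. (10).
import torch
import torch.nn as nn

d, k = 256, 96  # entity representation size and number of relation types (illustrative)
W_e = nn.Parameter(torch.randn(d, k, d) * 0.01)  # one d x d bilinear form per relation
b_e = nn.Parameter(torch.zeros(k))

def relation_probs(e_i, e_j):
    """P(r | e_i, e_j) = sigmoid(e_i^T W_e e_j + b_e): one independent
    probability per relation type for the entity pair (e_i, e_j)."""
    # einsum evaluates e_i^T W_e e_j against every relation slice of W_e
    logits = torch.einsum("a,akb,b->k", e_i, W_e, e_j) + b_e
    return torch.sigmoid(logits)
```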
"Sequence-based Models.", "These models leverage different neural architectures to encode sentences in the document, including convolutional neural networks (CNN) (Zeng et al., 2014), LSTM, bidirectional LSTM (BiLSTM) (Cai et al., 2016) and attention-based LSTM (ContextAware) (Sorokin and Gurevych, 2017).", "Graph-based Models.", "These models construct task-specific graphs for inference.", "GCNN (Sahu et al., 2019) constructs a document-level graph by co-reference links, and then applies relational GCNs for reasoning.", "EoG (Christopoulou et al., 2019) is the state-of-the-art document-level relation extraction model in the biomedical domain.", "EoG first uses heuristics to construct the graph, then leverages an edge-oriented model to perform inference.", "GCNN and EoG are based on static structures.", "GAT (Velickovic et al., 2018) is able to learn the weighted graph structure based on a local attention mechanism.", "Table 2: Main results on the development and test sets of DocRED; models marked with * are adapted to DocRED based on their open implementations. Columns: Dev Ign F1 / Dev F1 / Dev Intra-F1 / Dev Inter-F1 / Test Ign F1 / Test F1. CNN (Yao et al., 2019): 41.58 / 43.45 / 51.87 / 37.58 / 40.33 / 42.26. LSTM (Yao et al., 2019): 48.44 / 50.68 / 56.57 / 41.47 / 47.71 / 50.07. BiLSTM (Yao et al., 2019): 48.87 / 50.94 / 57.05 / 43.49 / 48.78 / 51.06. ContextAware (Yao et al., 2019): 48.94 / 51.09 / 56.74 / 42.26 / 48.40 / 50.70. GCNN* (Sahu et al., 2019): 46.22 / 51.52 / 57.78 / 44.11 / 49.59 / 51.62. EoG* (Christopoulou et al., 2019): 45.94 / 52.15 / 58.90 / 44.60 / 49.48 / 51.82. GAT* (Velickovic et al., 2018): 45.17 / 51.44 / 58.14 / 43.94 / 47.36 / 49.51. AGGCN* (Guo et al., 2019a): 46.29 / 52.47 / 58.76 / 45.45 / 48.89 / 51.45. GloVe+LSR: 48.82 / 55.17 / 60.83 / 48.35 / 52.15 / 54.18. BERT (Wang et al., 2019): - / 54.16 / 61.61 / 47.15 / - / 53.20. Two-Phase BERT (Wang et al., 2019): - / 54.42 / 61.80 / 47.28 / - / 53.92. BERT+LSR: 52.43 / 59.00 / 65.26 / 52.05 / 56.97 / 59.05. AGGCN (Guo
AGGCN.", "This empirically shows that compared to the models that use local attention and self-attention (Velickovic et al., 2018; Guo et al., 2019a), LSR can induce more informative document-level structures for better reasoning.", "Our LSR model also shows its superiority under the setting of Ign F 1 .", "In addition, LSR with GloVe obtains better results than two BERT-based models.", "This empirically shows that our model is able to capture long-range dependencies even without using powerful context encoders.", "Following Wang et al. (2019), we leverage BERT as the context encoder.", "As shown in Table 2, our LSR model with BERT achieves a 59.05 F 1 score on DocRED, which is a new state-of-the-art result.", "As of the ACL deadline on the 9th of December 2019, we held the first position on the CodaLab scoreboard under the alias diskorak .", "In this subsection, we analyze intraand inter-sentence performance on the development set.", "An entity pair requires inter-sentence reasoning if the two entities from the same document have no mentions in the same sentence.", "In DocRED's development set, about 45% of entity pairs require information aggregation over multiple sentences.", "Under the same setting, our LSR model outperforms all other models in both intraand inter-sentence setting.", "The differences in F 1 scores between LSR and other models in the inter-sentence setting tend to be larger than the differences in the intra-sentence setting.", "These results demonstrate that the majority of LSR's superiority comes from the inter-sentence relational facts, suggesting that Model F 1 IntraF 1 InterF 1 Gu et al. (2017) 61.3 57.2 11.7 Nguyen and Verspoor (2018) 62.3 -Verga et al. (2018) 62.1 -Sahu et al. (2019) 58.6 -Christopoulou et al. (2019) 63.6 68.2 50.9 LSR 61.2 66.2 50.3 LSR w/o MDP Nodes 64.8 68.9 53.1 Peng et al. (2016) 63.1 -Li et al. (2016b) 67.3 58.9 Panyam et al. (2018) 60.3 65.1 45.7 Zheng et al. (2018) 61.5 -Table 3: Results on the test set of the CDR dataset.", "the latent structure induced by our model is indeed capable of synthesizing the information across multiple sentences of a document.", "Furthermore, LSR with GloVe also proves better in the inter-sentence setting compared with two BERT-based (Wang et al., 2019) models, indicating latent structure's superiority in resolving long-range dependencies across the whole document compared with the BERT encoder.", "Table 3 depicts the comparisons with state-of-the-art models on the CDR dataset.", "Gu et al. (2017); Nguyen and Verspoor (2018); Verga et al. (2018) leverage sequence-based models.", "Convolutional neural networks and self-attention networks are used as the encoders.", "Sahu et al. (2019); Christopoulou et al. (2019) use graph-based models.", "As shown in Table 3, our LSR performs worse than the state-of-the-art models.", "It is challenging for an off-the-shelf parser to get high quality dependency trees in the biomedical domain, as we observe that the MDP nodes extracted by the spaCy parser from the CDR dataset contains much less informative context compared with the nodes from DocRED.", "Here we introduce a simplified LSR model indicated as LSR w/o MDP Nodes , which removes the MDP nodes and builds a fully-connected graph using all tokens of a document.", "It shows that LSR w/o MDP Nodes consistently outperforms sequence-based and graph-based models, indicating the effectiveness the latent structure.", "Moreover, the simplified LSR outperforms most of the models with external resources, except for Li et al. 
(2016b), which leverages co-training with additional unlabeled training data.", "We believe such a setting would also benefit our LSR model.", "Here, Full indicates the EoG model with a fully connected graph as input, while NoInf is a variant of the EoG model without the inference component (Christopoulou et al., 2018).", "The simplified LSR model achieves the new state-of-the-art result on GDA.", "The Full model (Christopoulou et al., 2019) yields a higher F1 score in the inter-sentence setting while having a relatively low score in the intra-sentence setting.", "This is likely because the model neglects the differences between relations expressed within a sentence and across sentences.", "In this subsection, we use the development set of DocRED to demonstrate the effectiveness of the latent structure and refinements.", "We investigate the extent to which the latent structures, which are induced and iteratively refined by the proposed dynamic reasoner, help to improve the overall performance.", "We experiment with the three different structures defined below.", "For fair comparisons, we use the same GCN model to perform multi-hop reasoning for all these structures.", "structure in EoG (Christopoulou et al., 2019).", "Also, [1] Lark Force was an Australian Army formation established in March 1941 during World War II for service in New Britain and New Ireland.", "We adapt rules from De Cao et al. (2019) for multi-hop question answering, i.e., each mention node is connected to its entity node and to the same mention nodes across sentences, while mention nodes and MDP nodes which reside in the same sentence are fully connected.", "The model is termed QAGCN.", "Attention-based Structure: This structure is induced by AGGCN (Guo et al., 2019a) with multi-head attention (Vaswani et al., 2017).", "We extend the model from the sentence level to the document level.", "We explore multiple settings of these models with different block numbers ranging from 1 to 4, where a block is composed of a graph construction component and a densely connected GCN component.", "As shown in Figure 4, LSR outperforms QAGCN, EoG and AGGCN in terms of overall F1.", "This empirically confirms our hypothesis that the latent structure induced by LSR is able to capture a more informative context for the entire document.", "As shown in Figure 4, our LSR yields the best performance in the second refinement, outperforming the first induction by 0.72% in terms of overall F1.", "This indicates that the proposed LSR is able to induce more accurate structures by iterative refinement.", "However, too many iterations may lead to an F1 drop due to overfitting.", "Table 5 shows F1 scores of the full LSR model and with different components turned off one at a time.", "We observe that most of the components contribute to the main model, as the performance deteriorates with any of the components missing.", "The most significant difference is visible in the structure induction module.", "Removing the structure induction component leads to a 3.26-point drop in F1 score.", "This result indicates that the latent structure plays a key role in the overall performance.", "In Figure 5, we present a case study to analyze why the latent structure induced by our proposed LSR performs better than the structures learned by AGGCN.", "We use the entity World War II to illustrate the reasoning process, and our goal here is to predict the relation of the entity pair ⟨Japan, World War II⟩.", "As shown in Figure 5, in the first refinement of LSR, World
War II interacts with several local mentions with higher attention scores, e.g., 0.43 for the mention Lark Force, which will be used as a bridge between the mentions Japan and World War II.", "In the second refinement, the attention scores of several non-local mentions, such as Japan and Imperial Japanese Army, significantly increase from 0.09 to 0.41 and 0.17 to 0.37, respectively, indicating that information is propagated globally at this step.", "With such intra- and inter-sentence structures, the relation of the entity pair ⟨Japan, World War II⟩ can be predicted as participant of, which is denoted by P1344.", "Compared with LSR, the attention scores learned by AGGCN are much more balanced, indicating that the model may not be able to construct an informative structure for inference, e.g., the highest score is 0.27 in the second head, and most of the scores are near 0.11.", "We also depict the predicted relations of ContextAware, AGGCN and LSR on the graph on the right side of Figure 5.", "Interested readers can refer to Yao et al. (2019) for the definition of a relation, such as P607, P17, etc.", "The LSR model proves capable of filling in the missing relation for ⟨Japan, World War II⟩ that requires reasoning across sentences.", "However, LSR also attends to the mention New Ireland with a high score, thus failing to predict that the entity pair ⟨New Ireland, World War II⟩ actually has no relation (NIL type).", "Document-level relation extraction.", "Early efforts focus on predicting relations between entities within a single sentence by modeling interactions in the input sequence (Zeng et al., 2014; Wang et al., 2016; Zhou et al., 2016; Zhang et al., 2017; Guo et al., 2020) or the corresponding dependency tree (Xu et al., 2015a,b; Liu et al., 2015; Miwa and Bansal, 2016; Zhang et al., 2018).", "These approaches do not consider interactions across mentions and ignore relations expressed across sentence boundaries.", "Recent work begins to explore cross-sentence extraction (Quirk and Poon, 2017; Peng et al., 2017; Gupta et al., 2018; Song et al., 2018c, 2019).", "Instead of using discourse structure understanding techniques (Liu et al., 2019a; Lei et al., 2017, 2018), these approaches leverage the dependency graph to capture inter-sentence interactions, and their scope is still limited to several sentences.", "More recently, the extraction scope has been expanded to the entire document (Verga et al., 2018; Jia et al., 2019; Sahu et al., 2019; Christopoulou et al., 2019) in the biomedical domain by only considering a few relations among chemicals.", "Unlike previous work, we focus on document-level relation extraction datasets (Yao et al., 2019; Li et al., 2016a; Wu et al., 2019) from different domains with a large number of relations and entities, which require understanding a document and performing multi-hop reasoning.", "Structure-based relational reasoning.", "Structural information has been widely used for relational reasoning in various NLP applications including question answering (Dhingra et al., 2018; De Cao et al., 2019; Song et al., 2018a) and relation extraction (Sahu et al., 2019; Christopoulou et al., 2019).", "Song et al.
(2018a) and De Cao et al. (2019) leverage co-reference information and a set of rules to construct a document-level entity graph.", "GCNs (Kipf and Welling, 2017) or GRNs (Song et al., 2018b) are applied to perform reasoning for multi-hop question answering (Welbl et al., 2018).", "Sahu et al. (2019) also utilize co-reference links to construct the dependency graph and use labelled-edge GCNs (Marcheggiani and Titov, 2017) for document-level relation extraction.", "Instead of using GNNs, Christopoulou et al. (2019) use the edge-oriented model (Christopoulou et al., 2018) for logical inference based on a heterogeneous graph constructed by heuristics.", "Unlike previous approaches that use syntactic trees, co-references or heuristics, the LSR model treats the document-level structure as a latent variable and induces it in an iteratively refined fashion, allowing the model to dynamically construct the graph for better relational reasoning.", "We introduce a novel latent structure refinement (LSR) model for better reasoning in the document-level relation extraction task.", "Unlike previous approaches that rely on syntactic trees, co-references or heuristics, LSR dynamically learns a document-level structure and makes predictions in an end-to-end fashion.", "There are multiple avenues for future work.", "One possible direction is to extend the scope of structure induction to the construction of nodes without relying on an external parser.", "We would like to thank the anonymous reviewers for their thoughtful and constructive comments.", "This research is supported by the Ministry of Education, Singapore, under its Academic Research Fund (AcRF) Tier 2 Programme (MOE AcRF Tier 2 Award No: MOE2017-T2-1-156).", "Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not reflect the views of the Ministry of Education, Singapore." ]
[ "abstain", "abstain", "abstain", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "method", "abstain", "objective", "other", "method", "method", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "other", "abstain", "method", "method", "objective", "method", "method", "abstain", "abstain", "method", "abstain", "method", "other", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "abstain", "abstain", "other", "other", "other" ]
[ "In this paper we conceptualize single-document extractive summarization as a tree induction problem.", "In contrast to previous approaches (Marcu, 1999; Yoshida et al., 2014) which have relied on linguistically motivated document representations to generate summaries, our model induces a multi-root dependency tree while predicting the output summary.", "Each root node in the tree is a summary sentence, and the subtrees attached to it are sentences whose content relates to or explains the summary sentence.", "We design a new iterative refinement algorithm: it induces the trees through repeatedly refining the structures predicted by previous iterations.", "We demonstrate experimentally on two benchmark datasets that our summarizer 1 performs competitively against state-of-the-art methods.", "Single-document summarization is the task of automatically generating a shorter version of a document while retaining its most important information.", "The task has received much attention in the natural language processing community due to its potential for various information access applications.", "Examples include tools which digest textual content (e.g., news, social media, reviews), answer questions, or provide recommendations.", "Of the many summarization paradigms that have been identified over the years (see Mani 2001 and Nenkova and McKeown 2011 for comprehensive overviews), two have consistently attracted attention.", "In abstractive summarization, various text rewriting operations generate summaries using words or phrases that were not in the original text, while extractive approaches form summaries by copying and concatenating the most important spans (usually sentences) in a document.", "Recent 1 Our code is publicly available at https://github.", "approaches to (single-document) extractive summarization frame the task as a sequence labeling problem taking advantage of the success of neural network architectures (Bahdanau et al., 2015).", "The idea is to predict a label for each sentence specifying whether it should be included in the summary.", "Existing systems mostly rely on recurrent neural networks (Hochreiter and Schmidhuber, 1997) to model the document and obtain a vector representation for each sentence (Nallap-ati et al., 2017; Cheng and Lapata, 2016).", "Inter-sentential relations are captured in a sequential manner, without taking the structure of the document into account, although the latter has been shown to correlate with what readers perceive as important in a text (Marcu, 1999).", "Another problem in neural-based extractive models is the lack of interpretability.", "While capable of identifying summary sentences, these models are not able to rationalize their predictions (e.g., a sentence is in the summary because it describes important content upon which other related sentences elaborate).", "The summarization literature offers examples of models which exploit the structure of the underlying document, inspired by existing theories of discourse such as Rhetorical Structure Theory (RST; Mann and Thompson 1988).", "Most approaches produce summaries based on tree-like document representations obtained by a parser trained on discourse annotated corpora (Carlson et al., 2003; Prasad et al., 2008).", "For instance, Marcu (1999) argues that a good summary can be generated by traversing the RST discourse tree structure top-down, following nucleus nodes (dis-course units in RST are characterized regarding their text importance; nuclei denote central units, whereas satellites 
denote peripheral ones).", "Other work (Hirao et al., 2013; Yoshida et al., 2014) extends this idea by transforming RST trees into dependency trees and generating summaries by tree trimming.", "Gerani et al. (2014) summarize product reviews.", "Figure 1 example document: 1. One wily coyote traveled a bit too far from home, and its resulting adventure through Harlem had alarmed residents doing a double take and scampering to get out of its way Wednesday morning.", "2. Police say frightened New Yorkers reported the coyote sighting around 9:30 a.m., and an emergency service unit was dispatched to find the animal.", "3. The little troublemaker was caught and tranquilized in Trinity Cemetery on 155th street and Broadway, and then taken to the Wildlife Conservation Society at the Bronx Zoo, authorities said.", "4. \"The coyote is under evaluation and observation,\" said Mary Dixon, spokesperson for the Wildlife Conservation Society.", "5. She said the Department of Environmental Conservation will either send the animal to a rescue center or put it back in the wild.", "6. According to Adrian Benepe, New York City Parks Commissioner, coyotes in Manhattan are rare, but not unheard of.", "7. \"This is actually the third coyote that has been seen in the last 10 years,\" Benepe said.", "8. Benepe said there is a theory the coyotes make their way to the city from suburban Westchester.", "9. He said they probably walk down the Amtrak rail corridor along the Hudson River or swim down the Hudson River until they get to the city.", "Their system aggregates RST trees representing individual reviews into a graph, from which an abstractive summary is generated.", "Despite the intuitive appeal of discourse structure for the summarization task, the reliance on a parser which is both expensive to obtain (since it must be trained on labeled data) and error-prone presents a major obstacle to its widespread use.", "Recognizing the merits of structure-aware representations for various NLP tasks, recent efforts have focused on learning latent structures (e.g., parse trees) while optimizing a neural network model for a downstream task.", "Various methods impose structural constraints on the basic attention mechanism (Kim et al., 2017; Liu and Lapata, 2018), formulate structure learning as a reinforcement learning problem (Yogatama et al., 2017; Williams et al., 2018), or sparsify the set of possible structures (Niculae et al., 2018).", "Although latent structures are mostly induced for individual sentences, Liu and Lapata (2018) induce dependency-like structures for entire documents.", "Drawing inspiration from this work and existing discourse-informed summarization models (Marcu, 1999; Hirao et al., 2013), we frame extractive summarization as a tree induction problem.", "Our model represents documents as multi-root dependency trees where each root node is a summary sentence, and the subtrees attached to it are sentences whose content is related to and covered by the summary sentence.", "An example of a document and its corresponding tree is shown in Figure 1; tree nodes correspond to document sentences; blue nodes represent those which should be in the summary, and dependent nodes relate to or are subsumed by the parent summary sentence.", "We propose a new framework that uses structured attention (Kim et al., 2017) as both the objective and attention weights for extractive summarization.", "Our model is trained end-to-end; it induces document-level dependency trees while predicting the output summary, and brings more interpretability to the
summarization process by helping explain how document content contributes to the model's decisions.", "We design a new iterative structure refinement algorithm, which learns to induce document-level structures through repeatedly refining the trees predicted by previous iterations and allows the model to infer complex trees which go beyond simple parent-child relations (Liu and Lapata, 2018; Kim et al., 2017).", "The idea of structure refinement is conceptually related to recently proposed models for solving iterative inference problems (Marino et al., 2018; Putzky and Welling, 2017; Lee et al., 2018).", "It is also related to structured prediction energy networks (Belanger et al., 2017) which approach structured prediction as iterative minimization of an energy function.", "However, we are not aware of any previous work considering structure refinement for tree induction problems.", "Our contributions in this work are three-fold: a novel conceptualization of extractive summarization as a tree induction problem; a model which capitalizes on the notion of structured attention to learn document representations based on iterative structure refinement; and large-scale evaluation studies (both automatic and human-based) which demonstrate that our approach performs competitively against state-of-the-art methods while being able to rationalize model predictions.", "Let d denote a document containing several sentences [sent_1, sent_2, ..., sent_m], where sent_i is the i-th sentence in the document.", "Extractive summarization can be defined as the task of assigning a label y_i ∈ {0, 1} to each sent_i, indicating whether the sentence should be included in the summary.", "It is assumed that summary sentences represent the most important content of the document.", "Most extractive models frame summarization as a classification problem.", "Recent approaches (Zhang et al., 2018; Dong et al., 2018; Nallapati et al., 2017; Cheng and Lapata, 2016) incorporate a neural network-based encoder to build representations for sentences and apply a binary classifier over these representations to predict whether the sentences should be included in the summary.", "Given predicted scores r and gold labels y, the loss function can be defined as: L = −Σ_{i=1}^{m} (y_i ln(r_i) + (1 − y_i) ln(1 − r_i)) (1) The encoder in extractive summarization models is usually a recurrent neural network with Long-Short Term Memory (LSTM; Hochreiter and Schmidhuber 1997) or Gated Recurrent Units (GRU; Cho et al. 2014).", "In this paper, our baseline encoder builds on the Transformer architecture (Vaswani et al., 2017), a recently proposed highly efficient model which has achieved state-of-the-art performance in machine translation (Vaswani et al., 2017) and question answering (Yu et al., 2018).", "The Transformer aims at reducing the fundamental constraint of sequential computation which underlies most architectures based on RNNs.", "It eliminates recurrence in favor of applying a self-attention mechanism which directly models relationships between all words in a sentence.", "More formally, given a sequence of input vectors {x_1, x_2, ..., x_n}, the Transformer is composed of a stack of N identical layers, each of which has two sub-layers: h̃^l = LayerNorm(h^(l−1) + MHAtt(h^(l−1))) (2) h^l = LayerNorm(h̃^l + FFN(h̃^l)) (3) where h^0 = PosEmb(x) and PosEmb is the function adding positional embeddings to the input; the superscript l indicates layer depth; LayerNorm is the layer normalization operation proposed in Ba et al.
(2016); MHAtt represents the multi-head attention mechanism introduced in Vaswani et al. (2017) which allows the model to jointly attend to information from different representation subspaces (at different positions); and FFN is a two-layer feed-forward network with ReLU as the hidden activation function.", "For our extractive summarization task, the baseline system is composed of a sentence-level Transformer (TS) and a document-level Transformer (TD), which have the same structure.", "For each sentence s_i = [w_i1, w_i2, ..., w_in] in the input document, TS is applied to obtain a contextual representation for each word: [u_i1, u_i2, ..., u_in] = TS([w_i1, w_i2, ..., w_in]) (4) And the representation of a sentence is acquired by applying weighted pooling: a_ij = W_0 u_ij^T (5) s_i = (1/n) Σ_{j=1}^{n} a_ij u_ij (6) The document-level Transformer TD takes s_i as input and yields a contextual representation for each sentence: [v_1, v_2, ..., v_m] = TD([s_1, s_2, ..., s_m]) (7) Following previous work (Nallapati et al., 2017), we use a sigmoid function after a linear transformation to calculate the probability r_i of selecting s_i as a summary sentence: r_i = sigmoid(W_1 v_i^T) (8) 2.2 Structured Summarization Model In the Transformer model sketched above, inter-sentence relations are modeled by multi-head attention based on softmax functions, which only capture shallow structural information.", "Our summarizer, which we call SUMO as a shorthand for Structured Summarization Model, classifies sentences as summary-worthy or not, and simultaneously induces the structure of the source document as a multi-root tree.", "An overview of SUMO is illustrated in Figure 2. The model has the same sentence-level encoder TS as the baseline Transformer model (see the bottom box in Figure 2), but differs in two important ways:", "(a) it uses structured attention to model the roots (i.e., summary sentences) of the underlying tree (see the upper box in Figure 2); and", "(b) through iterative refinement it is able to progressively infer more complex structures from past guesses (see the second and third block in Figure 2).", "Figure 2: Overview of SUMO.", "A Transformer-based sentence-level encoder (yellow box) builds a vector for each sentence.", "The blue box presents the document-level encoder; dotted lines indicate iterative application of structured attention, where at each iteration the model outputs a roots distribution and the extractive loss is calculated based on gold summary sentences.", "s_i indicates the initial representation for sent_i; v_i^k indicates the sentence embedding for sent_i after iteration k.", "Structured Attention Assuming document sentences have already been encoded, SUMO first calculates the unnormalized root score r_i for sent_i to indicate the extent to which it might be selected as a root in the document tree.", "It also calculates the unnormalized edge score e_ij for the sentence pair ⟨sent_i, sent_j⟩, indicating the extent to which sent_i might be the head of sent_j in that tree (first upper block in Figure 2).", "To inject structural bias, SUMO normalizes these scores as the marginal probabilities of forming edges in the document dependency tree.", "We use the Tree-Matrix-Theorem (TMT; Koo et al.
2007; Tutte 1984) to calculate the root marginal probability r̄_i and edge marginal probability ē_ij, following the procedure introduced in Liu and Lapata (2017).", "As illustrated in Algorithm 1, we first build the Laplacian matrix L based on the unnormalized scores and calculate marginal probabilities by matrix-inverse-based operations (L̄^(−1)).", "We refer the interested reader to Koo et al. (2007) and Liu and Lapata (2017) for more details.", "In contrast to Liu and Lapata (2017), who compute the marginal probabilities of a single-root tree, our tree has multiple roots since in our task the summary typically contains multiple sentences.", "Given sentence vector s_i as input, SUMO computes: r_i = W_r s_i (9) e_ij = s_i W_e s_j^T (10) r̄_i, ē_ij = TMT(r_i, e_ij) (11) Iterative Structure Refinement SUMO essentially reduces summarization to a rooted-tree parsing problem.", "However, accurately predicting a tree in one shot is problematic.", "Algorithm 1: Calculate Tree Marginal Probabilities based on the Tree-Matrix-Theorem. Function TMT(r_i, e_ij): A_ij = 0 if i = j, exp(e_ij) otherwise; L_ij = Σ_{i'=1}^{n} A_{i'j} if i = j, −A_ij otherwise; L̄_ij = L_ij + exp(r_i) if i = j, L_ij otherwise; ē_ij = (1 − δ_{1,j}) A_ij [L̄^(−1)]_{jj} − (1 − δ_{i,1}) A_ij [L̄^(−1)]_{ji}; r̄_i = exp(r_i) [L̄^(−1)]_{ii}; return r̄_i, ē_ij. Firstly, when predicting the dependency tree, the model has access solely to labels for the roots (i.e., summary sentences), while tree edges are latent and learned without an explicit training signal.", "And as previous work (Liu and Lapata, 2017) has shown, a single application of the TMT leads to shallow tree structures.", "Secondly, the calculation of r̄_i and ē_ij would be based on first-order features alone; however, higher-order information pertaining to siblings and grandchildren has proved useful in discourse parsing (Carreras, 2007).", "We address these issues with an inference algorithm which iteratively infers latent trees.", "In contrast to multi-layer neural network architectures like the Transformer or Recursive Neural Networks (Tai et al., 2015), where word representations are updated at every layer based on the output of previous layers, we refine only the tree structure during each iteration; word representations are not passed across multiple layers.", "Empirically, at early iterations, the model learns shallow and simple trees, and information propagates mostly between neighboring nodes; as the structure gets more refined, information propagates more globally, allowing the model to learn higher-order features.", "Algorithm 2 provides the details of our refinement procedure.", "SUMO takes K iterations to learn the structure of a document.", "For each sentence, we initialize a structural vector v_i^0 with the sentence vector s_i.", "At iteration k, we use sentence embeddings from the previous iteration v^(k−1) to calculate unnormalized root r_i^k and edge e_ij^k scores using a linear transformation with weight W_r^k and a bilinear transformation with weight W_e^k, respectively.", "Marginal root and edge probabilities are subsequently normalized with the TMT to obtain r̄_i^k and ē_ij^k (see lines 4–6 in Algorithm 2).", "Then, sentence embeddings are updated with k-Hop Propagation.", "The latter takes as input the initial sentence representations s rather than the sentence embeddings v^(k−1) from the previous layer.", "In other words, new embeddings v^k are computed from scratch, relying on the structure from the previous layer.", "Within the k-Hop-Propagation function (lines 12–19), edge
probabilities ē_ij^k are used as attention weights to propagate information from a sentence to all other sentences in k hops.", "p_i^l and c_i^l represent parent and child vectors, respectively, while the vector z_i^l is updated with contextual information at hop l.", "At the final iteration (lines 9 and 10), the top sentence embeddings v^(K−1) are used to calculate the final root probabilities r̄^K.", "We define the model's loss function as the summation of the losses of all iterations: L = −Σ_{k=1}^{K} [y log(r̄^k) + (1 − y) log(1 − r̄^k)] (12) SUMO uses the root probabilities of the top layer as the scores for summary sentences.", "The k-Hop-Propagation function resembles the computation used in Graph Convolution Networks (Kipf and Welling, 2017; Marcheggiani and Titov, 2017).", "GCNs have been recently applied to latent trees (Corro and Titov, 2019), though not in combination with iterative refinement.", "In this section we present our experimental setup, describe the summarization datasets we used, discuss implementation details and our evaluation protocol, and analyze our results.", "We evaluated SUMO on two benchmark datasets, namely the CNN/DailyMail news highlights dataset (Hermann et al., 2015) and the New York Times Annotated Corpus (NYT; Sandhaus 2008).", "The CNN/DailyMail dataset contains news articles and associated highlights, i.e., a few bullet points giving a brief overview of the article.", "We used the standard splits of Hermann et al. (2015) for training, validation, and testing (90,266/1,220/1,093 CNN documents and 196,961/12,148/10,397 DailyMail documents).", "We did not anonymize entities.", "The NYT dataset contains 110,540 articles with abstractive summaries.", "Following Durrett et al. (2016), we split these into 100,834 training and 9,706 test examples, based on date of publication (test is all articles published on January 1, 2007 or later).", "We also followed their filtering procedure: documents with summaries that are shorter than 50 words were removed from the raw dataset.", "Table 1: Test set results on the CNN/DailyMail and NYT datasets using ROUGE F1 (R-1 and R-2 are shorthands for unigram and bigram overlap; R-L is the longest common subsequence). Columns: CNN, DM, CNN+DM, NYT, each reporting R-1 / R-2 / R-L. LEAD-3: 29.2/11.2/26.0, 40.7/18.3/37.2, 39.6/17.7/36.2, 35.5/17.3/32.0; Narayan et al. (2018): 30.4/11.7/26.9, 41.0/18.8/37.7, 40.0/18.2/36.6, 41.3/22.0/37.8; Marcu (1999): 25.6/6.10/19.5, 31.9/12.4/23.5, 26.5/9.80/20.4, 29.6/11.2/23.0; Durrett et al. (2016): 40.8/22.3/36.7; See et al. (2017): 39.5/17.3/36.4, 42.7/22.1/38.0; Celikyilmaz et al. (2018): 41.7/19.5/37.9; Transformer (no doc-att): 29.2/11.1/25.6, 40.5/18.1/36.8, 39.7/17.0/35.9, 41.1/21.5/37.0; Transformer (1-layer doc-att): 29.5/11.4/26.0, 41.5/18.7/38.0, 40.6/18.1/36.7, 41.8/22.1/37.8; Transformer (3-layer doc-att): 29.6/11.8/26.3, 41.7/18.8/38.0, 40.6/18.1/36.9, 42.0/22.3/38.2; SUMO (1-layer): 29.5/11.6/26.2, 41.6/18.8/37.6, 40.5/18.0/36.8, 42.2/22.1/38.1; SUMO (3-layer): 29.7/12.0/26.5, 42.0/19.1/38.0, 41.0/18.4/37.2, 42.3/22.7/38.6.", "The filtered test set includes 3,452 test examples out of the original 9,706.", "Compared to CNN/DailyMail, the NYT dataset contains longer and more elaborate summary sentences.", "Both datasets contain abstractive gold summaries, which are not readily suited to training extractive summarization models.", "A greedy algorithm similar to Nallapati et al.
(2017) was used to generate an oracle summary for each document.", "The algorithm explores different combinations of sentences and generates an oracle consisting of multiple sentences which maximize the ROUGE score with the gold summary.", "We assigned label 1 to sentences selected in the oracle summary and 0 otherwise and trained SUMO on this data.", "We followed the same training procedure for SUMO and the various Transformer-based baselines.", "The vocabulary size was set to 30K.", "We used 300D word embeddings which were initialized randomly from N(0, 0.01).", "The sentence-level Transformer has 6 layers and the hidden size of the FFN was set to 512.", "The number of heads in MHAtt was set to 4. Adam was used for training (β1 = 0.9, β2 = 0.999).", "We adopted the learning rate schedule from Vaswani et al. (2017) with warm-up over the first 8,000 steps.", "SUMO and related Transformer models produced 3-sentence summaries for each document at test time (for both the CNN/DailyMail and NYT datasets).", "We evaluated summarization quality using ROUGE F1 (Lin, 2004).", "We report unigram and bigram overlap (ROUGE-1 and ROUGE-2) as a means of assessing informativeness and the longest common subsequence (ROUGE-L) as a means of assessing fluency.", "Table 1 summarizes our results.", "We evaluated two variants of SUMO, with one and three structured-attention layers.", "We compared against a baseline which simply selects the first three sentences in each document (LEAD-3) and several incarnations of the basic Transformer model introduced in Section 2.1.", "These include a Transformer without document-level self-attention and two variants with document-level self-attention instantiated with one and three layers.", "Several state-of-the-art models are also included in Table 1, both extractive and abstractive.", "REFRESH (Narayan et al., 2018) is an extractive summarization system trained by globally optimizing the ROUGE metric with reinforcement learning.", "The system of Marcu (1999) is another extractive summarizer based on RST parsing.", "It uses discourse structures and RST's notion of nuclearity to score document sentences in terms of their importance and selects the most important ones as the summary.", "Our re-implementation of Marcu (1999) used the parser of Zhao and Huang (2017) to obtain RST trees.", "Durrett et al. (2016) develop a summarization system which integrates a compression model that enforces grammaticality and coherence.", "See et al. (2017) present an abstractive summarization system based on an encoder-decoder architecture.", "Celikyilmaz et al.'s (2018) system is state-of-the-art in abstractive summarization, using multiple agents to represent the document as well as a hierarchical attention mechanism over the agents for decoding.", "As far as SUMO is concerned, we observe that it outperforms a simple Transformer model without any document attention as well as variants with document attention.", "SUMO with three layers of structured attention overall performs best, confirming our hypothesis that document-level structure is beneficial for summarization.", "The results in Table 1 also reveal that SUMO and all Transformer-based models with document attention (doc-att) outperform LEAD-3 across metrics.", "SUMO (3-layer) is competitive with or better than state-of-the-art approaches.", "Examples of system output are shown in Table 4.
Finally, we should point out that SUMO is superior to Marcu (1999) even though the latter employs linguistically informed document representations.", "In addition to automatic evaluation, we also assessed system performance by eliciting human judgments.", "Our first evaluation quantified the degree to which summarization models retain key information from the document following a question-answering (QA) paradigm (Clarke and Lapata, 2010; Narayan et al., 2018).", "We created a set of questions based on the gold summary under the assumption that it highlights the most important document content.", "We then examined whether participants were able to answer these questions by reading system summaries alone without access to the article.", "The more questions a system can answer, the better it is at summarizing the document as a whole.", "We randomly selected 20 documents from the CNN/DailyMail and NYT datasets, respectively, and wrote multiple question-answer pairs for each gold summary.", "We created 71 questions in total, varying from two to six questions per gold summary.", "We asked participants to read the summary and answer all associated questions as best they could without access to the original document or the gold summary.", "Examples of questions and their answers are given in Table 4. Table 2: System ranking according to human judgments on summary quality and QA-based evaluation. Columns: CNN+DM (Rank / QA), NYT (Rank / QA). LEAD: 0.07 / 40.1, -0.18 / 36.3; Narayan et al. (2018): 0.21 / 62.4, 0.12 / 46.1; Durrett et al. (2016): -0.11 / 40.1; See et al. (2017): -0.23 / 36.6, -0.44 / 35.3; Celikyilmaz et al. (2018): -0.64 / 37.5; SUMO (3-layer): 0.15 / 65.3, 0.33 / 57.2; GOLD: 0.11, -0.16; ORACLE: 0.37 / 74.6, 0.41 / 67.1.", "We adopted the same scoring mechanism used in Clarke and Lapata (2010), i.e., a correct answer was marked with a score of one, partially correct answers with a score of 0.5, and zero otherwise.", "Answers were elicited using Amazon's Mechanical Turk platform.", "Participants evaluated summaries produced by the LEAD-3 baseline, our 3-layered SUMO model and multiple state-of-the-art systems.", "We elicited 5 responses per summary.", "Table 2 (QA column) presents the results of the QA-based evaluation.", "Based on the summaries generated by SUMO, participants can answer 65.3% of questions correctly on CNN/DailyMail and 57.2% on NYT.", "Summaries produced by LEAD-3 and comparison systems fare worse, with REFRESH (Narayan et al., 2018) coming close to SUMO on CNN/DailyMail but not on NYT.", "Overall, we observe there is room for improvement since no system comes close to the extractive oracle, indicating that improved sentence selection would bring further performance gains to extractive approaches.", "Between-system differences are all statistically significant (using a one-way ANOVA with post-hoc Tukey HSD tests; p < 0.01) with the exception of LEAD-3 and See et al. (2017) in both CNN+DM and NYT, Narayan et al. (2018) and SUMO in both CNN+DM and NYT, and LEAD-3 and Durrett et al.
(2016) in NYT.", "Our second evaluation study assessed the overall quality of the summaries by asking participants to rank them, taking into account the following criteria: Informativeness, Fluency, and Succinctness.", "The study was conducted on the Amazon Mechanical Turk platform using Best-Worst Scaling (Louviere et al., 2015), a less labor-intensive alternative to paired comparisons that has been shown to produce more reliable results than rating scales (Kiritchenko and Mohammad, 2017).", "Table 3: Descriptive statistics Projectivity (%), Height and Edge Agreement (%) for dependency trees produced by our model and the RST discourse parser of Zhao and Huang (2017). Columns: CNN+DM (P / H / EA), NYT (P / H / EA). Parser: 24.8 / 8.9 / -, 18.7 / 10.6 / -; SUMO (1-layer): 69.0 / 2.9 / 23.1, 54.7 / 3.6 / 20.6; SUMO (3-layer): 52.7 / 3.7 / 25.3, 45.1 / 6.2 / 21.6; Left Branching: 21.4, 21.3; Right Branching: 7.3, 6.7.", "Participants were presented with a document and summaries generated from 3 out of 7 systems and were asked to decide which summary was better and which one was worse, taking into account the criteria mentioned above.", "We used the same 20 documents from each dataset as in our QA evaluation and elicited 5 responses per comparison.", "The rating of each system was computed as the percentage of times it was chosen as best minus the times it was selected as worst.", "Ratings range from -1 (worst) to 1 (best).", "As shown in Table 2 (Rank column), participants overwhelmingly prefer the extractive oracle summaries, followed by SUMO and REFRESH (Narayan et al., 2018).", "Abstractive systems (Celikyilmaz et al., 2018; See et al., 2017; Durrett et al., 2016) perform relatively poorly in this evaluation; we suspect that humans are less forgiving of fluency errors and slightly incoherent summaries.", "Interestingly, gold summaries fare worse than the oracle and extractive systems.", "Albeit fluent, gold summaries naturally contain less detail compared to oracle-based ones; by virtue of being abstracts, they are written in a telegraphic style, often in conversational language, while participants prefer the more lucid style of the extracts.", "All pairwise comparisons among systems are statistically significant (using a one-way ANOVA with post-hoc Tukey HSD tests; p < 0.01) except LEAD-3 and See et al. (2017) in both CNN+DM and NYT, Narayan et al. (2018) and SUMO in both CNN+DM and NYT, and LEAD and Durrett et al. (2016) in NYT.", "To gain further insight into the structures learned by SUMO, we inspected the trees it produces.", "Specifically, we used the Chu-Liu-Edmonds algorithm (Chu and Liu, 1965; Edmonds, 1967) to extract the maximum spanning tree from the attention scores.", "We report various statistics on the characteristics of the induced trees across datasets in Table 3.
We also examine the trees learned from different SUMO variants (with different numbers of iterations) in order to establish whether the iterative process yields better structures.", "Specifically, we compared the dependency trees obtained from our model to those produced by a discourse parser (Zhao and Huang, 2017) trained on a corpus which combines annotations from the RST treebank (Carlson et al., 2003) and the Penn Treebank (Marcus et al., 1993).", "Unlike traditional RST discourse parsers (Feng and Hirst, 2014), which first segment a document into Elementary Discourse Units (EDUs) and then build a discourse tree with the EDUs as leaves, Zhao and Huang (2017) parse a document into an RST tree along with its syntax subtrees without segmenting it into EDUs.", "The outputs of their parser are ideally suited for comparison with our model, since we only care about document-level structures, and ignore the subtrees within sentence boundaries.", "We converted the constituency RST trees obtained from the discourse parser into dependency trees using Hirao et al.'s (2013) algorithm.", "As can be seen in Table 3, the dependency structures induced by SUMO are simpler compared to those obtained from the discourse parser.", "Our trees are generally shallower, and almost half of them are projective.", "We also calculated the percentage of head-dependency edges that are identical between learned trees and parser-generated ones.", "Although SUMO is not exposed to any annotated trees during training, a number of edges agree with the outputs of the discourse parser.", "Moreover, we observe that the iterative process involving multiple structured attention layers helps generate better discourse trees.", "We also compare SUMO trees against left- and right-branching baselines, where the document is trivially parsed into a left- or right-branching tree forming a chain-like structure.", "As shown in Table 3, SUMO outperforms these baselines (with the exception of the one-layered model on NYT).", "We should also point out that the edge agreement between SUMO-generated trees and left-/right-branching trees is low (around 30% on both datasets), indicating that the trees we learn are different from a simple chain.", "In this paper we provide a new perspective on extractive summarization, conceptualizing it as a tree induction problem.", "We present SUMO, a Structured Summarization Model, which induces a multi-root dependency tree of a document, where roots are summary-worthy sentences, and subtrees attached to them are sentences which elaborate or explain the summary content.", "SUMO generates complex trees following an iterative refinement process which builds latent structures while using information learned in previous iterations.", "Experiments on two datasets show that SUMO performs competitively against state-of-the-art methods and induces meaningful tree structures.", "In the future, we would like to generalize SUMO to abstractive summarization (i.e., to learn latent structure for documents and sentences) and perform experiments in a weakly-supervised setting where summaries are not available but labels can be extrapolated from the article's title or topics.", "We thank Serhii Havrylov for helpful suggestions.", "This research is supported by a Google PhD Fellowship to the first author.", "We gratefully acknowledge the support of the European Research Council (Lapata, award number 681760, Translating Multiple Modalities into Text; Titov, award number 678254, Broad Coverage Semantic Parsing)." ]
[ "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "method", "other", "other", "other" ]
[ "Framing involves the positive or negative presentation of an argument or issue depending on the audience and goal of the speaker (Ent-man, 1983).", "Differences in lexical framing, the focus of our work, can have large effects on peoples' opinions and beliefs.", "To make progress towards reframing arguments for positive effects, we create a dataset and method for this task.", "We use a lexical resource for connotations to create a parallel corpus and propose a method for argument reframing that combines controllable text generation (positive connotation) with a postde-coding entailment component (same denota-tion).", "Our results show that our method is effective compared to strong baselines along the dimensions of fluency, meaning, and trustwor-thiness/reduction of fear.", "Public opinion has been shown to be significantly influenced by framing effects.", "Framing refers to the presentation of an issue, where even small changes may have outsized effects on beliefs (Chong and Druckman, 2007).", "For example, when asked about welfare, the American public is largely against increasing spending (with only 20% in favor), but when asked about assistance to the poor, 65% believe that the government is not spending enough (Rasinski, 1989).", "While other research has focused on syntactic framing (Greene and Resnik, 2009) or issue framing (Hartmann, 2019), we focus specifically on lexical framing, distinguishing sentences by their connotative meaning even where they have the same denotative meaning.", "According to Frege (1892), two sentences with the same truth conditions may refer to the same entities or state of affairs ( refer-ence, also known as denotation) but be presented The work is not affiliated to Google and was conducted independently outside of the organization Arg1 Alabama's Supreme Court Chief Justice was suspended... for ordering state probate judges not to grant marriage licenses to gay couples...", "RfArg1 Alabama's Supreme Court Chief Justice was suspended... 
for ordering state probate judges not to grant legal marriage equality to gay couples...", "Arg2: Every nation with territorial claims in the Arctic is a member of NATO, except Russia.", "RfArg2: Every nation with sovereign competence in the Arctic is a member of NATO, except Russia.", "Arg3: At this dire moment, we all need to amplify our voices in defense of free speech.", "RfArg3: At this crucial moment, we all need to amplify our voices in support of free speech.", "Arg4: It is difficult to think of any single act that would do more to restore America's soft power than the election of Obama to the presidency. RfArg4: It is difficult to think of any single act that would do more to restore America's diplomatic credibility than the election of Obama to the presidency. Table 1: Examples of arguments (Arg1, Arg2) with high partisan skew collocations (in red) (Webson et al., 2020) as well as appeal to fear or prejudice argument fallacies (Arg3, Arg4) (Da San Martino et al., 2019), along with reframed arguments as an attempt by our model ENTRUST to improve trustworthiness.", "For example, undocumented workers and illegal aliens have the same denotation but different connotations (Webson et al., 2020).", "The examples in Table 1 are instances of lexical framing, where word choice determines the difference in presentation (McCombs and Ghanem, 2001).", "For example, Arg1 and Arg2 contain collocations (in red) that have a high partisan skew (Webson et al., 2020), while Arg3 and Arg4 are examples of appeal to fear or prejudice argument fallacies from propagandist news articles (Da San Martino et al., 2019).", "The goal is to reframe such arguments to be more trustworthy (e.g., less partisan, no appeal to fear fallacy).", "dimensions of politeness, sentiment, or tangibility, among others (Allaway and McKeown, 2020), but in our work we consider emotional association such as fear and trust.", "Appeal to fear is considered an argumentative fallacy (Walton, 2006; Thierer, 2012) and appears prominently in manipulative text such as propaganda (Da San Martino et al., 2019).", "On the other hand, arguments with trusted language align with the Aristotelian modes of persuasion, specifically ethos (Aristotle and Bartlett, 2019).", "In our work, we leverage such a lexical resource for connotations (Allaway and McKeown, 2020) to reframe arguments to be more trustworthy (e.g., less partisan, no appeal to fear fallacy), while maintaining the same denotative meaning.", "While retrieve-and-replace methods perform well on other attribute transfer tasks such as sentiment (Li et al., 2018a; Sudhakar et al., 2019a), our task is more dependent on broader context within a sentence even though we are performing localized replacement.", "Thus, there are two main challenges we need to address: 1) the lack of a parallel dataset of negatively and positively framed arguments (naturally-occurring); and 2) a generation approach that can not only change the connotative meaning but also keep the same denotative meaning of the input argument.", "We introduce our approach called ENTRUST: ArgumENT Reframing with langUage modelS and enTailment, with the following contributions: 1) A connotation-guided masked language model approach to generate a parallel dataset of naturally occurring arguments and their reframings (Section 2); 2) A method for argument reframing that combines controllable text generation (connotative meaning associated with trust) and entailment (same denotative meaning)
(Section 3); 3) An evaluation on two different tasks, reframing partisan arguments and appeal to fear/prejudice fallacies, showing that our method is preferred over a strong retrieval-based baseline (Sudhakar et al., 2019a) and a state-of-the-art pretrained language model (Lewis et al., 2019), and is close to human performance on several evaluation criteria such as fluency, meaning, and trustworthiness/reduction in fear.", "Code, data, and models are available at https://github.com/tuhinjubcse/ArgReframingNAACL2021. 2 Automatic Parallel Data Creation. To facilitate the reframing of arguments, we require a large-scale parallel corpus of sentences with the same denotation but different connotative meaning.", "Selection of naturally-occurring arguments.", "Since our goal is to re-write arguments, it is essential to identify an abundant source of naturally-occurring arguments.", "The Change My View sub-reddit, an argumentative discussion forum intended for persuasion on diverse topics, has been used extensively in computational argumentation research (Tan et al., 2016; Wei et al., 2016; Musi et al., 2018; Chakrabarty et al., 2019a,b; Hidey et al., 2017).", "We collect sentences from the same source and classify them as claim, premise, or non-argument using the fine-tuned BERT model released by Chakrabarty et al. (2019b).", "This results in 301,166 arguments labeled as premises.", "We consider only premises to create our parallel data because argumentative appeals occur within justifications (premises) for or against the speaker's claim.", "Allaway and McKeown (2020) provide a resource with words labeled for lexical connotations, using the aspects of Social Value, Politeness, Impact, Factuality, Sentiment, and Emotional Association.", "For our work we only consider Emotional Association, although in future work our methods could be applied to other aspects.", "To create a parallel corpus, we use this lexical resource and the 301,166 automatically identified premises from Change My View to obtain candidate words within those premises for replacement.", "We match words from the premises to those that have entries in the dictionary with emotional connotations such as fear, trust, anticipation, and joy.", "To generate replacements for these words, we need to find substitutions that maintain denotative meaning while changing connotative meaning.", "We use the connotation dictionary to address the latter.", "However, to address the former, we need to provide only paraphrases that consider the context in which these words occur.", "We thus use a masked language model (MLM).", "Masked language modeling approaches like BERT (Devlin et al., 2018) and RoBERTa (Liu et al., 2019) can be considered cloze or fill-in-the-blank tasks, where the model uses the context surrounding a masked-out token to try to predict what the masked word should be.", "We borrow this framework (RoBERTa-large, in particular) to mask the candidate words we identified via the connotation lexicon.", "However, the rank of a predicted token from an MLM is based on the language model probability;", "it provides no information about lexical connotations.", "A premise re-written from MLM replacements may thus have the same connotative meaning.", "To avoid this scenario, we restrict the MLM replacements to be words with different connotations than the original masked word (i.e., different Emotional Association).", "Our data creation process is depicted in Figure 1.", "
In example 2, the word resources has the connotations joy;trust in our dictionary.", "The MLM generates the replacement tools, which we verify has a different connotation (emotionally neutral).", "For example 1, the words \"prepare,\" \"real,\" and \"defense\" have the emotional connotations anticipation, trust, and anticipation;anger;fear, respectively.", "These words are replaced with plan, your, and safety, using our MLM.", "We treat the original premises as the target and the connotation-guided MLM-generated premises as the \"source\" for our method of argument reframing detailed in the next section (Figure 1). While this process provides us with a parallel dataset for reframing, we enhance the source side of the data to provide additional control during generation. Motivated by the work of Schiller et al. (2020), which used aspect as a control code (Keskar et al., 2019) for argument generation, we also prepend the emotional associations of the replaced words. Using the connotations from the lexical resource, we add all listed emotions as control codes by separating them with a special token ([DELIM]) (the top right block of Figure 1). During inference, we thus have more control over the emotion of the words we are generating (in our case we specifically use trust as the control code). For additional control, we also insert demarcator tokens ([SEP]) at the boundary of the words we aim to replace, to provide our generative model with a better signal on what to replace or rewrite. While the downside is that we need to identify spans for replacement at test/inference time, our experiments will show that using collocations or fear words makes this unnecessary. By using the lexical connotation resource, we do not have to rely on a separate module/tagger-based approach like that of Pryzant et al. (2020) to find biased or problematic words, which may introduce additional noise during training. Our parallel data has 271,022 pairs for training and 30,114 for validation, on which perplexity is evaluated. 3 Method for Argument Reframing. As our goal is to change connotation while maintaining denotation, we divide our approach to rewriting arguments into two primary tasks: 1) generating the appropriate lexical substitutions while being pertinent to the context; 2) ensuring that rewritten arguments reflect the desired emotional association while maintaining the same denotative meaning as the input. Table 2: Generation from fine-tuned BART without control for entailment can sometimes contradict the input, thereby failing to maintain the same denotative meaning. Source: trust <V> I suppose we could argue that they're much better at soft power than Nazi Germany or the USSR, but come on. BART: I suppose we could argue that they're much better at military strength than .......... BART+NLI: I suppose we could argue that they're much better at diplomatic communication than ...... 3.1 Controllable Text Generation. BART (Lewis et al., 2019) is a pre-trained model combining bidirectional and auto-regressive transformers that achieves state-of-the-art results in several text generation tasks. It is implemented as a sequence-to-sequence model with a bidirectional encoder over corrupted text and a left-to-right autoregressive decoder. In principle, the pre-training procedure has two stages: (1) text is corrupted with an arbitrary noising function, and (2) a transformer-to-transformer model is learned to reconstruct the original text.
Because BART has an auto-regressive decoder, it can be directly fine-tuned for most sequence generation tasks. Here, the encoder input is a sequence of words, and the decoder generates outputs auto-regressively. We refer the reader to (Lewis et al., 2019) for further details. For our task, we fine-tune BART on our parallel data, where the reframed argument using MLM & connotation dictionary is the encoder source and the original argument is the decoder target (Figure 1). The emotional connotations added to the source via the special token DELIM (see Section 2) act as a control code for generation. Moreover, for lexical framing, subtle differences in word choices matter the most. By explicitly using special tokens ([SEP]) in our parallel data during fine-tuning, the BART model learns what to edit, instead of editing random words in the sentence, a common issue often found in attribute transfer models (Li et al., 2018a; Sudhakar et al., 2019a). At test time, therefore, we can ensure the model reframes a desired content span. All hyper-parameters are mentioned in the Appendix A . Post fine-tuning at the decoding step, we use a top-k sampling strategy (Fan et al., 2018) to reframe arguments conditioned on a input argument and a target emotion. 3.2 Post-decoding NLI Our task is challenging in comparison to traditional text attribute transfer tasks as we need to maintain the same denotative meaning as the input. While in most cases BART is able to generate content which is semantically similar to the input, it sometimes contradicts the input. For example, Table 2 shows that BART changes soft power to military strength . Here the denotative meaning changes. To control for this, we introduce an additional post-processing step. We generate multiple outputs by varying the value of k (between 5 and 50) while conducting top-k sampling. We then calculate the entailment scores of these outputs with the input argument respectively using a RoBERTa (Liu et al., 2019) model fine-tuned on the Multi-NLI dataset (Williams et al., 2018) and then select the output having the best entailment score. We also experimented with other methods for incorporating entailment during decoding based on prior work (Section 8), but found these techniques to be less effective than our method. As pre-trained sequence-to-sequence language models are good at copying input and generating natural-sounding output, we hypothesize that our approach will better allow us to change connotative meaning without affecting fluency and denotation. In contrast, approaches such as vocab boosting (Ghosh et al., 2017) increase the logits of key connotative words, which would necessarily decrease the probabilities of functional words and words necessary for maintaining denotative meaning. Other approaches such as reinforcement learning (Pasunuru and Bansal, 2017) may further decrease these desired qualities, while trying to maximize another objective. 4 Evaluation Tasks and Test Data To evaluate our methods for argument reframing we need to look beyond our automatically labeled data. We consider two tasks: 1) reframing an argument that contains partisan language to a less partisan argument; and 2) reframing an appeal to fear or prejudice fallacy to an argument without this fallacy. Recently Webson et al. (2020), proposed resources and methods to disentangle denotation and connotation in vector spaces. 
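Before turning to the evaluation data, the sample-then-rerank procedure of Section 3.2 can be sketched as follows. The checkpoint names are public Hugging Face models used as stand-ins; in particular, facebook/bart-large stands in for the BART fine-tuned on the parallel data, and index 2 is the ENTAILMENT label in roberta-large-mnli.

```python
# Minimal sketch of the generate-then-rerank loop: sample candidates from a
# (fine-tuned) BART with different top-k values, then keep the output the
# MNLI model scores highest for entailment against the input argument.
import torch
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          BartForConditionalGeneration, BartTokenizer)

bart_tok = BartTokenizer.from_pretrained("facebook/bart-large")
bart = BartForConditionalGeneration.from_pretrained("facebook/bart-large")  # stand-in
nli_tok = AutoTokenizer.from_pretrained("roberta-large-mnli")
nli = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

def rerank_by_entailment(source, k_values=(5, 10, 20, 50)):
    inputs = bart_tok(source, return_tensors="pt")
    candidates = []
    for k in k_values:
        out = bart.generate(**inputs, do_sample=True, top_k=k, max_length=60)
        candidates.append(bart_tok.decode(out[0], skip_special_tokens=True))
    best, best_score = None, float("-inf")
    for cand in candidates:
        enc = nli_tok(source, cand, return_tensors="pt", truncation=True)
        with torch.no_grad():
            # ENTAILMENT is index 2 in roberta-large-mnli's label set.
            score = nli(**enc).logits.softmax(-1)[0, 2].item()
        if score > best_score:
            best, best_score = cand, score
    return best
```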
They evaluate their methods on a sample of around 300 collocations from the vocabulary of Congressional records (Gentzkow et al., 2019) and Hyper-partisan News (Kiesel et al., 2019) that occur at least 100 times and have high partisan skew.

Table 3: INP1 and INP2 are test data instances, where INP1 is an Appeal to Fear example while INP2 is an argument containing a partisan collocation.
INP1  It would be dangerous , suicidal folly for infidels to pretend that ramadan is not the month of jihad
HUM1  It would be counterproductive and unreasonable for infidels to ........ jihad
INP2  Trump backs away from further military confrontation with Iran
HUM2  Trump backs away from further military engagement with Iran

We use these words to filter arguments from the subreddits ChangeMyView and Politics . Some of these collocations include phrases such as abortion providers, investment vehicles, broken system, soft power, and territorial claims . We randomly sample 100 such arguments to benchmark the performance of our model and further use them towards human evaluation. In addition, we test our models on propaganda techniques employed in news articles with an Appeal to Fear or Prejudice (Da San Martino et al., 2019). There are a total of 182 sentence-level text fragments labeled as Appeal to Fear or Prejudice in the dataset released by Da San Martino et al. (2019). We classify these 182 fragments as claims/premises/non-argument and randomly sample 50 premises. Our goal is to reduce the fallacious nature of the argument without changing the denotative meaning. As our training distribution is different from these two datasets, these tasks and test sets allow us to better test the generalization capabilities of our models. Furthermore, almost none of the collocations introduced by Webson et al. (2020) appear in the connotation dictionary of Allaway and McKeown (2020), which helps us avoid the risk of mimicking replacements from our training data. For both of these tasks, we ask humans to generate reframings based on our input test data for comparison and benchmarking. We recruit two experts with argumentation and journalism backgrounds (not authors of this paper) to reframe arguments. For Appeal to Fear the instructions given were to make the argument less fallacious by reducing the fear and rephrasing it (HUM1 in Table 3), while for arguments with a partisan collocation the human was instructed to change the collocation so as to make it trustworthy (HUM2 in Table 3). 5 Experimental Setup To compare the quality of the reframed arguments, we benchmark our ENTRUST model against human performance and four baseline systems described below. For the data containing collocations from Webson et al. (2020), because we know they represent partisan language, the ideal goal is to reframe them. For the Appeal to Fear or Prejudice data we reframe words which portray an emotion of fear based on the popular NRC Emotion Lexicon (Mohammad and Turney, 2013). 5.1 Baseline Systems As argument reframing is a new task, we adapt several baselines that have been used for other generation tasks and also compare with human-generated reframings. Bart wIthout Demarcator and ENtailment (BART w/o D + EN ): This is the pre-trained BART model fine-tuned on our parallel data without explicitly adding signals on what to edit or reframe and without post-processing based on entailment scores. This experiment helps us understand if BART learns to adapt to the emotional connotations and can automatically edit partisan collocations or words inducing fear without control. 
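A small sketch of how the two test sets described above might be assembled: filtering arguments that contain partisan collocations, and spotting fear words via an NRC-style lexicon. Both word lists are tiny illustrative stand-ins for the full resources.

```python
# Hedged sketch of test-data filtering; both lexica are placeholder subsets.
import random

PARTISAN_COLLOCATIONS = {"abortion providers", "investment vehicles",
                         "broken system", "soft power", "territorial claims"}
NRC_FEAR_WORDS = {"dangerous", "suicidal", "jihad", "threat"}  # toy subset

def sample_partisan(arguments, n=100, seed=0):
    """Arguments containing at least one partisan collocation."""
    hits = [a for a in arguments
            if any(c in a.lower() for c in PARTISAN_COLLOCATIONS)]
    random.seed(seed)
    return random.sample(hits, min(n, len(hits)))

def fear_words(argument):
    """Fear-connoting tokens that could be demarcated for reframing."""
    return [w for w in argument.lower().split()
            if w.strip(".,!?") in NRC_FEAR_WORDS]

print(fear_words("It would be dangerous , suicidal folly for infidels ..."))
```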
Bart without EnTAilment (BART w/oEN ): This is the pre-trained BART model fine-tuned on our parallel data with explicit signals ([SEP] token) but without the NLI component as a post-processing tool. This experiment helps us understand how well BART learns to adapt to the emotional connotations without altering the denotative meaning once guided with what to reframe. Lexical Replacement (LEXREP): We use a method similar to the one employed for our parallel data creation. We rely on Masked Language Models for lexical substitutions. Because our goal is to reframe arguments to be trustworthy, we prefer substitutions which have a connotation of trust in the resource by Allaway and McKeown (2020). In case we cannot find the substitution in the connotation dictionary, we honor the default MLM-predicted infilling. Generative Style Transformer (GST): We use the state of the art for text style transfer by Sudhakar et al. (2019a), which is part of a larger \"Delete Retrieve Generate\" framework (Li et al., 2018a). To maintain parity with other baselines, instead of letting the model delete attribute keywords, we delete the partisan collocations or fear-related words from the arguments as the first step, followed by the usual retrieve and generate steps. Our training data for this method includes only arguments labeled with their attribute (e.g., positive or negative). Arguments containing lexical connotations catering to trust are positive, while those not catering to trust are negative.

Table 4: Semantic Similarity of reframed arguments with input arguments. (*) Here humans did not restrict themselves to just lexical framing, so automated metrics might penalize them for more reframing.

System        | Partisan Task | Appeal to Fear Task
BART w/oD+EN  | 64.1          | 38.5
BART w/oEN    | 91.9          | 43.1
GST           | 86.4          | 38.3
LEXREP        | 92.4          | 44.3
ENTRUST       | 92.9          | 44.5
HUMAN         | 93.9          | 41.6*

Table 5: Fluency and Meaning Preservation scores given by human judges on a scale of (1-5) for reframed arguments with respect to input arguments. Fluency and Meaning Preservation ratings are for all arguments in the test set, while Trust ratings are for arguments with Partisan collocations (higher scores better), and Fear ratings for Appeal To Fear or Prejudice ones only (lower scores better).

System        | Fluency | Meaning | Trust | Fear
INPUT         | -       | -       | 3.24  | 3.36
BART w/oD+EN  | 2.78    | 2.56    | 2.60  | 3.01
BART w/oEN    | 3.39    | 3.00    | 3.13  | 2.58
LEXREP        | 3.38    | 3.00    | 3.08  | 2.54
GST           | 2.14    | 1.81    | 2.01  | 2.44
ENTRUST       | 3.51    | 3.30    | 3.52  | 2.39
HUMAN         | 3.72    | 3.63    | 3.71  | 2.59

5.2 Evaluation Criteria Automatic evaluation. One important criterion is to measure whether the reframed arguments are faithful to the input. Even though we are changing the argument for connotations, it should still maintain the same denotative meaning as the input. To this end we calculate Semantic Similarity with our input using SENTENCE-BERT (SBERT) (Reimers and Gurevych, 2019). Human evaluation. We use Amazon Mechanical Turk to evaluate a total of 900 utterances, 750 generated from 5 systems and 150 utterances generated by humans. 
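The automatic faithfulness metric just described can be sketched as a cosine similarity between SBERT embeddings of the input and the reframed argument. The checkpoint name below is one common public SBERT model, chosen here as an assumption rather than the paper's exact configuration.

```python
# Sketch of the SBERT-based Semantic Similarity metric
# (pip install sentence-transformers).
from sentence_transformers import SentenceTransformer, util

sbert = SentenceTransformer("all-MiniLM-L6-v2")  # assumed checkpoint

def semantic_similarity(input_arg, reframed_arg):
    emb = sbert.encode([input_arg, reframed_arg], convert_to_tensor=True)
    return util.cos_sim(emb[0], emb[1]).item()

print(semantic_similarity(
    "Trump backs away from further military confrontation with Iran.",
    "Trump backs away from further military engagement with Iran."))
```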
We proposed a set of 3 criteria to evaluate the generated output: (1) Fluency (F) (How fluent and grammatical are the utterances?), (2) Meaning Preservation (M) (How well does the reframed argument capture the same denotative meaning as the input argument?), (3) Trustworthiness/Presence of Fear (T/PF).", "For the 100 input arguments reflecting a partisan view we ask Turkers to rate reframed arguments based on trustworthiness with respect to the input.", "For the 50 Appeal to Fear or Prejudice fallacies we ask Turkers to rate reframed arguments based on the presence of fear (the intention behind this being that we want to rank systems which portray the least amount of fear).", "In both of these ratings we still ask Turkers to take into account the denotative meaning (i.e., making it trustworthy or less fallacious at the expense of meaning alterations should be scored lower).", "We hired 40, 25, 39 (23 and 16) Turkers for the three separate tasks respectively.", "The computed IAA using Krippendorff's alpha for Fluency, Meaning Preservation, Trustworthiness and Presence of Fear is 0.62, 0.65, 0.51, 0.46, respectively.", "Automatic Evaluation.", "As can be seen in Table 4, our model ENTRUST maintains the denotative meaning of the input better than other systems (p < .001 using approximate randomization tests) and is only marginally behind humans when it comes to arguments with partisan collocations.", "For Appeal to Fear or Prejudice our system maintains better denotative meaning than all systems except LEXREP (p < .001).", "The automatic metric somewhat penalizes humans for changing more content than just targeted words; this unreliability is a known issue with automated metrics (Novikova et al., 2017) and strongly implies a need for human evaluation.", "Human Evaluation.", "Table 5 shows the results of our human-based evaluations.", "For fluency, meaning preservation, trustworthiness, and reduction of fear the ENTRUST model is better than all the baselines (p < .001 using approximate randomization tests).", "It is further encouraging to see that the entailment step helps us maintain better denotative meaning (see Table 5, Col. 3: Row 4 vs Row 7).", "For Presence of Fear, Turkers often rate our ENTRUST model to be the least fearful, even slightly more so than the reframings of an expert.", "We hypothesize this is because the human judges found it difficult to completely remove fear while keeping the denotative meaning (indeed, the examples in Table 6 illustrate this).

Table 6: Sample reframings with human ratings for Fluency (F), Meaning (M), and Trust or Presence of Fear (T/PF).

Original Argument: It is difficult to think of any single act that would do more to restore America's soft power than the election of Obama to the presidency
BART w/oD+EN: It is difficult to think of any single act that would do more to restore America's soft power than the election of Obama to the presidency (F 3.7, M 2.3, T 3.3)
BART w/oEN: It is difficult to think of any single act that would do more to restore America's moral authority than the election of Obama to the presidency (F 3.7, M 3.3, T 3.0)
LEXREP: It is difficult to think of any single act that would do more to restore America's moral standing than the election of Obama to the presidency (F 3.7, M 3.3, T 2.7)
GST: Hated it is difficult to think of any single act that would do more to restore America's economy than the election of Obama to the presidency (F 1.7, M 1.7, T 2.7)
ENTRUST: It is difficult to think of any single act that would do more to restore America's diplomatic credibility than the election of Obama to the presidency (F 4.3, M 3.7, T 5.0)
HUMAN: It is difficult to think of any single act that would do more to restore America's political and economic influence than the election of Obama to the presidency (F 4.0, M 3.3, T 4.0)

Original Argument: Or he can seize this opportunity and use his government's diplomatic influence to defend the life of an australian citizen , whose courageous public service is recognised by countless people across the world
BART w/oD+EN: Or he can seize this opportunity and use his government's diplomatic influence to defend the life of an australian citizen , whose courageous military service is recognised by countless people across the world (F 2.7, M 3.0, PF 2.3)
BART w/oEN: Or he can honor this opportunity and use his nation's diplomatic resources to honor the life of an australian citizen , whose unwavering public service is recognised by countless people across the world (F 4.0, M 4.7, PF 1.3)
LEXREP: Or he can take this opportunity and use his nations's diplomatic resources to defend the life of an australian citizen , whose courageous military service is recognised by countless people across the world (F 4.7, M 4.0, PF 2.0)
GST: Or he can do this opportunity and use his diplomatic expertise to change the life of an australian citizen , whose public service is recognised by countless people across the world (F 1.7, M 2.0, PF 2.0)
ENTRUST: Or he can honor this opportunity and use his nation's diplomatic resources to vindicate the life of an australian citizen , whose unwavering public service is recognised by countless people across the world (F 4.7, M 4.7, PF 1.3)
HUMAN: Or he can pick up this opportunity and use his government's diplomatic influence to defend the life of an Australian citizen, whose actions have been publicly recognized as highly relevant at an international level .

", "Sometimes, an ungrammatical generation or a reframing which changes the meaning will contain less fear (rating 1 meaning no fear at all).", "However, to avoid this we explicitly asked Turkers to rate those samples as moderate so as to not bias the overall results.", "As can be seen in Table 6, the ENTRUST model accurately captures 
diplomatic credibility as an alternative to soft power , which is encouraging as soft power is measured through culture, diplomacy, education, business/innovation, and government (https://en.wikipedia.org/wiki/Soft_power).", "The BART w/o D + EN model often fails to reframe anything, which shows the importance of adding [SEP] tokens as explicit supervision so that the model knows what to edit.", "The GST model fails at both grammaticality and meaning preservation, which makes it harder to judge its trustworthiness and ability to ameliorate fearful appeal.", "Finally, ENTRUST reframings are not static.", "Table 8 shows that for the same collocation of targeted killing , the reframings are different, contingent on the context (e.g., for the input: An Iranian government official seemed to suggest that President Trump's properties could be potential targets in retaliation for the US targeted killing of Iranian general Qassem Soleimani.).", "This demonstrates that our model not only generalizes to unseen test data, but can also produce novel, grammatical and meaningful edits based on context.", "The effects of lexical framing have been studied for social and political issues, although our work is the first to use lexical framing in generation for positive framing effects (less partisan, no appeal to fear fallacy).", "Demszky et al. (2019) and Tyagi et al. (2020) study political polarization and how this manifests in differences in word choice among different groups; KhudaBukhsh et al. (2020) provide an interpretable framework using machine translation between groups to generate differences.", "While these works encourage computational approaches to reframe arguments for better lexical choice, these approaches do not control for denotation or connotation and thus may cause differences in word choice to result in a change in meaning.", "The most similar work to ours is that of Pryzant et al. (2020), who use a corpus of Wikipedia edits to train a model for debiasing, which includes framing.", "However, in their work communicative intent is left implicit; the corpus is only labeled for types of debiasing, which includes framing at a high level and not the connotations involved.", "Thus, their model only learns lexical differences, whereas our model is controllable.", "While our focus is on lexical framing, other work has investigated the identification of other types of frames and their effects.", "Greene and Resnik (2009) studied syntactic framing, finding a link between implicit sentiment and syntactic packagings.", "Previous studies have also involved emphasis framing: Ding and Pan (2016) find that emphasizing aspects of products given personal information is more effective for content selection in advertisements.", "Other research has involved issue framing: Ajjour et al. (2019) and Hartmann et al. (2019) study how arguments are framed in debates (e.g., in terms of economics or safety).", "Nguyen (2013) and Field et al. (2018) study agenda-setting for news and congressional debates and August et al. 
(2018) for study recruitment.", "Cano-Basave and He (2016) and Musi and Aakhus (2019) leverage semantic frames for distant labeling and analysis of arguments in political debates, respectively, and find, for example, that evidence and reasoning are among the most common.", "However, these approaches have focused on identification rather than generation.", "Finally, our work is also related to style transfer and controllable generation.", "Much of the work in style transfer has referred to changing the sentiment of a statement, which changes the truth condition and thus the denotative meaning.", "Sentiment is often explicitly marked and thus approaches such as deleting and replacing lexical markers are effective (Li et al., 2018b; Sudhakar et al., 2019b), although our experiments showed the difficulty of applying these techniques to our task.", "To control text generation by limiting contradictions, Pasunuru and Bansal (2017) use an entailment score as a reward in Reinforcement Learning, ensuring that a generated text is logically implied by the ground-truth text.", "Holtzman et al. (2018) utilize a discriminative model trained on SNLI (Bowman et al., 2015) to complement an RNN generator and guide the decoding process to improve contradictions in generation.", "Although we experimented with both of these approaches, including the approach of Holtzman et al. (2018) with MNLI to account for entailment in text generation, none of them yielded better results than our method.", "Other approaches have explored vocab boosting (Ghosh et al., 2017) for tasks such as de-biasing (Ma et al., 2020), which involves increasing the values of certain words; however, as these values are on the simplex, the softmax function necessarily decreases the values of other logits which are key to fluency such as function words.", "Our experiments showed that our approach is effective in reframing partisan arguments and appeals to fear for increased trustworthiness.", "We provided a method for creating a dataset using a lexical resource for connotations and masked language modeling.", "We used this dataset to fine-tune a controllable text generation model for the task of changing connotative meaning and used a model trained for natural language inference to maintain the denotative meaning.", "Our evaluations found that our approach generalized to two different tasks and data sets.", "In future work, we plan to directly incorporate the role of stance in framing (for arguments and counter-arguments).", "We also plan to expand our work to generating concessions (Musi, 2018), where the goal is for the speaker to portray some point of agreement in a positive light before disagreeing.", "Our data is collected from Reddit and we understand and respect user privacy.", "Our models are fine-tuned on sentence level data obtained from user posts.", "These do not contain any explicit detail which leaks information about a users name, health, negative financial status, racial or ethnic origin, religious or philosophical affiliation or beliefs, sexual orientation, trade union membership, alleged or actual commission of crime.", "Second, although we use language models trained on data collected from the Web, which have been shown to have issues with bias and abusive language (Sheng et al., 2019; Wallace et al., 2019), the inductive bias of our models should limit inadvertent negative impacts.", "Unlike model variants such as GPT, BART is a conditional language model, which provides more control of the generated output.", "We have two 
levels of control on our generation approach: lexical replacements via connotations associated with trust and an entailment method that aims to keep the same denotation of the original argument.", "While dual-use concerns are certainly possible here, we think that open-sourcing this technology will help to generate arguments with more balanced and trusted language that are less targeted towards partisanship or appeals to fear.", "Finally, while there may be concerns about building generative models for persuasion, social scientists distinguish persuasion from manipulation based on two aspects: dissimulation and constraint (Nettel and Roque, 2012).", "Dissimulation involves concealing intention, which requires hiding information, whereas constraint involves removing options from the audience and forcing them to accept the conclusion.", "Our work on reframing arguments does not aim to hide information about a topic or present it as the only choice, but aims to provide the same argument using more balanced and trusted language.", "We achieve this by two key components of our technology: controllable text generation (connotation associated with trust) and entailment model to ensure same denotation.", "The technology should be used responsibly, particularly making sure the generation is controllable for trust and positive emotion and that the entailment component is used for ensuring the same denotation with the original argument.", "Finally we pay the Turkers at a rate of 15$/hour, complying with minimum wage standards in most places." ]
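Putting the two levels of control noted above together, an end-to-end reframing call might look like the sketch below. It reuses the illustrative helpers from the earlier sketches (the [SEP]/[DELIM] serialization and rerank_by_entailment) and is not the authors' released pipeline.

```python
# End-to-end sketch combining both controls: a trust control code plus a
# [SEP]-demarcated span on the source, then NLI-based selection over the
# sampled BART outputs (rerank_by_entailment is defined in an earlier sketch).

def reframe(argument, span, controls=("trust",)):
    marked = argument.replace(span, f"[SEP] {span} [SEP]", 1)
    source = " [DELIM] ".join(controls) + " " + marked
    return rerank_by_entailment(source)

print(reframe("I suppose we could argue that they're much better at "
              "soft power than their rivals.", "soft power"))
```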
[ "abstain", "abstain", "method", "objective", "result", "abstain", "abstain", "abstain", "method", "abstain", "objective", "objective", "objective", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "other", "other", "abstain", "objective", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "objective", "other", "other", "other", "other", "abstain", "objective", "other", "other", "other", "other", "other", "other", "abstain", "other", "abstain", "other", "other", "abstain", "other", "result", "method", "method", "result", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain" ]
[ "Fact verification is a challenging task that requires simultaneously reasoning and aggregating over multiple retrieved pieces of evidence to evaluate the truthfulness of a claim.", "Existing approaches typically", "(i) explore the semantic interaction between the claim and evidence at different granularity levels but fail to capture their topical consistency during the reasoning process, which we believe is crucial for verification;", "(ii) aggregate multiple pieces of evidence equally without considering their implicit stances to the claim, thereby introducing spurious information.", "To alleviate the above issues, we propose a novel topic-aware evidence reasoning and stance-aware aggregation model for more accurate fact verification, with the following four key properties:", "1) checking topical consistency between the claim and evidence;", "2) maintaining topical coherence among multiple pieces of evidence;", "3) ensuring semantic similarity between the global topic information and the semantic representation of evidence;", "4) aggregating evidence based on their implicit stances to the claim.", "Extensive experiments conducted on the two benchmark datasets demonstrate the superiority of the proposed model over several state-of-the-art approaches for fact verification.", "The source code can be obtained from https://github.com/jasenchn/TARSA .", "The Internet breaks the physical distance barrier among individuals to allow them to share data and information online.", "However, it can also be used by people with malicious purposes to disseminate misinformation or fake news.", "Such misinformation may cause ethnic conflicts, financial losses and political unrest, which has become one of the greatest threats to the public (Zafarani et al., 2019; Zhou et al., 2019b).", "Moreover, as shown in Vosoughi et al. 
(2018), compared with truth, misinformation diffuses significantly farther, faster, and deeper in all genres.", "Therefore, there is an urgent need for quickly identifying the misinformation spread on the web.", "To solve this problem, we focus on the fact verification task (Thorne et al., 2018), which aims to automatically evaluate the veracity of a given claim based on the textual evidence retrieved from external sources.", "Recent approaches for fact verification are dominated by natural language inference models (Angeli and Manning, 2014) or textual entailment recognition models (Ma et al., 2019), where the truthfulness of a claim is verified via reasoning and aggregating over multiple pieces of retrieved evidence.", "In general, existing models follow an architecture with two main sub-modules: the semantic interaction module and the entailment-based aggregation module (Hanselowski et al., 2018a; Nie et al., 2019a; Soleimani et al., 2020; Liu et al., 2020).", "The semantic interaction module attempts to grasp the rich semantic-level interactions among multiple pieces of evidence at the sentence-level (Ma et al., 2019; Zhou et al., 2019a; Subramanian and Lee, 2020) or the semantic roles-level (Zhong et al., 2020).", "The entailment-based aggregation module aims to filter out irrelevant information to capture the salient information related to the claim by aggregating the semantic information coherently.", "However, the aforementioned approaches typically learn the representation of each evidence-claim pair from the semantic perspective such as obtaining the semantic representation of each evidence-claim pair through pre-trained language models (Devlin et al., 2019) or graph-based models (Velickovic et al., 2018), which largely overlooked the topical consistency between claim and evidence.", "For example in Figure 1, given the claim A high school student named Cole Withrow was Claim A high school student named Cole Withrow was charged for leaving an unloaded shotgun in his vehicle while parking at school . E1 (gold) Family friend Kim Boykin said Withrow, an Eagle Scout and honors student , accidentally left his gun in the car afterskeet shooting over the weekend . E2 (gold) Others in the Princeton High community agree that Withrow's punishment is too harsh, especially after charges weren't filed when a loaded gun was found in an assistant principal's car two years ago. E3 (non-gold) Please know that with student and personnel issues , We carefully balance all factors to arrive at a fair and just Outcome. 
she said in a statement .", "charged for leaving an unloaded shotgun in his vehicle while parking at school and the retrieved evidence sentences (i.e., E 1 E 4 ), we would expect a fact checking model to automatically filter evidence which is topically-unrelated to the claim such as E 3 and E 4 and only relies on the evidence which is topically-consistent with the claim such as E 1 and E 2 for veracity assessment of the claim.", "In addition, we also expect the topical coherence of multiple pieces of supporting evidence such as E 1 and E 2 .", "Furthermore, in previous approaches, the learned representations of multiple pieces of evidence are aggregated via element-wise max pooling or simple dot-product attention, which inevitably fails to capture the implicit stances of evidence toward the claim (e.g., E 1 and E 2 support the claim implicitly, E 3 and E 4 are unrelated to the claim) and leads to the combination of irrelevant information with relevant one.", "To address these problems, in this paper, we propose a novel neural structure reasoning model for fact verification, named TARSA (Topic-Aware Evidence Reasoning and Stance-Aware Aggregation Model).", "A coherence-based topic attention is developed to model the topical consistency between a claim and each piece of evidence and the topical coherence among evidence built on the sentence-level topical representations.", "In addition, a semantic-topic co-attention is created to measure the coherence between the global topical information and the semantic representation of the claim and evidence.", "Moreover, the capsule network is incorporated to model the implicit stances of evidence toward the claim by the dynamic routing mechanism.", "The main contributions are listed as follows: We propose a novel topic-aware evidence reasoning and stance-aware aggregation approach, which is, to our best knowledge, the first attempt of jointly exploiting semantic interaction and topical consistency to learn latent evidence representation for fact verification.", "We incorporate the capsule network structure into our proposed model to capture the implicit stance relations between the claim and the evidence.", "We conduct extensive experiments on the two benchmark datasets to demonstrate the effectiveness of TARSA for fact verification.", "In general, fact verification is a task to assess the authenticity of a claim backed by a validated corpus of documents, which can be divided into two stages: fact extraction and claim verification (Zhou and Zafarani, 2020).", "Fact extraction can be further split into the document retrieval phase and the evidence selection phase to shrink the search space of evidence (Thorne et al., 2018).", "In the document retrieval phase, researchers typically reuse the top performing approaches in the FEVER1.0 challenge to extract the documents with high relevance for a given claim (Hanselowski et al., 2018b; Yoneda et al., 2018; Nie et al., 2019a).", "In the evidence selection phase, to select relevant sentences, researchers generally train the classification models or rank models based on the similarity between the claim and each sentence from the retrieved documents (Chen et al., 2017; Stammbach and Neumann, 2019; Soleimani et al., 2020; Wadden et al., 2020; Zhong et al., 2020; Zhou et al., 2019a).", "Many fact verification approaches focus on the claim verification stage, which can be addressed by natural language inference methods (Parikh et al., 2016; Ghaeini et al., 2018; Luken et al., 2018).", "Typically, these approaches 
contain the representation learning process and the evidence aggregation process.", "Hanselowski et al. (2018b) and Nie et al. (2019a) concatenate all pieces of evidence as input and use the max pooling to aggregate the information for claim verification via the enhanced sequential inference model (ESIM) (Chen et al., 2017).", "In a similar vein, Yin and Roth (2018) incorporate the identification of evidence to further improve claim verification using ESIM with different granularity levels.", "Ma et al. (2019) leverage the co-attention mechanism between claim and evidence to generate claim-specific evidence representations which are used to infer the claim.", "Benefiting from the development of pre-trained language models, Zhou et al. (2019a) are the first to learn evidence representations by BERT (Devlin et al., 2019), which are subsequently used in a constructed evidence graph for claim inference by aggregating all claim-evidence pairs.", "Zhong et al. (2020) further establish a semantic-based graph for representation and aggregation with XLNet (Yang et al., 2019).", "Liu et al. (2020) incorporate two sets of kernels into a sentence-level graph to learn more fine-grained evidence representations.", "Subramanian and Lee (2020) further incorporate evidence set retrieval and a hierarchical attention sum block to improve the performance of claim verification.", "Different from all previous approaches, our work for the first time handles the fact verification task by considering the topical consistency and the semantic interactions between claim and evidence.", "Moreover, we employ the capsule network to model the implicit stance relations of evidence toward the claim.", "In this section, we present an overview of the architecture of the proposed framework TARSA for fact verification.", "As shown in Figure 2, our approach consists of three main layers:", "1) the representation layer to embed claim and evidence into three types of representations by a semantic encoder and a topic encoder;", "2) the coherence layer to incorporate the topic information into our model by two attention components;", "3) the aggregation layer to model the implicit stances of evidence toward the claim using the capsule network.", "This section describes how TARSA extracts semantic representations, sentence-level topic representations, and global topic information through a semantic encoder and a topic encoder separately.", "Semantic Encoder The semantic encoder in TARSA is a vanilla transformer (Vaswani et al., 2017) with the eXtra hop attention (Zhao et al., 2020).", "For each claim $c$ paired with $N$ pieces of retrieved evidence sentences $E = \{e_1, e_2, \ldots, e_N\}$, TARSA constructs the evidence graph by treating each evidence-claim pair $x_i = (e_i, c)$ as a node (i.e., $x_i = [\,[CLS]; e_i; [SEP]; c; [SEP]\,]$) and builds a fully-connected evidence graph $G$.", "We also add a self-loop to every node to perform message propagation from itself.", "Specifically, we first apply the vanilla transformer on each node to generate the claim-dependent evidence representation using the input $x_i$: $h_i = \mathrm{Transformer}(x_i)$ (1), where $i$ denotes the $i$-th node in $G$.", "Then the eXtra hop attention takes the [CLS] token in each node as a hub token, which attends to the hub tokens of all other connected nodes to learn the global context.", "One layer of eXtra hop attention can be viewed as a single-hop message propagation among all the nodes along the edges: $\hat{h}_{i,0} = \sum_{j;\, e_{i,j}=1} \mathrm{softmax}_j\big(q_{i,0}^{T} k_{j,0} / \sqrt{d_k}\big)\, v_{j,0}$ (2), where $e_{i,j} = 1$ denotes that there is an edge between the node $i$ and the node $j$, $q_{i,0}$ denotes the query vector of the [CLS] token of node $i$, $k_{j,0}$ and $v_{j,0}$ denote the key vector and the value vector of the [CLS] token of node $j$, respectively, and $d_k$ denotes the scaling factor.", "The local context and the global context are concatenated to learn the semantic representation of all the nodes: $h_{i,0} = \mathrm{Linear}([h_{i,0}; \hat{h}_{i,0}])$, with $h_{i,\tau}$ left unchanged for $\tau \neq 0$.", "By stacking $L$ layers of the transformer with the eXtra hop attention, which takes the semantic representation of the previous layer as input, we learn the semantic representation of evidence $H = [h_1, h_2, \ldots, h_N] \in \mathbb{R}^{N \times d}$ from the graph $G$.", "Topic Encoder We extract topics in the following two forms via latent Dirichlet allocation (LDA) (Blei et al., 2003): Sentence-level topic representation: Given a claim $c$ and $N$ pieces of the retrieved evidence $E$, we extract a latent topic distribution $t \in \mathbb{R}^{K}$ for each sentence as the sentence-level topic representation, where $K$ is the number of topics.", "More concretely, we denote $t_c \in \mathbb{R}^{K}$ for claim $c$ and $t_{e_i} \in \mathbb{R}^{K}$ for evidence $e_i$.", "Each scalar value $t_k$ denotes the contribution of topic $k$ in representing the claim or evidence.", "Global topic information: We extract global topic information $P = [p_1, p_2, \ldots, p_K] \in \mathbb{R}^{K \times V}$ from the topic-word distribution by treating each sentence (i.e., claim or evidence) in corpus $D$ as a document, where $V$ denotes the vocabulary size.", "This section describes how to incorporate the topic information into our model with two attention components.", "Coherence-Based Topic Attention We assume that given a claim, the sentences used as evidence should be topically coherent with each other and the claim should be topically consistent with the relevant evidence .", "Therefore, two kinds of topical relationship are considered:", "1) topical coherence among multiple pieces of evidence ($TC_{ee}$);", "2) topical consistency between the claim and each evidence ($TC_{ce}$).", "Specifically, to incorporate the topical coherence among multiple pieces of evidence into our model, we disregard the order of evidence and treat each evidence independently.", "Then we utilize the multi-head attention (Vaswani et al., 2017) without position embedding to generate the new topic representation of evidence $\bar{t}_e$ based on the sentence-level topic representation $t_e \in \mathbb{R}^{N \times K}$ of the retrieved evidence for a given claim.", "Moreover, we utilize the co-attention mechanism (Chen and Li, 2020) to weigh each evidence based on the topic consistency between the claim and the evidence.", "Given the sentence-level topic representation $t_c$ for the claim and $\bar{t}_e$ for the corresponding evidence, the co-attention attends to the claim and the evidence simultaneously.", "We first compute the proximity matrix $F \in \mathbb{R}^{N}$: $F = \tanh(t_c W_l \bar{t}_e^{T})$ (5), where $W_l \in \mathbb{R}^{K \times K}$ is the learnable weight matrix.", "The proximity matrix can be viewed as a transformation from the claim attention space to the evidence attention space.", "Then we can predict the interaction attention by treating $F$ as the feature: $H_e = \tanh\big(W_e \bar{t}_e^{T} + (W_c t_c^{T}) F\big)$ (6), where $W_e, W_c \in \mathbb{R}^{l \times K}$ are the learnable weight matrices.", "Finally we can generate a topic similarity score between the claim and each evidence using the softmax function: $\alpha_e = \mathrm{softmax}(w H_e)$ (7), where $w \in \mathbb{R}^{1 \times l}$ is the learnable weight and $\alpha_e \in \mathbb{R}^{N}$ is the attention score of each piece of evidence for the claim.", "Eventually, the topic representation $A \in \mathbb{R}^{N \times K}$ can be computed as $A = \alpha_e \odot \bar{t}_e$ (8), where $\odot$ is the dot product operation.", "Semantic-Topic Co-attention We weigh each piece of evidence $e_i$ to indicate the importance of the evidence and infer the claim based on the coherence between the semantic representation and the global topic information via the co-attention mechanism, which is similar to the coherence-based topic attention in Section 3.2.", "More concretely, taking $H$ and $P$ as input, we compute the proximity matrix $F \in \mathbb{R}^{K \times N}$ to transform the topic attention space to the semantic attention space by Eq. (5).", "As a result, the attention weights $\alpha_e \in \mathbb{R}^{N}$ of evidence can be obtained by Eq. (6) and (7).", "Eventually, the semantic representation $S \in \mathbb{R}^{N \times d}$ can be updated via $S = \alpha_e \odot H$.", "To model the implicit stances of evidence toward the claim, we incorporate the capsule network (Sabour et al., 2017) into our model.", "As illustrated in Figure 2, we concatenate both the semantic representation $S$ and the topical representation $A$ to form the low-level evidence capsules $u_i = [a_i; s_i]|_{i=1}^{N} \in \mathbb{R}^{d_e}$.", "Let $o_j|_{j=1}^{M} \in \mathbb{R}^{d_o}$ denote the high-level class capsules, where $M$ denotes the number of classes.", "The capsule network models the relationship between the evidence capsules and the class capsules by the dynamic routing mechanism (Yang et al., 2018), which can be viewed as the implicit stances of each evidence toward three classes.", "Formally, let $\hat{u}_{j|i}$ be the predicted vector from the evidence capsule $u_i$ to the class capsule $o_j$: $\hat{u}_{j|i} = W_{j,i} u_i$ (9), where $W_{j,i} \in \mathbb{R}^{d_o \times d_e}$ denotes the transformation matrix from the evidence capsule $u_i$ to the class capsule $o_j$.", "Each class capsule aggregates all of the evidence capsules by a weighted summation over all corresponding predicted vectors: $o_j = g\big(\sum_{i=1}^{N} \alpha_{ji}\, \hat{u}_{j|i}\big)$ (10), where $g$ is a non-linear squashing function which limits the length of $o_j$ to $[0, 1]$ and $\alpha_{ji}$ is the coupling coefficient that determines the probability that the evidence capsule $u_i$ should be coupled with the class capsule $o_j$.", "The coupling coefficient is calculated by the unsupervised and iterative dynamic routing algorithm on the original logits $b_{ji}$, which is summarized in Algorithm 1 (a minimal code sketch of this routing loop appears after this section).", "We can easily classify the claim by choosing the class capsule with the largest $\|o_j\|$ via the capsule loss (Sabour et al., 2017).", "Moreover, the cross entropy loss is applied on the evidence capsules to identify whether the evidence is the ground truth evidence.", "This section describes the datasets, evaluation metrics, baselines, and implementation details in our experiments.", "Datasets We conduct experiments on two public fact checking datasets: (1) FEVER (Thorne et al., 2018) is a large-scale dataset consisting of 185,455 claims along with 5,416,537 Wikipedia pages from the June 2017 Wikipedia dump.", "The ground truth evidence and the label (i.e., SUPPORTS, REFUTES and NOT ENOUGH INFO (NEI)) are also available except in the test set.", "(2) UKP Snopes (Hanselowski et al., 2019) is a mixed-domain dataset along with 16,508 Snopes pages.

Dataset      | Train   | Dev    | Test   | Vocabulary size
FEVER        | 145,449 | 19,998 | 19,998 | 25,753
UKP Snopes   | 4,659   | 582    | 583    | 2,258
Table 1: Statistics on FEVER and UKP Snopes

", "To maintain the consistency of the two datasets, we merge the verdicts { false , mostly false } , { true , mostly true } , { mixture , unproven , undetermined } as REFUTES , SUPPORTS and NEI , respectively.", "And we omit all other labels (i.e., legend , outdated , and miscaptioned ) as these instances are difficult to 
distinguish.", "Table 1 presents the statistics of the two datasets.", "Evaluation Metrics The official evaluation metrics 1 for the FEVER dataset are Label Accuracy (LA) and FEVER score (F-score).", "LA measures the accuracy of the predicted label y i matching the ground truth label y i without considering the retrieved evidence.", "The FEVER score labels a prediction as correct if the predicted label y i is correct and the retrieved evidence matches at least one gold-standard evidence, which is a better indicator to reflect the inference capability of the model.", "We use precision, recall, and macro F1 on UKP Snopes to evaluate the performance.", "Baselines The following approaches are employed as the baselines, including three top performing models on FEVER1.0 shared task (UKP Athene (Hanselowski et al., 2018b), UCL MRG (Yoneda et al., 2018) and UNC NLP (Nie et al., 2019a)), HAN (Ma et al., 2019), BERT-based models (SR-MRS (Nie et al., 2019b), BERT Concat (Soleimani et al., 2020) and HESM (Subra-manian and Lee, 2020)), and graph-based models (GEAR (Zhou et al., 2019a), Transformer-XH (Zhao et al., 2020), KGAT (Liu et al., 2020) and DREAM (Zhong et al., 2020)).", "Document retrieval takes a claim along with a collection of documents as the input, then returns N most relevant documents.", "For the FEVER dataset, following Hanselowski et al. (2018a), we adopt the entity linking method since the title of a Wikipedia page can be viewed as an entity and can be linked easily with the extracted entities from 1 https://github.com/sheffieldnlp/fever-scorer the claim.", "For the UKP Snopes dataset, following Hanselowski et al. (2019), we adopt the tf-idf method where the tf-idf similarity between claim and concatenation of all sentences of each Snopes page is computed, and then the 5 highest ranked documents are taken as retrieved documents.", "Evidence selection retrieves the related sentences from retrieved documents in ranking setting.", "For the FEVER dataset, we follow the previous method from Zhao et al. (2020).", "Taking the concatenation of claim and each sentence as input, the [ CLS ] token representation is learned through BERT which is then used to learn a ranking score through a linear layer.", "The hinge loss is used to optimize the BERT model.", "For the UKP Snopes dataset, we adopt the tf-idf method from Hanselowski et al. 
(2019), which achieves the best precision.", "Claim verification.", "During the training phase, each claim is paired with 5 pieces of evidence; we set the batch size to 1 and the accumulation step to 8, the number of layers L is 3, the number of heads is 5, l is 100, the number of class capsules M is 3, the dimension of the class capsules d_o is 10, and the topic number K ranges from 25 to 100.", "In our implementation, the maximum length of each claim-evidence pair is 130 for both datasets.", "In this section, we evaluate our TARSA model in different aspects.", "Firstly, we compare the overall performance between our model and the baselines.", "Then we conduct an ablation study to explore the effectiveness of the topic information and the capsule network structure.", "Finally, we also explore the advantages of our model in single-hop and multi-hop reasoning scenarios.", "Table 2 and Table 3 report the overall performance of our model against the baselines for the FEVER dataset and the UKP Snopes dataset.", "As shown in Table 2, our model significantly outperforms BERT-based models on both development and test sets.", "However, compared with the graph-based models, TARSA outperforms the previous systems GEAR and KGAT, but not DREAM, for LA on the test set.", "(Note that we did not compare HESM, SR-MRS and DREAM with our model on the UKP Snopes dataset for the following reasons: HESM requires hyperlinks to construct the evidence set, which are not available in UKP Snopes; SR-MRS concatenates query and context as the input to BERT, which is similar to the BERT Concat model; and the composition of a claim in UKP Snopes is more complicated than in FEVER, which makes it more difficult for DREAM to construct a graph at the semantic level.)", "One possible reason is that DREAM constructs an evidence graph based on the semantic roles of claim and evidence, which leverages an explicit graph-level semantic structure built from semantic roles extracted by Semantic Role Labeling (Shi and Lin, 2019) in a fine-grained setting.", "Nevertheless, TARSA shows superior performance to DREAM on the FEVER score, which is a more desirable indicator to demonstrate the reasoning capability of the model.", "As shown in Table 3, TARSA performs the best compared with all previous approaches on the UKP Snopes dataset.", "Table 4 shows the results of our TARSA model with different numbers of topics on the development sets of FEVER and UKP Snopes.", "It can be observed that the optimal topic number is 25 for FEVER and 50 for UKP Snopes.", "One possible reason is that UKP Snopes is retrieved from multiple domains, which includes more diverse categories than those of FEVER.", "To further illustrate the effectiveness of the topic information and the capsule-level aggregation modeling, we perform an ablation study on the development set of FEVER.", "Effect of Topic Information: We first explore how the model performance is impacted by the removal of various topic components.", "The first six rows in Table 5 present the label accuracy (LA) and the FEVER score on the development set of FEVER after removing various components, where STI denotes the semantic-topic information in Section 3.2, TC_ee denotes the topical coherence among multiple pieces of evidence, and TC_ce denotes the topical consistency between the claim and each piece of evidence.", "As expected, LA and the FEVER score decrease consistently with a gradual removal of various components, which demonstrates the effectiveness of incorporating topic information in three aspects.", "We find that 
after all modules are removed, the performance of TARSA is still nearly 2% higher than our base model, Transformer-XH, due to the use of the capsule network in TARSA.", "Effect of Capsule-level Aggregation: We explore the effectiveness of the capsule-level aggregation by comparing it with four different aggregation methods.", "The last four rows in Table 5 show the results of the aggregation analysis on the development set of FEVER.

Models                | LA    | F-score
our TARSA             | 81.24 | 77.96
 -STI                 | 80.62 | 77.38
 -TC_ee               | 80.51 | 77.31
 -TC_ce               | 80.35 | 77.16
 -TC_ee -TC_ce        | 80.06 | 76.88
 -TC_ee -TC_ce -STI   | 79.93 | 76.80
Aggregation:
 max pooling          | 79.36 | 76.33
 sum                  | 79.60 | 76.57
 mean                 | 79.28 | 76.19
 attention-based      | 79.52 | 76.45
Table 5: Ablation analysis on the development set of FEVER.

", "The max pooling, sum, and mean aggregation consider the learned representations of evidence as a single matrix, then apply a linear layer to classify the input claim as SUPPORTS, REFUTES, or NEI.", "The attention-based aggregation method is used in Zhou et al. (2019a), where the dot-product attention is computed between the claim and each evidence to weigh them differently.", "Finally, our TARSA model aggregates the information of all pieces of evidence using the capsule network, which connects the evidence capsules to the class capsules in a clustered way.", "From the results, our model outperforms all other aggregation methods.", "Table 6 presents the performance of our model on single-hop and multi-hop reasoning scenarios on the FEVER dataset compared with several baselines.", "The single-hop scenario mainly focuses on the denoising ability of the model with the retrieved evidence, which selects the salient evidence for inference.", "The multi-hop scenario mainly emphasizes the relatedness of different pieces of evidence for the joint reasoning, which is a more complex task.", "We build the training and testing sets for both single-hop and multi-hop scenarios based on the number of gold-standard evidence of a claim.", "If more than one gold-standard evidence is required, then the claim would require multi-hop reasoning.", "The instances with the NEI label are removed because there is no gold-standard evidence matching this label.", "The single-hop reasoning set contains 78,838 and 9,682 instances for training and testing, respectively, while the multi-hop reasoning set contains 30,972 and 3,650 instances for training and testing, respectively.", "As Table 6 shows, TARSA outperforms all other baselines on LA by at least 0.31% in the single-hop scenario and 1.09% in the multi-hop scenario, respectively, which shows a consistent improvement in both scenarios.", "In addition, TARSA is more effective in the multi-hop scenario as the capsule-level aggregation helps better aggregate the information of all pieces of evidence.", "Table 7 illustrates an example from the UKP Snopes dataset which is correctly detected as REFUTES (Claim: During an interview with the Washington Post, President Obama stated that Americans would be better off under martial law.), where the topic words extracted by LDA are marked in blue.", "From the table we can observe:", "1) the top two pieces of evidence (i.e., e 1 and e 2 ) have higher topical overlap with the claim and also with each other;", "2) the lower two pieces of evidence (i.e., e 4 and e 5 ) seem less important because they are less topically relevant to the claim;", "3) for e 3 , it is difficult to judge its relevance from either the topical or the semantic perspective, which is ambiguous for the identification of the 
truthfulness of the claim.", "We randomly select 100 incorrectly predicted instances from FEVER and UKP Snopes datasets and categorize the main errors.", "The first type of errors is caused by the quality of topics extracted by LDA.", "This is because the average length of sentences in both datasets is much shorter after removing the lowand high-frequency tokens, which poses a challenge for LDA to extract high quality topics to match the topical consistency between a claim and each evidence.", "The second type of errors is due to the failure of detecting multiple entity mentions referring to the same entity.", "For example, the claim describes Go Ask Alice was the real life diary of a teenager girl , where evidence describes that This book is a work of fiction .", "The model fail to understand the relationship between diary and fiction .", "We have presented a novel topic-aware evidence reasoning and stance-aware aggregation model for fact verification.", "Our model jointly exploits the topical consistency and the semantic interaction to learn evidence representations at the sentence level.", "Moreover, we have proposed the use of the capsule network to model the implicit stances of evidence toward a claim for a better aggregation of information encoded in evidence.", "The results on two public datasets demonstrate the effectiveness of our model.", "In the future, we plan to explore an iterative reasoning mechanism for more efficient evidence aggregation for fact checking.", "We would like to thank anonymous reviewers for their valuable comments and helpful suggestions.", "This work was funded by the National Key Research and Development Program of China (2016YFC1306704), the National Natural Science Foundation of China (61772132), and the EPSRC (grant no. EP/T017112/1, EP/V048597/1).", "YH is supported by a Turing AI Fellowship funded by the UK Research and Innovation (UKRI) (grant no. EP/V020579/1)." ]
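As flagged in the method description above, here is a minimal, self-contained sketch of the dynamic routing loop (Algorithm 1) that couples evidence capsules to class capsules. The squash form follows Sabour et al. (2017); the iteration count and toy dimensions are assumptions rather than the paper's released code.

```python
# Hedged sketch of capsule dynamic routing for evidence aggregation.
import torch

def squash(s, dim=-1, eps=1e-8):
    """Non-linear squashing g(.) that keeps ||o_j|| in [0, 1)."""
    sq_norm = (s ** 2).sum(dim=dim, keepdim=True)
    return (sq_norm / (1.0 + sq_norm)) * s / torch.sqrt(sq_norm + eps)

def dynamic_routing(u_hat, num_iters=3):
    """u_hat: predicted vectors, shape (N_evidence, M_classes, d_o).
    Returns class capsules o of shape (M_classes, d_o)."""
    N, M, _ = u_hat.shape
    b = torch.zeros(N, M)                    # routing logits b_ji
    for _ in range(num_iters):
        alpha = b.softmax(dim=1)             # coupling coefficients per evidence capsule
        s = (alpha.unsqueeze(-1) * u_hat).sum(dim=0)   # weighted sum over evidence
        o = squash(s)                        # class capsules
        b = b + (u_hat * o.unsqueeze(0)).sum(dim=-1)   # agreement update
    return o

# Toy example: 5 evidence capsules, 3 classes, capsule dimension 10.
u_hat = torch.randn(5, 3, 10)
o = dynamic_routing(u_hat)
print(o.shape, o.norm(dim=-1))  # class-capsule lengths act as class scores
```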
[ "abstain", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "method", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "other", "method", "method", "abstain", "other", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "objective", "objective", "other", "other", "other" ]
[ "In this paper, we hypothesize that sarcasm is closely related to sentiment and emotion, and thereby propose a multi-task deep learning framework to solve all these three problems simultaneously in a multi-modal conversational scenario.", "We, at first, manually annotate the recently released multi-modal MUStARD sarcasm dataset with sentiment and emotion classes, both implicit and explicit.", "For multitasking, we propose two attention mechanisms, viz.", "Inter-segment Inter-modal Attention ( I e Attention) and Intra-segment Inter-modal Attention ( I a -Attention).", "The main motivation of I e -Attention is to learn the relationship between the different segments of the sentence across the modalities.", "In contrast, I a -Attention focuses within the same segment of the sentence across the modalities.", "Finally, representations from both the attentions are concatenated and shared across the five classes (i.e., sarcasm, implicit sentiment, explicit sentiment, implicit emotion, explicit emotion) for multi-tasking.", "Experimental results on the extended version of the MUStARD dataset show the efficacy of our proposed approach for sarcasm detection over the existing state-of-the-art systems.", "The evaluation also shows that the proposed multi-task framework yields better performance for the primary task, i.e., sarcasm detection, with the help of two secondary tasks, emotion and sentiment analysis.", "Sarcasm is an essential aspect of daily conversation, and it adds more fun to the language.", "Oscar Wilde, an Irish poet-playwright, quotes, Sarcasm is the lowest form of wit, but the highest form of intelligence .", "Irrespective of its relation with intelligence, sarcasm is often challenging to understand.", "Sarcasm is often used to convey thinly veiled disapproval humorously.", "This can be easily depicted through the following example, This is so good, that I am gonna enjoy it in the balcony. I can enjoy my view, whilst I enjoy my dessert.", "This utterance, at an outer glance, conveys that the speaker is extremely pleased with his dessert and wants to elevate the experience by enjoying it in the balcony.", "But, careful observation of the sentiment and emotion of the speaker helps us understand that the speaker is disgusted with the dessert and has a negative sentiment during the utterance (c.f. 
Figure 1).", "This is where sentiment and emotion come into the picture.", "Sentiment, emotion and sarcasm are highly intertwined, and one helps in the understanding of the others better.", "Even though sentiment, emotion, and sarcasm are related, sarcasm was treated separately from its other counterparts in the past due to its complexity and its high dependency on the context.", "Moreover, multi-modal input helps the model to understand the intent and the sentiment of the speaker with more certainty.", "Thus in the context of a dialogue, multi-modal data such as video (acoustic + visual) along with text helps to understand the sentiment and emotion of the speaker, and in turn, helps to detect sarcasm in the conversation.", "In this paper, we exploit these relationships, and make use of sentiment and emotion of the speaker for predicting sarcasm, specifically for the task, in a multi-modal conversational context.", "The main contributions and/or attributes of our proposed research are as follows:", "(a).", "we propose a multi-task learning framework for multi-modal sarcasm, sentiment, and emotion analysis.", "We leverage the utility of sentiment and emotion of the speaker to predict sarcasm.", "In our multi-task framework, sarcasm is treated as the primary task, whereas emotion analysis and sentiment analysis are considered as the secondary tasks.", "(b).", "We also propose two attention mechanisms viz. I e -Attention and I a -Attention to better combine the information across the modalities to effectively classify sarcasm, sentiment, and emotion.", "(c).", "We annotate the recently released Sarcasm dataset, MUStARD with sentiment and emotion classes (both implicit and explicit), and", "(d).", "We present the state-of-the-art for sarcasm prediction in multi-modal scenario.", "A survey of the literature suggests that a multimodal approach towards sarcasm detection is a fairly new approach rather than a text-based classification.", "Traditionally, rule-based classification (Joshi et al., 2017; Veale and Hao, 2010) approaches were used for sarcasm detection.", "Poria et al. (2016) have exploited sentiment and emotion features extracted from the pre-trained models for sentiment, emotion, and personality on a text corpus, and use them to predict sarcasm through a Convolutional Neural Network.", "In recent times, the use of multi-modal sources of information has gained significant attention to the researchers for affective computing.", "Mai et al. (2019) proposed a new two-level strategy ( Divide, Conquer, and Combine ) for feature fusion through a Hierarchical Feature Fusion Network for multimodal affective computing.", "Chauhan et al. (2019) exploits the interaction between a pair of modalities through an application of Inter-modal Interaction Module (IIM) that closely follows the concepts of an auto-encoder for the multi-modal sentiment and emotion analysis.", "Ghosal et al. 
"Ghosal et al. (2018) proposed a contextual inter-modal attention-based framework for multi-modal sentiment classification.", "In other work (Akhtar et al., 2019), an attention-based multi-task learning framework has been introduced for sentiment and emotion recognition.", "Although multi-modal sources of information (e.g., audio and visual, along with text) offer more evidence for detecting sarcasm, this has not been attempted much, one of the main reasons being the unavailability of multi-modal datasets.", "Recently, researchers (Castro et al., 2019) have started exploiting multi-modal sources of information for sarcasm detection.", "It is true that modalities like acoustic and visual often provide more evidence about the context of the utterance in comparison to text.", "For sarcasm detection, the very first multi-modal dataset, named MUStARD, was recently released by Castro et al. (2019), where the authors used a Support Vector Machine (SVM) classifier for sarcasm detection.", "In our current work, we first extend the MUStARD dataset (Castro et al., 2019) by manually labeling each utterance with sentiment and emotion labels.", "Thereafter, we propose a deep learning based approach along with two attention mechanisms (I_e-Attention and I_a-Attention) to leverage the sentiment and emotion for predicting sarcasm in a multi-modal multi-task framework.", "Further, to the best of our knowledge, this is the very first attempt at solving the multi-modal sarcasm detection problem in a deep multi-task framework.", "We demonstrate through a detailed empirical evaluation that sarcasm detection can be improved significantly if we are successful in leveraging the knowledge of emotion and sentiment using an effective multi-task framework.", "The MUStARD (Castro et al., 2019) dataset consists of conversational audio-visual utterances (total of 3.68 hours in length).", "This dataset consists of 690 samples, and each sample consists of an utterance accompanied by its context and a sarcasm label.", "The samples were collected from 4 popular TV series, viz.", "Friends, The Big Bang Theory, The Golden Girls, and Sarcasmaholics Anonymous, and manually annotated for the sarcasm label.", "The dataset is balanced with an equal number of samples for both sarcastic and non-sarcastic labels.", "The utterance in each sample consists of a single sentence, while the context associated with it consists of multiple sentences that precede the corresponding utterance in the dialogue.", "We manually re-annotated this dataset to introduce sentiment and emotion labels in addition to sarcasm.", "We define two kinds of emotion and sentiment values, viz.", "implicit and explicit, which are discussed in the following subsections.", "For sentiment annotation of an utterance, we consider both implicit and explicit affect information.", "The implicit sentiment of an utterance is determined with the help of context.", "In contrast, the explicit sentiment of an utterance is determined directly from the utterance itself; no external knowledge from the context is required to infer it.", "We consider three sentiment classes, namely positive, negative and neutral.", "For the example in Figure 1, the implicit sentiment would be Negative, whereas the explicit sentiment is Positive.", "Table 1 shows the overall ratio of implicit and explicit sentiment labels, respectively.", "Figure 2a and Figure 2b depict the show-wise ratio and distribution of each label.",
"Figure 2: Distribution of implicit sentiment (IS) and explicit sentiment (ES).", "Like sentiment, we annotate each context and utterance sentence for implicit and explicit emotion.", "We annotate the dataset for 9 emotion values, viz. anger (An), excited (Ex), fear (Fr), sad (Sd), surprised (Sp), frustrated (Fs), happy (Hp), neutral (Neu) and disgust (Dg).", "Each utterance and context sentence is annotated, and these can have multiple labels per sentence for both implicit and explicit emotion.", "In the example of Figure 1, the implicit emotion of the speaker would be disgust while the explicit emotion is happy.", "Table 2 shows the overall ratio of implicit and explicit emotion labels, respectively.", "Figure 3a and Figure 3b depict the show-wise ratio and distribution of each label.", "We annotate all the samples with four labels (implicit sentiment/emotion and explicit sentiment/emotion).", "We employ three graduate students highly proficient in the English language with prior experience in labeling sentiment, emotion, and sarcasm.", "Table 2: Emotion distribution. Explicit emotion: An 54, Ex 30, Fr 6, Sd 118, Sp 35, Fs 23, Hp 206, Neu 228, Dg 10. Implicit emotion: An 97, Ex 18, Fr 14, Sd 121, Sp 29, Fs 57, Hp 143, Neu 198, Dg 39.", "The guidelines for annotation, along with some examples, were explained to the annotators before starting the annotation process.", "The annotators were asked to annotate every utterance with as many emotions as are present in the utterance, along with the sentiment.", "Initially, the dataset was annotated for explicit labels, with only the utterances provided to the annotators.", "Later, for the implicit labels, we also made the corresponding context video available to provide the relevant information for each sample.", "This method helps the annotators to resolve the ambiguity between the implicit and explicit labels.", "A majority voting scheme was used for selecting the final emotion and sentiment.", "We achieve an overall Fleiss' kappa (Fleiss, 1971) score of 0.81, which is considered to be reliable.",
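The reported agreement statistic can be reproduced directly from per-item annotation counts; below is a minimal numpy sketch of Fleiss' kappa (the toy counts are illustrative, not the paper's data):

```python
import numpy as np

def fleiss_kappa(ratings: np.ndarray) -> float:
    """Fleiss' kappa for a (n_items, n_categories) matrix of rating counts.

    ratings[i, j] = number of annotators who assigned category j to item i.
    Every row must sum to the same number of annotators (here, 3).
    """
    n_items, _ = ratings.shape
    n_raters = ratings[0].sum()
    # Proportion of all assignments that went to each category.
    p_j = ratings.sum(axis=0) / (n_items * n_raters)
    # Per-item agreement: fraction of rater pairs that agree on the item.
    p_i = ((ratings ** 2).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar, p_e = p_i.mean(), (p_j ** 2).sum()
    return (p_bar - p_e) / (1 - p_e)

# Toy check: 4 utterances, 3 annotators, 3 sentiment classes.
counts = np.array([[3, 0, 0], [0, 3, 0], [2, 1, 0], [0, 0, 3]])
print(round(fleiss_kappa(counts), 3))
```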
and visual (V) .", "We describe the utterance and its' context for all the modalities below: 4.1.1 Text Utterance: Let us assume, in an utterance, there n t number of words w 1: n t = w 1 , ..., w n t , where w j R d t , d t = 300, and w j s are obtained using fastText word embeddings (Joulin et al., 2016).", "The utterance is then passed through a bidirectional Gated Recurrent Unit (Cho et al., 2014) ( BiGRU T 1 ) to learn the contextual relationship between the words.", "We apply the attention over the output of BiGRU T to extract the important contributing words w.r.t. sarcasm.", "Finally, we apply BiGRU F 2 to extract the sentence level features.", "We then concatenate the speaker information of the 1 BiGRU T refers to the Bi-directional GRU units where output from all the time steps are forwarded in the model.", "utterance with the output of BiGRU F .", "This is denoted by T u + SP u , where T u denotes the utterance for the text modality and SP u denotes the speaker for that particular utterance.", "Context: There are N c number of sentences in the context where each sentence has n tc words.", "For each sentence, words are passed through BiGRU F to learn the contextual relationship between the words, and to obtain the sentence-wise representation.", "Then, we apply self-attention over the output of BiGRU F to extract the important contributing sentences for the utterance.", "Finally, we concatenate the speaker information with each sentence and pass through the BiGRU F to obtain the T c + SP c , where T c denotes the context of the text modality, and SP c denotes the speaker of that context.", "Utterance: Let us assume there are n v number of visual frames w.r.t. an utterance.", "We take the average of all frames to extract the sentence level information for the visual modality (Castro et al., 2019), and concatenate this with the speaker information.", "This is denoted as V u + SP u , where V u R d v and d v = 2048.", "Context: Given n vc number of visual frames w.r.t. all the sentences, we take the average of all the visual frames (Castro et al., 2019) to extract the context level information, and denote this as V c .", "As sentence-wise visual frames are not provided in the dataset, speaker information is not considered.", "Utterance: Given n a number of frames for the acoustic w.r.t. an utterance, we take the average of all the frames to extract the sentence level information", "information (Castro et al., 2019), and concatenate with the speaker of the utterance.", "We denote this as A u + SP u , where A u R d a and d a = 283 corresponds to the utterance of the acoustic modality.", "Context: For text, we concatenate the utterance ( T u + SP u ) with its context ( T c + SP c ).", "For visual, we concatenate the utterance ( V u + SP u ) with its context ( V c ) while for acoustic, we consider only the utterance A u + SP u (c.f. 
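The paper reports a Keras implementation; purely as an illustration, here is a minimal PyTorch sketch of the textual utterance encoder described above (the hidden size and speaker-vocabulary size are assumed values, not the paper's):

```python
import torch
import torch.nn as nn

class UtteranceEncoder(nn.Module):
    """Sketch of the textual utterance encoder: BiGRU over fastText
    embeddings, word-level attention, then a second BiGRU whose final
    states give the sentence-level feature, concatenated with the
    one-hot speaker vector (T_u + SP_u). d_t = 300 follows the paper;
    hidden=128 and 8 speakers are illustrative assumptions.
    """
    def __init__(self, d_t: int = 300, hidden: int = 128):
        super().__init__()
        self.bigru_t = nn.GRU(d_t, hidden, batch_first=True, bidirectional=True)
        self.att_score = nn.Linear(2 * hidden, 1)   # word-level attention scores
        self.bigru_f = nn.GRU(2 * hidden, hidden, batch_first=True, bidirectional=True)

    def forward(self, words: torch.Tensor, speaker: torch.Tensor) -> torch.Tensor:
        # words: (batch, n_t, d_t) fastText vectors; speaker: (batch, n_speakers) one-hot
        h, _ = self.bigru_t(words)                        # (batch, n_t, 2*hidden)
        alpha = torch.softmax(self.att_score(h), dim=1)   # attention over time steps
        h = alpha * h                                     # re-weight word states
        _, final = self.bigru_f(h)                        # final states of BiGRU_F
        sent = torch.cat([final[0], final[1]], dim=-1)    # (batch, 2*hidden)
        return torch.cat([sent, speaker], dim=-1)         # T_u + SP_u

enc = UtteranceEncoder()
out = enc(torch.randn(2, 12, 300), torch.eye(2, 8))
print(out.shape)  # torch.Size([2, 264]) with hidden=128 and 8 speakers
```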
Figure 4).", "We do not consider any context information of the acoustics as it often contains information of many speakers, background noise, and noise due to laughter cues (which is not a part of the conversation).", "Hence, it might be difficult to disambiguate this with the laughter part of the conversation.", "Whereas, in the case of visual modality, it majorly contains the image of the speaker along with sentiment and emotion information.", "Thus, visual will not have a similar kind of problem as acoustic.", "It is also to be noted that for a fair comparison with the state-of-the-art system (Castro et al., 2019), we take the average of the acoustic and visual features across the sentences.", "In any multi-modal information analysis, it is crucial to identify the important feature segments from each modality, so that when these are combined together can improve the overall performance.", "Here, we propose two attention mechanisms:", "(i).", "Inter-segment Inter-modal Attention ( I e -Attention), and", "(ii).", "Intra-segment Inter-modal Attention ( I a -Attention).", "First, we pass the input representation from all the three modalities through a fully-connected layer ( Dense d ) to obtain the feature vector of length", "(d).", "These feature vectors are then forwarded to the aforementioned attention mechanisms.", "For each modality, we first split the feature vector into k-segments to extract the fine level information.", "We aim to learn the relationship between the feature vector of a segment of an utterance in one modality and feature vector of the another segment of the same utterance in another modality through this mechanism (c.f. Figure 5).", "Then, an I e -Attention is applied among the segments for every possible pair of modalities viz., TV, VT, TA, AT, AV, and VA.", "The overall procedure of I e -Attention is depicted in Algorithm 1.", "For each utterance, we first concatenate the feature vectors ( i.e., R d ) obtained from the three modalities i.e., R 3 d (c.f. Figure 6) and then split the feature vector into k-segments (i.e., R 3 dk ) .", "Now, we have a mixed representation of all the modalities, i.e. 
visual, audio and text.", "The aim is, for a specific segment of any particular utterance, to establish the relationship between the feature vectors obtained from the different modalities.", "Finally, the concatenated representation is shared across the five branches of our proposed network (i.e., sarcasm, I-sentiment, E-sentiment, I-emotion, & E-emotion) corresponding to three tasks, classification for the prediction (one for each task in the multi-task framework).", "Sarcasm and sentiment branches contain a Softmax layer for the final classification, while the emotion branch contains a Sigmoid layer for the classification.", "The shared representation will receive gradients of error from the five branches (sarcasm, I-sentiment, E-sentiment, I-emotion, & E-emotion), and accordingly adjusts the weights of the models.", "Thus, the shared representations will not be biased to any particular task, and it will assist the model in achieving better generalization for the multiple tasks.", "We divide the whole process into four categories:", "i).", "utterance without context without speaker (i.e., we do not use the information of context and its' speaker with utterance);", "ii).", "utterance with context without speaker (i.e., we use the context information with utterance but not speaker information);", "iii).", "utterance without context with speaker (i.e., we use the speaker information with utterance but not context information); and", "iv).", "utterance with context with speaker (i.e., we use the context and its' speaker information with utterance).", "We perform all the experiments for the setup utterances without context and speaker information (case", "i).", "Hence, even though the sentiment and emotion labels were annotated for both the context and utterance, we use the labels associated with utterances only for our experiments.", "Speaker Independent Setup: In this experiment, samples from The Big Bang Theory, The Golden Girls, and Sarcasmaholics Anonymous were considered for the training, and samples from the Friends Series were considered as the test set.", "Following this step, we were able to reduce the effect of the speaker in the model.", "Speaker Dependent Setup: This setup corresponds to the five-fold cross-validation experiments, where each fold contains samples taken randomly in a stratified manner from all the series.", "We evaluate our proposed model on the multimodal sarcasm dataset 3 , which we extended by incorporating both emotion and sentiment values.", "We perform grid search to find the optimal hyper-parameters (c.f. Table 3).", "Though we aim for a generic hyper-parameter configuration for all the experiments, in some cases, a different choice of the parameter has a significant effect.", "Therefore, we choose different parameters for a different set of experiments.", "We implement our proposed model on the Python-based Keras deep learning library.", "As the evaluation metric, we employ precision (P), recall (R), and F1-score (F1) for sarcasm detection.", "We use Adam as an optimizer, Softmax as a classifier for sarcasm and sentiment classification, and the categorical cross-entropy as a loss function.", "For emotion recognition, we use Sigmoid as an activation function and optimize the binary cross-entropy as the loss.", "We evaluate our proposed architecture with all the possible input combinations i.e. 
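A minimal sketch of the five shared branches described above, with softmax heads for sarcasm and sentiment and sigmoid heads for multi-label emotion (the layer sizes are illustrative assumptions, not the paper's values):

```python
import torch
import torch.nn as nn

class MultiTaskHeads(nn.Module):
    """Shared representation feeding five branches: sarcasm (2-way),
    implicit/explicit sentiment (3-way, softmax), and implicit/explicit
    emotion (9-way, sigmoid so each emotion is an independent yes/no).
    """
    def __init__(self, d_shared: int = 256):
        super().__init__()
        self.sarcasm = nn.Linear(d_shared, 2)
        self.i_sent = nn.Linear(d_shared, 3)
        self.e_sent = nn.Linear(d_shared, 3)
        self.i_emo = nn.Linear(d_shared, 9)
        self.e_emo = nn.Linear(d_shared, 9)

    def forward(self, shared: torch.Tensor) -> dict:
        return {
            "sarcasm": torch.softmax(self.sarcasm(shared), dim=-1),
            "i_sent": torch.softmax(self.i_sent(shared), dim=-1),
            "e_sent": torch.softmax(self.e_sent(shared), dim=-1),
            # Sigmoid: multiple emotions may be active simultaneously.
            "i_emo": torch.sigmoid(self.i_emo(shared)),
            "e_emo": torch.sigmoid(self.e_emo(shared)),
        }

heads = MultiTaskHeads()
outs = heads(torch.randn(4, 256))
print({k: tuple(v.shape) for k, v in outs.items()})
```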
"We divide the whole process into four categories:", "i).", "utterance without context without speaker (i.e., we do not use the information of the context and its speaker with the utterance);", "ii).", "utterance with context without speaker (i.e., we use the context information with the utterance but not the speaker information);", "iii).", "utterance without context with speaker (i.e., we use the speaker information with the utterance but not the context information); and", "iv).", "utterance with context with speaker (i.e., we use the context and its speaker information with the utterance).", "We perform all the experiments for the setup utterances without context and speaker information (case", "i).", "Hence, even though the sentiment and emotion labels were annotated for both the context and the utterance, we use only the labels associated with the utterances for our experiments.", "Speaker Independent Setup: In this experiment, samples from The Big Bang Theory, The Golden Girls, and Sarcasmaholics Anonymous were considered for training, and samples from the Friends series were considered as the test set.", "Following this step, we were able to reduce the effect of the speaker on the model.", "Speaker Dependent Setup: This setup corresponds to the five-fold cross-validation experiments, where each fold contains samples taken randomly in a stratified manner from all the series.", "We evaluate our proposed model on the multi-modal sarcasm dataset (footnote 3: https://github.com/soujanyaporia/MUStARD), which we extended by incorporating both emotion and sentiment values.", "We perform grid search to find the optimal hyper-parameters (cf. Table 3).", "Though we aim for a generic hyper-parameter configuration for all the experiments, in some cases, a different choice of parameter has a significant effect.", "Therefore, we choose different parameters for different sets of experiments.", "We implement our proposed model using the Python-based Keras deep learning library.", "As the evaluation metrics, we employ precision (P), recall (R), and F1-score (F1) for sarcasm detection.", "We use Adam as the optimizer, Softmax as the classifier for sarcasm and sentiment classification, and categorical cross-entropy as the loss function.", "For emotion recognition, we use Sigmoid as the activation function and optimize the binary cross-entropy loss.", "We evaluate our proposed architecture with all the possible input combinations, i.e., bi-modal (T+V, T+A, A+V) and tri-modal (T+A+V).", "We do not consider uni-modal inputs (T, A, V) because our proposed attention mechanisms require at least two modalities.", "We show the obtained results in Table 4, which outlines the comparison between the multi-task (MTL) and single-task (STL) learning frameworks without taking context and speaker information into consideration.", "Table 4: Single-task vs. multi-task, without context and without speaker information; cells are P/R/F1 for T+V, T+A, A+V, and T+A+V. Speaker Dependent: STL Sar 71.52/70.61/69.32, 64.20/64.20/63.88, 71.90/71.01/70.64, 72.08/71.62/72.01; MTL Sar+Sent 69.65/69.42/69.33, 64.09/60.72/58.21, 72.20/71.45/71.18, 72.52/71.73/72.07; MTL Sar+Emo 71.76/70.86/70.54, 65.76/65.65/65.60, 72.60/71.59/71.25, 72.76/71.88/72.11; MTL Sar+Sent+Emo 72.76/71.88/71.61, 62.23/61.15/59.61, 72.73/71.88/71.81, 73.40/72.75/72.57. Speaker Independent: STL Sar 60.11/60.18/60.16, 58.23/57.69/57.91, 60.44/60.96/60.52, 65.98/65.45/65.60; MTL Sar+Sent 62.74/62.92/62.81, 59.25/59.55/52.89, 61.60/60.95/61.14, 66.97/63.76/63.68; MTL Sar+Emo 65.11/65.16/65.13, 59.59/59.55/59.58, 63.19/63.76/62.91, 66.35/65.44/65.63; MTL Sar+Sent+Emo 65.48/65.48/65.67, 59.13/59.98/50.27, 65.59/63.76/63.90, 69.53/66.01/65.90.", "We observe that the tri-modal setup (T+A+V) shows better performance than the bi-modal setups.", "For STL, experiments with only the sarcasm class are used, whereas for MTL, we use three sets of experiments, i.e., sarcasm with sentiment (Sar + Sent), sarcasm with emotion (Sar + Emo), and sarcasm with sentiment and emotion (Sar + Sent + Emo).", "For sarcasm classification, we observe that multi-task learning with sentiment and emotion together shows better performance for both setups (i.e., speaker dependent and speaker independent) than the single-task learning framework.", "It is evident from the empirical evaluation that both sentiment and emotion assist sarcasm through the sharing of knowledge, and hence the MTL framework yields better predictions compared to the STL framework (cf. Table 4).", "We also show the results for the single-task (T+A+V) experiments under the speaker-dependent and speaker-independent setups for sentiment and emotion.", "These results can be considered as baselines for the same.", "Detailed descriptions of sentiment and emotion are given in Section 3.1 and Section 3.2, respectively.", "For sentiment analysis, the results are shown in Table 5.", "Table 5: Results of the single-task experiments for sentiment analysis (T+A+V), P/R/F1. Speaker Dependent: implicit sentiment 49.27/57.39/49.12, explicit sentiment 48.32/52.46/48.11. Speaker Independent: implicit sentiment 47.05/49.15/40.99, explicit sentiment 47.73/50.0/45.24.", "Similarly, for emotion analysis, the results are shown in Table 6.", "Along with it, results from the single-task experiments for each emotion under implicit and explicit emotion for the speaker-dependent and speaker-independent setups are shown in Table 7 and Table 8, respectively.", "As each utterance can have multiple emotion labels, we take all the emotions whose respective predicted values are above a threshold.", "We optimize and cross-validate on the evaluation metrics and set the threshold to 0.5 and 0.45 for the speaker-dependent and speaker-independent setups, respectively.",
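The multi-label emotion decoding described above amounts to thresholding the sigmoid outputs; a small sketch (the argmax fallback for all-below-threshold cases is our assumption, not stated in the text):

```python
import numpy as np

EMOTIONS = ["An", "Ex", "Fr", "Sd", "Sp", "Fs", "Hp", "Neu", "Dg"]

def decode_emotions(sigmoid_scores: np.ndarray, threshold: float = 0.5) -> list:
    """Keep every emotion whose sigmoid score clears the threshold
    (0.5 for the speaker-dependent setup, 0.45 for speaker-independent,
    per the paper). Falls back to the argmax so no utterance is left
    unlabeled -- an assumed convention.
    """
    picked = [e for e, s in zip(EMOTIONS, sigmoid_scores) if s >= threshold]
    return picked or [EMOTIONS[int(np.argmax(sigmoid_scores))]]

scores = np.array([0.1, 0.05, 0.02, 0.62, 0.08, 0.51, 0.2, 0.3, 0.07])
print(decode_emotions(scores, 0.5))    # ['Sd', 'Fs']
print(decode_emotions(scores, 0.45))   # ['Sd', 'Fs']
```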
"We further evaluate our proposed model by incorporating context and speaker information to form three combinations of experiments, viz.", "With Context Without Speaker, Without Context With Speaker, and With Context and Speaker (cf. Table 9).", "The experiments without context and without speaker information are the same as the tri-modal setup in Table 4.", "The maximum improvement (1-5%) in performance is observed when the speaker information alone is incorporated in the tri-modal setup.", "In the speaker-independent setup, incorporating both context and speaker information significantly improves the performance (1-5%).", "To understand the contribution of I_e-Attention and I_a-Attention towards the performance of the model, an ablation study was performed without the attention mechanisms (cf. Table 10).", "We compare, under similar experimental setups, the results obtained by our proposed model (without context and speaker) against the existing baseline model (Castro et al., 2019), which also made use of the same dataset.", "The comparative analysis is shown in Table 11.", "For the tri-modal experiments, our proposed multi-modal multi-task framework (Sar + Sent + Emo) achieves the best precision of 73.40% (an improvement of 1.5 points), recall of 72.75% (1.4 points) and F1-score of 72.57% (1.1 points), as compared to the precision of 71.9%, recall of 71.4%, and F1-score of 71.5% of the state-of-the-art system.", "We observe that both sentiment and emotion help in improving the efficiency of sarcasm detection.", "Similarly, for the speaker-independent setup, we obtain an improvement of 5.2% in precision, 3.4% in recall, and 3.1% in F1-score.", "We perform a statistical significance test (paired t-test) on the obtained results and observe that the performance improvement of the proposed model over the state-of-the-art is significant with 95% confidence (i.e., p-value < 0.05).",
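The paired t-test used here is standard; for instance, with scipy (the per-fold scores below are hypothetical placeholders, not the paper's numbers):

```python
from scipy import stats

# Hypothetical per-fold F1 scores for the proposed model and the
# baseline; the real per-fold numbers are not given in the text.
proposed = [72.6, 71.9, 73.1, 72.2, 72.9]
baseline = [71.5, 70.8, 71.9, 71.2, 71.6]

t_stat, p_value = stats.ttest_rel(proposed, baseline)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
# Significance at 95% confidence corresponds to p < 0.05.
```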
"We analyze the attention weights to understand the learning behavior of the proposed framework.", "We take an utterance, i.e., 'I love that you take pride in your looks, even when I have to pee in the morning, and you're in there spending an hour on your hair.' (cf. Table 12), from the dataset, which is a sarcastic utterance.", "The MTL (Sar + Sent + Emo) correctly classifies this utterance as sarcastic, while the STL (Sar) predicts it as non-sarcastic.", "In this utterance, we feel that the speaker appears pleased and happy (explicit emotion) whereas he is angry (implicit emotion) at the other person and is expressing that anger sarcastically.", "We analyze the heatmaps of the attention weights (I_e-Attention and I_a-Attention) for the above utterance.", "Each cell of the heatmaps for I_e-Attention (cf. Figure 7) represents the different segments of the sentence across the modalities.", "Cell (i, j) of the heatmap for a modality pair (say, TV) represents the influence of s_j of the visual modality on s_i of the textual modality in predicting the output (where s_i represents the i-th segment of the feature vector from the respective modality).", "In Figure 7a, for the first segment of the utterance (i.e., s_1) of the textual modality, the model puts more attention weight on different segments of the utterance (i.e., s_6, s_7, s_9, and s_10) of the visual modality to classify the utterance correctly.", "Table 11: Comparative analysis of the proposed approach with the recent state-of-the-art system; cells are P/R/F1 for T+V, T+A, A+V, and T+A+V. Speaker Dependent: Baseline 72.0/71.6/71.6, 66.6/66.2/66.2, 66.2/65.7/65.7, 71.9/71.4/71.5; Proposed Model 72.8/71.9/71.6, 62.2/61.2/59.6, 72.7/71.9/71.8, 73.4/72.8/72.6; t-test p-values 0.0023/0.0098/0.0056 (reported for the T+A+V columns). Speaker Independent: Baseline 62.2/61.5/61.7, 64.7/62.9/63.1, 64.1/61.8/61.9, 64.3/62.6/62.8; Proposed Model 65.5/65.5/65.7, 59.1/60.0/50.3, 65.6/63.8/63.9, 69.5/66.0/65.9; t-test p-values 0.0002/0.0006/0.0012 (reported for the T+A+V columns).", "Similarly, for I_a-Attention, each cell (i, j) of the heatmap (cf. Figure 8: I_a-Attention) signifies the influence of s_j on s_i in predicting the output (where s_i represents the i-th segment of the concatenated feature vector from all modalities).", "We observe that for a particular segment of the utterance (say s_6), the model puts more weight on the segment itself rather than the others.", "We also observe that in the bi-modal (T+A) experiment (cf. Table 4), our model does not perform at par when we attempt to solve all three tasks, i.e., sarcasm, sentiment, and emotion, together.", "This may be attributed to not incorporating the visual information, which contains rich affect cues in the form of sentiment and emotion.", "Hence, the introduction of sentiment in the T+A setting might be confusing the model.", "In this paper, we have proposed an effective deep learning-based multi-task model to simultaneously solve all three problems, viz. sentiment analysis, emotion analysis and sarcasm detection.", "As there was no suitable labeled data available for this problem, we have created the dataset by manually annotating an existing sarcasm dataset with sentiment and emotion labels.", "We have introduced two attention mechanisms (i.e., I_e-Attention and I_a-Attention), and incorporated the significance of context and speaker information w.r.t. sarcasm.", "Empirical evaluation results on the extended version of the MUStARD dataset suggest the efficacy of the proposed model for sarcasm analysis over the existing state-of-the-art systems.",
"The evaluation also showed that the proposed multi-tasking framework achieves better performance for the primary task, i.e., sarcasm detection, with the help of emotion analysis and sentiment analysis, the two secondary tasks in our setting.", "During our analysis, we found that the dataset is not big enough for a complex framework to learn from.", "Along with investigating new techniques, we hope that assembling a bigger curated dataset with quality annotations will help achieve better performance.", "The research reported here is partially supported by SkyMap Global India Private Limited.", "Asif Ekbal acknowledges the Young Faculty Research Fellowship (YFRF), supported by the Visvesvaraya PhD scheme for Electronics and IT, Ministry of Electronics and Information Technology (MeitY), Government of India, being implemented by Digital India Corporation (formerly Media Lab Asia)." ]
[ "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "objective", "method", "method", "abstain", "objective", "abstain", "abstain", "abstain", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "other", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "objective", "result", "objective", "other", "other" ]
[ "Abstract", "Both performance and efficiency are crucial factors for sequence labeling tasks in many real-world scenarios.", "Although the pre-trained models (PTMs) have significantly improved the performance of various sequence labeling tasks, their computational cost is expensive.", "To alleviate this problem, we extend the re-cent successful early-exit mechanism to accelerate the inference of PTMs for sequence labeling tasks.", "However, existing early-exit mechanisms are specifically designed for sequence-level tasks, rather than sequence labeling.", "In this paper, we first propose SENTEE : a simple extension of SENT ence-level E arlyE xit for sequence labeling tasks.", "To further reduce computational cost, we also propose TOKEE : a TOK en-level E arlyE xit mechanism that allows partial tokens to exit early at different layers.", "Considering the local dependency inherent in sequence labeling, we employed a window-based criterion to decide for a token whether or not to exit.", "The token-level early-exit brings the gap between training and inference, so we introduce an extra self-sampling fine-tuning stage to alleviate it.", "The extensive experiments on three popular sequence labeling tasks show that our approach can save up to 66% 75% inference cost with minimal performance degradation.", "Compared with competitive compressed models such as DistilBERT, our approach can achieve better performance under the same speed-up ratios of 2 , 3 , and 4 .", "1 1 Introduction Sequence labeling plays an important role in natural language processing (NLP).", "Many NLP tasks can be converted to sequence labeling tasks, such as named entity recognition, part-of-speech tagging, Corresponding author.", "Chinese word segmentation and Semantic Role Labeling.", "These tasks are usually fundamental and highly time-demanding, therefore, apart from performance, their inference efficiency is also very important.", "The past few years have witnessed the prevailing of pre-trained models (PTMs) (Qiu et al., 2020) on various sequence labeling tasks (Nguyen et al., 2020; Ke et al., 2020; Tian et al., 2020; Mengge et al., 2020).", "Despite their significant improvements on sequence labeling, they are notorious for enormous computational cost and slow inference speed, which hinders their utility in real-time scenarios or mobile-device scenarios.", "Recently, early-exit mechanism (Liu et al., 2020; Xin et al., 2020; Schwartz et al., 2020; Zhou et al., 2020) has been introduced to accelerate inference for large-scale PTMs.", "In their methods, each layer of the PTM is coupled with a classifier to predict the label for a given instance.", "At inference stage, if the prediction is confident 2 enough at an earlier time, it is allowed to exit without passing through the entire model.", "Figure", "1(a) gives an illustration of early-exit mechanism for text classification.", "However, most existing early-exit methods are targeted at sequence-level prediction, such as text classification, in which the prediction and its confidence score are calculated over a sequence.", "Therefore, these methods cannot be directly applied to sequence labeling tasks, where the prediction is token-level and the confidence score is required for each token.", "In this paper, we aim to extend the early-exit mechanism to sequence labeling tasks.", "First, we proposed the SENTence-level Early-Exit (SEN-TEE), which is a simple extension of existing early-exit methods.", "SENTEE allows a sequence of tokens to exit together once the maximum 
"Despite its effectiveness, we find it redundant for most tokens to update the representation at each layer.", "Thus, we propose a TOKen-level Early-Exit (TOKEE) that allows the tokens that get confident predictions to exit earlier.", "Figure 1(b) and 1(c) illustrate our proposed SENTEE and TOKEE.", "Considering the local dependency inherent in sequence labeling tasks, we decide whether a token could exit based on the uncertainty of a window of its context instead of the token itself.", "For tokens that have already exited, we do not update their representation but just copy it to the upper layers.", "However, this will introduce a train-inference discrepancy.", "To tackle this problem, we introduce an additional fine-tuning stage that samples each token's halting layer based on its uncertainty and copies its representation to upper layers during training.", "We conduct extensive experiments on three sequence labeling tasks: NER, POS tagging, and CWS.", "Experimental results show that our approach can save up to 66%-75% of the inference cost with minimal performance degradation.", "Compared with competitive compressed models such as DistilBERT, our approach can achieve better performance under speed-up ratios of 2x, 3x, and 4x.", "Recently, PTMs (Qiu et al., 2020) have become the mainstream backbone model for various sequence labeling tasks.", "The typical framework consists of a backbone encoder and a task-specific decoder.", "Encoder: In this paper, we use BERT (Devlin et al., 2019) as our backbone encoder.", "The architecture of BERT consists of multiple stacked Transformer layers (Vaswani et al., 2017).", "Given a sequence of tokens $x_1, \ldots, x_N$, the hidden state of the $l$-th Transformer layer is denoted by $H^{(l)} = [h^{(l)}_1, \ldots, h^{(l)}_N]$, and $H^{(0)}$ is the BERT input embedding.", "Decoder: Usually, we can predict the label for each token according to the hidden state of the top layer.", "The probability of labels is predicted by $P = f(WH^{(L)}) \in \mathbb{R}^{N \times C}$, (1) where $N$ is the sequence length, $C$ is the number of labels, $L$ is the number of BERT layers, $W$ is a learnable matrix, and $f(\cdot)$ is a simple softmax classifier or a conditional random field (CRF) (Lafferty et al., 2001).", "Since we focus on inference acceleration and the PTM performs well enough on sequence labeling without a CRF (Devlin et al., 2019), we do not consider using such a recurrent structure.", "The inference speed and computational cost of PTMs are crucial bottlenecks that hinder their application in many real-world scenarios.", "In many tasks, the representations at an earlier layer of PTMs are usually adequate to make a correct prediction.", "Therefore, early-exit mechanisms (Liu et al., 2020; Xin et al., 2020; Schwartz et al., 2020; Zhou et al., 2020) are proposed to dynamically stop inference on the backbone model and make predictions with intermediate representations.", "However, these existing early-exit mechanisms are built on sentence-level prediction and are unsuitable for token-level prediction in sequence labeling tasks.", "In this section, we propose two early-exit mechanisms to accelerate inference for sequence labeling tasks.", "To extend early-exit to sequence labeling, we couple each layer of the PTM with token-level off-ramps that can be simply implemented as linear classifiers.", "Once the off-ramps are trained with the golden labels, an instance has a chance to be predicted and exit at an earlier time instead of passing through the entire model.",
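An off-ramp, as described, is just a linear classifier over one layer's hidden states; a minimal PyTorch sketch (hidden size 768 assumes a BERT-base backbone, and the label-set size is illustrative):

```python
import torch
import torch.nn as nn

class OffRamp(nn.Module):
    """Token-level off-ramp: a linear layer plus softmax over the
    hidden states of one PTM layer, i.e. P(l) = softmax(W H(l)).
    """
    def __init__(self, hidden: int = 768, num_labels: int = 9):
        super().__init__()
        self.proj = nn.Linear(hidden, num_labels)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, N, hidden) hidden states of layer l
        return torch.softmax(self.proj(h), dim=-1)  # (batch, N, C)

# One off-ramp coupled with each layer of a 12-layer backbone.
ramps = nn.ModuleList(OffRamp() for _ in range(12))
probs = ramps[3](torch.randn(2, 16, 768))
print(probs.shape)  # torch.Size([2, 16, 9])
```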
"Given a sequence of tokens $X = x_1, \ldots, x_N$, we can make predictions with the injected off-ramps at each layer.", "For an off-ramp at the $l$-th layer, the label distribution of all tokens is predicted by $P^{(l)} = f^{(l)}(X; \theta) = \mathrm{softmax}(WH^{(l)})$, (2, 3) where $W$ is a learnable matrix, $f^{(l)}$ is the token-level off-ramp at the $l$-th layer, and $P^{(l)} = [p^{(l)}_1, \ldots, p^{(l)}_N]$, $p^{(l)}_n \in \mathbb{R}^C$, indicates the predicted label distribution at the $l$-th off-ramp for each token.", "The uncertainty $u^{(l)}_n$ of each token is then computed from its predicted distribution, e.g., as the normalized entropy $u^{(l)}_n = -\frac{1}{\log C}\sum_{c=1}^{C} p^{(l)}_n(c)\log p^{(l)}_n(c)$, (4) where $p^{(l)}_n$ is the label probability distribution for the $n$-th token.", "In the following sections, we will introduce two early-exit mechanisms for sequence labeling: sentence-level and token-level.", "Sentence-Level Early-Exit (SENTEE) is a simple extension of existing early-exit approaches to sequence labeling tasks.", "SENTEE allows a sequence of tokens to exit together if their uncertainty is low enough.", "Therefore, SENTEE aggregates the uncertainty of each token to obtain an overall uncertainty for the whole sequence.", "Here we adopt a straightforward but effective method, i.e., max-pooling over the uncertainties of all the tokens: $u^{(l)} = \max\{u^{(l)}_1, \ldots, u^{(l)}_N\}$, (5)", "where $u^{(l)}$ represents the uncertainty of the whole sentence.", "If $u^{(l)} < \tau$, where $\tau$ is a pre-defined threshold, we let the sentence exit at layer $l$.", "The intuition is that only when the model is confident in its prediction for the most difficult token could the whole sequence exit.", "Despite the effectiveness of SENTEE (see Table 1), we find it redundant for most simple tokens to be fed into the deep layers.", "The simple tokens that have been correctly predicted in a shallow layer cannot exit (under SENTEE) because the uncertainty of a small number of difficult tokens is still above the threshold.", "Thus, to further accelerate inference for sequence labeling tasks, we propose a token-level early-exit (TOKEE) method that allows simple tokens with confident predictions to exit early.", "Window-Based Uncertainty: Note that a prevalent problem in sequence labeling tasks is the local dependency (or label dependency).", "That is, the label of a token heavily depends on the tokens around it.", "To that end, the calculation of the uncertainty for a given token should be based not only on the token itself but also on its context.", "Motivated by this, we propose a window-based uncertainty criterion to decide for a token whether or not to exit at the current layer.", "In particular, the uncertainty for the token $x_n$ at the $l$-th layer is defined as $u'^{(l)}_n = \max\{u^{(l)}_{n-k}, \ldots, u^{(l)}_{n+k}\}$, (6) where $k$ is a pre-defined window size.", "Then we use $u'^{(l)}_n$ to decide whether the $n$-th token can exit at layer $l$, instead of $u^{(l)}_n$.", "Note that window-based uncertainty is equivalent to sentence-level uncertainty when $k$ equals the sentence length.", "Footnote 3: We also tried average-pooling, but it brings a drastic performance drop.", "We find that the average uncertainty over the sequence is often overwhelmed by lots of easy tokens, and this causes many wrong exits of difficult tokens.",
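Equations (4)-(6) can be put together into a small decision routine; the numpy sketch below assumes the normalized-entropy form of the token uncertainty:

```python
import numpy as np

def token_uncertainty(probs: np.ndarray) -> np.ndarray:
    """Normalized entropy per token; probs has shape (N, C)."""
    ent = -(probs * np.log(probs + 1e-12)).sum(axis=-1)
    return ent / np.log(probs.shape[-1])

def sentee_exit(probs: np.ndarray, tau: float) -> bool:
    # Eq. (5): the whole sentence exits when the *max* uncertainty is low.
    return token_uncertainty(probs).max() < tau

def tokee_exit(probs: np.ndarray, tau: float, k: int) -> np.ndarray:
    # Eq. (6): token n exits when the max uncertainty inside the
    # window [n-k, n+k] is below the threshold.
    u = token_uncertainty(probs)
    n = len(u)
    u_win = np.array([u[max(0, i - k): i + k + 1].max() for i in range(n)])
    return u_win < tau  # boolean exit mask per token

rng = np.random.default_rng(0)
p = rng.dirichlet(np.ones(5) * 0.3, size=8)  # 8 tokens, 5 labels
print(sentee_exit(p, tau=0.4), tokee_exit(p, tau=0.4, k=2))
```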
"Once a token exits, its representation is no longer updated but is directly copied to the upper layers.", "Such a halt-and-copy mechanism is rather intuitive, in two respects: Halt.", "If the uncertainty of a token is very small, there is little chance that its prediction will be changed in the following layers.", "So it is redundant to keep updating its representation.", "Copy.", "If the representation of a token can be classified into a label with a high degree of confidence, then its representation already contains the label information.", "So we can directly copy its representation into the upper layers to help predict the labels of other tokens.", "These exited tokens will not attend to other tokens at upper layers but can still be attended to by other tokens; thus, part of the layer-specific query projections in the upper layers can be omitted.", "By this, the computational complexity of self-attention is reduced from $O(N^2 d)$ to $O(NMd)$, where $M \ll N$ is the number of tokens that have not exited.", "Besides, the computational complexity of the pointwise FFN can also be reduced from $O(Nd^2)$ to $O(Md^2)$.", "The halt-and-copy mechanism is also similar to the multi-pass sequence labeling paradigm, in which the tokens are labeled in order of difficulty (easiest first).", "However, the copy mechanism results in a train-inference discrepancy.", "That is, a layer never processed the representation from its non-adjacent previous layers during training.", "To alleviate the discrepancy, we further propose an additional fine-tuning stage, which will be discussed in Section 3.3.2.", "For sentence-level early-exit, we follow prior early-exit work for text classification to jointly train the added off-ramps.", "For each off-ramp, the loss function is as follows: $\mathcal{L}_l = \sum_{n=1}^{N} H(y_n, f^{(l)}(X; \theta)_n)$, (7) where $H$ is the cross-entropy loss function and $N$ is the sequence length.", "Footnote 4: For English sequence labeling, we use first-pooling to get the representation of a word; if a word exits, we halt-and-copy all of its wordpieces.", "The total loss function for each sample is a weighted sum of the losses for all the off-ramps: $\mathcal{L}_{\mathrm{total}} = \frac{\sum_{l=1}^{L} w_l \mathcal{L}_l}{\sum_{l=1}^{L} w_l}$, (8) where $w_l$ is the weight for the $l$-th off-ramp and $L$ is the number of backbone layers.", "Following Zhou et al. (2020), we simply set $w_l = l$.", "In this way, the deeper an off-ramp is, the bigger the weight of its loss, and thus each off-ramp can be trained jointly in a relatively balanced way.", "Since we equip TOKEE with halt-and-copy, the common joint training of off-ramps is not enough, because the model never conducts halt-and-copy in training but does so in inference.", "In this stage, we aim to train the model to use the hidden states from different previous layers, not only the adjacent previous layer, just like in inference.", "Random Sampling: A direct way is to uniformly sample the halting layers of tokens.", "However, halting layers at inference are not random but depend on the difficulty of each token in the sequence.", "So random sampling of halting layers also causes a gap between training and inference.", "Self-Sampling: Instead, we use the fine-tuned model itself to sample the halting layers.", "For every sample in each training epoch, we randomly sample a window size and threshold for it, and then we conduct TOKEE on the trained model, under the sampled window size and threshold, without halt-and-copy.", "Thus we get the exiting layer of each token, and we use it to re-forward the sample, halting and copying each token at the corresponding layer.", "In this way, the exiting layer of a token corresponds to its difficulty.", "The deeper a token's exiting layer is, the more difficult it is.", "Because we sample the exiting layers using the model itself, we believe the gap between training and inference can be further shrunk.", "To avoid over-fitting during further training, we prevent the training loss from further reducing, similar to the flooding mechanism used by Ishida et al. (2020).", "We also employ the sandwich rule to stabilize this training stage (Yu and Huang, 2019).",
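A sketch of the joint off-ramp objective of Eqs. (7)-(8) with w_l = l, with the flooding term as an option (the flood level is an assumed hyper-parameter, not a value from the paper):

```python
import torch
import torch.nn.functional as F

def off_ramp_losses(all_probs, labels, flood_level=None):
    """Weighted sum of per-layer token-level cross-entropies (Eqs. 7-8).

    all_probs: list of L tensors, each (batch, N, C) from one off-ramp.
    labels:    (batch, N) gold label ids.
    The optional flooding term |loss - b| + b follows Ishida et al. (2020).
    """
    total, weight_sum = 0.0, 0.0
    for l, probs in enumerate(all_probs, start=1):
        ce = F.nll_loss(torch.log(probs + 1e-12).flatten(0, 1), labels.flatten())
        total += l * ce          # w_l = l: deeper off-ramps weigh more
        weight_sum += l
    loss = total / weight_sum
    if flood_level is not None:
        loss = (loss - flood_level).abs() + flood_level
    return loss

L, B, N, C = 4, 2, 8, 5
probs = [torch.softmax(torch.randn(B, N, C), -1) for _ in range(L)]
print(off_ramp_losses(probs, torch.randint(0, C, (B, N)), flood_level=0.1))
```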
"We compare self-sampling with random sampling in Section 4.4.4.", "We use average floating-point operations (FLOPs) as the measure of computational cost, which denotes how many floating-point operations the model performs for a single sample.", "FLOPs is a universal enough measure since it does not depend on the model's running environment (CPU, GPU, or TPU) and can measure the theoretical running time of the model.", "In general, the lower a model's FLOPs, the faster its inference is. 4.2 Experimental Setup 4.2.1 Dataset To verify the effectiveness of our methods, we conduct experiments on ten English and Chinese sequence labeling datasets, covering NER: CoNLL2003 (Tjong Kim Sang and De Meulder, 2003), Twitter NER (Zhang et al., 2018), Ontonotes 4.0 (Chinese) (Weischedel et al., 2011), Weibo (Peng and Dredze, 2015; He and Sun, 2017) and CLUE NER (Xu et al., 2020); POS: ARK Twitter (Gimpel et al., 2011; Owoputi et al., 2013), CTB5 POS (Xue et al., 2005) and UD POS (Nivre et al., 2016); and CWS: CTB5 Seg (Xue et al., 2005) and UD Seg (Nivre et al., 2016).", "Besides standard benchmark datasets like CoNLL2003 and Ontonotes 4.0, we also choose some datasets closer to real-world applications to verify the actual utility of our methods, such as Twitter NER and Weibo in the social media domain.", "We use the same dataset preprocessing and splits as in previous work (Huang et al., 2015; Mengge et al., 2020; Jia et al., 2020; Tian et al., 2020; Nguyen et al., 2020).", "BiLSTM-CRF (Huang et al., 2015; Ma and Hovy, 2016): The most widely used model for sequence labeling tasks before pre-trained language models prevailed in NLP.", "BERT: The powerful stacked Transformer encoder model, pre-trained on a large-scale corpus, which we use as the backbone of our methods.", "DistilBERT: The most well-known distillation of BERT.", "Huggingface released a 6-layer DistilBERT for English (Sanh et al., 2019).", "For comparison, we distill {3, 4}-layer and {3, 4, 6}-layer DistilBERT models for English and Chinese, respectively, using the same method.", "For all datasets, we use a batch size of 10.", "We perform grid search over the learning rate in {5e-6, 1e-5, 2e-5}.", "We choose the learning rate and select the model based on the development set.", "We use the AdamW optimizer (Loshchilov and Hutter, 2019).", "The warmup ratio and weight decay are set to 0.05 and 0.01, respectively.", "For English datasets, we use the 'BERT-base-cased' model released by Google (Devlin et al., 2019) as the backbone.", "For Chinese datasets, we use 'BERT-wwm' released by Cui et al. (2019).", "The DistilBERT is distilled from the backbone BERT.", "To fairly compare our methods with the baselines, we tune the speedup ratio of our methods to be consistent with the corresponding static baseline.", "We report the average performance over 5 runs with different random seeds.", "The overall results are shown in Table 1, where the speedup is relative to the backbone.", "We can see that both SENTEE and TOKEE bring little performance drop and outperform DistilBERT at a speedup ratio of 2x, achieving an effect similar to existing early-exit methods for text classification.", "Under higher speedups, 3x and 4x, SENTEE shows its weakness, but TOKEE can still maintain a certain level of performance.", "And under 2x-4x speedup ratios, TOKEE has a lower performance drop than DistilBERT.",
"What's more, for datasets where BERT shows its power over LSTM-CRF, e.g., Chinese NER, TOKEE (4x) on BERT can still outperform LSTM-CRF significantly.", "This indicates its potential utility in complicated real-world scenarios.", "To explore the fine-grained performance change under different speedup ratios, we visualize the speedup-performance trade-off curves on 6 datasets in Figure 2.", "We observe that before the speedup ratio rises to a certain turning point, there is almost no drop in performance.", "After that, the performance drops gradually.", "This shows that our methods keep the superiority of existing early-exit methods (Xin et al., 2020).", "As the speedup rises, TOKEE encounters the speedup turning point later than SENTEE.", "After both methods reach the turning point, SENTEE's performance degradation is more drastic than TOKEE's.", "Both observations indicate the higher speedup ceiling of TOKEE.", "On some datasets, such as CoNLL2003, we observe a little performance improvement under low speedup ratios; we attribute this to the potential regularization brought by early-exit, such as alleviating overthinking (Kaya et al., 2019).", "To verify the versatility of our method over different PTMs, we also conduct experiments on two well-known BERT variants, RoBERTa (Liu et al., 2019) and ALBERT (Lan et al., 2020), as shown in Table 2.", "Footnote 5: https://github.com/ymcui/Chinese-BERT-wwm.", "Footnote 6: https://github.com/brightmart/albert_zh.", "We can see that SENTEE and TOKEE also significantly outperform the static backbone's internal layers on three representative datasets of the corresponding tasks.", "For RoBERTa and ALBERT, we also observe that TOKEE can achieve better performance than SENTEE under high speedup ratios.", "We show the performance change under different k in Figure 3, keeping the speedup ratio consistent.", "We observe that: (1) when k is 0, in other words, using token-independent uncertainty rather than window-based uncertainty, the performance is almost the lowest across different speedup ratios, because it does not consider local dependency at all.", "This shows the necessity of the window-based uncertainty.", "(2) When k is relatively large, it brings a significant performance drop under high speedup ratios (3x and 4x), like SENTEE.", "(3) It is necessary to choose an appropriate k under high speedup ratios, where the effect of different k has a high variance.", "Figure 3: The performance change over different window sizes under the same speedup ratio.", "Liu et al. (2020) verified 'the lower the uncertainty, the higher the accuracy' on text classification.", "Here, we'd like to verify our window-based uncertainty on sequence labeling.", "In detail, we verify the entire window-based uncertainty and its specific hyper-parameter, k, on CoNLL2003, as shown in Figure 4.
For the uncertainty, we intercept the 4th and 8th off-ramps and calculate their accuracy in each uncertainty interval, with k = 2.", "The result, shown in Figure 4(a), indicates that 'the lower the window-based uncertainty, the higher the accuracy', similar to text classification.", "For k, we set a certain threshold ($\tau$ = 0.3) and calculate the accuracy of tokens whose window-based uncertainty is smaller than the threshold under different k, as shown in Figure 4(b).", "The result shows that, as k increases: (1) The accuracy of the screened tokens is higher.", "This shows that the wider a token's low-uncertainty neighborhood, the more accurate the token's prediction is.", "This also verifies the validity of the window-based uncertainty strategy.", "(2) The accuracy improvement slows down.", "This shows the low relevance of distant tokens' uncertainty and explains why a large k does not perform well under high speedup ratios: it does not make exiting more accurate but only slows it down.", "Transformer-based PTMs, e.g., BERT, face a challenge in processing long text, due to the $O(N^2 d)$ computational complexity brought by self-attention.", "Since TOKEE reduces the layer-wise computational complexity from $O(N^2 d + N d^2)$ to $O(NMd + Md^2)$ and SENTEE does not, we'd like to explore their effect over different sentence lengths.", "We compare the highest speedup ratios of TOKEE and SENTEE when the performance drop is < 1 on Ontonotes 4.0, as shown in Figure 5.", "We observe that TOKEE has a stable computational cost saving as the sentence length increases, but SENTEE's speedup ratio gradually reduces.", "For this, we give an intuitive explanation.", "In general, a longer sentence has more tokens, so it is more difficult for the model to give all of them confident predictions at the same layer.", "This comparison reveals the potential of TOKEE for accelerating long-text inference.", "To verify the effect of the self-sampling fine-tuning in Section 3.3.2, we compare it with random sampling and with no extra fine-tuning on CoNLL2003.", "The performance-speedup trade-off curve of TOKEE is shown in Figure 6, which shows that self-sampling is always better than random sampling for TOKEE.", "As the speedup ratio rises, this trend becomes more significant.", "This shows that self-sampling can help more in reducing the gap between training and inference.", "As for no extra fine-tuning, it deteriorates drastically at high speedup ratios.", "But it can roughly keep a certain capability at low speedup ratios, which we attribute to the residual connections of the PTM; similar results were reported by Veit et al. (2016).",
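A rough numpy sketch of one self-attention layer under halt-and-copy, showing where the O(NMd) saving described above comes from (single-head, no FFN; purely illustrative):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def halt_and_copy_attention(h, exited, wq, wk, wv):
    """One self-attention layer with halt-and-copy.

    Only the M non-exited tokens form queries (O(NMd) instead of
    O(N^2 d)); exited tokens remain visible as keys/values, and their
    own representations are copied through unchanged.
    h: (N, d) hidden states; exited: (N,) boolean mask; wq/wk/wv: (d, d).
    """
    active = ~exited
    q = h[active] @ wq                 # (M, d): queries for active tokens only
    k, v = h @ wk, h @ wv              # (N, d): all tokens stay attendable
    attn = softmax(q @ k.T / np.sqrt(h.shape[-1]), axis=-1)
    out = h.copy()                     # exited rows are copied verbatim
    out[active] = attn @ v             # only active rows are updated
    return out

N, d = 10, 16
rng = np.random.default_rng(1)
h = rng.normal(size=(N, d))
exited = np.array([True, True, False, True, False, True, True, False, True, True])
print(halt_and_copy_attention(h, exited, *(rng.normal(size=(d, d)) for _ in range(3))).shape)
```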
(2016).", "In TOKEE, by halt-and-copy mechanism, each token goes through a different number of PTM layers according to the difficulty.", "We show the average distribution of a sentence's tokens exiting layers under different speedup ratio on CoNLL2003, in Figure 7.", "We also draw the average exiting layer number of SENTEE under the same speedup ratio.", "We observe that as speedup ratio rises, more tokens will exit at the earlier layer but a bit of tokens can still go through the deeper layer even when 4 , meanwhile, the SENTEE's average exiting layer number reduces to 2.5, where the PTM's encoding power is severely cut down.", "This gives an intuitive explanation of why TOKEE is more effective than SENTEE under high speedup ratio: although both SENTEE and TOKEE can dynamically adjust computational cost on the sample-level, TOKEE can adjust do it in a more fine-grained way.", "PTMs are powerful but have high computational cost.", "To accelerate them, many attempts have been made.", "A kind of methods is to reduce its size, such as distillation (Sanh et al., 2019; Jiao et al., 2020), structural pruning (Michel et al., 2019; Fan et al., 2020) and quantization (Shen et al., 2020).", "Another kind of methods is early-exit, which dynamically adjusts the encoding layer number of different samples (Liu et al., 2020; Xin et al., 2020; Schwartz et al., 2020; Zhou et al., 2020; Li et al., 2020).", "While they introduced early-exit mechanism in simple classification tasks, our methods are proposed for the more complicated scenario: sequence labeling, where it has not only one prediction probability and it's necessary to consider the dependency of token exitings.", "Elbayad et al. (2020) proposed Depth-Adaptive Transformer to accelerate machine translation.", "However, their early-exit mechanism is designed for auto-regressive sequence generation, in which the exit of tokens must be in left-to-right order.", "Therefore, it is unsuitable for language understanding tasks.", "Different from their method, our early-exit mechanism can consider the exit of all tokens simultaneously.", "In this work, we propose two early-exit mechanisms for sequence labeling: SENTEE and TOKEE.", "The former is a simple extension of sequence-level early-exit while the latter is specially designed for sequence labeling, which can conduct more fine-grained computational cost allocation.", "We equip TOKEE with window-based uncertainty and self-sampling finetuning to make it more robust and faster.", "The detailed analysis verifies their effectiveness.", "SENTEE and TOKEE can achieve 2 and 3 4 speedup with minimal performance drop.", "For future work, we wish to explore: (1) leveraging the exited token's label information to help the exiting of remained tokens; (2) introducing CRF or other global decoding methods into early-exit for sequence labeling.", "We thank anonymous reviewers for their detailed reviews and great suggestions.", "This work was supported by the National Key Research and Development Program of China (No. 2020AAA0106700), National Natural Science Foundation of China (No. 62022027) and Major Scientific Research Project of Zhejiang Lab (No. 2019KD0AD01)." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "result", "result", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "result", "result", "result", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "other", "other", "other", "other", "objective", "other", "other", "other", "method", "objective", "abstain", "method", "abstain", "abstain", "objective", "other", "other" ]
[ "People debate on a variety of topics on online platforms such as Reddit, or Facebook.", "Debates can be lengthy, with users exchanging a wealth of information and opinions.", "However, conversations do not always go smoothly, and users sometimes engage in unsound argumentation techniques to prove a claim.", "These techniques are called fallacies.", "Fallacies are persuasive arguments that provide insufficient or incorrect evidence to support the claim.", "In this paper, we study the most frequent fallacies on Reddit, and we present them using the pragma-dialectical theory of argumentation.", "We construct a new annotated dataset of fallacies, using user comments containing fallacy mentions as noisy labels, and cleaning the data via crowdsourcing.", "Finally, we study the task of classifying fallacies using neural models.", "We find that generally the models perform better in the presence of conversational con-text.We have released the data and the code at github.com/sahaisaumya/informal_ fallacies .", "Argumentation plays a critical part in our lives as it helps us make decisions and reason about the world around us.", "Studies (Sanders et al., 1994) have shown that learning how to argue increases the ability to identify weak arguments and decreases the tendency to use verbal aggressiveness.", "Fallacies are weak arguments that seem convincing, however, their evidence does not prove or disprove the argu-ment's conclusion.", "Fallacies are usually divided into formal and informal, where the former can be easily described using logical representations, while for the latter, an analysis of the content is more appropriate.", "Fallacies are prevalent in public Part of this work was done while the first author was an intern at Inria, France.", "discourse.", "For example, The New York Times labeled the tweets of Donald Trump between 2015 and 2020 and found thousands of insults addressed to his adversaries.", "If made in an argument, an insult is an ad hominem fallacy: an attack on the opponent rather than on their argument.", "In private conversations, other types of fallacies might be more prevalent, for example, appeal to tradition or appeal to nature.", "Appeal to tradition dismisses calls to improve gender equality by stating that women have always occupied this place in soci-ety.", "Appeal to nature is often used to ignore calls to be inclusive of the LGBTQ+ community by stating gender is binary.", "The underlying premises of such arguments are traditions are correct and what occurs in nature is good.", "Creating a dataset of fallacious arguments is difficult, given that there are over 100 types of fallacious arguments (Scalambrino, 2018).", "There have been several attempts to create comprehensive datasets: Habernal et al. (2017) proposed a game in which players add fallacies in the hope of fouling other participants, in Habernal et al. (2018a) ad hominem fallacies are found using a subred-dit's rule violations, while in Da San Martino et al. 
"However, our work is the first to propose a viable solution for finding fallacious arguments belonging to many different fallacy types.", "In this work, we study fallacies in public discussions on online forums.", "Our salient contributions are: i) we align informal fallacies mentioned on Reddit within the pragma-dialectical theory of argumentation (van Eemeren and Grootendorst, 1995); ii) we design a methodology for easily mining and labeling fallacies in online discussions; iii) we construct a large and balanced dataset of fallacious arguments; iv) finally, we evaluate several neural models on the task of predicting fallacious arguments, and we find that taking additional conversational context into consideration is important for this task.", "Humans use argumentation when they evaluate the validity of new ideas or want to resolve a difference of opinion.", "An argument contains: i) a proposition, called the claim, conclusion, or standpoint, to be validated; ii) the premises, also called evidence, which are the backing propositions; iii) an inference relation between the evidence and the conclusion that validates or disproves the conclusion.", "A fallacy is a flawed argument, where the inference relation or the premises are incorrect.", "Fallacies are generally divided into formal and informal fallacies.", "Formal fallacies are arguments that can be easily represented as invalid logical formulas, such as denying the antecedent, which is a wrong application of modus tollens.", "Although many informal fallacies can also be represented as invalid arguments, informal fallacies are easier to describe and understand without resorting to logical representations (Hansen, 2020).", "In this work, we follow the pragma-dialectical theory of argumentation.", "The theory, developed by van Eemeren and Grootendorst (1995), views argumentation as a complex speech act.", "The dialectical aspect is represented by two parties who try to resolve a difference of opinion by engaging in a discussion, each party making a move towards resolution.", "The pragmatic aspect describes the moves in the discussion as speech acts, more precisely as the illocutionary acts introduced by Searle (1979).", "van Eemeren and Grootendorst (1995) also developed ten rules which should guide argumentative discussions.", "The goal of the rules is to further the understanding of the difference of opinions and to create a fruitful discussion.", "For example, one rule states that parties must not prevent each other from advancing standpoints or from casting doubt on standpoints, while a second rule asks that a party may defend a standpoint only by advancing argumentation relating to that standpoint.", "An argument that prevents the resolution and thus violates one of the rules is a fallacy.", "In our work, we align frequent fallacies on Reddit with these rules, with the goal of formalizing their definitions.", "A different formalization of arguments is given by the argumentation schemes of Douglas Walton (Walton, 2005).", "A scheme consists of a conclusion, a set of premises, and a set of critical questions.", "The critical questions should be answered in order to prove that the premises support the conclusion, and hence that the argument is not a fallacy.", "For example, the scheme for an argument from expert opinion (Walton, 2005) has the premises E is an expert in domain D, E asserts that A is known to be true, and A is within D, and the conclusion therefore, A may plausibly be taken to be true.", "Some critical questions for this scheme are: i) Trustworthiness: Is E personally reliable as a source?; ii) Backup Evidence: Is E's assertion based on evidence?",
"Argumentation schemes have two main drawbacks: first, for each new fallacy, a new scheme should exist or be defined; and second, in the context of labeling an existing argument, many of the critical questions might be unanswerable, as none of the parties discussed them.", "An initial effort for creating an extensive dataset of fallacies was made by Habernal et al. (2017).", "The authors created a platform for educative games, where players learn how to become better debaters.", "New fallacies are added to the platform by players who try to earn points by fouling other participants with invalid arguments.", "A follow-up on this work (Habernal et al., 2018a) mentioned a dataset of only around 300 arguments created via the platform, thus showing the need for other methods for creating larger datasets of fallacies.", "Ad hominem fallacies in conversations have been addressed by Habernal et al. (2018b).", "The authors used the subreddit ChangeMyView, which is a forum for civilized discussions, a place to post an opinion you accept may be flawed, in an effort to understand other perspectives on the issue.", "The dataset of fallacies consists of comments that were removed by the moderators because they violated the rule of not being rude or hostile, hence committing an ad hominem fallacy.", "Fallacious arguments are often made in the dissemination of propaganda.", "In Da San Martino et al. (2019), the authors annotate journal articles with 18 propaganda techniques, out of which 12 techniques are fallacies.", "Although an important resource in the study of fallacies, their labeling method and dataset have a few drawbacks.", "First, the dataset is highly unbalanced, with 6 fallacies having a fair number of mentions: name-calling (1294), appeal to fear and prejudice (367), flag-waving (330), causal oversimplification (233), appeal to authority (169), black and white fallacy (134), and 6 fallacies having fewer than 100 mentions: whataboutism (76), reductio ad hitlerum (66), red herring (48), bandwagon (17), labeling, obfuscation or intentional vagueness (17), straw men (15).", "Second, the task of finding the correct label for a span of text from a large set of labels (18 in their case) is intellectually complex and time-consuming.", "Our work focuses on collecting and annotating a balanced dataset of fallacy mentions while providing a methodology that can easily scale to a larger number of fallacies.", "In our approach, an annotator only has to verify that a comment contains one type of fallacy.", "In addition, we target fallacies in online conversations, where the style of argumentation is less structured than in a journal article.", "Finding a large sample of fallacious arguments is a challenging task, as it requires going through long conversations, finding arguments, and then verifying if the arguments are sound.", "Another major issue, even if we recognize the argument is flawed, is to find the exact fallacy that is committed, given that more than 100 types of fallacies have been proposed in the literature (Scalambrino, 2018).", "Our goal is to construct an annotated dataset of fallacies using a mixed strategy: i) first, as noisy labels, we leverage user comments that mention the name of a fallacy, and ii) second, we clean this dataset by removing false-positive samples via crowdsourcing.", "Our intuition is that a person will mention a fallacy as a reply to another comment to highlight that the previous comment's argument is fallacious, as shown in Figure 1.",
"This might not always be the case, as users could discuss fallacies in general; hence the need to further label the discussion using crowdsourcing.", "We use the Pushshift Reddit API (Baumgartner et al., 2020) to retrieve data from Reddit.", "The API allows searching comments and submissions by their IDs or by a set of keywords.", "We start by making an exhaustive list of fallacies informed by Wikipedia.", "We chose Wikipedia as a resource for creating the list of fallacies as it is one of the most well-known sources of information; hence a Reddit user could peruse it easily to understand what fallacy was committed in the discussion.", "[Figure 1: an example Reddit discussion; submission title: What is something massively outdated that humanity has yet to upgrade?]", "For each fallacy we find all its different designations; for example, appeal to tradition is also known under its Latin name, argumentum ad antiquitatem.", "We then do a keyword search for these fallacy types on Reddit comments, restricting the results to one year, May 2019 to May 2020.", "We retrieve in total 105K comments that match at least one fallacy.", "For comparison, in 2019, 1.7 billion comments were posted on Reddit.", "While it is very likely that many more posts contain fallacies, the small number of matches highlights the importance of choosing the comments to annotate with care.", "To understand in which subreddits people were more likely to mention names of fallacies, we compute the top 10 subreddits with the highest ratio of matched comments per number of subscribers, as shown in Table 1.", "The subreddits are broadly divided into subreddits on religion, morality, and science, with one subreddit dedicated to discussions on fallacies.", "The subreddits' focus is on debating, which involves creating, defending, and attacking arguments; therefore, accusing the opponent of committing a fallacy might win you the debate.", "From the list of most frequently mentioned fallacies we retained the top fallacies with more than 400 mentions, resulting in 32 fallacy types.", "This shortlist of frequent fallacies is presented in our Appendix A, with a definition, example, and argumentation rule violation (according to the pragma-dialectical theory) for each fallacy.", "From this shortlist we do not consider the fallacies that were already studied in Habernal et al. (2018b), as their labeled dataset is also based on Reddit comments.", "We do not exclude fallacy types annotated in Da San Martino et al. (2019), as these are fallacious arguments in journal articles.",
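The keyword retrieval step above can be sketched as follows. The endpoint and parameter names follow the public Pushshift API as commonly documented around that time; the service has changed since, so treat this as an assumption-laden sketch rather than a working recipe.

```python
import requests
from datetime import datetime, timezone

PUSHSHIFT = "https://api.pushshift.io/reddit/search/comment/"

def epoch(y, m, d):
    return int(datetime(y, m, d, tzinfo=timezone.utc).timestamp())

def search_fallacy_mentions(phrase, size=100):
    # Keyword search over Reddit comments in the May 2019 - May 2020 window.
    params = {"q": f'"{phrase}"', "after": epoch(2019, 5, 1),
              "before": epoch(2020, 5, 1), "size": size}
    resp = requests.get(PUSHSHIFT, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()["data"]

# All designations of a fallacy would be queried, e.g.:
# mentions = (search_fallacy_mentions("appeal to tradition")
#             + search_fallacy_mentions("argumentum ad antiquitatem"))
```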
"[Table 1 excerpt. Subreddit: Abortiondebate. Description: A subreddit for debating abortion: ethics, religion, politics all welcome.]", "We take random samples of 20 comments that mention one of our frequent fallacies and the comment to which they reply (the potential fallacious comment), and we check if the users have a good understanding of the respective fallacies.", "We keep the fallacies for which users generally had a correct sense of their definition.", "In addition, we filter out fallacy types if more than 60% of the potential fallacious comments were not true fallacious arguments.", "These conditions ensure that the comments we will label have good quality and that we will find sufficient actual fallacy examples.", "The remaining fallacies are selected for the creation of an annotated dataset of fallacies.", "Appeal to authority / argument from authority fallacy / argumentum ad verecundiam.", "Definition: The claim is supported by the opinion of a person with authority; hence the claim is true.", "Example: Being vegan makes no sense because my father said so.", "Appeal to majority / bandwagon argument / appeal to widespread belief / appeal to the people fallacy / argumentum ad populum.", "Definition: A claim is true because many people believe it to be true.", "Example: Being vegan makes no sense because so many of us are meat eaters.", "Appeal to nature / naturalistic fallacy.", "Definition: An action A is justified/unjustified because it occurs/does not occur in nature.", "Example: Being vegan makes no sense as our body is designed for eating meat.", "Appeal to tradition fallacy / argumentum ad antiquitatem.", "Definition: An action A is justified/unjustified because it has always been considered as such in the past.", "Example: Being vegan makes no sense as our ancestors have been meat eaters.", "Appeal to worse problems / relative privation / not as bad as fallacy.", "Definition: There exists a problem A that is worse than problem B; therefore B is justified.", "Black-or-white / false dilemma / false dichotomy / bifurcation fallacy.", "Definition: In this argument, the claim is that only an event/action A should be considered.", "The first premise is that only two events, A and B, are possible, when there is at least a third event C possible.", "The second premise is that one of the events is bad, for example B; thus only event A should be considered.", "Example: You must wear a mask each time you go out, otherwise you will die of COVID-19.", "Hasty generalization fallacy.", "Definition: The claim is supported by insufficient evidence through inductive generalization.", "More precisely, we know that predicate P is true for a population sample, and we suppose it is true for the entire population.", "However, the sample is too small or is not representative of the population.", "Example: The first week of September has been sunny, which means the rest of the month will be the same.", "Slippery slope / thin edge of the wedge / camel's nose fallacy.", "Definition: A small event A will have a big unwanted consequence C.",
"There is at least one more event B in the chain of causality (A will cause B, and B will cause C), hence the slippery slope name of the fallacy.", "Example: If you break your diet and have one cookie tonight, you will just want to eat 10 cookies tomorrow and 20 the day after, and before you know it, you will have gained back the 15 pounds you lost.", "Rule violation: According to the pragma-dialectical theory, an argument is a fallacy if it violates a critical discussion rule.", "The arguments above violate one of two rules; hence they are fallacies.", "The first rule violated states that defending a claim must occur through an appropriate argumentation scheme that is correctly applied.", "Argumentation schemes in van Eemeren and Grootendorst (1995) are different from the schemes in Walton (2005).", "They are a formalization of the relation between the evidence presented and the standpoint to be defended.", "This rule is violated by all the fallacies except black-or-white.", "For example, in slippery slope, the argumentation is not valid as there is no clear causality chain between A and C. The black-or-white fallacy violates the rule that a party should not falsely present a premise as an accepted starting point, by stating that only events A and B are possible.", "Noisy labels. We used Amazon Mechanical Turk to create our annotated dataset.", "We selected 4 Master annotators, who had the highest agreement with the authors on identifying a set of fallacies (70 samples).", "An annotation task, defined as a HIT (Human Intelligence Task on Amazon Mechanical Turk), consists of 10 items.", "Each item presents a sample extracted from a Reddit discussion.", "A Reddit discussion is started by a submission, e.g., a news article or a piece of text, to which users engage by writing comments.", "The comments and submission are organized in a tree-like structure: the submission is the root, and comments are nodes in the tree; we will use the terms grandparent, parent, and child to denote relations between comments.", "A sample given for annotation includes the title and the link of the original Reddit submission and four comments: the comment containing the mention of the fallacy (this is the label comment); the parent of the label comment, which should contain the fallacious argument (the comment of interest, or COI); the parent of the COI, to give more context for the discussion; and a direct reply to the label comment (preference was given to replies that had the same author as the COI; if no such comment existed, we chose the top-rated comment).", "For each fallacy described in Section 3, we retrieve all the label comments mentioning it and the context needed for creating a sample discussion (item).", "We keep the items for which: i) the comments are relatively short: the label comment has less than 500 characters (a shorter text is more likely to be an accusation of committing a fallacy), and the other comments have less than 1000 characters; ii) we have enough context to understand the discussion: the COI is a direct reply to the submission or the child comment of a direct reply; iii) the COI and its parent do not contain the substring fallac, a sign that this could be a discussion on fallacies and therefore the COI does not contain a fallacious argument but merely discusses or points out one; iv) we have access to the original discussion: neither the user nor a moderator deleted the comments, and the submission is not from a banned subreddit (the annotators can visit the link provided with the submission title); and v) all the comments are in English.",
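A minimal sketch of filters i) through v), assuming a flat representation of each candidate item; the real pipeline presumably works over richer comment objects.

```python
def keep_item(label_text, coi_text, parent_text, coi_depth,
              any_deleted, banned_subreddit, is_english):
    if len(label_text) >= 500:                          # i) short accusation
        return False
    if len(coi_text) >= 1000 or len(parent_text) >= 1000:
        return False
    if coi_depth > 2:                                   # ii) COI is a direct reply
        return False                                    #     or the child of one
    if "fallac" in coi_text.lower() or "fallac" in parent_text.lower():
        return False                                    # iii) meta-discussion of fallacies
    if any_deleted or banned_subreddit:                 # iv) discussion still accessible
        return False
    return is_english                                   # v) English-only
```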
"Crowdsourcing task. Workers were presented with concise descriptions of the main concepts involved: argument, claim, evidence, and fallacy.", "All the items in a HIT have to be annotated for only one fallacy.", "For example, we retrieved all the items where the label comment mentioned the hasty generalization fallacy and we split them into HITs.", "We note that the fallacy committed in the comment might not be the same as the one signaled by the user.", "However, the authors have reviewed a large sample of comments (for the third vote explained further in this section) and did not encounter this situation.", "Hence, even if this might still occur, it should be rare.", "For each selected fallacy, we offered the definition together with an example of the fallacy, where we identified the claim and evidence.", "Furthermore, we instruct the workers not to label as a fallacy a comment that is sarcastic (sometimes accompanied by the explicit tag /s) or a comment that is disproving the fallacy, e.g., Who would think that we shouldn't become vegans just because our body is able to digest meat?.", "The workers are asked if the fallacy occurs in the comment of interest, and if yes, they are prompted to highlight the corresponding text span.", "They are also asked to write the claim that is addressed by the comment of interest.", "Finally, they have to answer a question specific to each fallacy to prove their good understanding of the task.", "The questions are: i) appeal to authority: What authority is being appealed to in the comment of interest, and hence is used as the basis for the argument?; ii) appeal to majority: no question; iii) appeal to nature: What natural phenomenon/event/activity is considered natural here?; iv) appeal to tradition: What tradition is being appealed to in the comment of interest, and hence is used as the basis for the argument?; v) appeal to worse problems: Describe why the current problem (problem 1) is not a trivial issue; vi) black-or-white: Name any additional alternative which is possible but is not mentioned in the comment of interest; vii) hasty generalization: Describe a case where the (hasty) generalization will fail; viii) slippery slope: Please list any one event in the chain of the slippery slope argument.", "By answering these questions, the workers would take the time to understand why the argument was a fallacy.", "Annotated dataset. A HIT is annotated by two workers.", "We compute Cohen's kappa agreement for the task of deciding if a comment contains a fallacy (comment-level annotation), and the inter-annotator agreement of Mathet et al. (2015) for the task of highlighting the tokens of the fallacy within the COI (token-level annotation), as shown in Table 2.",
"For both measures, a value of 1 implies perfect agreement.", "The comment-level annotation agreement varies from fair (black-or-white and hasty generalization) to substantial (appeal to authority), with the majority of fallacies in the moderate interval.", "The token-level agreement is moderate for appeal to worse problems and substantial for the rest.", "In addition to the workers' votes, an expert annotator casts a third vote on comments whenever there is a disagreement on the label.", "A comment is marked as fallacious if it has received two fallacy votes.", "The corresponding fallacious tokens of the comment are the union of the tokens highlighted by the annotators.", "We annotated comments until we reached roughly 200 fallacious comments per fallacy type.", "The details of the dataset are presented in Table 3.", "Table 3: Fallacious comments and tokens.
Fallacy | Number of comments | Mean tokens in spans
Appeal to authority | 212 | 21.49 (15.00)
Appeal to majority | 196 | 15.52 (11.55)
Appeal to nature | 208 | 15.16 (9.61)
Appeal to tradition | 210 | 16.35 (9.07)
Appeal to worse problems | 239 | 25.71 (17.44)
Black-or-white | 211 | 21.80 (14.77)
Hasty generalization | 204 | 19.76 (12.72)
Slippery slope | 228 | 27.98 (19.23)
Overall | 1708 | 20.69 (14.93)", "The total size of our annotated dataset, including comments and tokens that are non-fallacious, consists of 3358 comments and 160K tokens.", "We observe that to find 1708 fallacious comments, we annotated only about two times as many comments.", "This shows that our technique of finding fallacious comments is efficient.", "We investigate if the label comment (i.e., the comment containing the mention of the fallacy) is truly indicative of a fallacy in the COI.", "This can be useful for flagging the label comments that are likely to point to a fallacious COI, therefore eliminating or reducing the need for crowdsourcing.", "Our intuition is that a classification method might differentiate between comments that are accusations and those that merely mention fallacies.", "To investigate this, we used the fallacy/no-fallacy annotation as classes for the label comment and trained a binary BERT classifier (Devlin et al., 2019).", "We obtained an F1 score of 67.41, indicating that the label comment's content is not sufficiently reliable.", "In conclusion, human annotators are still needed for annotating the true class of the COI.", "Non-fallacious comments. The comments for which two annotators confirmed they were not fallacious represent our annotated negatives (1650 comments).", "In order to have a more diverse set of negative examples, i.e., on similar and different topics, we construct a second set of negative examples (6400 comments) as follows.",
"We retrieve all the users that wrote a label comment to a COI where the COI was identified as fallacious in the annotation; these are our gold users.", "We take all their comments after the timestamp of the label comment that do not mention a fallacy name, and retrieve their parent comments.", "For each comment in the annotated dataset, we select one sample from our pool of parent comments from the same subreddit (if this exists) and one from a subreddit not seen in the annotated dataset.", "We retrieve a total of 6400 samples.", "These comments are used together with the annotated dataset to create our full dataset, used to train classification models.", "The intuition of the sampling strategy is that the gold users were able to recognize a true fallacy at least once, so they should be able to spot other fallacies.", "Hence, if they reply to a comment without flagging it, the parent comment is likely to be non-fallacious.", "There could be fallacious comments in this sample; however, we consider this less likely than for a random sample.", "Tasks. We address four tasks leveraging our annotated dataset, listed in order of increasing granularity: i) comment-level (CL) fallacy identification (a binary task of predicting if a comment is fallacious or not); ii) comment-level fallacy type identification (multi-class prediction of the type of fallacy, with non-fallacious as one class among the 9 classes); iii) token-level (TL) fallacy identification (a binary task of predicting if tokens in the COI belong to a fallacy or not); iv) token-level fallacy type identification (multi-class prediction of tokens in the COI into one of the eight fallacy classes or the non-fallacy class).", "BERT. We fine-tune BERT by adding a linear layer on top of the generated contextual representations.", "We use the token-level embeddings in the token detection tasks and the [CLS] embedding in the case of the classification tasks.", "MGN. We adopt the best architecture reported in Da San Martino et al. (2019).",
"It is a multi-granularity network that uses the lower-granularity sentence-level (which is comment-level in this setting) representation together with the higher-granularity token-level representations to jointly train the network.", "We set the dimension of the lower-granularity embedding representation equal to the number of classes in the task.", "We jointly train tasks where the number of classes is the same; that is, the CL & TL fallacy identification tasks are trained together, and so are the CL & TL fallacy type identification tasks.", "We use sigmoid activation, as it is the best model for their fragment (token) level classification and is comparable for the sentence-level classifier.", "This model has been shown to give good results for predicting propaganda techniques, which include fallacies.", "Conversation context. Our dataset is rich in textual information related to the COI, which could improve prediction.", "We define context as the parent comment of the COI (if it exists; the string None otherwise) or the submission title.", "This is provided to the classifier in the format: [CLS] COI tokens [SEP] Context tokens [SEP].", "The context tokens get a non-fallacy token-level label at training time, but during the validation or test set evaluation, only the COI token labels are used.", "The [CLS] token is used for the CL tasks.", "This results in four extensions of the previous models: BERT-T, BERT-P, MGN-T, MGN-P, where T stands for title and P for parent comment.", "Setup. We use PyTorch (Paszke et al., 2019) and the pre-trained BERT model (Devlin et al., 2019; Wolf et al., 2020).", "We fine-tune BERT using batch size 8 and maximum sequence length 256 for the COI & 64 for the context, and we monitored the macro-averaged F1 score (all reported F1 scores are macro F1) on the validation set, as the identification of all classes is equally important.", "We use the AdamW optimizer, with a learning rate of 5e-5.", "We weigh the cross-entropy loss function according to the class distribution in the training data.", "We split the dataset into training (70%), validation (20%), and test (10%) sets; hence the full dataset has 6823, 1950 & 977 comments and the annotated dataset has 2351, 671 & 336 comments, respectively.", "We repeat the experiments with 5 different random seeds for the network initialization and we average the results.", "In Table 4, we show the results of comment-level fallacy and fallacy type identification.", "All the results are macro scores (precision, recall, and F1).", "The MGN models obtain the best results, most often when context is added.", "The full dataset provides a wider mix of topics via noisy negative samples and pronounces the class imbalance, bringing it closer to a real sample of Reddit conversations.", "Despite this, the classifier is able to learn across all four tasks.", "Table 5 presents the results for token-level fallacy and fallacy type identification.", "BERT models obtain better results in the multi-class setting, while MGN does in the binary setting.", "This is comparable with the results reported in Da San Martino et al. (2019), where the authors observe a smaller improvement in classification for the token-level prediction using MGN.",
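For concreteness, a sketch of the input construction and class-weighted fine-tuning setup described above, using the Hugging Face transformers API. The inverse-frequency weighting and the class ordering are assumptions, since the text only states that the loss is weighted by the class distribution.

```python
import torch
from transformers import BertTokenizerFast, BertForSequenceClassification

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=9)  # 8 fallacies + non-fallacious

def encode(coi, context):
    # [CLS] COI tokens [SEP] context tokens [SEP], enforcing the 256/64 caps per segment.
    coi_ids = tokenizer(coi, truncation=True, max_length=256,
                        add_special_tokens=False)["input_ids"]
    ctx_ids = tokenizer(context or "None", truncation=True, max_length=64,
                        add_special_tokens=False)["input_ids"]
    cls, sep = tokenizer.cls_token_id, tokenizer.sep_token_id
    return torch.tensor([[cls] + coi_ids + [sep] + ctx_ids + [sep]])

# Counts from Table 3 plus the annotated negatives (order assumed):
class_counts = torch.tensor([1650., 212., 196., 208., 210., 239., 211., 204., 228.])
weights = class_counts.sum() / (len(class_counts) * class_counts)
loss_fn = torch.nn.CrossEntropyLoss(weight=weights)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
```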
"Adding more context in the form of the title or the parent of the COI generally led to improved performance.", "While the results are slightly better when adding the title, the differences are small.", "We speculate that the parent and the COI together provide a complete argument, making fallacy detection a bit easier.", "In Table 6, we show the F1 score per fallacy class.", "Appeal to authority, nature, and tradition perform well (F1 > 40%) across all four tasks.", "Hasty generalization has a rather poor performance; this can be attributed to this fallacy's general difficulty, given that the workers also had low agreement on this fallacy (Table 2).", "We observe that generally the comment-level prediction task is easier than the token-level prediction, which is expected due to the granularity difference.", "Topical confounds. While fallacies might appear more frequently in discussions on certain topics, a fallacy detection approach should identify the underlying argument structure, and not just the presence of a topic.", "For example, we do not want to label all discussions about nature as appeal to nature fallacies.", "To identify if the classifiers are sensitive to topical biases, we use the approach presented by Kumar et al. (2019).", "We compute statistically overrepresented tokens in each fallacy class in the training set using the log-odds ratio with a Dirichlet prior (Monroe et al., 2008).", "We present the top 10 tokens per fallacy in Table 7.", "We observe that for appeal to authority, nature, and tradition, the tokens are topically cohesive, as they revolve around notions of authority, nature, and tradition.", "For the other fallacies, while it is intuitive why some words may be overrepresented, there is no clear topical cohesiveness.", "To verify that our classifiers learn linguistic patterns and not topics, we replace the top 30 tokens strongly associated with each fallacy (computed from the training set) with a special token in the test set.", "We evaluate only the comment-level prediction, as results on the token level might be hard to interpret given that we replace tokens.", "We show the results in Table 8.",
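The overrepresented-token statistic used above follows Monroe et al. (2008); a compact NumPy version, with the background corpus counts serving as the Dirichlet prior:

```python
import numpy as np

def log_odds_dirichlet(counts_i, counts_j, prior):
    # counts_i / counts_j: word-count vectors for one fallacy class vs. the
    # rest; prior: background word counts. Returns the z-scored log-odds ratio.
    n_i, n_j, a0 = counts_i.sum(), counts_j.sum(), prior.sum()
    yi, yj = counts_i + prior, counts_j + prior
    delta = np.log(yi / (n_i + a0 - yi)) - np.log(yj / (n_j + a0 - yj))
    return delta / np.sqrt(1.0 / yi + 1.0 / yj)

# Top tokens per fallacy over a shared vocabulary `vocab`:
# z = log_odds_dirichlet(class_counts, rest_counts, background_counts)
# top10 = [vocab[i] for i in np.argsort(-z)[:10]]
```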
"We observe a large decrease in F1 score (more than 10% on the full data) for 2 fallacies: appeal to nature and appeal to tradition.", "[Table 7 excerpt. Fallacy: Appeal to authority; overrepresented tokens: medical, experts, expert, field, university, listen, degree, ...]", "A big drop in the F1 score on the full data is more significant than on the annotated data, as the classifier would have seen more negative examples containing the confounds.", "Given the observed decrease in F1 score for these fallacies, an important future direction is to annotate more discussions containing the overrepresented words to obtain a better-quality negative set, i.e., non-fallacious comments on the same topics.", "We note that for the other fallacies, the models appear to learn more complex language structures, as they are less sensitive to the removal of the overrepresented words.", "In this work, we present a methodology for mining and labeling fallacious comments in online discussions.", "We find frequent fallacy mentions on Reddit and the subreddits in which they are the most prevalent.", "We create a large corpus of annotated comments and experiment with several neural methods for classification.", "We explore methods that consider the context of the discussion, and we show that they give better results.", "There are several exciting directions for continuing this work.", "First, using our methodology, we can annotate more comments for the eight fallacies we studied in this paper, improve the negative example set, or explore other types of fallacies.", "Second, we can study another aspect of the discussion, the speech acts.", "According to the pragma-dialectical theory, an argument is composed of several speech acts.", "Investigating if certain speech acts are more prevalent in fallacious discussions might lead to improved detection of fallacies.", "Lastly, in the pragma-dialectical theory of argumentation, fallacies are violations of the rules of critical discussion; for example, the fallacies we annotated violate two rules, as described in Section 3.", "Given the significant number of fallacy types, we believe that a hierarchical approach to their detection could prove more efficient: identifying if a conversation violates one of the ten rules of critical conversation, and then, for that particular rule, identifying the type of fallacy.", "We would like to thank the ACL reviewers for their helpful feedback.", "We would also like to thank Meghana M. Bhat and Dravyansh Sharma for their helpful comments on the initial draft.", "This work was performed using HPC resources from GENCI-IDRIS (Grant 2020-AD011011614)." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "method", "result", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "other", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "method", "method", "method", "result", "abstain", "result", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "method", "result", "method", "objective", "abstain", "objective", "method", "abstain", "abstain", "abstain", "result", "other", "other", "other" ]
[ "Recent years have witnessed various types of generative models for natural language generation (NLG), especially RNNs or transformer based sequence-to-sequence models, as well as variational autoencoder (VAE) and generative adversarial network (GAN) based models.", "However, flow-based generative models, which achieve strong performance in image generation due to their invertibility and exact density estimation properties, have been less explored for NLG.", "In this paper, we propose a flow-based language generation model by adapting previous flow generative models to language generation via continuous input embeddings, adapted affine coupling structures, and a novel architecture for autoregressive text generation.", "We also apply our framework to Sequence-to-Sequence generation, including textand video-based Question Generation (QG) and Neural Machine Translation (NMT), and data augmentation for Question Answering (QA).", "We use our language flow model to provide extra input features for QG and NMT, which achieves improvements over the strong QG baselines on SQuAD and TVQA and NMT baseline on WMT16.", "We also augment QA data with new context by injecting noise to the latent features of the language flow and show this augmentation leads to a large performance improvement from strong baselines on SQuAD and TVQA.", "1 1 Introduction Several generative models have been proposed for language generation, including sequence-to-sequence models based on RNNs (Luong et al., 2015) and transformers (Vaswani et al., 2017), as well as variational autoencoders (VAEs) to generate diverse texts (Bowman et al., 2016; Jain 1 Our code and models are available at: https:// github.com/zinengtang/ContinuousFlowNLG et al., 2017), plus generative adversarial networks (GANs) (Yu et al., 2017) to improve intended semantic fidelity.", "Another line of the generative model, normalizing flow (Rezende and Mohamed, 2015), is widely explored in computer vision and representation learning but less explored for NLG tasks.", "Flow models have been shown to be capable of improving probability density estimation, including variational inference (Rezende and Mohamed, 2015) and exact density estimation (Dinh et al., 2015).", "Generative flow is one type of flow model and first proposed by Dinh et al. 
"Taking advantage of its invertible structure, it can perform an exact density estimation of the input distribution.", "Thus, during generation, we can sample from its latent space and then generate new examples through its invertible decoder.", "Generative flow shows strong performance on image generation, attribute manipulation, and latent space inference (Kingma and Dhariwal, 2018).", "Considering these successful applications, we conjecture that the flow model should also have strong potential to be adapted for language generation tasks.", "Therefore, in this paper, we introduce a continuous language generative flow model that can deal with discrete language data in a continuous latent space.", "We propose two variants, the non-autoregressive and autoregressive models, and show that they both can perform well on density estimation tasks.", "We follow the architecture of one previous generative flow model, Glow (Kingma and Dhariwal, 2018), but make adaptations for language generation tasks.", "We first employ GloVe word embeddings (Pennington et al., 2014) to map the discrete token sequence to a continuous embedding matrix.", "Furthermore, we utilize two components, time-dimension permutation and affine coupling with RNN or Transformer non-linearity functions, which allow interaction between words in a sequence and better contextualize language semantics.", "Overall, these proposed components help generate texts in a non-autoregressive manner.", "However, even though the non-autoregressive model has attracted a lot of research attention because of its fast generation speed, it still hardly surpasses the generation quality of autoregressive models (Ren et al., 2020).", "Therefore, to make our language flow model learn language generation in a stronger autoregressive manner, we change the flow model's affine coupling and permutation to a uni-directional structure, i.e., each timestep can only attend to previous timesteps.", "In this way, we enable our model to perform text generation autoregressively.", "Some recent works have developed density estimation models targeted at character-level discrete data (DiscreteFlow (Tran et al., 2019)) and explored using the flow architecture as an extra data encoder that provides latent features to support non-autoregressive text generation (FlowSeq (Ma et al., 2019)).", "While our work shares some similar characteristics, we explore different directions: (1) DiscreteFlow develops a modulus calculation method to process discrete data.", "Instead, we use word embeddings to transform the discrete input tokens into continuous features, which is simple yet effective.", "(2) FlowSeq essentially leverages the flow architecture in a typical encoder-decoder model to support non-autoregressive generation, whereas our models follow the standard generative flow framework and can directly generate texts via their invertible structure in both a non-autoregressive and an autoregressive manner.", "(3) Autoregressive flows were previously developed (Papamakarios et al., 2017; Huang et al., 2018) for stronger density estimation ability.", "However, the autoregressive language flow model we develop here aims for better text generation quality.", "For this, our model is autoregressive in both the forward stage (encoding an input to a latent feature) and the inverse stage (decoding the latent feature to the input) with a uni-directional (i.e., left-to-right) structure.", "We evaluate the density estimation ability of our language flow models as well as their effectiveness for three downstream tasks: (1) sequence-to-sequence (Seq-to-Seq) generation, which includes question generation (QG) and neural machine translation (NMT), and (2) data augmentation for Question Answering (QA).",
"We test QG and QA data augmentation on two large-scale QA datasets:", "(a) SQuAD (Rajpurkar et al., 2016), a widely explored textual QA and QG dataset, and", "(b) TVQA (Lei et al., 2018), a large-scale multimodal video-dialogue QA task.", "We test machine translation on WMT16 (Cettolo et al., 2012), a commonly used NMT dataset.", "For density estimation, we compare the negative log-likelihoods of our models against a baseline LSTM model.", "For QG, we use the non-autoregressive flow model to provide extra input features for a standard encoder-decoder text generation model.", "We show that it can significantly improve a baseline QG model for both SQuAD and TVQA on both automatic and human evaluation metrics.", "Aided by our flow model, we achieve strong improvements over a transformer baseline in the neural machine translation experiment.", "In addition to improving language generation quality, we also use the proposed autoregressive flow model for data augmentation.", "For this, we focus on generating diverse textual contexts for QA tasks.", "In particular, we inject noise into the latent features of our flow models (encoded from ground-truth contexts) and then generate new contexts from the noise-injected features.", "Experiments show that the generated contexts can be either a varied expression of the same subject or a paraphrase of the original context, but mostly keep the answerability of the original question (see examples in Table 3).", "Combined with data augmentation strategies (data filtering and a training schema), we achieve statistically significant improvements on both SQuAD and TVQA over strong baselines.", "Overall, we have two contributions: (1) we propose two continuous language generative flow model variants that have better density estimation abilities than an LSTM baseline model and can perform non-autoregressive and autoregressive generation, respectively; (2) our language flow model largely improves QG, NMT, and data augmentation for QA tasks.", "In this section, we first review the generative flow model proposed in previous works (Dinh et al., 2015; Kingma and Dhariwal, 2018).", "Then, following it, we propose two variants of our continuous language generative flow model.", "A generative flow learns to transform the input data distribution (language text in our case), $p(x)$, through a chain of invertible transformations.", "We first designate a true data distribution $p^*(x)$ and a model $p_\theta(x)$ with parameters $\theta$ to parameterize the true distribution $p^*(x)$.", "The latent space inference is then defined as: $x_i \sim p^*(x)$ (1), $z_i = f_\theta(x_i)$ (2), where $x_i$ is a data point from the true data distribution and $z_i$ the latent features.", "This encoding of x to z is usually referred to as the forward stage.", "The transformation $f_\theta$ is designed to be invertible and bijective.", "In previous flow-based generative models (Dinh et al., 2015, 2017; Kingma and Dhariwal, 2018), the generative process (also referred to as the inverse stage) is defined as: $z_i \sim p_\theta(z)$ (3), $x_i = g_\theta(z_i) = f_\theta^{-1}(z_i)$ (4), where $z_i$ is a sample from the latent space distribution, such as a standard Gaussian distribution.", "The flow mapping $f_\theta$ is composed of a chain of transformations: $f_\theta = f_1 \circ f_2 \circ \cdots \circ f_K$, with each $f_k$ representing one flow step.", "Then, the log-likelihood can be written as:",
"$\log p_\theta(x) = \log p_\theta(z) + \sum_{j=1}^{K} \log \left| \det \left( \frac{d\,h_j}{d\,h_{j-1}} \right) \right|$ (5), where $h_j$ is the output of each flow step.", "The value $\log \left| \det ( d h_j / d h_{j-1} ) \right|$ is namely the log-determinant: the log of the absolute value of the determinant of the Jacobian matrix $( d h_j / d h_{j-1} )$.", "This value is the change in log-density from $h_{j-1}$ to $h_j$ under the transformation $f_j$.", "This equation is namely the change of variables formula.", "In the corresponding NLL training objective (Eq. 6), u is usually sampled from a Gaussian distribution, N is the number of samples in a batch, d (= 128) is the discretization level of the data, and M is the dimension of $x_i$.", "(The change of variables formula, Eq. 5, treats the data space as unbounded; however, the data we use is usually within the range -1.0 to 1.0, and the parameter d (the discretization) can reduce the impact of boundary effects according to Dinh et al. (2017).)", "Each flow step in the generative flow model includes three parts: Normalization, Permutation, and Affine coupling.", "(1) Normalization is designed to scale each output to stabilize training.", "We follow Glow (Kingma and Dhariwal, 2018) in using actnorm.", "(2) Permutation makes sure that, after multiple flow steps, each channel can sufficiently affect the other dimensions.", "The Glow model (Kingma and Dhariwal, 2018) proposes to use a (trainable) invertible 1×1 convolution.", "It is essentially a flexible generalization of a permutation operation.", "We follow Glow and also use its LU decomposition to reduce the determinant computation cost.", "Different from all previous work, we apply the 1×1 convolution on the time dimension rather than the hidden dimension.", "This is because language data is sequential and temporal.", "This change is crucial to the proposed flow model's performance, as will be shown in the ablation studies (Table 4).", "(3) Affine coupling is designed to incorporate complex nonlinear mappings while keeping invertibility (see Figure 1).", "$z_1, z_2 = \mathrm{Split}(z_0, \mathrm{dim{:}time})$ (8), $s, t = \mathrm{Split}(\mathrm{NN}(z_1), \mathrm{dim{:}hidden})$ (9), $z_2' = \sigma(s + \epsilon) \odot (t + z_2)$ (10), where NN refers to a nonlinear function and $\sigma$ is the sigmoid activation.", "$\epsilon$ is a hyperparameter that prevents small values (around 0) from resulting in large negative values under the log.", "Note that, in the first equation, Glow (Kingma and Dhariwal, 2018) splits along the hidden dimension.", "However, we split along the time dimension (first introduced in FlowSeq (Ma et al., 2019)), which has the same motivation as the permutation module.", "We first present our non-autoregressive language flow, which is based on the architecture introduced above.", "Besides the permutation and affine coupling structure changes introduced above, we use RNNs or a Transformer as the nonlinear mapping, propose to use continuous input embeddings, and introduce a multi-scale architecture.", "Affine Coupling. We use a multi-head self-attention module from the Transformer (Vaswani et al., 2017) or, alternatively, an RNN (a one-layer bidirectional LSTM (Schuster and Paliwal, 1997)) in the coupling layer as the non-linear mapping NN of the affine coupling (see Eq. 9).", "Continuous Input Embedding. The language flow model we propose operates on continuous inputs, which means the inputs are not discrete tokens but continuous word embeddings.", "We implement it through GloVe embeddings (Pennington et al., 2014).", "Therefore, the density estimation is performed for the distribution p(x), where x is the word embeddings of the language tokens.", "Note that the word embeddings are frozen.", "In the inverse stage, we compute the cosine similarity between the embedding matrix and the decoder output as the token generation probability distribution, so that all tokens can be generated in parallel, i.e., non-autoregressively.",
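A minimal PyTorch sketch of the time-split affine coupling of Eqs. 8-10, using a bidirectional LSTM as NN; the value of epsilon and the exact projection layout are assumptions.

```python
import torch
import torch.nn as nn

class TimeSplitAffineCoupling(nn.Module):
    # Eqs. 8-10: split along time, predict scale and shift from z1 with a
    # BiLSTM as NN; the epsilon value is an assumption.
    def __init__(self, hidden, eps=2.0):
        super().__init__()
        self.eps = eps
        self.net = nn.LSTM(hidden, hidden, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden, 2 * hidden)

    def forward(self, z):                        # z: (B, T, H), T even
        z1, z2 = z.chunk(2, dim=1)               # Eq. 8: split along time
        s, t = self.proj(self.net(z1)[0]).chunk(2, dim=-1)  # Eq. 9: split along hidden
        scale = torch.sigmoid(s + self.eps)
        y2 = scale * (t + z2)                    # Eq. 10
        logdet = scale.log().sum(dim=(1, 2))     # per-sample log-determinant
        return torch.cat([z1, y2], dim=1), logdet

    def inverse(self, y):
        z1, y2 = y.chunk(2, dim=1)
        s, t = self.proj(self.net(z1)[0]).chunk(2, dim=-1)
        z2 = y2 / torch.sigmoid(s + self.eps) - t
        return torch.cat([z1, z2], dim=1)
```

Because s and t depend only on the unchanged half z1, the inverse recovers z2 exactly, and the log-determinant is simply the sum of the log sigmoid scales.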
"Multi-Scale Architecture. Following Dinh et al. (2017), we use a multi-scale architecture (see Figure 2) that contains multiple blocks, with each block containing several flow steps.", "In our work, we denote the number of flow steps as K and the number of blocks as L, where each block contains K flow steps.", "We denote the input shape as (batch size b, sequence length s, hidden dimension h).", "At the start of each block, the tensor is reshaped from (b, s, h) to (b, s/2, 2h), so the model can capture more local features; and at the end of each block (except the last block), the latent feature is split into halves along the channel dimension, with one half as the output, $z_l$, and the other as the input of the next block.", "[Figure 3: Autoregressive Language Generative Flow model. The whole autoregressive flow model contains K flow steps; the figure illustrates one flow step from $z_k$ to $z_{k+1}$, with a uni-directional permutation followed by AC-cells.]", "If we have 3 blocks, we will have three latent outputs, $z_l$.", "Past works (Dinh et al., 2017; Kingma and Dhariwal, 2018; Ma et al., 2019) reshape in this manner for all blocks.", "However, we do not reshape in the first block but apply the same reshaping for the following blocks, which allows the model to better process the original input text with an intact sentence structure.", "The model we developed in the previous subsection can properly operate on continuous word embeddings, has exact density estimation, and performs non-autoregressive generation; however, it lacks the autoregressive structure that is commonly used for text generation.", "Previous works have shown that autoregressive generation usually performs better than non-autoregressive generation (Ren et al., 2020).", "Thus, we develop an autoregressive model that can generate text from left to right in the inverse stage.", "To achieve this, we change the affine coupling and permutation in the flow step to be unidirectional, i.e., each timestep can only attend to timesteps that precede it.", "However, we have to remove the multi-scale architecture to fulfill the autoregressive requirement.", "See sample outputs in Table 1 for a comparison to those from the non-autoregressive model.", "Uni-directional Permutation. Since the permutation in each flow step designed in our non-autoregressive flow model is bidirectional, we mask the 1×1 convolution to a lower triangular matrix.", "Therefore, each token can only attend to previous tokens in the permutation, i.e., a uni-directional permutation.", "Uni-directional Affine Coupling. We then introduce an autoregressive version of affine coupling, shown by the AC-cell in Figure 3.",
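Before the AC-cell details, a sketch of the uni-directional permutation just described: a trainable 1×1 convolution over the time dimension, masked to be lower-triangular. Glow-style implementations parameterize this via an LU decomposition; the directly masked matrix below is a readability-oriented simplification.

```python
import torch
import torch.nn as nn

class UnidirectionalTimePermutation(nn.Module):
    # A trainable invertible 1x1 "convolution" over the time dimension,
    # masked lower-triangular so position t only mixes positions <= t.
    def __init__(self, seq_len):
        super().__init__()
        init = torch.eye(seq_len) + 0.01 * torch.randn(seq_len, seq_len)
        self.weight = nn.Parameter(init)
        self.register_buffer("mask", torch.tril(torch.ones(seq_len, seq_len)))

    def forward(self, z):                       # z: (B, T, H)
        w = self.weight * self.mask
        # Triangular W: |det| over time is the product of |diagonal| entries,
        # applied independently to each of the H hidden channels.
        logdet = z.size(-1) * w.diagonal().abs().log().sum()
        return torch.einsum("ts,bsh->bth", w, z), logdet

    def inverse(self, y):
        w_inv = torch.inverse(self.weight * self.mask)  # also lower-triangular
        return torch.einsum("ts,bsh->bth", w_inv, y)
```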
"For each flow step, we denote the input sequence as $z_{k+1}^{(0):(T)} = [z_{k+1}^{(0)}, \ldots, z_{k+1}^{(T)}]$, and then the autoregressive coupling is defined as: $r^{(t-1)} = \mathrm{NN}([c^{(t-1)}; \tilde{z}_{k+1}^{(t-1)}])$ (11), $c^{(t)} = h_a(r^{(t-1)}, z_{k+1}^{(t)})$ (12), $\tilde{z}_{k+1}^{(t)} = h_b(r^{(t-1)}, c^{(t)})$ (13).", "We recurrently obtain the outputs $[\tilde{z}_{k+1}^{(1)}, \ldots, \tilde{z}_{k+1}^{(T)}]$.", "Note that $\tilde{z}_{k+1}^{(0)} = z_{k+1}^{(0)}$, so the computation starts from $\tilde{z}_{k+1}^{(1)}$.", "When computing $\tilde{z}_{k+1}^{(1)}$, we cannot get $c^{(0)}$, so we set it to zero.", "$h_a$ and $h_b$ are both affine-coupling structured, as shown in Figure 4.", "NN is either an RNN or a Transformer.", "In the inverse stage, to obtain $z_{k+1}$, we start from $\tilde{z}_{k+1}^{(0)} = z_{k+1}^{(0)}$ and $c^{(0)}$: $r^{(t-1)} = \mathrm{NN}([c^{(t-1)}; \tilde{z}_{k+1}^{(t-1)}])$ (14), $c^{(t)} = h_b^{-1}(r^{(t-1)}, \tilde{z}_{k+1}^{(t)})$ (15), $z_{k+1}^{(t)} = h_a^{-1}(r^{(t-1)}, c^{(t)})$ (16).", "Since both the decoded tokens $z^{(t)}$ and the context $c^{(t)}$ only depend on the previous tokens $z^{(0):(t-1)}$, we can perform autoregressive decoding and beam search with cosine similarity as the probability distribution of the output tokens.", "Autoregressive Flow Step. The changes of affine coupling and permutation to be uni-directional allow the flow step to be autoregressive.", "And the whole autoregressive flow model will contain K such flow steps.", "At each flow step, the log-determinant is the summation of the log-determinants of all time steps: $\log p(z_{k+1}) = \sum_t \log p(z_{k+1}^{(t)})$ (17) $= \sum_t \left[ \log p(z_k^{(t)}) + \log \left| \det \left( \frac{d\,z_{k+1}^{(t)}}{d\,z_k^{(t)}} \right) \right| \right]$ (18).", "3 Language Generation with Flow", "We next apply our flow model to several downstream tasks.", "Despite the flow's rigid model structure, it has strong potential in density estimation due to its complex transformation of inputs into a continuous latent space.", "We aim to use this property to improve standard encoder-decoder text generation models.", "Moreover, as the flow model has a strong ability to generate diverse text, we show that it has the capability for data augmentation to improve QA tasks.", "SQuAD. SQuAD is a textual question answering dataset containing 100,000+ questions/answers with corresponding short articles as context.", "We use it to evaluate both question generation and data augmentation (by generating new articles) for question answering.", "TVQA. TVQA is a large-scale video QA dataset based on 6 TV shows.", "It consists of 152,545 QA pairs from 21,793 video clips with subtitle text.", "We use it to evaluate both question generation and data augmentation (by generating new subtitles) for question answering.", "WMT16 (RO-EN) is a machine translation dataset between English and Romanian with around 610k sentence pairs.", "We use it for our machine translation experiment and only test the Romanian to English direction.", "Similar to FlowSeq (Ma et al., 2019), we use the flow as an extra module on top of a typical encoder-decoder language generation model and test on Question Generation (QG) and neural machine translation (NMT).", "As the flow model has the ability to perform exact density estimation, it provides the exact density components of the context information, and we assume that it provides a better hidden representation of the context and thus helps with language generation.", "It can also be viewed as a self-supervised learning method that can provide new features for downstream tasks.", "In the fusion equation (Eq. 19), E refers to the encoder, G to the decoder, $z_i$ to the latent features of the non-autoregressive flow model, and $h_{att}$ is essentially an MLP with sigmoid activation.",
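The fusion itself (Eq. 19) is not reproduced above, so the following is only one plausible gated-fusion reading of the surrounding text; every name and the mixing form are assumptions.

```python
import torch

def fuse_flow_features(enc_out, flow_z, h_att):
    # Plausible reading (assumption): the sigmoid-MLP h_att gates how much of
    # the flow latents is mixed into the encoder states before decoding.
    gate = torch.sigmoid(h_att(torch.cat([enc_out, flow_z], dim=-1)))
    return gate * enc_out + (1.0 - gate) * flow_z
```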
"where $E$ refers to the encoder and $G$ to the decoder.", "$z_i$ refers to the latent features of the non-autoregressive flow model.", "$h_{att}$ is essentially an MLP with sigmoid activation.", "The loss function has two parts: $\mathcal{L}_{gen} = -\frac{1}{N} \sum_{i=1}^{N} \log p(q_i)$ (20) and $\mathcal{L} = \lambda \mathcal{L}_{nll} + \mathcal{L}_{gen}$ (21), where $q_i$ represents the target questions and $\lambda$ is a hyperparameter for the NLL loss (Eq. 6).", "We replicate Zhang and Bansal (2019)'s standard encoder-decoder attention QG model with BERT features as input embeddings.", "Context Generation.", "We propose to use flow to generate diverse contexts for data augmentation, as both TVQA and SQuAD are question answering tasks with textual context.", "We generate new context (video subtitles for TVQA; articles for SQuAD) by injecting noise into the hidden vector of the original context, $z_i$, and reconstructing it into new sentences, $\hat{x}_i$.", "Note that we could also do the same for questions; however, we find that changing one word in a question can dramatically change its meaning, so we limit this augmentation to the context and keep the original question unchanged.", "$z_i = f(x_i)$, $\hat{x}_i = f^{-1}(z_i + z_0)$", "where $f$ refers to the flow model, $x_i$ the input text, and $z_i$ the latent representation.", "The transformation is performed by simply sampling Gaussian noise $z_0$, adding it to $z_i$, and reconstructing the new context $\hat{x}_i$ in the reverse stage.", "In this task, we use the autoregressive flow model, as this variant is designed for text generation.", "We also use the non-autoregressive flow model, augmented with an additional autoregressive decoder, as an alternative approach.", "While the standard RNN-based language model does not have an explicit global sentence representation, our flow model is similar to Bowman et al. (2016)'s VAE framework, which encodes the sentence into a continuous hidden vector, $p(z|x)$.", "Sampling around the hidden vector can naturally be viewed as injecting noise without changing key information.", "Therefore, we do not aim at paraphrasing the original context, because the flow model can reconstruct different information from random noise injection in the latent space.", "Notably, this method has the risk of changing the context's meaning and making the question unanswerable.", "However, empirically, we find that as long as we keep the noise small enough, the generation will be either paraphrases or different expressions of the same subject without affecting the answerability.", "Data Filtering.", "To better utilize the generated data, we design a data filter, as filtering out low-quality generated text is useful in improving the data augmentation (Zhang and Bansal, 2019).", "We use pretrained QA baseline models (see Table 8, Baseline TVQA+, and Table 9, Baseline BERT) to filter out low-quality contexts.", "A generated context is filtered out if the model performs worse at predicting correct answers when the original context is replaced by its generated counterpart.", "We follow Zhang and Bansal (2019) to split the development set of SQuAD v1.1 (Rajpurkar et al., 2016) into two halves and show the results on the test split.", "We generally follow previous work on evaluation metrics.", "For density estimation, we use negative log-likelihood (NLL) for comparison, and bits per dimension to normalize the negative log-likelihood loss, formulated as $\frac{\mathcal{L}}{M \log(2)}$, where $M$ represents the dimension of the input.",
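Three small helpers sketching pieces of this stretch: the latent noise-injection transform, the QA-based data filter, and the bits-per-dimension normalization. They are illustrative only, not the authors' released code; `flow` is assumed to expose forward/inverse passes like the coupling module above, and `score` is a hypothetical callback returning QA accuracy or F1.

```python
import math
import torch

def augment_context(flow, x, sigma: float = 0.1):
    """Noise-injection augmentation: z = f(x); x_hat = f^{-1}(z + z_0)."""
    z = flow.forward(x)                  # encode the original context
    z0 = sigma * torch.randn_like(z)     # small Gaussian noise
    return flow.inverse(z + z0)          # reconstruct a perturbed context

def keep_generated(qa_model, example, orig_ctx, gen_ctx, score):
    """Data filter: keep a generated context only if the pretrained QA
    baseline does not get worse when it replaces the original context."""
    return score(qa_model, example, gen_ctx) >= score(qa_model, example, orig_ctx)

def bits_per_dim(nll: float, m: int) -> float:
    """Normalize an NLL (in nats) by input dimension: L / (M * log 2).
    The continuous densities here can make NLL, and hence bits/dim,
    negative."""
    return nll / (m * math.log(2))
```

Keeping `sigma` small is what the empirical answerability observation above relies on; larger noise moves the sample away from paraphrase territory.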
"We evaluate QG by BLEU4 (Papineni et al., 2002), Meteor (Lavie and Agarwal, 2007), Rouge-L (Lin, 2004), and Amazon MTurk human evaluation.", "We use the BLEU score to evaluate NMT.", "We use accuracy to evaluate the TVQA QA model, and EM (exact match) and F1 score to evaluate the SQuAD QA model.", "We replicate Zhang and Bansal (2019)'s baseline QG model.", "We use the STAGE model with GloVe embeddings developed by Lei et al. (2020) as the TVQA QA baseline, and use BERT as the SQuAD QA baseline.", "Table 4 (the NLL results of flow models and an LSTM baseline on the validation split of TVQA subtitles and the test split of SQuAD articles; TVQA Subtitle / SQuAD Article): Bi-LSTM -7.31 / -1.27; Att-C 0.68 / -2.01; RNN-C 0.50 / -0.37; Att-S -8.02 / -17.12; RNN-S -8.35 / -17.17; Att-AR -9.62 / -17.12; RNN-AR -9.63 / -17.26.", "See Appendix A for more experiment/reproducibility details.", "First of all, to evaluate density estimation ability, we compare the negative log-likelihood (NLL, Eq. 6) of our different flow models on the context data of SQuAD and TVQA against a baseline model (a 3-layer bidirectional LSTM-RNN with hidden size 300).", "(Footnote 4: note that since our $p(x)$ is over continuous word embeddings, it is the probability density of a continuous variable, which is not bounded by [0, 1].)", "As shown in Table 4, the flow model with time-dim coupling/permutation generally outperforms the baseline LSTM model.", "The flow model with time-dim coupling/permutation also largely outperforms the flow model with channel-dim coupling/permutation.", "We also test our autoregressive model to check its density estimation ability, and we find that it performs well and sometimes even slightly better than the non-autoregressive model.", "Note that we do not claim the autoregressive model is better at density estimation than the non-autoregressive version; instead, we aim to show that it can perform reasonably with the proposed autoregressive adaptation.", "Question Generation.", "Through the ablation studies shown in Table 5 and Table 6, we demonstrate that the proposed flow-aided QG model significantly improves the QG performance.", "The statistical significance for all metric improvements (BLEU4, Rouge-L, Meteor) is p < 0.001 for both TVQA QG and SQuAD QG.", "(Footnote 5: statistical significance is computed using the bootstrap test (Efron and Tibshirani, 1994).)", "We also conduct a human evaluation.", "We randomly sample 200 examples and present the participants with two questions per example, generated by two different models, and let them judge which question is better in terms of answerability and overall quality.", "(Footnote 6: we exclude those examples where the two models generate identical questions.)", "See more human evaluation details in Appendix A.3.", "We compare our flow model to the pure encoder-decoder baseline as well as the FlowSeq model (Ma et al., 2019) in the human evaluation.", "As shown in the last rows of Table 5 and Table 6, humans favor our model over the baseline in both tasks, which indicates that our flow model indeed provides useful latent features for better generation.", "Moreover, our model always outperforms FlowSeq.", "We conjecture that this is because FlowSeq is non-autoregressive whereas our QG model is autoregressive.", "Neural Machine Translation.", "We also test the effectiveness of our approach on a neural machine translation (NMT) task.",
"We first replicate Lee et al. (2018)'s transformer autoregressive model baseline, and then we add our flow architecture on top of it.", "As shown in Table 7, our proposed flow-aided MT model improves machine translation performance over the strong transformer baseline on the WMT16 (Cettolo et al., 2012) Romanian-to-English translation task.", "See Appendix A.7 for more details.", "We hope that these promising initial NMT results will also encourage the community to use continuous flow models for other NMT and NLG tasks.", "As shown in Table 8 and Table 9, using the augmented data generated by our Language Flow model (which refers to our autoregressive language flow model), we achieve significant performance improvements over strong baselines on both TVQA QA (Lei et al., 2020) (p < 0.0001) and SQuAD QA (Rajpurkar et al., 2016) (p < 0.0005) for both EM and F1.", "Furthermore, when we add an LSTM autoregressive decoder to our non-autoregressive encoder (referred to as Language Flow+) and use it to perform data augmentation, we observe even slightly better results.", "This may indicate the stronger encoding ability of our non-autoregressive model due to its multi-scale architecture.", "Table 7 (MT results on the WMT16 RO-EN dev split; BLEU): Transformer Baseline 30.27; +Lang-Flow 30.87.", "Meanwhile, we compare to two other data augmentation techniques: paraphrasing (Niu and Bansal, 2018) and back-translation (Sennrich et al., 2016).", "Note that for a fair comparison, we apply the same data filter and training schema to all data augmentation methods.", "It can be seen that both methods perform worse than our Language Flow or Language Flow+ models.", "We show some sample questions generated by our non-autoregressive and autoregressive flow models in Table 1.", "The autoregressive samples are better organized and grammatically sound, while the non-autoregressive generation fails at the latter part of the sentence.", "This might be because the non-autoregressive structure has a weaker ability to model the temporal dependency during generation, which is consistent with the observations from previous works (Ren et al., 2020).", "To show that our model generates samples from a continuous space, we generate interpolation samples from our autoregressive flow model, shown in Table 2.",
"Those samples are mostly grammatically sound and correctly reflect the intermediate content of the two interpolated sentences.", "While variational autoencoders have the issue of ignoring the latent space (Li et al., 2019), our models do not suffer from this issue.", "We introduced two types of language generation models in the paper: (1) the autoregressive flow model (used in data augmentation tasks) and (2) the model that uses flow latent features as extra input (e.g., for QG tasks).", "Our autoregressive flow model's decoder is the inverted version of its encoder with the same weights, so it ensures that the decoder uses the latent features.", "Table 9 (QA results on the SQuAD test split; EM / F1): Baseline BERT 81.34 / 88.76; + Context (Back-Translation) 81.02 / 88.79; + Context (Paraphrasing) 81.65 / 88.92; + Context (Language Flow) 82.28 / 89.22; + Context (Language Flow+) 82.49 / 89.44.", "When we use flow latent features as extra inputs, it significantly improves QG performance (Table 5 and Table 6), which implies that the latent features are usefully involved in generation.", "We have proposed a language generative flow model with non-autoregressive and autoregressive variants.", "The non-autoregressive flow model achieves strong performance on density estimation and helps improve question generation and machine translation by providing additional useful latent features to the decoder.", "Moreover, the autoregressive variant largely improves question answering by generating new contexts with noise injection.", "We thank the reviewers for their helpful feedback.", "This research is supported by NSF-CAREER Award 1846185, ONR Grant N00014-18-1-2871, and ARO-YIP Award #W911NF18-1-0336.", "The views contained in this article are those of the authors and not of the funding agency." ]
[ "abstain", "abstain", "objective", "method", "result", "objective", "other", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "abstain", "objective", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "objective", "method", "method", "abstain", "objective", "method", "abstain", "abstain", "abstain", "method", "method", "method", "result", "result", "objective", "method", "objective", "abstain", "result", "objective", "abstain", "abstain", "abstain", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "method", "other", "other", "other", "method", "method", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "method", "result", "abstain", "result", "objective", "abstain", "abstain", "other", "other", "other" ]
[ "While hyper-parameters (HPs) are important for knowledge graph (KG) learning, existing methods fail to search them efficiently.", "To solve this problem, we first analyze the properties of different HPs and measure the transfer ability from small subgraph to the full graph.", "Based on the analysis, we propose an efficient two-stage search algorithm KGTuner, which efficiently explores HP configurations on small subgraph at the first stage and transfers the top-performed configurations for fine-tuning on the large full graph at the second stage.", "Experiments show that our method can consistently find better HPs than the baseline algorithms within the same time budget, which achieves 9.1% average relative improvement for four embedding models on the large-scale KGs in open graph benchmark.", "Our code is released in https://github.", "com/AutoML-Research/KGTuner .", "1 1 Introduction Knowledge graph (KG) is a special kind of graph structured data to represent knowledge through entities and relations between the entities (Wang et al., 2017; Ji et al., 2021).", "Learning from KG aims to discover the latent properties from KGs to infer the existence of interactions among entities or the types of entities (Wang et al., 2017; Zhang and Yao, 2022).", "KG embedding, which encodes entities and relations as low dimensional vectors, is an important technique to learn from KGs (Wang et al., 2017; Ji et al., 2021).", "The existing models range from translational distance models (Bordes et al., 2013), tensor factorization models (Nickel et al., 2011; Trouillon et al., 2017; Balaevic et al., 2019), neural network models (Dettmers et al., 2017; Guo et al., 2019), to graph neural networks (Schlichtkrull et al., 2018; Vashishth et al., 2020).", "Hyper-parameter (HP) search (Claesen and De Moor, 2015) is very essential for KG learning.", "In this work, we take KG embedding methods (Wang et al., 2017), as a good example to study the impact of HPs to KG learning.", "As studied, the HP configurations greatly influence the model performance (Ruffinelli et al., 2019; Ali et al., 2020).", "An improper HP configuration will impede the model from stable convergence, while an appropriate one can make considerable promotion to the model performance.", "Indeed, studying the HP configurations can help us make a more scientific understanding of the contributions made by existing works (Rossi et al., 2021; Sun et al., 2020).", "In addition, it is also important to search for an optimal HP configuration when adopting KG embedding methods to the real-world applications (Bordes et al., 2014; Zhang et al., 2016; Saxena et al., 2020).", "Algorithms for HP search on general machine learning problems have been well-developed (Clae-sen and De Moor, 2015).", "As shown in Figure", "1(a), the search algorithm selects a HP configuration from the search space in each iteration, then the evaluation feedback obtained by full model training is used to update the search algorithm.", "The optimal HP is the one achieving the best performance on validation data in the search process.", "Representative HP search algorithms are within sample-based methods like grid search, random search (Bergstra and Bengio, 2012), and sequential model-based Bayesian optimization (SMBO) methods like Hyperopt (Bergstra et al., 2013), SMAC (Hutter et al., 2011), Spearmint (Snoek et al., 2012) as well as BORE (Tiao et al., 2021), etc.", "Recently, some subgraph-based methods (Tu et al., 2019; Wang et al., 2021) are proposed to learn a predictor with 
"Recently, some subgraph-based methods (Tu et al., 2019; Wang et al., 2021) have been proposed to learn a predictor with configurations efficiently evaluated on small subgraphs.", "The predictor is then transferred to guide HP search on the full graph.", "However, these methods fail to efficiently search a good configuration of HPs for KG embedding models, since the training cost of an individual model is high and the correlation of HPs in the huge search space is very complex.", "To address the limitations of existing HP search algorithms, we carry out a comprehensive understanding study on the influence and correlation of HPs, as well as their transfer ability from small subgraph to full graph, in KG learning.", "From the aspect of performance, we classify the HPs into four different groups, including reduced options, shrunken range, monotonously related, and no obvious patterns, based on their influence on the performance.", "By analyzing the validation curvature of these HPs, we find that the space is rather complex, such that only tree-based models can approximate it well.", "In addition, we observe that the consistency between evaluation on a small subgraph and that on the full graph is high, while the evaluation cost is significantly smaller on the small subgraph.", "The above understanding motivates us to reduce the size of the search space and design a two-stage search algorithm named KGTuner.", "As shown in Figure 1(b), KGTuner explores HP configurations in the shrunken and decoupled space with the search algorithm RF+BORE (Tiao et al., 2021) on a subgraph in the first stage, where the evaluation cost of HPs is small.", "Then, in the second stage, the configurations achieving top-10 performance in the first stage are equipped with large batch size and dimension size for fine-tuning on the full graph.", "Within the same time budget, KGTuner can consistently search better configurations than the baseline search algorithms for seven KG embedding models on WN18RR (Dettmers et al., 2017) and FB15k-237 (Toutanova and Chen, 2015).", "By applying KGTuner to the large-scale benchmarks ogbl-biokg and ogbl-wikikg2 (Hu et al., 2020), the performance of embedding models is improved compared with the reported results on the OGB link prediction leaderboard.", "Besides, we justify the improvement in efficiency by analyzing the design components in KGTuner.", "We first revisit the important and common HPs in KG embedding.", "Following the general framework (Ruffinelli et al., 2019; Ali et al., 2020), the learning problem can be written as $P^* = \arg\min_{P} \mathcal{L}(F(\cdot, P), \mathcal{D}^+, \mathcal{D}^-) + \lambda\, r(P)$ (1), where $F$ is the form of an embedding model with learnable parameters $P$, $\mathcal{D}^+$ is the set of positive samples from the training data, $\mathcal{D}^-$ represents negative samples, and $r(\cdot)$ is a regularization function.", "There are four groups of hyper-parameters (Table 1), i.e., the size of negative sampling for $\mathcal{D}^-$, the choice of loss function $\mathcal{L}$, the form of regularization $r(\cdot)$, and the optimization $\arg\min_P$.", "Embedding model.", "While there are many existing embedding models, we follow (Ruffinelli et al., 2019) to focus on some representative models.", "They are the translational distance models TransE (Bordes et al., 2013) and RotatE (Sun et al., 2019), the tensor factorization models RESCAL (Nickel et al., 2011), DistMult (Yang et al., 2015), ComplEx (Trouillon et al., 2017) and TuckER (Balazevic et al., 2019), and the neural network model ConvE (Dettmers et al., 2017).", "Graph neural networks for KG embedding (Schlichtkrull et al., 2018; Vashishth et al., 2020; Zhang and Yao, 2022) are not studied here due to their scalability issues on large-scale KGs (Ji et al., 2021).",
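As one hedged reading of Eq. (1) above, the sketch below scores positive and negative triples with an arbitrary embedding model and adds a weighted squared-Frobenius regularizer; the BCE-style criterion is just one of the loss choices listed in Table 1, and `score_fn` is a hypothetical stand-in for any model F.

```python
import torch
import torch.nn.functional as F

def training_objective(score_fn, params, pos, neg, reg_weight):
    """Eq. (1) as a loss over positive/negative triples plus lambda * r(P)."""
    pos_scores = score_fn(params, pos)        # scores for triples in D+
    neg_scores = score_fn(params, neg)        # scores for triples in D-
    loss = -(F.logsigmoid(pos_scores).mean()
             + F.logsigmoid(-neg_scores).mean())
    reg = sum((w ** 2).sum() for w in params) # Frobenius-style r(P)
    return loss + reg_weight * reg
```

Each HP group below (negative sampling, loss, regularization, optimization) changes one ingredient of this objective or of how it is minimized.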
"Negative sampling.", "Sampling negative triplets is important, as only positive triplets are contained in KGs (Wang et al., 2017).", "We can pick $m$ negative triplets by replacing the head or tail entity with uniform sampling (Bordes et al., 2013), or use a full set of negative triplets.", "Using the full set can be defined as 1VsAll (Lacroix et al., 2018) or kVsAll (Dettmers et al., 2017), according to the positive triplets used.", "The methods (Cai and Wang, 2018; Zhang et al., 2021) requiring additional models for negative sampling are not considered here.", "Loss function.", "There are three types of loss functions.", "One can use the margin ranking (MR) loss (Bordes et al., 2013) to rank the positive triplets higher than the negative ones, or use the binary cross entropy (BCE) loss, with variants BCE_mean, BCE_adv (Sun et al., 2019) and BCE_sum (Trouillon et al., 2017), to classify the positive and negative triplets as binary classes, or use the cross entropy (CE) loss (Lacroix et al., 2018) to classify the positive triplet as the true label over the negative triplets.", "Regularization.", "To balance expressiveness and complexity, and to avoid unbounded embeddings, regularization techniques can be considered, such as regularizers like the Frobenius norm (FRO) (Yang et al., 2015; Trouillon et al., 2017), the Nuclear norm (NUC) (Lacroix et al., 2018), and DURA (Zhang et al., 2020b), as well as dropout on the embeddings (Dettmers et al., 2017).", "Optimization.", "To optimize the embeddings, important optimization choices include the optimizer, such as SGD, Adam (Kingma and Ba, 2014) and Adagrad (Duchi et al., 2011), the learning rate, initializers, batch size, embedding dimension size, and whether to add inverse relations (Lacroix et al., 2018) or not.",
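To ground two of these HP choices, here is a minimal sketch of uniform negative sampling and the margin ranking loss; it is illustrative only and not tied to any specific implementation.

```python
import random

def uniform_negatives(triple, entities, m):
    """Uniform negative sampling (Bordes et al., 2013): corrupt the head
    or the tail of a positive triple with a uniformly drawn entity."""
    h, r, t = triple
    negatives = []
    for _ in range(m):
        e = random.choice(entities)
        negatives.append((e, r, t) if random.random() < 0.5 else (h, r, e))
    return negatives

def margin_ranking_loss(pos_score, neg_score, margin=1.0):
    """MR loss: push positive triples above negatives by a margin."""
    return max(0.0, margin + neg_score - pos_score)
```

Swapping `m`, the loss variant, or the regularizer is exactly the kind of discrete HP choice the search problem below has to explore.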
"We define the problem of HP search for KG embedding models in Definition 1.", "The objective is to search for an optimal configuration $x^* \in \mathcal{X}$ such that the embedding model $F$ can achieve the best performance on the validation data $\mathcal{D}_{val}$.", "Denote an instance $x = (x_1, x_2, \ldots, x_n)$, which is called an HP configuration, in the search space $\mathcal{X}$.", "Let $F(P, x)$ be an embedding model with model parameters $P$ and HPs $x$; we define $\mathcal{M}(F(P, x), \mathcal{D}_{val})$ as the performance measurement (the larger the better) on validation data $\mathcal{D}_{val}$, and $\mathcal{L}(F(P, x), \mathcal{D}_{tra})$ as the loss function (the smaller the better) on training data $\mathcal{D}_{tra}$.", "Definition 1 (Hyper-parameter search for KG embedding).", "The problem of HP search for a KG embedding model is formulated as $x^* = \arg\max_{x \in \mathcal{X}} \mathcal{M}(F(P^*, x), \mathcal{D}_{val})$ (2), s.t. $P^* = \arg\min_{P} \mathcal{L}(F(P, x), \mathcal{D}_{tra})$ (3).", "Definition 1 is a bilevel optimization problem (Colson et al., 2007), which can be solved by many conventional HP search algorithms.", "The most common and widely used approaches are sample-based methods like grid search and random search (Bergstra and Bengio, 2012), where the HP configurations are independently sampled.", "To guide the sampling of HP configurations by historical experience, SMBO-based methods (Bergstra et al., 2011; Hutter et al., 2011) learn a surrogate model to select configurations based on the results that have already been evaluated.", "Then, the model parameters $P$ are optimized by minimizing the loss function $\mathcal{L}$ on $\mathcal{D}_{tra}$ in Eq. (3).", "The evaluation feedback $\mathcal{M}$ of $x$ on the validation data $\mathcal{D}_{val}$ is used to update the surrogate.", "There are three major aspects determining the efficiency of Definition 1:", "(i) the size of the search space $\mathcal{X}$,", "(ii) the validation curvature of $\mathcal{M}(\cdot, \cdot)$ in Eq. (2), and", "(iii) the evaluation cost in solving $\arg\min_P \mathcal{L}$ in Eq. (3).", "(Figure 2: Ranking distribution of selected HPs.)", "However, the existing methods (Ruffinelli et al., 2019; Ali et al., 2020) directly search a huge space with commonly used surrogate models and slow evaluation feedback from the full KG, due to a lack of understanding of the search problem, leading to low efficiency.", "To address the mentioned limitations, we measure the significance and correlation of each HP to determine the feasibility of the search space $\mathcal{X}$ in Section 4.1.", "In Section 4.2, we visualize the HPs that determine the curvature of Eq. (2).", "To reduce the evaluation cost in Eq. (3), we analyze the approximation methods in Section 4.3.", "Following (Ruffinelli et al., 2019), the experiments run on the seven embedding models in Section 2 and two widely used datasets, WN18RR (Dettmers et al., 2017) and FB15k-237 (Toutanova and Chen, 2015).", "The experiments are implemented with the PyTorch framework (Paszke et al., 2017), on a machine with two Intel Xeon 6230R CPUs and eight RTX 3090 GPUs with 24 GB memory each.", "We provide the implementation details in Appendix D.1.", "Considering the large number of HP configurations in $\mathcal{X}$, we take the simple and efficient approach where HPs are evaluated under control variate (Hutter et al., 2014; You et al., 2020), which varies the $i$-th HP while fixing the other HPs.", "First, we discretize the continuous HPs according to their ranges.", "Then the feasibility of the search space $\mathcal{X}$ is analyzed by checking the ranking distribution and consistency of individual HPs.", "These can help us shrink and decouple the search space.", "The detailed setting for this part is in Appendix B.1.", "Ranking distribution.", "To shrink the search space, we use the ranking distribution to indicate which HP values perform consistently well.", "Given an anchor configuration $\bar{x}$, we obtain the ranking of the different values in $\mathcal{X}_i$ by fixing the other HPs, where $\mathcal{X}_i$ is the range of the $i$-th HP.", "The ranking distribution is then collected over the different anchor configurations in $\bar{X}_i$, different models, and datasets.",
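A minimal sketch of this control-variate protocol: vary the i-th HP over its discretized range while every other HP stays at an anchor configuration, then rank its values. `evaluate` is a hypothetical callback returning validation MRR.

```python
def control_variate_sweep(anchor, hp_name, hp_range, evaluate):
    """Rank the values of one HP with all other HPs fixed at an anchor."""
    results = {}
    for value in hp_range:
        config = dict(anchor)       # copy the anchor configuration
        config[hp_name] = value     # vary only the i-th HP
        results[value] = evaluate(config)
    # aggregating these per-anchor rankings over many anchors, models,
    # and datasets yields the ranking distributions of Figure 2
    return sorted(results, key=results.get, reverse=True)
```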
"According to the violin plots of the ranking distribution shown in Figure 2, the HPs can be classified into four groups:", "(a) reduced options, e.g., Adam is the best optimizer and inverse relations should not be introduced;", "(b) shrunken range, e.g., learning rate, regularization weight, and dropout rate are better in certain ranges;", "(c) monotonously related, e.g., larger batch size and dimension size tend to be better;", "(d) no obvious patterns, e.g., the remaining HPs.", "Consistency.", "To decouple the search space, we measure the consistency of configurations' rankings when only a specific HP changes.", "For the $i$-th HP, if the ranking of configurations' performance is consistent across different values in $\mathcal{X}_i$, we can decouple the search procedure of the $i$-th HP from the others.", "We measure such consistency with Spearman's rank correlation coefficient (SRCC) (Schober et al., 2018).", "Given a value $\theta \in \mathcal{X}_i$, we obtain the ranking $r(\bar{x}, \theta)$ of the anchor configurations $\bar{x} \in \bar{X}_i$ by fixing the $i$-th HP as $\theta$.", "Then, the SRCC between two HP values $\theta_1, \theta_2 \in \mathcal{X}_i$ is computed as $1 - \frac{6 \sum_{\bar{x} \in \bar{X}_i} |r(\bar{x}, \theta_1) - r(\bar{x}, \theta_2)|^2}{|\bar{X}_i|(|\bar{X}_i|^2 - 1)}$ (4), where $|\bar{X}_i|$ denotes the number of anchor configurations in $\bar{X}_i$.", "The SRCC indicates the matching rate of the rankings of the anchor configurations in $\bar{X}_i$ with respect to $x_i = \theta_1$ and $x_i = \theta_2$.", "Then the consistency of the $i$-th HP is evaluated by averaging the SRCC over the different pairs $(\theta_1, \theta_2)$ in $\mathcal{X}_i$, different models, and different datasets.", "A larger consistency (in the range $[-1, 1]$) indicates that changing the value of the $i$-th HP does not influence the other configurations' ranking much.", "As in Figure 4, batch size and dimension size show higher consistency than the other HPs.", "Hence, the evaluation of the configurations can be consistent under different choices of these two HPs.", "This indicates that we can decouple the search of batch size and dimension size from the other HPs.", "We analyze the curvature of the validation performance $\mathcal{M}(\cdot, \cdot)$ w.r.t. $x \in \mathcal{X}$.", "Specifically, we follow (Li et al., 2017) to visualize the validation loss landscape by uniformly varying the numerical HPs in two directions (20 configurations in each direction) on the ComplEx model and the WN18RR dataset.", "From Figure 3(a), we observe that the curvature is quite complex, with many local maximum areas.", "To gain insights from evaluating these configurations and guide the next configuration sampling, we learn a surrogate model as a predictor to approximate the validation curvature.", "The curvatures of three common surrogates, i.e., Gaussian process (GP) (Williams and Rasmussen, 1995), multi-layer perceptron (MLP) (Gardner and Dorling, 1998), and random forest (RF) (Breiman, 2001), are in Figures 3(b)-3(d).", "The surrogate models are trained with the evaluations of 100 random configurations in the search space.", "As shown, both GP and MLP fail to capture the complex local surface in Figure 3(a), as they tend to learn a flat and smooth distribution over the search space.", "In comparison, RF is better at capturing the local distributions.", "Hence, we regard RF as the better choice for this search space.", "A more detailed comparison of the approximation ability of different surrogates is in Appendix B.3.", "The evaluation cost of an HP configuration on an embedding model is the major computation cost in HP search.", "Thus, we first evaluate the HPs that influence the evaluation cost, including batch size, dimension size, number of negative samples, loss function, and regularizer.",
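The SRCC of Eq. (4), introduced a few sentences above for comparing the rankings induced by two values of the i-th HP, reduces to the standard Spearman formula; a minimal sketch:

```python
def srcc(rank1, rank2):
    """Spearman's rank correlation (Eq. 4) between the rankings of the
    same anchor configurations under two values of the i-th HP."""
    n = len(rank1)
    assert n > 1, "need at least two anchor configurations"
    d2 = sum((a - b) ** 2 for a, b in zip(rank1, rank2))
    return 1.0 - 6.0 * d2 / (n * (n ** 2 - 1))
```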
regularizer.", "Then, we analyze the evaluation transfer ability from small subgraph to the full graph.", "Cost of different HPs.", "The cost of each HP value X i is averaged over the different anchor configurations in X i , different models and datasets.", "For fair comparsion, the time cost is counted per thousand iterations.", "We find that the evaluation cost increases significantly with larger batch size and dimension size, while the number of negative samples and the choice of loss function or regularizer do not have much influence on the cost.", "We provide two exemplar curves in Figure 5 and put the remaining results in the Appendix B.4.", "Transfer ability of subgraphs.", "Subgraphs can efficiently approximate the properties of the full graph (Hamilton et al., 2017; Teru et al., 2020).", "We evaluate the impact of subgraph sampling on HP search by checking the consistency between evaluations results on small subgraph and those on the full graph.", "First, we study how to sample subgraphs.", "There are several approaches to sample small subgraphs from a large graph (Leskovec and Faloutsos, 2006).", "We compare four representative approaches in Figure 6, i.e., Pagerank node sampling (Pagerank), random edge sampling (Random Edge), single-start random walk (Single-RW) and multi-start random 2719 walk (Multi-RW).", "For a fair comparison, we constrain the subgraphs with about 20% of the full graph.", "The consistency between the sampled subgraph with the full graph is evaluated by the SRCC in (4).", "We observe that multi-start random walk is the best among the different sampling methods.", "Apart from directly transferring the evaluation from subgraph to full graph, we can alternatively train a predictor with observations on subgraphs and then transfers the model to predict the configuration performance on the full graph.", "From Figure 6, we find that directly transferring evaluations from subgraphs to the full graph is much better than transferring the predictor model.", "In addition, we show the consistency and cost in terms of different subgraph sizes (percentage of entities compared to the full graph) in Figure 7.", "As shown, evaluation on subgraphs can significantly improve the efficiency.", "When the scale increases, the consistency increases but the cost also increases.", "To balance the consistency and cost, the subgraphs with 20% entities are the better choices.", "By analyzing the ranking distribution and consistency of HPs in Section 4.1, we observe that not all the HP values are equivalently good, and some HPs can be decoupled.", "These observations motivate us to revise the search space in Section 5.1.", "Based on the analysis in Section 4.2 and 4.3, we then propose an efficient two-stage search algorithm in Section 5.2.", "To shrink the search space, we mainly consider groups", "(a) and", "(b) of HPs in Section 4.1.", "From the full results in the Appendix B.2, we observe that Adam can consistently perform better than the other two optimizers, the learning rate is better in the range of [10 4 , 10 1 ] , the regularization weight is better in [10 8 , 10 2 ] , dropout rate is better in [0 , 0 . 
"To decouple the search space, we consider batch size and dimension size, which have larger consistency values than the other HPs and are monotonously related to the performance, as in group (c).", "However, the computation costs of batch size and dimension size increase prominently, as shown in Figure 5.", "Hence, we can set the batch size to 128 and the dimension size to 100 to search the other HPs with low evaluation cost, and increase their values in a fine-tuning stage.", "Given the full search space $\mathcal{X}$, we denote the shrunken space as $\mathcal{X}_S$ and the further decoupled space as $\mathcal{X}_{S|D}$.", "We achieve a size reduction of hundreds of times from $\mathcal{X}_S$ to $\mathcal{X}_{S|D}$, and we show the details of the changes in Appendix C.", "5.2 Two-stage search algorithm (KGTuner).", "As discussed in Section 4.3, the evaluation cost can be significantly reduced with small batch size, dimension size, and subgraph.", "This motivates us to design a two-stage search algorithm, named KGTuner, as in Figure 1(b) and Algorithm 1.", "Algorithm 1 (KGTuner: two-stage search algorithm).
Require: KG embedding model $F$, dataset $\mathcal{D}$, and budget $B$.
1: shrink the search space $\mathcal{X}$ to $\mathcal{X}_S$ and decouple $\mathcal{X}_S$ to $\mathcal{X}_{S|D}$;
# stage one: efficient evaluation on subgraph
2: sample a subgraph $\hat{G}$ (with 20% of the entities) from $\mathcal{D}_{tra}$ by multi-start random walk;
3: repeat
4:   sample a configuration $x$ from $\mathcal{X}_{S|D}$ by RF+BORE;
5:   evaluate $x$ on the subgraph $\hat{G}$ to get the performance;
6:   update the RF with the record $(x, \mathcal{M}(F(P, x), \hat{G}_{val}))$;
7: until B/2 of the budget is exhausted;
8: save the top-10 configurations in $\hat{\mathcal{X}}_{S|D}$;
# stage two: fine-tune the top configurations
9: increase the batch/dimension size in $\hat{\mathcal{X}}_{S|D}$ to get $\hat{\mathcal{X}}$;
10: set $y^* = 0$ and re-initialize the RF surrogate;
11: repeat
12:   select a configuration $x$ from $\hat{\mathcal{X}}$ by RF+BORE;
13:   evaluate on the full graph $G$ to get the performance;
14:   update the RF with the record $(x, \mathcal{M}(F(P, x), \mathcal{D}_{val}))$;
15:   if $\mathcal{M}(F(P, x), \mathcal{D}_{val}) > y^*$ then $y^* \leftarrow \mathcal{M}(F(P, x), \mathcal{D}_{val})$ and $x^* \leftarrow x$; end if
16: until the remaining B/2 of the budget is exhausted;
17: return $x^*$.", "In the first stage, we sample a subgraph $\hat{G}$ with 20% of the entities from the full graph $\mathcal{D}_{tra}$ by multi-start random walk.", "Based on the understanding of curvature in Section 4.2, we use the random forest (RF) surrogate model under the state-of-the-art framework BORE (Tiao et al., 2021), denoted as RF+BORE, to explore HPs in $\mathcal{X}_{S|D}$ on the subgraph $\hat{G}$ in steps 3-7.", "The top-10 configurations evaluated in this stage are saved in a set $\hat{\mathcal{X}}_{S|D}$.", "In the second stage, we increase batch size and dimension size for the configurations in $\hat{\mathcal{X}}_{S|D}$ to generate a new set $\hat{\mathcal{X}}$.", "Then, the configurations in $\hat{\mathcal{X}}$ are searched by RF+BORE again in steps 11-16 until the remaining B/2 budget is exhausted.", "Finally, the configuration $x^*$ achieving the best performance on the full validation data $\mathcal{D}_{val}$ is returned for testing.", "We now summarize the main differences of KGTuner from the existing HP search algorithms, i.e., Random (random search) (Bergstra and Bengio, 2012), Hyperopt (Bergstra et al., 2013), SMAC (Hutter et al., 2011), RF+BORE (Tiao et al., 2021), and AutoNE (Tu et al., 2019).", "The comparison is based on three aspects, i.e., search space, surrogate model, and fast evaluation, in Table 2.",
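Putting Algorithm 1 together, a compact skeleton of the two-stage search might look as follows; `sample_subgraph`, `evaluate`, `enlarge`, `budget_left`, and the `rf_bore` object with propose/update/reset are all hypothetical callbacks standing in for the steps above.

```python
def kgtuner(model, data, space_sd, rf_bore, budget_left,
            sample_subgraph, evaluate, enlarge, top_k=10):
    """Two-stage HP search: explore cheaply on a subgraph, then fine-tune
    the top configurations on the full graph."""
    subgraph = sample_subgraph(data, entity_ratio=0.2)  # multi-start RW

    trials = []
    while budget_left("stage_one"):                     # first B/2 budget
        x = rf_bore.propose(space_sd)
        y = evaluate(model, subgraph, x)                # cheap evaluation
        rf_bore.update(x, y)
        trials.append((y, x))
    trials.sort(key=lambda t: t[0], reverse=True)
    top = [x for _, x in trials[:top_k]]                # save top-10 configs

    rf_bore.reset()                                     # stage two
    best_x, best_y = None, float("-inf")
    while budget_left("stage_two"):                     # remaining B/2
        x = rf_bore.propose(enlarge(top))               # larger batch/dim
        y = evaluate(model, data, x)                    # full-graph eval
        rf_bore.update(x, y)
        if y > best_y:
            best_x, best_y = x, y
    return best_x
```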
"KGTuner shrinks and decouples the search space based on the understanding of the HPs' properties, and uses the RF surrogate based on the understanding of the validation curvature.", "The fast evaluation on the subgraph in KGTuner selects the top-10 configurations to transfer directly for fine-tuning, while AutoNE (Tu et al., 2019) only uses fast evaluation on subgraphs to train the surrogate model, and transfers the surrogate model for HP search on the full graph.", "In Figure 6, the transfer ability of the surrogate model is shown to be much worse than direct transferring.", "In this part, we compare the proposed algorithm KGTuner with six HP search algorithms in Table 2.", "For AutoNE, we allocate half of the budget for it to search on the subgraph and the other half on the full graph with the transferred surrogate model.", "The baselines search in the full search space (in Table 1) with the same amount of budget as KGTuner.", "We use the mean reciprocal ranking (MRR, the larger the better) (Bordes et al., 2013) to indicate the performance.", "Efficiency.", "We compare the different search algorithms in Figure 8 on an in-sample dataset, WN18RR, and an out-of-sample dataset, ogbl-biokg.", "The time budget we set for WN18RR is one day's clock time, while that for ogbl-biokg is two days' clock time.", "For each dataset we show two kinds of figures.", "First, the best performance achieved along the clock time in one experiment on a specific model, ComplEx.", "Second, we plot the ranking of each algorithm averaged over all the models and datasets.", "Since AutoNE and KGTuner run on the subgraphs in the first stage, their starting points are located after 12 hours.", "The starting point of KGTuner is a bit later than AutoNE's, since it is constrained to use large batch size and dimension size in the second stage, which is more expensive.", "As shown, random search is the worst.", "SMAC and RF+BORE achieve better performance than Hyperopt and Ax, since RF can fit the space better than TPE and GP, as discussed in Section 4.2.", "Due to the weak transfer ability of the predictor (see Figure 6) and the weak approximation ability of GP (see Figure 3), AutoNE also performs badly.", "KGTuner is much better than all the baselines.", "We show the full search process of the two-stage algorithms AutoNE and KGTuner on WN18RR in Figure 9(a).", "By exploring a sufficient number of configurations in the first stage, the configurations fine-tuned in the second stage can consistently achieve the best performance.", "Effectiveness.", "For WN18RR and FB15k-237, we provide the reproduced results on TransE, ComplEx, and ConvE with the original HPs, HPs searched by LibKGE, and HPs searched by KGTuner in Table 3.",
"The full results on the remaining four embedding models, RotatE, RESCAL, DistMult, and TuckER, are in Appendix D.2.", "Overall, KGTuner achieves better performance compared with both the originally reported results and the reproduced results in (Ruffinelli et al., 2019).", "We observe that tensor factorization models such as RESCAL, ComplEx, and TuckER have better performance than the translational distance models TransE and RotatE and the neural network model ConvE.", "This conforms with the theoretical analysis that tensor factorization models are more expressive (Wang et al., 2018).", "To further demonstrate the advantage of KGTuner, we apply it to the Open Graph Benchmark (OGB) (Hu et al., 2020), which is a collection of realistic and large-scale benchmark datasets for machine learning on graphs.", "Many embedding models have been tested there on two large-scale KGs for link prediction, i.e., ogbl-biokg and ogbl-wikikg2.", "Due to their scale, the evaluation cost of an HP configuration is very expensive.", "We use KGTuner to search HPs for embedding models, i.e., TransE, RotatE, DistMult, ComplEx, and AutoSF (Zhang et al., 2020a), on OGB.", "Since the computation costs of the two datasets are much higher, we set the time budget as 2 days for ogbl-biokg and 5 days for ogbl-wikikg2.", "All the embedding models evaluated here are constrained to have the same (or a lower) number of model parameters.", "(Footnote 2: we run all models on ogbl-wikikg2 with dimension size 100 to avoid out-of-memory, instead of 500 as on the OGB board.)", "More details on model parameters, standard deviation, and validation performance are in Appendix D.3.", "As shown in Table 4, KGTuner consistently improves the performance of the four embedding models with the same or fewer parameters compared with the results on the OGB board.", "In this subsection, we probe into how important and sensitive the various components of KGTuner are.", "Space comparison.", "To demonstrate the effectiveness gained by shrinking and decoupling the search space, we compare the following variants:", "(i) RF+BORE on the full space $\mathcal{X}$;", "(ii) RF+BORE on the shrunken space $\mathcal{X}_S$;", "(iii) RF+BORE on the decoupled space $\mathcal{X}_{S|D}$, which differs from KGTuner by searching on the full graph in the first stage; and", "(iv) KGTuner in Algorithm 1.",
"All the variants use RF+BORE and have one day's time budget.", "As in Figure 9(b), the size of the search space matters for search efficiency.", "The three components, i.e., space shrinkage, space decoupling, and fast evaluation on the subgraph, are all important to the success of KGTuner.", "Size of subgraphs.", "We show the influence of subgraph sizes with different ratios of entities (10%, 20%, 30%, 40%, 50%) from the full graph in Figure 9(c).", "The ComplEx model and the WN18RR dataset are used in these experiments.", "Using subgraphs that are too large or too small is not guaranteed to find good configurations.", "Based on the understanding in Figure 7, subgraphs with small size have poor transfer ability, and those with large size are expensive to evaluate.", "Hence, we should balance the transfer ability and evaluation cost by sampling subgraphs with 20%-30% of the entities.", "Budget allocation.", "In Algorithm 1, we allocate B/2 of the budget to each of the first and second stages.", "Here, we show the performance of different allocation ratios, i.e., B/4, B/2, and 3B/4 in the first stage and the remaining budget in the second stage.", "As in Figure 9(d), allocating too much or too little budget to the first stage is not good.", "It either fails to explore sufficient configurations in the first stage or only fine-tunes a few configurations in the second stage.", "Allocating the same budget to the two stages gives a better trade-off.", "In analyzing the performance of KG embedding models, Ruffinelli et al. (2019) pointed out that earlier works in KG embedding only searched HPs in small grids.", "By searching hundreds of HPs in a unified framework, the reproduced performance can be significantly improved.", "Similarly, Ali et al. (2020) proposed another unified framework to evaluate different models.",
"Rossi et al. (2021) evaluated 16 different models and analyzed their properties on different datasets.", "All of these works emphasize the importance of HP search, but none of them provide efficient algorithms to search HPs for KG learning.", "AutoSF (Zhang et al., 2020a) evaluates bilinear scoring functions and sets up a search problem to design bilinear scoring functions, which can be complementary to KGTuner.", "Understanding the HPs in a large search space is non-trivial, since many HPs only have moderate impact on the model performance (Ruffinelli et al., 2019) and jointly evaluating them requires a large number of experiments (Fawcett and Hoos, 2016; Probst et al., 2019).", "Considering the huge number of HP configurations (with $10^5$ categorical choices and 5 continuous values), it is extremely expensive to exhaustively evaluate most of them.", "Hence, we adopt control variate experiments to efficiently evaluate the HPs' properties, instead of the quasi-random search in (Ruffinelli et al., 2019; Ali et al., 2020).", "Technically, we are similar to AutoNE (Tu et al., 2019) and e-AutoGR (Wang et al., 2021) in leveraging subgraphs to improve search efficiency on graph learning.", "Since they do not target KG embedding methods, directly adopting them is not a good choice.", "Besides, based on the understanding in this paper, we demonstrate that transferring the surrogate model from subgraph evaluation to the full graph is inferior to directly transferring the top configurations for KG embedding models.", "In this paper, we analyze the HPs' properties in KG embedding models in terms of search space size, validation curvature, and evaluation cost.", "Based on the observations, we propose an efficient search algorithm, KGTuner, which efficiently explores configurations in a reduced space on a small subgraph and then fine-tunes the top configurations with increased batch size and dimension size on the full graph.", "Empirical evaluations show that KGTuner is more robust and more efficient than the existing HP search algorithms and achieves competitive performance on large-scale KGs in the open graph benchmark.", "In future work, we will study the HPs in graph neural network based models and apply KGTuner to them to solve the scaling limitations in HP search.", "This work was supported in part by The National Key Research and Development Program of China under grant 2020AAA0106000." ]
[ "abstain", "objective", "objective", "result", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "other", "other", "method", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "other", "objective", "method", "objective", "abstain", "objective", "other" ]
[ "Extracting lexico-semantic relations as graph-structured taxonomies, also known as taxonomy construction, has been beneficial in a variety of NLP applications.", "Recently Graph Neural Network (GNN) has shown to be powerful in successfully tackling many tasks.", "However, there has been no attempt to exploit GNN to create taxonomies.", "In this paper, we propose Graph2Taxo , a GNN-based cross-domain transfer framework for the taxonomy construction task.", "Our main contribution is to learn the latent features of taxonomy construction from existing domains to guide the structure learning of an unseen domain.", "We also propose a novel method of directed acyclic graph (DAG) generation for taxonomy construction.", "Specifically, our proposed Graph2Taxo uses a noisy graph constructed from automatically extracted noisy hyponym-hypernym candidate pairs, and a set of taxonomies for some known domains for training.", "The learned model is then used to generate taxonomy for a new unknown domain given a set of terms for that domain.", "Experiments on benchmark datasets from science and environment domains show that our approach attains significant improvements correspondingly over the state of the art. 1 Introduction Taxonomy has been exploited in many Natural Language Processing (NLP) applications, such as question answering (Harabagiu et al., 2003), query understanding (Hua et al., 2017), recommendation systems (Friedrich and Zanker, 2011), etc.", "Automatic taxonomy construction is highly challenging as it involves the ability to recognize", "(i) a set of types (i.e. hypernyms) from a text corpus,", "(ii) instances (i.e. hyponyms) of each type, and", "(iii) is-a (i.e. hypernymy) hierarchy between types.", "Taxonomies specific to many domains are either entirely absent or missing.", "In this paper, we focus on construction of taxonomies for such unseen domains 1 .", "Since taxonomies are expressed as directed acyclic graphs (DAGs) (Suchanek et al., 2008), taxonomy construction can be formulated as a DAG generation problem.", "There has been considerable research on Graph Neural Networks (GNN) (Sperduti and Starita, 1997; Gori et al., 2005) over the years; particularly inspired by the convolutional GNN (Bruna et al., 2014) where graph convolution operations were defined in the Fourier domain.", "In a similar spirit to convolutional neural networks (CNNs), GNN methods aggregate neighboring information based on the connectivity of the graph to create node embeddings.", "GNN has been applied successfully in many tasks such as matrix completion (van den Berg et al., 2017), manifold analysis (Monti et al., 2017), predictions of community (Bruna et al., 2014), knowledge graph completion (Shang et al., 2019), and representations of network nodes (Hamilton et al., 2017; Kipf and Welling, 2017).", "To the best of our knowledge, there has been no attempt to exploit GNN for taxonomy construction.", "Our proposed framework, Graph2Taxo , is the first to show that a GNN-based model using a cross-domain noisy graph can substantially improve the taxonomy construction of unseen domains (e.g., Environment) by exploiting taxonomy of one or more seen domains (e.g., Food).", "(The task is described in detail in Section 3.1.)", "Another novelty of our approach is we are the first to apply the acyclicity constraint-based DAG structure learning model (Zheng et al., 2018; Yu et al., 2019) for taxonomy generation task.", "The input of Graph2Taxo is a cross-domain 1 By unseen domain , we refer to a domain for which 
"The input of Graph2Taxo is a cross-domain noisy graph constructed by connecting noisy candidate is-a pairs, which are extracted from a large corpus using standard linguistic pattern-based approaches (Hearst, 1992).", "It is noisy because pattern-based approaches are prone to poor coverage as well as wrong extractions.", "In addition, it is cross-domain because the noisy is-a pairs are extracted from a large-scale corpus which contains a collection of text from multiple domains.", "Our proposed neural model directly encodes the structural information from the noisy graph into the embedding space.", "Since the links between domains are also used in our model, it captures not only the structural information of multiple domains but also cross-domain information.", "We demonstrate the effectiveness of our proposed method on science and environment datasets (Bordea et al., 2016), and show significant improvements in F-score over the state of the art.", "2 Related Work.", "Taxonomy construction (also known as taxonomy induction) is a well-studied problem.", "Most of the existing works follow two sequential steps to construct taxonomies from text corpora (Wang et al., 2017).", "First, is-a pairs are extracted using pattern-based or distributional methods.", "Then, a taxonomy is constructed from these is-a pairs.", "Pattern-based methods, pioneered by Hearst (1992), detect the is-a relation of a term pair (x, y) using the appearance of x and y in the same sentence through some lexical patterns or linguistic rules (Ritter et al., 2009; Luu et al., 2014).", "Snow et al. (2004) represented each (x, y) term pair as the multiset of dependency paths connecting their co-occurrences in a corpus, which is also regarded as a path-based method.", "An alternative approach for detecting the is-a relation is the distributional methods (Baroni et al., 2012; Roller et al., 2014), which use the distributional representation of terms to directly predict relations.", "As for the step of taxonomy construction using the extracted is-a pairs, most of the approaches do it by incrementally attaching new terms (Snow et al., 2006; Shen et al., 2012; Alfarone and Davis, 2015; Wu et al., 2012).", "Mao et al. (2018) were the first to present a reinforcement learning based approach, named TaxoRL, for this task.", "For each term pair, its representation in TaxoRL is obtained from the path LSTM encoder, the word embeddings of both terms, and the embeddings of features.", "Recently, Dash et al. (2020) argued that strict partial orders correspond more directly to DAGs.", "They proposed a neural network architecture, called Strict Partial Order Network (SPON), that enforces asymmetry and transitivity properties as soft constraints.", "Empirically, they showed that such a network produces better results for detecting hyponym-hypernym pairs on a number of datasets for different languages and domains, in both supervised and unsupervised settings.", "Many graph-based methods, such as Kozareva and Hovy (2010) and Luu et al. (2014), regard the task of hypernymy organization as a hypernymy detection problem followed by a graph pruning problem.", "For the graph pruning task, various graph-theoretic approaches, such as the optimal branching algorithm (Velardi et al., 2013), Edmond's algorithm (Karp, 1971), and Tarjan's algorithm (Tarjan, 1972), have been used over the years.",
"In addition to these, Wang et al. (2017) mention several other graph-based taxonomy induction approaches.", "In contrast, our approach formulates the taxonomy construction task as a DAG generation problem instead of incremental taxonomy learning (Mao et al., 2018), which differentiates it from the existing methods.", "In addition, our approach uses knowledge from existing domains (Bansal et al., 2014; Gan et al., 2016) to build the taxonomies of missing domains.", "In this section, we first formulate the problem statement and then introduce our proposed Graph2Taxo framework as a solution.", "We describe the individual components of this framework in detail, along with justifications of how and why these components come together as a solution.", "The problem addressed in this paper is: given a list of domain-specific terms from a target unseen (aka missing) domain as input, how to construct a taxonomy for that target unseen domain.", "In other words, the problem addressed in this paper is how to organize these terms into a taxonomy.", "This problem can be further abstracted as follows: given a large input corpus and a set of gold taxonomies $G_{gold}$ from some known domains (different from the target domain), our task is to learn a model (trained using the corpus and the taxonomies of known domains) to construct taxonomies for the target unseen domains.", "As a solution to the aforementioned problem, we propose a GNN-based cross-domain transfer framework for taxonomy construction (see Figure 1), called Graph2Taxo, which consists of a cross-domain graph encoder and a DAG generator.", "The first step in our proposed approach is to build a cross-domain noisy graph as the input to our Graph2Taxo model.", "In this step, we extract candidate is-a pairs from a large collection of input corpora that spans multiple domains.", "To do so, we used the output of Panchenko et al. (2016), which is a combination of standard substring matching and pattern-based approaches.", "Since such pattern-based approaches are too rigid, the corresponding output not only suffers from poor recall (i.e., missing is-a pairs) but also contains incorrect (i.e., noisy) pairs, due to the ambiguity of language and the richness of syntactic expression and structure in the input corpora.", "For example, consider the phrase '... animals other than dogs such as cats ...'.", "As (Wu et al., 2012) noted, pattern-based approaches will extract (cat is-a dog) rather than (cat is-a animal).", "Based on the noisy is-a pairs, we construct a directed graph $G_{input} = (V_{input}, E_{input})$, which is a cross-domain noisy graph.", "Here, $V_{input}$ denotes a set of terms, and $(v_i, v_j) \in E_{input}$ if and only if $(v_i, v_j)$ belongs to the list of extracted noisy is-a pairs.",
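A minimal sketch of how the cross-domain noisy graph G_input can be assembled from the extracted candidate pairs, using networkx for illustration; the paper does not specify the pipeline at this level, so the function name and input format are our own.

```python
import networkx as nx

def build_noisy_graph(candidate_isa_pairs):
    """Assemble G_input: nodes are terms, and (v_i, v_j) is a directed
    edge iff the pair was extracted as a hyponym-hypernym candidate."""
    g = nx.DiGraph()
    for hyponym, hypernym in candidate_isa_pairs:
        g.add_edge(hyponym, hypernym)
    return g

# pattern output mixes good and noisy pairs, e.g.:
g_input = build_noisy_graph([("cat", "animal"), ("cat", "dog")])
```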
.", "As (Wu et al., 2012) noted, pattern-based approaches will extract (cat is-a dog) rather than (cat is-a animal) .", "Based on the noisy is-a pairs, we construct a directed graph G input = ( V input , E input ) , which is a cross-domain noisy graph .", "Here, V input denotes a set of terms, and ( v i , v j ) E input if and only if ( v i , v j ) belongs to the list of extracted noisy is-a pairs.", "The input document collection spans multiple domains, therefore E input not only has intra-domain edges, but also has cross-domain edges (see Figure 1).", "Graph2Taxo is a subgraph generation model which uses the large cross-domain noisy graph as the input.", "Given a list of terms for a target unseen domain, it aims to learn a taxonomy structure for the corresponding domain as a DAG.", "Graph2Taxo takes advantage of additional knowledge in the form of previously known gold taxonomies { G gold,i , 1 i N known } to train a learning model.", "During inference phase, the model receives a list of terms from the target unseen domain and aims to build a taxonomy by using the input terms.", "Here, N known denotes the number of previously known taxonomies used during the training phase.", "This problem of distilling directed acyclic substructures (taxonomies of many domains) using a large cross-domain noisy graph is challenging, because of relatively lower overlap between noisy edges in E input and true edges in the available taxonomies in hand.", "The following sections describe our proposed Cross-domain Graph Encoder and the DAG Generator in further detail.", "This subsection describes the Cross-domain Graph Encoder in Figure 1 for embedding generation.", "This embedding generation algorithm uses two strategies, namely Neighborhood aggregation and Semantic clustering aggregation .", "This is the first of the two strategies used for embedding generation.", "Let A R n n be the adjacency matrix of the noisy graph G input , where n is the size of V input .", "Let h li represent the feature representation for the node v i in the l -th layer and thus H l R n d l denotes the intermediate representation matrix.", "The initial matrix H 0 is randomly initialized from a standard normal distribution.", "We use the adjacency matrix A and the node representation matrix H l to iteratively update the representation of a particular node by aggregating representations of its neighbors.", "This is done by using a GNN.", "Formally, a GNN layer (Gilmer et al., 2017; Hamilton et al., 2017; Xu et al., 2019) employs the general message-passing architecture which consists of a message propagation function M to get messages from neighbors and a vertex update function U .", "The message passing works via the following equations, m l +1 v = M ( h lu ) u N ( v ) h l +1 v = U ( h lv , m l +1 v ) where N ( v ) denotes the neighbors of node v and m is the message.", "In addition, we use the following definitions for M and U functions, M ( h lu ) = (cid:88) u N ( v ) A vu h lu , u N ( v ) U ( h lv , m l +1 v ) = ( m l +1 v l + h lv l ) where l R d l d l +1 denotes trainable parameters for layer l and represents an activation function.", "Let A = A + I , here I is the identity matrix, the information aggregation strategy described above can be abstracted out as, H l +1 = GNN l ( A, H l ) = ( AH l l ) 3.2.2 Semantic Clustering Aggregation This is the second of the two strategies used for embedding generation, which operates on the output of the previous step.", "The learned representations from the previous step are highly likely not 
"3.2.2 Semantic Clustering Aggregation", "This is the second of the two strategies used for embedding generation, and it operates on the output of the previous step.", "The learned representations from the previous step are highly unlikely to be uniformly distributed in the Euclidean space; rather, they tend to form a number of clusters.", "In this regard, we propose a soft clustering-based pooling-unpooling step that uses semantic clustering aggregation to learn better model representations.", "In essence, this step shares similarity information between any pair of terms in the vocabulary.", "Analogous to an auto-encoder, the pooling layer adaptively creates a smaller cluster graph comprising a set of cluster nodes, whose representations are learned based on a trainable cluster assignment matrix.", "This idea of using an assignment matrix was first proposed by the DiffPool (Ying et al., 2018) approach.", "On the other hand, the unpooling layer decodes the cluster graph back into the original graph using the same cluster assignment matrix learned in the pooling layer.", "The learned semantic cluster nodes can be thought of as bridges through which nodes from the same or different clusters pass messages.", "Mathematically speaking, we learn a soft cluster assignment matrix $S^l \in \mathbb{R}^{n \times n_c}$ at layer l using the GNN model, where $n_c$ is the number of clusters.", "Each row in $S^l$ corresponds to one of the n nodes in layer l and each column corresponds to one of the $n_c$ clusters.", "As a first step, the pooling layer uses the adjacency matrix A and the node feature matrix $H^l$ to generate the soft cluster assignment matrix as $S^l = \mathrm{softmax}(\mathrm{GNN}_{l,cluster}(A, H^l))$ (1), where the softmax is a row-wise softmax function and $\Theta^l_{cluster} \in \mathbb{R}^{d_l \times n_c}$ denotes all trainable parameters in $\mathrm{GNN}_{l,cluster}$.", "Since the matrix $S^l$ is calculated based on node embeddings, nodes with similar features and local structure will have similar cluster assignments.", "As the final step, the pooling layer generates an adjacency matrix $A_c$ for the cluster graph and a new embedding matrix containing the cluster node representations $H_c^l$ as follows: $H_c^l = (S^l)^T H^l \in \mathbb{R}^{n_c \times d_l}$ and $A_c = (S^l)^T A S^l \in \mathbb{R}^{n_c \times n_c}$.", "A GNN operation, $H_c^{l+1} = \mathrm{GNN}_l(A_c, H_c^l) \in \mathbb{R}^{n_c \times d_{l+1}}$, is used within the small cluster graph to further propagate messages between the neighboring clusters.", "The trainable parameters in $\mathrm{GNN}_l$ are $\Theta^l \in \mathbb{R}^{d_l \times d_{l+1}}$.", "To pass the clustering information back to the original graph, the unpooling layer restores the original graph using the cluster assignment matrix, as follows: $\hat{H}^{l+1} = S^l H_c^{l+1} \in \mathbb{R}^{n \times d_{l+1}}$.", "The output of the pooling-unpooling layer results in node representations possessing latent cluster information.", "Finally, we combine the neighborhood aggregation and semantic clustering aggregation strategies via a residual connection: $H^{l+1} = \mathrm{concat}(\hat{H}^{l+1}, H^l)$, where concat denotes the concatenation of the two matrices.", "$H^{l+1}$ is the output of this pooling-unpooling step.",
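The pooling-unpooling step can be sketched analogously. In this simplified reading of the DiffPool-style mechanism, we approximate both $\mathrm{GNN}_{l,cluster}$ and the cluster-graph GNN with a single aggregation-plus-linear layer each; this is an illustration of the equations above, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class PoolUnpool(nn.Module):
    """Soft-clustering pooling-unpooling with a residual concatenation."""
    def __init__(self, d_in, d_out, n_clusters):
        super().__init__()
        self.gnn_cluster = nn.Linear(d_in, n_clusters, bias=False)  # Theta_cluster
        self.gnn_coarse = nn.Linear(d_in, d_out, bias=False)        # Theta for the cluster graph

    def forward(self, adj, h):
        # S = softmax(GNN_cluster(A, H)), row-wise: (n, n_c)
        s = torch.softmax(self.gnn_cluster(adj @ h), dim=-1)
        h_c = s.t() @ h                               # H_c = S^T H          : (n_c, d_in)
        a_c = s.t() @ adj @ s                         # A_c = S^T A S        : (n_c, n_c)
        h_c = torch.relu(a_c @ self.gnn_coarse(h_c))  # message passing on the cluster graph
        h_up = s @ h_c                                # unpool: S H_c        : (n, d_out)
        return torch.cat([h_up, h], dim=-1)           # residual concat      : (n, d_out + d_in)
```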
"The DAG generator takes in the noisy graph $G_{input}$ and the representations of all the vocabulary terms (the output of Section 3.2) as input, encodes acyclicity as a soft constraint (as described below), and outputs a distribution over the edges of $G_{input}$ that encodes the likelihood of true is-a relationships.", "This output distribution is finally used to induce taxonomies, i.e., DAGs of is-a relationships.", "In each training step, the DAG generator is applied to one domain (see Figure 2), using a noisy graph G, which is a subgraph of $G_{input}$, as a training sample, and a DAG is generated for that domain.", "Here, let $N_t$ denote the number of (hypo, hyper) pairs belonging to the edge set of G.", "During training, we also know the label vector $label \in \{0, 1\}^{N_t}$ for these $N_t$ pairs, based on whether they belong to the known gold taxonomy.", "For each edge within the noisy graph G, our DAG generator estimates the probability that the edge represents a valid hypernymy relationship.", "Our model estimates this probability through the use of a convolution operation, illustrated in Figure 2.", "For each edge (hypo, hyper), in the first step the term embeddings and edge features are concatenated as $v_{pair} = \mathrm{concat}(v_{hypo}, v_{hyper}, v_{feas})$, where $v_{hypo}$ and $v_{hyper}$ are the embeddings of the hypo and hyper nodes (from Section 3.2) and $v_{feas}$ denotes a feature vector for the edge (hypo, hyper), which includes edge frequency and substring features.", "The substring features include ends with, contains, prefix match, suffix match, length of the longest common substring (LCS), length difference, and a boolean feature denoting whether the LCS is in $V_{input}$ (the set of terms) or not.", "Inspired by the ConvE model (Dettmers et al., 2018), a well-known convolution-based algorithm for link prediction, we apply a 1D convolution operation to $v_{pair}$.", "We use a convolution operation since it increases the expressiveness of the DAG generator through additional interaction between the participating embeddings.", "For the convolution operation, we make use of C different kernels parameterized by $\{w_c, 1 \le c \le C\}$.", "The 1D convolution operation is then calculated as follows: $v_c = [U_c(\hat{v}_{pair}, 0), \ldots, U_c(\hat{v}_{pair}, d_v - 1)]$ (2) and $U_c(\hat{v}_{pair}, p) = \sum_{\tau=0}^{K-1} w_c(\tau)\, \hat{v}_{pair}(p + \tau)$ (3), where K denotes the kernel width, $d_v$ denotes the size of $v_{pair}$, p denotes the position at which the kernel operation starts, and the kernel parameters $w_c$ are trainable.", "In addition, $\hat{v}_{pair}$ denotes the padded version of $v_{pair}$, where the padding strategy is as follows.", "If K is odd, we pad $v_{pair}$ with $\lfloor K/2 \rfloor$ zeros on both sides.", "On the other hand, if K is even, we pad $\lfloor K/2 \rfloor - 1$ zeros at the beginning and $\lfloor K/2 \rfloor$ zeros at the end of $v_{pair}$.", "Here, $\lfloor \cdot \rfloor$ denotes the floor function.", "Each kernel $w_c$ generates a vector $v_c$ according to Equation 2.", "As there are C different kernels, this results in C different vectors, which are then concatenated together to form one vector $V_C$, i.e., $V_C = \mathrm{concat}(v_1, \ldots, v_C)$.", "The probability $p_{(hypo,hyper)}$ of a given edge (hypo, hyper) expressing a hypernymy relationship can then be estimated using the following scoring function: $p_{(hypo,hyper)} = \mathrm{sigmoid}(V_C^T W)$ (4), where W denotes the parameter matrix of a fully connected layer, as illustrated in Figure 2.", "Finally, for the loss calculation, we make use of a differentiable F1 loss (Huang et al., 2015): $\mathrm{Precision} = \frac{\sum_{t=0}^{N_t-1} p_t \cdot label_t}{\sum_{t=0}^{N_t-1} p_t}$, $\mathrm{Recall} = \frac{\sum_{t=0}^{N_t-1} p_t \cdot label_t}{\sum_{t=0}^{N_t-1} label_t}$, and $\mathcal{L}_{F1} = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}$.",
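A hedged sketch of the edge scorer and the differentiable F1 loss might look as follows. We use PyTorch's Conv1d with zero padding for an odd kernel width (K = 5 in the paper's reported configuration), and we return the negated soft F1 as the quantity to minimize — our reading of how $\mathcal{L}_{F1}$ enters the training objective; all names are illustrative.

```python
import torch
import torch.nn as nn

class EdgeScorer(nn.Module):
    """Scores a (hypo, hyper) pair via 1D convolution over the concatenated vector."""
    def __init__(self, d_v, n_kernels=10, kernel_size=5):
        super().__init__()
        # Odd kernel width assumed so padding K//2 preserves the length d_v.
        self.conv = nn.Conv1d(1, n_kernels, kernel_size, padding=kernel_size // 2)
        self.fc = nn.Linear(n_kernels * d_v, 1)  # the fully connected matrix W

    def forward(self, v_hypo, v_hyper, v_feas):
        v_pair = torch.cat([v_hypo, v_hyper, v_feas], dim=-1)  # (batch, d_v)
        v_conv = self.conv(v_pair.unsqueeze(1))                # (batch, C, d_v)
        return torch.sigmoid(self.fc(v_conv.flatten(1))).squeeze(-1)  # p in (0, 1)

def soft_f1_loss(p, label, eps=1e-8):
    """Differentiable F1 over edge probabilities; negated so that minimizing maximizes F1."""
    precision = (p * label).sum() / (p.sum() + eps)
    recall = (p * label).sum() / (label.sum() + eps)
    return -2 * precision * recall / (precision + recall + eps)
```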
(2018).", "In that paper, the authors note that the trace of B k denoted by tr ( B k ) , for a non-negative adjacency matrix B R n n counts the number of length-k cycles in a directed graph.", "Hence, positive entries within the diagonal of B k suggests the existence of cycles.", "Or, in other words, B has no cycle if and only if (cid:80) k =1 (cid:80) n i =1 ( B k ) ii = 0 .", "However, calculating B k for every value of k , i.e. repeated matrix exponentiation, is impractical and can easily exceed machine precision.", "To solve this problem, Zheng et al. (2018) makes use of Taylor Series expansion as e B = (cid:80) k =0 B k k !", "To make sure this constraint is useful for an arbitrary weighted matrix with both positive and negative values, a Hadamard product B = A A is used, which leads us to the following theorem.", "where tr represents the trace of a matrix, represents the Hadamard product and e B equals matrix exponential of B.", "Since the matrix exponential may not be available in all deep learning frameworks, (Yu et al., 2019) propose an alternative constraint that is practically convenient as follows.", "Lemma 2 (Yu et al., 2019) Let = c/m > 0 for some c.", "For any complex , since (1 + | | ) m e c | | , the DAG constraint from Theorem 1 can be relaxed and stated as follows, h ( A ) = tr (cid:2) ( I + A A ) n (cid:3) n = 0 where is a hyper-parameter.", "Finally, using an augmented Lagrangian approach, we propose the combined loss function, L = LF 1 + h ( A ) + 2 h ( A ) 2 where and are the hyper-parameters.", "During the backpropagation, the gradients will be passed back to all domains through the intra-domain and cross-domain edges from G input to update all parameters.", "We evaluate Graph2Taxo on Semeval-2016 Task 13: Taxonomy Extraction Evaluation 3 , otherwise known as TExEval-2 task (Bordea et al., 2016).", "All experiments are implemented in PyTorch.", "Code is publicly available at https://github.com/IBM/ gnn-taxo-construction .", "For experiments, we used the English environment and the science taxonomies within the TExEval-2 benchmark datasets.", "These datasets do not come with any training data, but a list of terms and the task is to build a meaningful taxonomy using these terms.", "The science domain terms come from Wordnet , Eurovoc and a manually constructed taxonomy (henceforth referred to as combined ), whereas the terms for environment domain comes from Eurovoc taxonomy only.", "Table 1 shows the dataset statistics.", "We chose to evaluate our proposed approach on environment and science taxonomies only, because we wanted to compare our approach with the existing state-of-the-art system named TaxoRL (Mao et al., 2018) as well as with TAXI , the winning system in the TExEval-2 task.", "Note that we use the same datasets with TaxoRL (Mao et al., 2018) for TExEval-2 task.", "In addition, we used the dataset from Bansal et al. (2014) as gold taxonomies (i.e. 
"We evaluate Graph2Taxo on SemEval-2016 Task 13: Taxonomy Extraction Evaluation, otherwise known as the TExEval-2 task (Bordea et al., 2016).", "All experiments are implemented in PyTorch.", "Code is publicly available at https://github.com/IBM/gnn-taxo-construction .", "For the experiments, we used the English environment and science taxonomies within the TExEval-2 benchmark datasets.", "These datasets do not come with any training data, only a list of terms, and the task is to build a meaningful taxonomy using these terms.", "The science domain terms come from WordNet, Eurovoc, and a manually constructed taxonomy (henceforth referred to as combined), whereas the terms for the environment domain come from the Eurovoc taxonomy only.", "Table 1 shows the dataset statistics.", "We chose to evaluate our proposed approach on the environment and science taxonomies only because we wanted to compare our approach with the existing state-of-the-art system TaxoRL (Mao et al., 2018) as well as with TAXI, the winning system of the TExEval-2 task.", "Note that we use the same datasets as TaxoRL (Mao et al., 2018) for the TExEval-2 task.", "In addition, we used the dataset from Bansal et al. (2014) as gold taxonomies (i.e., sources of additional knowledge), $G_{gold} = \{G_{gold,i}, 1 \le i \le N_{known}\}$, that are known a priori.", "This dataset is a set of medium-sized full-domain taxonomies consisting of bottom-out full subtrees sampled from WordNet, and it contains 761 taxonomies in total.", "To test our model for taxonomy prediction (and to remove overlap), we removed any taxonomy from $G_{gold}$ that had term overlap with the set of provided terms for the science and environment domains within the TExEval-2 task.", "After this filtering, we are left with 621 non-overlapping taxonomies in total, partitioned by an 80-20 ratio into training and validation datasets, respectively.", "We ran our experiments in two different settings.", "In each of them, we train on a different noisy input graph (and the same gold taxonomies as mentioned before) and evaluate on the science and environment domains within the TExEval-2 task.", "Table 2: Results on the TExEval-2 task: Taxonomy Extraction Evaluation. P_e, R_e, and F_e denote edge precision, recall, and F-score; '-' marks settings for which the flattened source reports no value (TaxoRL reports results only for the combined science term set and the environment domain).
Model          Science (Combined)   Science (Eurovoc)    Science (WordNet)    Science (Average)    Environment (Eurovoc)
               P_e   R_e   F_e      P_e   R_e   F_e      P_e   R_e   F_e      P_e   R_e   F_e      P_e   R_e   F_e
Baseline       0.63  0.29  0.39     0.62  0.21  0.31     0.69  0.27  0.38     0.65  0.26  0.36     0.50  0.21  0.30
JUNLP          0.14  0.31  0.19     0.13  0.36  0.19     0.21  0.31  0.25     0.16  0.33  0.21     0.13  0.23  0.17
USAAR          0.38  0.26  0.31     0.63  0.15  0.25     0.82  0.19  0.31     0.61  0.20  0.29     0.81  0.15  0.25
TAXI           0.39  0.35  0.37     0.30  0.33  0.31     0.37  0.38  0.38     0.35  0.35  0.35     0.34  0.27  0.30
TaxoRL_A       0.57  0.33  0.42     -     -     -        -     -     -        -     -     -        0.38  0.24  0.29
TaxoRL_B       0.38  0.38  0.38     -     -     -        -     -     -        -     -     -        0.32  0.32  0.32
Graph2Taxo_1   0.91  0.31  0.46     0.78  0.26  0.39     0.82  0.32  0.46     0.84  0.30  0.44     0.89  0.24  0.37
Graph2Taxo_2   0.90  0.33  0.48     0.79  0.33  0.46     0.77  0.32  0.46     0.82  0.33  0.47     0.67  0.28  0.39", "In the first setting, we used the same input as TaxoRL (Mao et al., 2018) for a fair comparison.", "This input of TaxoRL consists of term pairs and the associated dependency path information between them, extracted from three public web-based corpora.", "For Graph2Taxo, we only make use of the term pairs to create a noisy input graph.", "In the second setting, we used the data provided by TAXI (Panchenko et al., 2016), which comprises a list of candidate is-a pairs extracted based on substrings and lexico-syntactic patterns.", "We used these noisy candidate pairs to create a noisy graph.", "A Graph2Taxo model is then trained on the noisy graph obtained in each of the two settings.", "In the test phase, all candidate term pairs for which both terms are present in the test vocabulary are scored (between 0 and 1) by the trained Graph2Taxo model.", "A threshold of 0.5 is applied, and the candidate pairs scoring beyond this threshold are accumulated together as the predicted taxonomy $G_{pred}$.", "Note that different tasks have different optimal thresholds, and we would obtain better performance by tuning the threshold per task.", "However, we chose the harder setting and show that our model performs better than the others even when we simply use 0.5 as the threshold.", "In addition, we specify the hyper-parameter ranges for our experiments: learning rate $\in \{0.01, 0.005, 0.001\}$, number of kernels $\in \{5, 10, 20\}$, and number of clusters $\in \{10, 30, 50, 100\}$.", "Finally, the Adam optimizer (Kingma and Ba, 2015) is used for all experiments.", "Given a gold taxonomy $G_{gold}$ (as part of the TExEval-2 benchmark dataset) and a predicted taxonomy $G_{pred}$ (produced by our proposed Graph2Taxo approach), we evaluate $G_{pred}$ using the Edge Precision, Edge Recall, and F-score measures as defined in Bordea et al. (2016).",
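The edge-level evaluation reduces to set operations over predicted and gold is-a edges; a small self-contained sketch (function and variable names are our own):

```python
def edge_prf(pred_edges, gold_edges):
    """Edge precision/recall/F-score between a predicted and a gold taxonomy."""
    pred, gold = set(pred_edges), set(gold_edges)
    tp = len(pred & gold)                       # correctly predicted is-a edges
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

# Usage sketch with toy (hypo, hyper) edge sets.
p, r, f = edge_prf({("cat", "animal"), ("dog", "animal")},
                   {("cat", "animal"), ("dog", "animal"), ("oak", "tree")})
```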
(2016).", "We use the following hyper-parameter configura-tion for training the model.", "We set dropout to 0.3, number of kernels C to 10, kernel size K to 5, learning rate to 0.001 and initial embedding size to 300.", "For the loss function, we use the = 1 .", "0 and = 0 .", "5 .", "In addition, number of clusters n c is set to 50 for all our experiments.", "In the scenario wherein the input resource comes from TAXI, only hyponym-hypernym candidate pairs observed more than 10 times are used to create a noisy graph.", "Also, we use one pooling and one unpooling layer for our experiments.", "We use dropouts in two places, one at the end of the cross-domain encoder module, and the other after the Conv1D operation.", "Our models are trained using NVIDIA Tesla P100 GPUs.", "Table 2 shows the results on the TExEval-2 task Evaluation on science and environment domains.", "The first row represents a string-based baseline method (Bordea et al., 2016), that exploits term compositionality to hierarchically relate terms.", "For example, it extracts pairs such as ( Statistics Department , Department ) from the provided Wikipedia corpus, and utilizes aforementioned technique to construct taxonomy.", "ing systems that participated in the TExEval-2 task.", "Furthermore, TaxoRL A,B illustrates the performance of a Reinforcement Learning system by under the Partial induction and Full induction settings respectively (Mao et al., 2018).", "Since Mao et al. (2018) has shown that it outperforms other methods such as Gupta et al. (2017); Bansal et al. (2014), we only compare the results of our proposed Graph2Taxo approach against the state-of-the-art system TaxoRL.", "Finally, Graph2Taxo 1 and Graph2Taxo 2 depict the results of our proposed algorithm under both aforementioned settings, i.e. 
"In each of these settings, we find that the overall precision of our proposed Graph2Taxo approach is far better than that of all the other existing approaches, demonstrating the strong ability of Graph2Taxo to find true relations.", "Meanwhile, the recall of our proposed Graph2Taxo approach is comparable to that of the existing state-of-the-art approaches.", "Combining the precision and recall metrics, we observe that Graph2Taxo outperforms the existing state-of-the-art approaches on F-score by a significant margin.", "For example, for the Science (Average) domain, Graph2Taxo_2 improves over TaxoRL's F-score by 5%.", "For the Environment (Eurovoc) domain, our model improves over TaxoRL's F-score by 7% on the TExEval-2 task.", "Besides, our proposed model has high scalability.", "For example, GNN methods have been trained on large graphs containing about 1 million nodes (Kipf and Welling, 2017).", "Moreover, the GNN part can be replaced by any improved GNN method (Hamilton et al., 2017; Gao et al., 2018) designed for large-scale graphs.", "Ablation Tests.", "Table 3 shows the results of the proposed Graph2Taxo in the second setting for the ablation experiments (divided into four blocks), which indicate the contribution of each layer used in our Graph2Taxo model.", "In Table 3, all the experiments are run three times, and the average values of the three runs are reported.", "Furthermore, in Figure 3, we randomly choose the Science (Eurovoc) domain as the one for which we report the error bars (corresponding to the standard deviation values) of our experiments.", "The first block of values in Table 3 illustrates the results of ablating layers from within our Graph2Taxo model.", "Comparing the first two rows, it is evident that adding a Semantic Clustering (SC) layer improves recall at the cost of precision, while improving the overall F-score.", "This improvement is clearly seen for the Science (Eurovoc) domain, where we observe an increase of 3%.", "In the second block, we show that the addition of constraints improves performance.", "Row 4 represents a Graph2Taxo setup, i.e., 2GNN+SC+Res, but without any constraint.", "Adding the DAG constraint (Row 1) to this setup yields a better F-score.", "Specifically, we observe a major increase of +5% F1 for the Science (Eurovoc) domain.", "In the third block, we remove the features $v_{feas}$ mentioned in Section 3.3.1.", "The results, i.e., row 5 of Table 3, show that these features are critical in improving the performance of our proposed system on both the Science (Eurovoc) and Environment (Eurovoc) domains.",
"Note that these features, denoted $v_{feas}$, are not a novelty of our proposed method; rather, they have been used by existing state-of-the-art approaches.", "Finally, we study the effect of initializing our model with pre-trained embeddings rather than random initialization.", "Specifically, we initialize the input matrix $H^0$ of our Graph2Taxo model with pre-trained fastText embeddings.", "Our model using fastText embeddings improves upon Row 1 by a margin of 4% in precision for the Environment (Eurovoc) domain, but unfortunately has no significant effect on the F-score.", "Hence, we did not use pre-trained embeddings when reporting the results in Table 2.", "We provide an illustration of the output of the Graph2Taxo model in Figure 4 for the environment domain.", "The generated taxonomy in this example contains multiple trees, which serve the purpose of generating taxonomical classifications.", "As future work, we plan to explore different strategies for connecting the subtrees into a larger graph for better DAG generation.", "We have introduced a GNN-based cross-domain knowledge transfer framework, Graph2Taxo, which makes use of a cross-domain graph structure in conjunction with acyclicity-constraint-based DAG learning for taxonomy construction.", "Furthermore, our proposed model encodes acyclicity as a soft constraint, and we show that the overall model outperforms the state of the art.", "In the future, we would like to explore strategies for merging the individual gains, obtained by separate applications of the DAG constraint, into a setup that can take the best of both the precision and recall improvements, and put forth a better-performing system.", "We also plan to look into strategies for improving the recall of the constructed taxonomy.", "The authors would like to thank Dr. Jie Chen from the MIT-IBM Watson AI Lab and Prof. Jinbo Bi from the University of Connecticut for in-depth discussions on model construction." ]
[ "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "objective", "result", "result", "other" ]
[ "In this paper, we present a novel integrated approach for keyphrase generation (KG).", "Unlike previous works which are purely extractive or generative, we first propose a new multitask learning framework that jointly learns an extractive model and a generative model.", "Besides extracting keyphrases, the output of the extractive model is also employed to rectify the copy probability distribution of the generative model, such that the generative model can better identify important contents from the given document.", "Moreover, we retrieve similar documents with the given document from training data and use their associated keyphrases as external knowledge for the generative model to produce more accurate keyphrases.", "For further exploiting the power of extraction and retrieval, we propose a neural-based merging module to combine and re-rank the predicted keyphrases from the enhanced generative model, the extractive model, and the retrieved keyphrases.", "Experiments on the five KG benchmarks demonstrate that our integrated approach outperforms the state-of-the-art methods.", "Keyphrases are short text pieces that can quickly express the key ideas of a given document.", "The keyphrase generation task aims at automatically generating a set of keyphrases given a document.", "As shown in the upper part of Figure 1, the input is a document and the output is a set of keyphrases.", "Due to the concise and precise expression, keyphrases are beneficial to extensive downstream applications such as text summarization (Zhang et al., 2004; Wang and Cardie, 2013), sentiment analysis (Wilson et al., 2005; Berend, 2011), and document clustering (Hulth and Megyesi, 2006; Hammouda et al., 2005).", "Document: Futility-Based Offspring Sizing.", "Parameter control in evolutionary algorithms (EAs) has been shown to be beneficial; however, the control of offspring size has so far received very little attention.", "This paper introduces Futility-Based Offspring Sizing (FuBOS), a method for controlling offspring size on a per generation basis without even requiring the user to set an initial offspring size value.", ".", ". Keyphrases: { evolutionary algorithm ; parameterless evolutionary algorithm; parameter control ; offspring sizing ; optimization} Retrieved Document: An Exploration into Dynamic Population Sizing .", "Traditional evolutionary algorithms are powerful problem solvers that have several fixed parameters which require prior specification.", ".", ". While many methods of parameter control have been published that focus on removing the population size parameter, , all hampered by a variety of problems.", "This paper investigates the benefits of making a dynamic parameter and introduces two novel methods for population control.", ".", ". Retrieved Keyphrases: { evolutionary algorithm ; parameterless evolutionary algorithm; parameter control ; population sizing ; optimization} Figure 1: An example of keyphrase generation and retrieval.", "can be divided into two categories: extractive and generative .", "Extractive methods (Medelyan et al., 2009; Mihalcea and Tarau, 2004; Zhang et al., 2016; Luan et al., 2017) identify present keyphrases that appear in the source text like parameter control in Figure 1. Although extractive methods are simple to implement, they cannot predict absent keyphrases which are not in the document like optimization in Figure 1. 
"Generative methods (Meng et al., 2017; Chen et al., 2018a; Ye and Wang, 2018; Yuan et al., 2018) adopt the well-known encoder-decoder generative model (Luong et al., 2015; Bahdanau et al., 2014) with a copy mechanism (Gu et al., 2016; See et al., 2017) to produce keyphrases.", "In a generative model, the decoder generates keyphrases word by word, either by selecting from a predefined vocabulary according to a language model or by copying from the source text according to the copy probability distribution computed by the copy mechanism.", "Thus, these generative methods are capable of generating both present and absent keyphrases.", "From a high-level perspective, extractive methods directly locate essential phrases in the document, while generative models try to understand the document first and then produce keyphrases.", "To the best of our knowledge, these two kinds of methods have been developed independently, without any combination between them.", "However, when human annotators are asked to assign keyphrases to a document, they usually first obtain a global sense of which parts of the document are important and then write down the keyphrases word by word based on a more detailed understanding.", "To achieve such a goal, we propose a multi-task learning framework that takes advantage of both extractive and generative models.", "For keyphrase extraction, we adopt a neural sequence labeling model to output the likelihood of each word in the source text being a keyphrase word (i.e., the importance score of each word).", "These importance scores are then employed to rectify the copy probability distribution of the generative model.", "Since the extractive model is explicitly trained to identify keyphrases in the source text, its importance scores can help the copy mechanism identify important source text words more accurately.", "Different from the copy probability distribution, which is dynamic at each generation step, these importance scores are static.", "Therefore, they can provide a global sense of which parts of the document are important.", "In addition, these scores are also utilized to extract present keyphrases, which are then exploited by the merging module.", "Moreover, human annotators can also incorporate relevant external knowledge, such as the keyphrases of similar documents they have read before, to assign more appropriate keyphrases.", "Correspondingly, to incorporate external knowledge, we propose a retriever that retrieves documents similar to the given document from the training data.", "For instance, as shown in Figure 1, we retrieve the document from the KP20k training dataset that has the highest similarity with the upper document.", "The retrieved document is assigned almost the same keyphrases as the upper document.", "Therefore, keyphrases from similar documents (i.e., retrieved keyphrases) can provide useful knowledge to guide the generation of keyphrases for the given document.", "More concretely, we encode the retrieved keyphrases as vector representations and use them as an external memory for the decoder of the generative model in our multi-task learning framework.", "Besides providing external knowledge, the retrieved keyphrases themselves are regarded as a kind of keyphrase prediction and can be utilized by the merging module.", "Finally, to imitate the integrated keyphrase assignment process of humans more comprehensively, we further exploit the extractive model and the retrieved keyphrases by proposing a merging module.", "This merging module collects and re-ranks the predictions from the aforementioned components.",
"First, keyphrase candidates are collected from three different sources: (1) keyphrases generated by the enhanced generative model; (2) keyphrases extracted by the extractive model; and (3) the retrieved keyphrases.", "Then, we design a neural-based merging algorithm to merge and re-rank all the keyphrase candidates, and consequently return the top-ranked candidates as our final keyphrases.", "We extensively evaluate the performance of our proposed approach on five popular benchmarks.", "Experimental results demonstrate the effectiveness of the extractive model and the retrieved keyphrases in our multi-task learning framework.", "Furthermore, after introducing the merging module, our integrated approach consistently outperforms all the baselines and becomes the new state-of-the-art approach for keyphrase generation.", "In summary, our main contributions include: (1) a new multi-task learning framework that leverages an extractive model and external knowledge to improve keyphrase generation; (2) a novel neural-based merging module that combines the predicted keyphrases from extractive, generative, and retrieval methods to further improve the performance; and (3) new state-of-the-art performance on five real-world benchmarks.", "Keyphrase extraction focuses on predicting the keyphrases that are present in the source text.", "Existing methods can mainly be categorized into two-step extraction approaches and sequence labeling models.", "Two-step extraction approaches first identify a set of candidate phrases from the document using different heuristics, such as selecting phrases that match specific part-of-speech (POS) tag patterns (Liu et al., 2011; Wang et al., 2016; Le et al., 2016).", "Then, they learn a score for each candidate and select the top-ranked candidates as the predicted keyphrases.", "The scores can be learned either by supervised methods with hand-crafted textual features (Medelyan et al., 2009; Witten et al., 1999; Nguyen and Kan, 2007; Frank et al., 1999; Hulth, 2003) or by unsupervised graph ranking methods (Mihalcea and Tarau, 2004; Grineva et al., 2009; Wan and Xiao, 2008).", "Sequence labeling models are built on a recurrent neural network that sequentially goes through a source text and learns the likelihood of each word in the source text being a keyphrase word (Zhang et al., 2016; Luan et al., 2017; Gollapalli et al., 2017).", "In contrast to these extractive methods, our approach can generate both absent and present keyphrases.", "Keyphrase generation aims at predicting both present and absent keyphrases for a source text.", "Meng et al. (2017) proposed CopyRNN, which is built on the attentional encoder-decoder model (Bahdanau et al., 2014) with a copy mechanism (Gu et al., 2016) to generate keyphrases.", "CorrRNN (Chen et al., 2018a), an extension of CopyRNN, was proposed to model the correlations among keyphrases.", "This model utilizes the hidden states and attention vectors of previously generated keyphrases to avoid generating repetitive keyphrases.", "The title information of the source text was explicitly exploited by Ye and Wang (2018) and Chen et al. (2018b) to further improve performance.", "Ye and Wang (2018) first considered a semi-supervised setting for keyphrase generation.", "In contrast, inspired by Hsu et al. (2018) and Cao et al. (2018), we enhance existing generative methods by adopting an extractive model to assist the copy mechanism and by exploiting external knowledge from retrieved keyphrases to help the generation.",
"Furthermore, we also design a merging module to combine the predictions from the different components.", "As shown in Figure 2, our integrated framework consists of a retriever, two encoders, an extractor, a decoder, and a merging module.", "Figure 2: Our integrated framework (the retriever, Encoder1/Encoder2, extractor, and decoder produce retrieved, extracted, and generated candidates, which the merging module turns into the final predictions).", "Given a document x, the retriever returns the keyphrases r retrieved from the training corpus.", "In addition to acting as keyphrase candidates, these retrieved keyphrases are also exploited to provide external guidance for the decoder.", "Then, keyphrase extraction and generation are jointly conducted by the extractor and the decoder through a shared encoder.", "Besides extracting keyphrase candidates, the importance scores of the source text words, $\beta$, predicted by the extractor are also employed to rescale the original copy probability distribution of the decoder.", "Thus, they can help the copy mechanism detect important words more accurately.", "Finally, the merging module merges the candidates from three different sources (i.e., the retrieved, extracted, and generated candidates) and outputs the final predictions.", "Given a document x, the retriever module retrieves the top K (document, keyphrases) pairs from the training corpus.", "The retrieval is based on the Jaccard similarities between the non-stop-word sets of x and of the corpus documents.", "After that, the keyphrases of the top K pairs are returned and used in the later modules in two ways.", "First, these retrieved keyphrases are regarded as keyphrase candidates of x and directly fed into the final merging module.", "In addition, these keyphrases are concatenated together as a guidance input r for the decoder, providing useful external knowledge for the generation process.", "A separator token is inserted between keyphrases when concatenating them.",
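The retriever can be sketched in a few lines. The ';' separator follows the implementation details reported later in the paper; the function and argument names are our own illustration.

```python
def jaccard(a, b):
    """Jaccard similarity between two sets of non-stop-words."""
    return len(a & b) / len(a | b) if a | b else 0.0

def retrieve_keyphrases(doc_words, corpus, k=3):
    """Return the concatenated keyphrases of the top-k most similar training documents.
    corpus: list of (word_set, keyphrase_list) pairs built from the training data."""
    ranked = sorted(corpus, key=lambda pair: jaccard(doc_words, pair[0]), reverse=True)
    phrases = [p for _, kps in ranked[:k] for p in kps]
    return " ; ".join(phrases)  # the paper reports ';' as the separator token
```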
"We propose a multi-task learning framework which simultaneously learns to extract keyphrases from the source text and to generate keyphrases word by word.", "The inputs of the multi-task learning framework are the source text x and the concatenated retrieved keyphrases r.", "Both x and r are sequences of tokens (i.e., $x = [x_1, ..., x_{L_x}]$ and $r = [r_1, ..., r_{L_r}]$), where $L_x$ and $L_r$ are the lengths of x and r, respectively.", "The output of the extractor is a sequence of importance scores $\beta = [\beta_1, ..., \beta_{L_x}]$, where $\beta_i$ is the probability that the i-th source word is a keyphrase word.", "The output of the generator is a set of keyphrases $Y = \{y^i\}_{i=1,...,N}$, where N is the number of keyphrases of x and $y^i = [y^i_1, ..., y^i_{L_{y^i}}]$ is a token sequence of length $L_{y^i}$.", "To fit the encoder-decoder framework, N tuples $\{(x, r, \beta^*, y^i)\}_{i=1,...,N}$ are split out during training, where $\beta^*$ and $y^i$ are the gold binary importance scores and one of the gold keyphrases of x, respectively.", "For simplicity, we use $(x, r, \beta^*, y)$ to represent such a tuple.", "Two encoders are employed in our multi-task learning framework.", "One is for encoding the source text (i.e., Encoder1 in Figure 2) and the other is for encoding the retrieved keyphrases (i.e., Encoder2 in Figure 2).", "Both of them employ a bidirectional GRU (Cho et al., 2014) layer to obtain a context-aware representation of each word: $u_i = \mathrm{BiGRU}_1(x_i, \overrightarrow{u}_{i-1}, \overleftarrow{u}_{i+1})$ (1) and $v_j = \mathrm{BiGRU}_2(r_j, \overrightarrow{v}_{j-1}, \overleftarrow{v}_{j+1})$ (2), where $i = 1, 2, ..., L_x$ and $j = 1, 2, ..., L_r$.", "$x_i$ and $r_j$ are the $d_e$-dimensional embedding vectors of the i-th source text word and the j-th retrieved-keyphrases word, respectively.", "$u_i = [\overrightarrow{u}_i; \overleftarrow{u}_i] \in \mathbb{R}^d$ and $v_j = [\overrightarrow{v}_j; \overleftarrow{v}_j] \in \mathbb{R}^d$ are regarded as the corresponding context-aware representations, where d is the hidden size of the bidirectional GRU layer.", "Finally, we obtain the internal memory bank $U = [u_1, ..., u_{L_x}]$ for later extraction and generation, and the external memory bank $V = [v_1, ..., v_{L_r}]$ for later generation.", "Based on the internal memory bank, we use the following sequence identifier as our extractor to decide whether each word in the source text is a keyphrase word.", "We denote the importance score $P(\beta_j = 1 \mid u_j, s_j, d)$ as $\beta_j$ for simplicity: $\beta_j = \mathrm{sigmoid}(W_c u_j + u_j^T W_s d - u_j^T W_n \tanh(s_j) + b)$ (3), where $d = \tanh(W_d[\overrightarrow{u}_{L_x}; \overleftarrow{u}_1] + b')$ is the global document representation and $s_j = \sum_{i=1}^{j-1} u_i \beta_i$ is the current summary representation.", "$W_c$, $W_s$, and $W_n$ are the content, salience, and novelty weights, respectively.", "Although this extractor is inspired by Nallapati et al. (2017), our extractor identifies important words instead of sentences within the source text.", "In addition to the internal memory bank $[u_1, ..., u_{L_x}]$, our decoder employs the external memory bank $[v_1, ..., v_{L_r}]$ to provide external guidance for the generation process.", "We exploit a decoder equipped with attention and copy mechanisms (Luong et al., 2015; See et al., 2017) to generate keyphrases.", "This decoder mainly consists of a forward GRU layer: $h_t = \mathrm{GRU}([e_{t-1}; \tilde{h}_{t-1}], h_{t-1})$ (4), $c_{int} = \mathrm{attn}(h_t, [u_1, ..., u_{L_x}], W_{in})$ (5), $c_{ext} = \mathrm{attn}(h_t, [v_1, ..., v_{L_r}], W_{ex})$ (6), and $\tilde{h}_t = \tanh(W_1[c_{int}; c_{ext}; h_t])$ (7), where $e_{t-1}$ is the embedding vector of the (t-1)-th predicted word.", "The attn operation in Eq. (5) is defined as $c_{int} = \sum_{i=1}^{L_x} \alpha_{int,i}\, u_i$, where $\alpha_{int,i} = \exp(s_{t,i}) / \sum_{j=1}^{L_x} \exp(s_{t,j})$ and $s_{t,i} = h_t^T W_{in} u_i$.", "Similarly, we can obtain the external aggregated vector $c_{ext}$.", "The final probability of predicting the word $y_t$ is $P(y_t) = g_t P_v(y_t) + (1 - g_t) P_c(y_t)$, where $g_t = \mathrm{sigmoid}(w_g^T \tilde{h}_t + b_g) \in \mathbb{R}$ is the soft switch between generating from the predefined vocabulary V and copying from X, the set of all words appearing in the source text.", "$P_v(y_t) = \mathrm{softmax}(W_2 \tilde{h}_t + b_v) \in \mathbb{R}^{|V|}$ is the generating probability distribution over V, and $P_c(y_t) = \sum_{i: x_i = y_t} \alpha^c_{t,i}$, with $\alpha^c_t \in \mathbb{R}^{|X|}$, is the copying probability distribution over X.", "Previous work either directly uses the internal attention scores as the copy probabilities (i.e., $\alpha^c_{t,i} = \alpha_{int,i}$) or employs extra neural network layers to calculate new copy scores.", "Instead, we employ the internal attention scores $\alpha_{int}$ rescaled by the importance scores $[\beta_1, ..., \beta_{L_x}]$ from the extractor as the final copy probabilities: $\alpha^c_{t,i} = \frac{\alpha_{int,i}\,\beta_i}{\sum_{j=1}^{L_x} \alpha_{int,j}\,\beta_j}$.", "The purpose of this rescaling is to provide extra guidance about which words within the source text are important and should thus receive more attention when copying.",
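The rescaling of the copy distribution is a one-liner in practice. A minimal PyTorch sketch (the clamp that guards against an all-zero denominator is our own addition):

```python
import torch

def rescaled_copy_probs(attn_int, beta):
    """Rescale internal attention by the extractor's importance scores.
    attn_int: (batch, L_x) attention over source words at one decoding step.
    beta:     (batch, L_x) static importance scores from the extractor."""
    scores = attn_int * beta
    return scores / scores.sum(dim=-1, keepdim=True).clamp_min(1e-12)
```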
"Extraction Loss.", "We take the source text words that appear in the assigned keyphrases as the gold important words and use a weighted cross-entropy loss for the extraction training, i.e., $\mathcal{L}_e = -\frac{1}{L_x}\sum_{j=1}^{L_x}\left[w\, \beta^*_j \log \beta_j + (1 - \beta^*_j)\log(1 - \beta_j)\right]$, where $\beta^*_j \in \{0, 1\}$ is the ground-truth label for the j-th word and w is the loss weight for the positive training samples.", "Generation Loss.", "The negative log-likelihood loss is utilized for the generation training, i.e., $\mathcal{L}_g = -\sum_{t=1}^{L_y}\log P(y_t \mid y_{<t}, x, r)$, where $y_{<t} = [y_1, ..., y_{t-1}]$ is the previously predicted word sequence, $L_y$ is the length of the target keyphrase y, and $y_t$ is the t-th target word in y.", "In this module, the retrieved, extracted, and generated keyphrases are collected and then merged to produce the final keyphrase predictions.", "Retrieved Candidate Collection.", "The retrieved keyphrases from the retriever are regarded as the retrieved candidates.", "Each retrieved candidate (rk) is assigned a retrieval score (rs), which is the Jaccard similarity between the corresponding document and x.", "Duplicates with lower retrieval scores are removed.", "Finally, we get $N_{rk}$ retrieved keyphrase candidates $rk = [rk_1, \ldots, rk_{N_{rk}}]$ and their retrieval scores $rs = [rs_1, \ldots, rs_{N_{rk}}]$.", "Extracted Candidate Collection.", "The extracted keyphrase candidates come from the extractor.", "We select the word $x_j$ as a keyword if its importance score $\beta_j$ is greater than or equal to a threshold $\epsilon$ (i.e., $\beta_j \ge \epsilon$).", "Adjacent keywords are compounded into a keyphrase candidate.", "If a keyword has no adjacent keywords, it becomes a single-word keyphrase candidate by itself.", "Each extracted keyphrase candidate (ek) is accompanied by an extraction score (es), which is the mean of the importance scores of the words within the candidate.", "Similarly, duplicates with lower extraction scores are removed.", "Consequently, we obtain $N_{ek}$ extracted keyphrase candidates $ek = [ek_1, \ldots, ek_{N_{ek}}]$ and the corresponding extraction scores $es = [es_1, \ldots, es_{N_{ek}}]$.", "Generated Candidate Collection.", "The generated keyphrase candidates come directly from the beam search process of the decoder.", "Each generated phrase is a keyphrase candidate.", "The beam search score of a generated candidate (gk) serves as its generation score (gs).", "Duplicates with lower generation scores are removed.", "We then get $N_{gk}$ generated candidates $gk = [gk_1, \ldots, gk_{N_{gk}}]$ and their generation scores $gs = [gs_1, \ldots, gs_{N_{gk}}]$.", "In addition to the original importance scores (i.e., rs, es, gs), we also employ an auxiliary scorer to assign an auxiliary importance score to each keyphrase candidate.", "Given a document-candidate pair (x, candidate), the scorer should output the probability that the candidate is one of the keyphrases of x.", "This means the scorer must determine the relationship between the given document x and the candidate, which is similar to a natural language inference (NLI) problem.", "Therefore, we adapt a widely used NLI model (Parikh et al., 2016) as our scorer.", "Different from typical natural language inference, which is a multi-class classification problem, we use a binary classification setting to train the scorer.", "Besides, we learn the word embeddings and use two bidirectional GRUs to obtain the input representations.", "The positive samples are the ground-truth keyphrases.", "The negative samples come either from the phrases in the document or from the retrieved candidates.", "Notably, the ground-truth keyphrases are filtered out when selecting negative samples.", "Consequently, a cross-entropy loss is utilized to train the scorer.", "Finally, the trained scorer is used to guide the merging process, as shown in Algorithm 1.", "The $u_{gs}/u_{rs}$ and $u_{gs}/u_{es}$ factors (where $u_{gs}$, $u_{rs}$, and $u_{es}$ denote the averages of the gs, rs, and es scores) are used to make the average of the rescaled rs and es equal to the average of gs, so that the three kinds of scores become comparable.",
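Algorithm 1 itself is not reproduced in this text, so the following merge sketch only mirrors what the surrounding description states: es and rs are rescaled by the mean-ratio factors, duplicate candidates keep their highest score, and the auxiliary scorer contributes a probability per candidate. The equal-weight averaging with the scorer and the scorer's call signature are assumptions, not the paper's exact rule (in the paper the scorer also conditions on the document x).

```python
def merge_candidates(gk, gs, ek, es, rk, rs, scorer):
    """Merge generated/extracted/retrieved candidates into one re-ranked list.
    scorer: callable mapping a candidate phrase to a probability in [0, 1]."""
    u_gs = sum(gs) / len(gs)
    scale_e = u_gs / (sum(es) / len(es)) if es else 1.0  # u_gs / u_es
    scale_r = u_gs / (sum(rs) / len(rs)) if rs else 1.0  # u_gs / u_rs
    pool = list(zip(gk, gs)) \
         + list(zip(ek, [s * scale_e for s in es])) \
         + list(zip(rk, [s * scale_r for s in rs]))
    merged = {}
    for phrase, score in pool:  # in the paper, deduplication is done on stemmed forms
        final = 0.5 * (score + scorer(phrase))  # combine with the auxiliary scorer
        merged[phrase] = max(merged.get(phrase, 0.0), final)
    return sorted(merged.items(), key=lambda kv: kv[1], reverse=True)
```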
"Similar to Meng et al. (2017), we use the KP20k dataset (Meng et al., 2017) to train our models.", "The released dataset contains 530,809 articles for training, 20,000 for validation, and another 20,000 for testing.", "However, the KP20k training dataset contains duplicates, both within itself and with the KP20k validation dataset, the KP20k testing dataset, and four other popular testing datasets (i.e., Inspec (Hulth, 2003), Krapivin (Krapivin et al., 2009), NUS (Nguyen and Kan, 2007), and SemEval (Kim et al., 2010)).", "After removing these duplicates, we retain 509,818 articles in the training dataset.", "For testing, following Meng et al. (2017), we employ five popular testing datasets from scientific publications as our testbeds for the baselines and our methods: Inspec, Krapivin, NUS, SemEval, and KP20k.", "For a comprehensive evaluation, we compare our methods with traditional extractive baselines and the state-of-the-art generative methods.", "The extractive baselines include two unsupervised methods (i.e., TF-IDF and TextRank (Mihalcea and Tarau, 2004)) and one supervised method, Maui (Medelyan et al., 2009).", "The generative baselines consist of CopyRNN (Meng et al., 2017) and CorrRNN (Chen et al., 2018a).", "We also conduct several ablation studies with the following model variants.", "KG-KE: the joint extractive and generative model, without the retrieved keyphrases and without the merging process.", "KG-KR: the encoder-decoder generative model with retrieved keyphrases as external knowledge, but without combining with the extractive model and without the merging process.", "KG-KE-KR: the joint extraction and generation model with the retrieved keyphrases, but without the merging process.", "All the above ablation models directly use the generated candidates as the final predictions.", "We denote our final integrated method, which combines all the proposed modules, as KG-KE-KR-M.", "Similar to CopyRNN and CorrRNN, we adopt macro-averaged recall (R) and F-measure (F1) as our evaluation metrics.", "In addition, we apply the Porter stemmer before determining whether two keyphrases are identical.", "Duplicates are removed after stemming.", "We apply preprocessing procedures similar to Meng et al. (2017), including lowercasing, tokenizing, and replacing digits with a ⟨digit⟩ symbol.",
"The title and the abstract of each article are concatenated as the source text input.", "We use the KP20k training dataset as the retrieval corpus.", "The implementations of our models are based on the OpenNMT system (Klein et al., 2017).", "The encoders, the decoder, and the scorer share the same vocabulary V of 50,000 tokens.", "The multi-task learning model and the scorer are trained separately.", "The embedding dimension $d_e$ and the hidden size d are set to 100 and 300, respectively.", "The initial state of the decoder GRU cell (i.e., $h_0$) is set to $[\overrightarrow{u}_{L_x}; \overleftarrow{u}_1]$.", "The other GRU cells are initialized to zero.", "The retrieval number K is set to 3 after evaluating the retrieved keyphrases on the validation dataset.", "When concatenating the retrieved keyphrases together as the external knowledge input, we use ';' as the separator between them.", "During training, all the trainable parameters, including the embeddings, are randomly initialized with a uniform distribution on [-0.1, 0.1].", "We use Adam (Kingma and Ba, 2014) as the optimizer, with positive extraction loss weight w = 9.0, batch size 64, dropout rate 0.1, max gradient norm 1.0, and initial learning rate 0.001.", "Training is stopped early when the validation perplexity stops dropping for several consecutive evaluations.", "During testing, the beam search depth and beam size are set to 6 and 200, respectively.", "Table 1: Total keyphrase prediction results on all testing datasets (the '±' terms expand the trailing uncertainty digits attached to the neural models' averaged scores in the source, which most plausibly denote standard deviations over the three runs).
Model         Inspec                  Krapivin                NUS                     SemEval                 KP20k
              F1@5       F1@10        F1@5       F1@10        F1@5       F1@10        F1@5       F1@10        F1@5       F1@10
TF-IDF        0.188      0.269        0.092      0.120        0.103      0.142        0.076      0.135        0.087      0.113
TextRank      0.194      0.244        0.142      0.128        0.147      0.153        0.107      0.130        0.151      0.132
Maui          0.037      0.032        0.196      0.181        0.205      0.234        0.032      0.036        0.223      0.204
CorrRNN*      0.229±.007  0.248±.009  0.255±.002  0.238±.004  0.273±.005  0.265±.004  0.197±.003  0.221±.005  0.291±.002  0.264±.002
CopyRNN*      0.251±.007  0.279±.003  0.268±.004  0.243±.001  0.275±.002  0.268±.002  0.190±.006  0.214±.005  0.306±.001  0.273±.000
KG-KE         0.254±.004  0.281±.002  0.265±.003  0.240±.001  0.278±.004  0.273±.001  0.207±.004  0.227±.007  0.307±.000  0.274±.000
KG-KR         0.244±.002  0.275±.001  0.266±.005  0.247±.001  0.278±.002  0.276±.002  0.189±.007  0.215±.007  0.311±.001  0.278±.000
KG-KE-KR      0.245±.001  0.278±.004  0.267±.003  0.246±.002  0.285±.009  0.279±.004  0.194±.004  0.220±.002  0.314±.000  0.280±.000
KG-KE-KR-M    0.257±.002  0.284±.003  0.272±.003  0.250±.002  0.289±.004  0.286±.004  0.202±.006  0.223±.003  0.317±.000  0.282±.000", "The extraction threshold $\epsilon$ is set to 0.7 after evaluating the extracted keyphrases on the validation dataset.", "Notably, the stemmer is not applied to the gold keyphrases of the SemEval testing dataset, since they have already been stemmed.", "We do not remove any single-word predictions for KP20k, but keep only one single-word prediction for the other testing datasets.", "The averaged results over three different random seeds are reported.", "Our code is available at https://github.com/Chen-Wang-CUHK/KG-KE-KR-M .", "Figure 3: The present and absent keyphrase prediction performance of all neural-based methods ((a) present F1@5 and (b) absent R@10 on Inspec, Krapivin, NUS, SemEval, and KP20k).", "Unlike previous works, which only separately analyze the present and absent keyphrase prediction ability, we also compare the whole keyphrase prediction ability regardless of the presence or absence of keyphrases, which is more reasonable in real applications.", "We show the F1 scores of the top 5 and top 10 predictions in Table 1.", "This table shows that our KG-KE-KR-M method consistently outperforms the state-of-the-art models CopyRNN and CorrRNN, demonstrating the effectiveness of our method.",
This table displays our KG-KE-KR-M method consistently outperforms the state-of-the-art mod-1 Our code is available at https://github.com/Chen-Wang-CUHK/KG-KE-KR-M Inspec Krapivin NUS SemEval KP20k 0.300 0.325 0.350 0.375 0.400 ( a ) P r e s en t F 1 @ 5 M ea s u r e CorrRNNCopyRNNKG-KEKG-KRKG-KE-KRKG-KE-KR-M Inspec Krapivin NUS SemEval KP20k 0.000 0.025 0.050 0.075 0.100 0.125 ( b ) A b s en t R @ 10 M ea s u r e Figure 3: The present and absent keyphrase prediction performance of all neural-based methods.", "els CopyRNN and CorrRNN demonstrating the effectiveness of our method.", "Moreover, we also observe that the KG-KE model exceeds CopyRNN and CorrRNN on most datasets, which indicates the strength of our combination with the extractive model.", "Besides, we also see the KG-KR model perform comparably or better than the baselines, suggesting the effective guidance ability of the retrieved keyphrases.", "In addition, after combining these two ideas, the KG-KE-KR model surpasses both or one of KG-KE and KG-KR on all datasets, which shows the effectiveness of the combination with extraction model and the retrieved keyphrases again.", "Finally, the performance gap between KG-KE-KR and KG-KE-KR-M implies the power of our merging module.", "For mean average precision (MAP) metric which considers prediction orders, we obtain similar conclusions as shown in Table 2. 5.2 Present and Absent Keyphrase Prediction In this section, we analyze the performance of present and absent keyphrase prediction.", "Only Candidate Sources Total F 1 @10 Present F 1 @5 Absent R @10 gk , ek , rk 0.250 0.002 0.330 0.002 0.172 0.002 gk , ek 0.249 0.003 0.328 0.003 0.154 0.002 gk , rk 0.249 0.002 0.329 0.002 0.172 0.002 gk 0.248 0.003 0.327 0.003 0.154 0.002 gk , no merging 0.246 0.002 0.324 0.002 0.158 0.002 ek , no merging 0.152 0.005 0.226 0.010 N/A rk , no merging 0.093 0.000 0.121 0.000 0.107 0.000 Table 3: Ablation study of the candidate sources of Algorithm 1 on Krapivin dataset.", "the present (absent) predictions and gold present (absent) keyphrases are preserved for the corresponding evaluation.", "We use F 1 @5 metric for present predictions and R@10 for absent predictions.", "Since the neural-based baselines are the state-of-the-art models, we focus on the comparison with them in this section.", "The results are depicted on Figure 3. 
The main observations are similar to the conclusions of total keyphrase prediction.", "Besides, we also note that after incorporating retrieved keyphrases, KG-KR model achieves substantial improvement gains over baselines on absent keyphrase prediction on Krapivin , NUS , and KP20k .", "These results demonstrate that the retrieved keyphrases indeed help the model to understand the main topics of the given document since generating absent keyphrase is an abstractive process and requires more powerful text understanding abilities.", "We notice that the KG-KE-KR-M method does not outperform the KG-KE-KR model on absent keyphrase prediction on Inspec dataset.", "One potential reason is that the merging module only merges two sources for absent keyphrases (i.e., the generated and retrieved keyphrases) instead of three sources like the present keyphrases do.", "Hence, the improvement for the absent keyphrases from the merging module is less stable than that for the present keyphrases.", "Moreover, we find that after combining with the extraction model, the KG-KE model achieves a huge improvement gain over CopyRNN on present keyphrase prediction on SemEval dataset, which manifests such a combination can improve the keyphrase extraction ability of the generative model.", "We also conduct in-depth ablation studies on our merging module.", "The objectives of these ablation studies are to (1) evaluate the effects of different Scoring Method Total F 1 @10 Present F 1 @5 Absent R @10 Combined 0.250 0.002 0.330 0.002 0.172 0.002 Only gs , es , rs 0.248 0.003 0.325 0.003 0.166 0.003 Only scorer 0.210 0.005 0.291 0.006 0.106 0.005 Table 4: Ablation study of the scoring method of Algorithm 1 on Krapivin dataset.", "candidate sources (i.e., what kinds of candidates are merged), and (2) analyze the effects of different final importance score calculating methods.", "Concerning candidate sources, we show the ablation study results on Table 3. When comparing gk with gk , no merging, we can see that the merging algorithm improves the performance of total and present keyphrase predictions, but it degrades the performance of absent keyphrase prediction.", "These results indicate the trained scorer performs better on scoring present keyphrases than scoring absent keyphrases.", "One possible reason is that scoring absent keyphrases requires a stronger text understanding ability than scoring present keyphrases.", "However, as shown in the row of gk , rk on Table3, this problem can be solved by incorporating the retrieved keyphrases which provide external information to this module.", "Besides absent keyphrase prediction, it is observed that the retrieved keyphrases can also ben-efit the present keyphrase prediction.", "For the extracted keyphrases, as shown in the gk , ek row, they only improve the present keyphrase prediction ability and do not affect absent keyphrases as we anticipated.", "Regarding the scoring method, we further explore the effects of not using or only using the scorer in Algorithm 1. We show the results on Table 4. 
From this table, we note that after removing the scorer (i.e., Only gs, es, rs), both present and absent keyphrase prediction performance become worse, which demonstrates the effectiveness of the combination with the scorer.", "Moreover, if we totally ignore the previously obtained retrieval, extraction, and generation scores, and only use the scorer to predict the final keyphrase importance score (i.e., Only scorer), we find the performance decreases dramatically, which indicates the indispensability of the previously obtained retrieval, extraction, and generation scores.", "Approximating minimum power covers of intersecting families and directed edge connectivity problems.", "Given a (directed) graph with costs on the edges, the power of a node is the maximum cost of an edge leaving it, and the power of the graph is the sum of the powers of its nodes ... We consider problems that seek to find a min-power spanning subgraph G of g that satisfies a prescribed edge connectivity property ... We give approximation algorithms with ratio O(k ln |V|).", "Our algorithms are based on a more general O(ln |V|) approximation algorithm for the problem of finding a min-power directed edge cover of an intersecting set family ...", "(a) Present Keyphrases {approximation algorithms; edge connectivity; intersecting families}. CopyRNN: 1. approximation algorithms, 2. edge connectivity, 3. algorithms, 4. set cover, 5. connectivity, ... Retrieved: 1. power, 2. graphs, 3. approximation, 4. edge connectivity, 5. approximation algorithms, ... KG-KE-KR: 1. approximation algorithms, 2. edge connectivity, 3. power, 4. set cover, 5. minimum power, ... 7. intersecting families, ... KG-KE-KR-M: 1. approximation algorithms, 2. edge connectivity, 3. minimum power, ... 6. intersecting families, ...", "(b) Absent Keyphrases {wireless networks; power minimization; directed graphs}. CopyRNN: 1. graph algorithms, 2. combinatorial problems, 3. computational complexity, 4. directed graphs, 5. randomized algorithms, ... Retrieved: 1. wireless, 2. degree, 3. k connectivity, 4. tree augmentation, ... 7. power assignment, 8. wireless networks. KG-KE-KR: 1. graph algorithms, 2. directed graphs, 3. graph theory, 4. randomized algorithms, 5. spanning tree, 6. wireless networks, ... KG-KE-KR-M: 1. graph algorithms, 2. directed graphs, 3. power assignment, 4. graph theory, 5. wireless networks, ... Figure 4: A keyphrase prediction example of CopyRNN, KG-KE-KR, and KG-KE-KR-M.", "To illustrate the advantages of our proposed methods, we show an example of the present and absent keyphrase predictions in Figure 4.
For fairness, we only compare with CopyRNN since our models are based on its implementation.", "From the results of the present keyphrase prediction, we find that the extractor of the KG-KE-KR model successfully extracts all the present keyphrases from the source text, which shows the power of the extractor.", "With the help of the copy probability rescaling from the extractor, the KG-KE-KR model correctly predicts the keyphrase intersecting families, which is neither successfully predicted by CopyRNN nor retrieved by the retriever.", "Moreover, by merging the extracted keyphrases into the final predictions, the KG-KE-KR-M model assigns a higher rank to this keyphrase (i.e., from 7 to 6).", "As for absent keyphrase prediction, we note that KG-KE-KR successfully predicts the keyphrase wireless networks while CopyRNN fails.", "Since the retriever successfully retrieves this absent keyphrase, this shows that the retrieved keyphrases can provide effective external guidance for the generation process.", "Furthermore, the KG-KE-KR-M method assigns a higher rank to this keyphrase after merging the retrieved keyphrases into the final predictions (i.e., from 6 to 5).", "The overall results demonstrate the effectiveness of our proposed methods.", "In this paper, we propose a novel integrated approach for keyphrase generation.", "First, an end-to-end multi-task learning framework is introduced, which not only combines keyphrase extraction and generation but also leverages the retrieved keyphrases from similar documents to guide the generation process.", "Furthermore, we introduce a neural-based merging algorithm to merge the candidates from three different components.", "Comprehensive empirical studies demonstrate the effectiveness of our approach.", "One interesting direction for future work is to incorporate the similar documents themselves into keyphrase generation.", "The work described in this paper was partially supported by the Research Grants Council of the Hong Kong Special Administrative Region, China (No. CUHK 14208815 of the General Research Fund) and Meitu (No. 7010445).", "We would like to thank Jiani Zhang for her comments." ]
[ "objective", "objective", "abstain", "method", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "abstain", "abstain", "method", "abstain", "objective", "abstain", "abstain", "method", "objective", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "method", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "result", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "result", "other", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "other", "other" ]
[ "We propose a simple data augmentation protocol aimed at providing a compositional inductive bias in conditional and unconditional sequence models.", "Under this protocol, synthetic training examples are constructed by taking real training examples and replacing (pos-sibly discontinuous) fragments with other fragments that appear in at least one similar environment.", "The protocol is model-agnostic and useful for a variety of tasks.", "Applied to neural sequence-to-sequence models, it reduces error rate by as much as 87% on diagnostic tasks from the SCAN dataset and 16% on a semantic parsing task.", "Applied to n-gram language models, it reduces perplexity by roughly 1% on small corpora in several languages.", "This paper proposes a rule-based data augmentation protocol for sequence modeling.", "Our approach aims to supply a simple and model-agnostic bias toward compositional reuse of previously observed sequence fragments in novel environments.", "Consider a language modeling task in which we wish to estimate a probability distribution over a family of sentences with the following finite sample as training data: (1)", "a. The cat sang.", "b. The wug sang.", "c. The cat daxed.", "(2)", "a. The wug daxed.", "b. * The sang daxed.", "This generalization amounts to an inference about syntactic categories (Clark, 2000).", "Because cat and wug are interchangeable in (1a) and (1b), they are also likely interchangeable elsewhere; cat and sang are not similarly interchangeable.", "Human learners make judgments like (2) about novel lexical items (Berko, 1958) and fragments of novel languages (Lake et al., 2019).", "But we do not expect such judgments from unstructured generative models trained to maximize the likelihood of the training data in (1).", "A large body of work in natural language processing provides generalization to data like (2a) by adding structure to the learned predictor (Chelba and Jelinek, 1998; Chiang, 2005; Dyer et al., 2016).", "On real-world datasets, however, such models are typically worse than black-box function approximators like neural networks, even for black-box models that fail to place probability mass on either example in (2) given small training sets like (1) (Melis et al., 2018).", "To the extent that we believe (2a) to capture an important inductive bias, we would like to find a way of softly encouraging it without tampering with the structure of predictors that work well at scale.", "In this paper, we introduce a procedure for generating synthetic training examples by recombining real ones, such that (2a) is assigned non-negligible probability because it already appears in the training dataset .", "The basic operation underlying our proposal (which we call GECA , for good-enough compositional augmentation) is depicted in Figure 1: if two (possibly discontinuous) fragments of training examples appear in some common environment, then any additional environment where the first fragment appears is also a valid environment for the second.", "GECA is crude: as a linguistic principle, it is both limited and imprecise.", "As discussed in Sections 3 and 4, it captures a narrow slice of the many phenomena studied under the heading of composi-tionality, while also making a number of incorrect predictions about real language data.", "Nevertheless, GECA appears to be quite effective across a range of learning problems.", "In semantic parsing, it gives improvements comparable to the task-specific data augmentation approach of Jia and Liang (2016) on logical expressions, 
better performance than that approach on a different split of the data designed to test generalization more rigorously, and corresponding improvements on a version of the dataset with a different meaning representation language.", "Outside of semantic parsing, it solves two representative problems from the SCAN dataset of Lake and Baroni (2018) that are synthetic but precise in the notion of compositionality they test.", "Finally, it helps with some (unconditional) low-resource language modeling problems in a typologically diverse set of six languages.", "Recent years have seen tremendous success at natural language transduction and generation tasks using complex function approximators, especially recurrent (Sutskever et al., 2014) and attentional (Vaswani et al., 2017) neural models.", "With enough training data, these models are often more accurate than approaches built on traditional tools like regular transducers and context-free grammars (Knight and Graehl, 2005), which are brittle and difficult to efficiently infer from large datasets.", "However, models equipped with an explicit symbolic generative process have at least one significant advantage over the aforementioned black-box approaches: given a grammar, it is straightforward to precisely characterize how that grammar will extrapolate beyond the examples in a given training set to out-of-distribution data.", "Indeed, it is often possible for researchers to design the form that this extrapolation will take: smoothed n-gram language models ensure that no memorization is possible beyond a certain length (Ney et al., 1994); CCG-based semantic parsers can make immediate use of entity lexicons without having ever seen the lexicon entries used in real sentences (Zettlemoyer and Collins, 2005).", "It is not the case that black-box neural models are fundamentally incapable of this kind of predictable generalization: the success of these models at capturing long-range structure in text (Radford et al., 2019) and controlled algorithmic data (Graves et al., 2014) indicates that some representation of hierarchical structure can be learned given enough data.", "But the precise point at which this transition occurs is not well characterized, and it is evidently beyond the scale available in many real-world problems.", "How can we improve the behavior of high-quality black-box models in these settings?", "There are many sophisticated tools available for improving the function approximators or loss functions themselves: structured regularization of parameters (Oh et al., 2017), posterior regularization (Ganchev et al., 2010; Hu et al., 2018), explicit stacks (Grefenstette et al., 2015) and composition operators (Bowman et al., 2016; Russin et al., 2019).", "These existing proposals tend to be task- and architecture-specific.", "But to the extent that the generalization problem can be addressed by increasing the scale of the training data, it is natural to ask whether we can address the problem by increasing this scale artificially, in other words, via data augmentation.", "Data augmentation techniques, which generate auxiliary training data by performing structured transformation or combination of training examples, are widely used in computer vision (Krizhevsky et al., 2012; Zhang et al., 2017; Summers and Dinneen, 2019).", "Within NLP, several data augmentation approaches have been proposed for text classification (e.g.
Ratner et al., 2017; Wei and Zhou, 2019); these approaches give improvements even when combined with large-scale pretraining (Hu et al., 2019).", "Jia and Liang (2016) study data augmentation and compositionality in the specific setting of learning language-to-logical-form mappings, beginning from the principle that data is compositional if it is generated by an explicit grammar that relates strings to logical forms.", "This view of compositionality as determined by synchronicity between form and meaning is essentially Montagovian and well-suited to problems in formal semantics (Montague, 1973); however, it requires access to structured meaning representations with explicit types and bracketings, which are not available in most NLP applications.", "Here we aim at a notion of compositionality that is simpler and more general: a bias toward identifying recurring fragments seen at training time, and re-using them in environments distinct from those in which they were first observed.", "This view makes no assumptions about the availability of brackets and types, and is synchronous only to the extent that the notion of a fragment is permitted to include content from both the source and target sides.", "We will find that it is nearly as effective as existing approaches in the specific settings for which they were designed, but also effective on a variety of problems where they cannot be applied.", "Consider again the example in Figure 1.", "Our data augmentation protocol aims to discover substitutable sentence fragments (underlined), with the fact that a pair of fragments appear in some common sub-sentential environment (highlighted) taken as evidence that the fragments belong to a common category.", "To generate new examples for the model, an occurrence of one fragment is removed from a sentence to produce a sentence template, which is then populated with the other fragment.", "Why should we expect this procedure to produce well-formed training examples?", "The existence of syntactic categories, and the expressibility of well-formedness rules in terms of these abstract categories, is one of the foundational principles of generative approaches to syntax (Chomsky, 1965).", "The observation that context provides a strong signal about a sentence fragment's category is in turn the foundation of distributional techniques for the study of language (Firth, 1957).", "Combining the two gives the outlines of the above procedure.", "This combination has a productive history in natural language processing: when fragments are single words, it yields class-based language models (Brown et al., 1992); when fragments are contiguous spans it yields unsupervised parsers (Clark, 2000; Klein and Manning, 2002).", "The present data augmentation scenario is distinguished mainly by the fact that we are unconcerned with producing a complete generative model of data, or with recovering the latent structure implied by the presence of nested syntactic categories.", "We can still synthesize high-precision examples of well-formed sequences by identifying individual substitutions that are likely to be correct without understanding how they fit into the grammar as a whole.", "Indeed, if we are not concerned with recovering linguistically plausible analyses, we need not limit ourselves to words or contiguous sentence fragments.", "(3)", "a. She picks the wug up.", "b. She puts the wug down.", "We can take these as evidence that we can use picks ... up wherever we can use puts ... down.",
"Indeed, given a translation dataset: (4)", "a. I sing. ▷ Canto.", "b. I sing marvelously. ▷ Canto maravillosamente.", "c. I dax marvelously. ▷ Dajo maravillosamente.", "we can apply the same principle to synthesize I dax. ▷ Dajo.", "based on the common environment ... marvelously ▷ ... maravillosamente.", "From the perspective of a generalized substitution principle, the alignment problem in machine translation is the same as the class induction problem in language modeling, but with sequences featuring large numbers of gappy fragments and a boundary symbol ▷.", "The only remaining question is what makes two environments similar enough to infer the existence of a common category.", "There is, again, a large literature on this question (including the aforementioned work in language modeling, unsupervised parsing, and alignment), but in the current work we will make use of a very simple criterion: fragments are interchangeable if they occur in at least one lexical environment that is exactly the same.", "Given a window size $k$ and a sequence of $n$ tokens $w = w_1 w_2 \cdots w_n$, define a fragment as a set of non-overlapping spans of $w$, a template as a version of $w$ with a fragment removed, and an environment as a template restricted to a $k$-word window around each removed fragment.", "Formally (letting $[i, j]$ denote $\{i, i+1, \ldots, j\}$) we have: $\mathrm{fragments}(w) = \{\{w_{a_1..b_1}, w_{a_2..b_2}, \ldots\} : 1 \le a_i < b_i \le n$, all $[a_i, b_i]$ disjoint$\}$ (1); $\mathrm{tpl}(w, f) = (w_j : \forall\, w_{a_i..b_i} \in f,\ j \notin [a_i, b_i])$ (2); $\mathrm{env}(w, f) = \{w_j : w_j \in \mathrm{tpl}(w, f)$ and $\exists\, w_{a_i..b_i} \in f,\ j \in [a_i - k, b_i + k]\}$ (3). In Figure 1(a), the underlined picks ... up is one possible fragment that could be extracted from the sentence.", "The corresponding template is She ... the wug ... in Fresno, and with $k = 1$ the environment is She ... the wug ... in.", "As shown in Figure 1(d), any fragment may be substituted into any template with the same number of holes.", "Denote this substitution operation by $t/f$.", "The data augmentation operation that defines GECA is formally stated as follows: if the training data contains sequences $w = t_1/f_1$, $x = t'_1/f_1$, and $y = t_2/f_2$, with $\mathrm{env}(w, t_1) = \mathrm{env}(y, t_2)$ and $t'_1 \neq t_1$, then synthesize a new training example $z = t'_1/f_2$.",
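These definitions translate almost directly into code. Below is a minimal Python sketch, assuming sentences are represented as token lists and restricting fragments to a single contiguous span (Eq. 1 also allows discontinuous, multi-span fragments); the names GAP, single_span_fragments, template, and environment are our own.

```python
GAP = "___"  # placeholder marking a removed span in a template

def single_span_fragments(tokens, max_len=4):
    """Enumerate single-span fragments of Eq. 1 as (start, end) index pairs."""
    n = len(tokens)
    for a in range(n):
        for b in range(a + 1, min(a + max_len, n) + 1):
            yield (a, b)  # the fragment is tokens[a:b]

def template(tokens, span):
    """Eq. 2: the sentence with the fragment removed, keeping a gap marker."""
    a, b = span
    return tuple(tokens[:a]) + (GAP,) + tuple(tokens[b:])

def environment(tokens, span, k=1):
    """Eq. 3: the template restricted to a k-token window around the gap."""
    a, b = span
    return tuple(tokens[max(0, a - k):a]) + (GAP,) + tuple(tokens[b:b + k])
```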
"Linguistic notes: Despite the fact that the above operation is motivated by insights from generative syntax and distributional semantics, it should be emphasized that it is, as a statement of a general linguistic principle, obviously wrong.", "Counterexamples abound: in English, stress-derived nouns (e.g. récord from recórd) will be taken as evidence that many nouns and verbs are interchangeable; in Mandarin Chinese, kěshì and dànshì both mean but, but kěshì alone can be used in particular constructions to mean very.", "What ultimately matters is the relative frequency of such errors: if their contribution to an inaccurate model is less than the inaccuracy caused by the original shortage of training data, then GECA still helps.", "In conditional problems, like the machine translation example above, such errors may be totally harmless: if we synthesize a new (x, y) pair with x outside the support of the real training data, they may not influence the model's predictions on the true support beyond providing useful general inductive bias.", "Implementation: A naive implementation of the boxed operation takes $O(t^3 f^3)$ time (where $t$ is the number of distinct templates in the dataset and $f$ the number of distinct fragments).", "This can be improved to $O(f t^2 e)$ (where $e$ is the number of templates that map to the same environment) by building appropriate data structures (Algorithm 1).", "Space requirements might still be considerable (comparable to those used by n-gram language models), and strategies from the language modeling literature can be used to reduce memory usage (Heafield, 2011).", "This algorithm is agnostic with respect to the choice of fragmentation and environment functions; task-specific choices are described in more detail for each experiment below.",
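Using the helpers from the previous sketch, the boxed substitution rule could be realized as follows. This is again an illustrative sketch, not the paper's Algorithm 1 (which uses additional indexing to reach the $O(ft^2e)$ bound); fill and geca_synthesize are our own names.

```python
from collections import defaultdict

def fill(tpl, frag):
    """The t/f substitution: splice a fragment into a single-gap template."""
    out = []
    for tok in tpl:
        if tok == GAP:
            out.extend(frag)
        else:
            out.append(tok)
    return tuple(out)

def geca_synthesize(corpus, k=1):
    """Apply the boxed GECA rule to a corpus of token tuples."""
    env_to_frags = defaultdict(set)   # environment -> fragments observed there
    frag_to_tpls = defaultdict(set)   # fragment -> templates it is seen to fill
    for tokens in corpus:
        for span in single_span_fragments(tokens):
            frag = tuple(tokens[span[0]:span[1]])
            env_to_frags[environment(tokens, span, k)].add(frag)
            frag_to_tpls[frag].add(template(tokens, span))
    seen = set(map(tuple, corpus))
    new_examples = set()
    for frags in env_to_frags.values():
        for f1 in frags:
            for f2 in frags - {f1}:            # f1 and f2 share an environment,
                for tpl in frag_to_tpls[f1]:   # so any template admitting f1
                    cand = fill(tpl, f2)       # is assumed valid for f2
                    if cand not in seen:
                        new_examples.add(cand)
    return new_examples
```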
"We begin with a set of experiments on synthetic data designed to precisely test whether GECA provides the kind of generalization it was designed for.", "Here we use the SCAN dataset (Lake and Baroni, 2018), which consists of simple English commands paired with sequences of discrete actions (Figure 2).", "We focus specifically on the add primitive (jump) and add template (around right) conditions, which test whether the agent can be exposed to individual commands or modifiers (e.g. jump ▷ JUMP) in isolation at training time, and incorporate them into more complex commands like the earlier example at test time.", "We extract fragments with one gap and a maximum length of 4 tokens.", "The environment is taken to be the complete template.", "Generated examples are appended to the original dataset.", "As an example of the effect of this augmentation procedure, the original jump split has 12,620 training examples; GECA generates an additional 395 using 395 distinct templates and 6 distinct fragments. [Algorithm 1: Sample GECA implementation.]", "With the original and augmented datasets, we train a one-layer LSTM encoder-decoder model with an embedding size of 64, a hidden size of 512, a bidirectional encoder and an attentional decoder (Hochreiter and Schmidhuber, 1997; Bahdanau et al., 2015).", "The model is trained using ADAM (Kingma and Ba, 2014) with a step size of 0.001 and a dropout rate of 0.5.", "Results are shown in Table 1.", "In line with the original experiments of Lake and Baroni, the baseline sequence-to-sequence model completely fails to generalize to the test set.", "Applying GECA allows the learned model to successfully make most tested generalizations across single and multi-word entries, and in both instruction-to-action and action-to-instruction directions.", "Analysis: examples. Some synthesized examples are shown in Figure 3.", "Success at the add primitive condition stems from the constraint that the single example usage of the primitive must still be a valid (command, action) pair, and all verbs are valid commands in isolation.", "Only three examples (run ▷ RUN, walk ▷ WALK and look ▷ LOOK) provide the evidence that GECA uses to synthesize new usages of jump; if these were removed, the sequence-to-sequence model's training accuracy would be unchanged but GECA would fail to synthesize any new examples involving jump, and test accuracy would fall to zero.", "For the add template condition, GECA correctly replaces all occurrences of LTURN with RTURN to produce new examples of the around right template; this example highlights the usefulness of GECA's ability to discover discontinuous and non-context-free substitutions.", "Analysis: dataset statistics. We additionally perform a set of analyses quantifying the overlap between the synthesized data and the held-out data.", "We first measure full example overlap, the fraction of test examples that appear in the augmented training set.", "(By design, no overlap exists between the test set and the original training set.)", "After applying GECA, 5% of test examples for the add primitive condition and 1% of examples for the add template condition are automatically synthesized.", "Next we measure token co-occurrence overlap: we compute the set of (input or output) tokens that occur together in any test example, and then measure the fraction of these pairs that also occur together in some training example.", "For the add primitive condition, GECA increases token co-occurrence overlap from 83% to 96%; for the add template condition it is 100% even prior to augmentation.", "It is important to note that GECA, which sees only the training set, is unaware that some subset of the data is singled out for generalization testing at evaluation time.",
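The token co-occurrence overlap just described is straightforward to compute; a small sketch follows (the function names are ours, and examples are again token lists):

```python
from itertools import combinations

def cooccurring_pairs(examples):
    """All unordered token pairs that occur together in some example."""
    pairs = set()
    for tokens in examples:
        pairs.update(combinations(sorted(set(tokens)), 2))
    return pairs

def token_cooccurrence_overlap(train_examples, test_examples):
    """Fraction of test co-occurring token pairs also seen in training."""
    train_pairs = cooccurring_pairs(train_examples)
    test_pairs = cooccurring_pairs(test_examples)
    return len(test_pairs & train_pairs) / len(test_pairs)
```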
"The data augmentation protocol generates a large number of spurious training examples unrelated to the desired generalization (e.g. the first example in Figure 3); however, it also generates enough new usages of the target concept that the learner generalizes successfully.", "Next we turn to the problem of semantic parsing, which has also been a popular subject of study for questions about compositionality, generalization, and data augmentation.", "For the reasons discussed in Section 3, we expect qualitatively different behavior from this approach on real language data without the controlled vocabulary of SCAN.", "We study four versions of the GEOQUERY dataset (Zelle, 1995), which consists of 880 English questions about United States geography, paired with meaning representations in the form of either logical expressions or SQL queries.", "The standard train-test split for this dataset ensures that no natural language question is repeated between the train and test sets.", "As Finegan-Dollak et al. (2018) note, this provides only a limited test of generalization, as many test examples feature a logical form that overlaps with the training data; they introduce a more challenging query split to ensure that neither questions nor logical forms are repeated (even after anonymizing named entities).", "We extract fragments with at most 2 gaps and at most 12 tokens.", "On the SQL query split, the original training set contains 695 examples.", "GECA generates an additional 1055 using 839 distinct templates and 379 distinct fragments.", "For the question split we use the baseline model of Jia and Liang (2016); for the query split we use the same sequence-to-sequence model as used for SCAN and introduce the supervised copy mechanism of Finegan-Dollak et al. (2018).", "Environments are again taken to be identical to templates.", "Results are shown in Table 2.", "On the split for which Jia and Liang (2016) report results, GECA achieves nearly the same improvements with weaker domain assumptions.", "On the remaining splits it is more accurate.", "Analysis: examples. Synthesized examples for the logical and SQL representations are shown in Figure 4.",
"Despite the fact that the sequence-to-sequence model uses neither gold entities nor ... (In some cases these averages are slightly lower than the single-run results previously reported in the literature.)", "Note also that the original publication from Jia and Liang reports denotation accuracies; the results here are taken from their accompanying code release.", "Overall trends across systems are comparable using either evaluation metric.", "This procedure also produces plausible but unattested entities like a river named florida and a state named west wyoming.", "The last example in the logical forms section is particularly interesting.", "The extracted fragment contains lowest population density on the natural language side but only density on the logical form side.", "However, the environment constrains substitution to happen where appropriate: this fragment will only be used in cases where the environment already contains the necessary smallest.", "Some substitutions are semantically problematic: for example, the final datapoint in Figure 4 asks about the population of a number (because substitution has replaced capital with area); the corresponding SQL expression would fail to execute.", "Aside from typing problems, however, the example is syntactically well-formed and provides correct evidence about constituent boundaries, alignments and hierarchical structure within the geography domain.", "Other synthesized examples (like the second-to-last in Figure 4) have correct meaning representations but ungrammatical natural language inputs.", "Analysis: dataset statistics. Applying GECA to the GEOQUERY data increases full example overlap (described at the end of Section 4) by 5% for the question split in both languages, 6% for the query split with logical forms, and 9% for the query split with SQL expressions, in line with the observation that accuracy improvements are greater for the query split than the question split.", "Augmentation increases token co-occurrence overlap by 3-4% across all conditions.", "In a larger-scale manual analysis of 100 synthesized examples from the query split, evaluating them for grammaticality and accuracy (whether the natural language captures the semantics of the logical form), we find that 96% are grammatical, and 98% are semantically accurate.", "Negative results: We conclude with a corresponding set of experiments on the SCHOLAR text-to-SQL dataset of Iyer et al. (2017), which is similar [Table 3: Negative results: meaning representation accuracies (SQL queries) on the SCHOLAR dataset, query split / question split: seq2seq 0.03±0.01 / 0.57±0.02; + GECA 0.03±0.01 / 0.56±0.02.]", "to GEOQUERY in size, diversity and complexity.", "In contrast to GEOQUERY, however, application of GECA to SCHOLAR provides no improvement.", "On the query split, there is limited compositional re-use of SQL sub-queries (in line with the observation of Finegan-Dollak et al.
(2018) that average nesting depth in SCHOLAR is roughly half that of GEOQUERY).", "Correspondingly, full example overlap after augmentation remains at 0% and token co-occurrence overlap increases by only 1%.", "On the question split, full example overlap is larger (8%) but token co-occurrence overlap increases by less than 1%.", "These results suggest that GECA is most successful when it can increase the similarity of word co-occurrence statistics in the training and test sets, and when the input dataset exhibits a high degree of recursion.", "Both of the previous sections investigated conditional models.", "The fragments extracted and reused by GECA were essentially synchronous lexicon entries, in line with example (4).", "We originally motivated GECA with monolingual problems in which we simply wish to improve model judgments about well-formedness, so we conclude with a set of language modeling experiments.", "We use Wikipedia dumps (https://dumps.wikimedia.org/) in five languages (Kinyarwanda, Lao, Pashto, Tok Pisin, and a subset of English Wikipedia) as well as the Na dataset of Adams et al. (2017).", "These languages exhibit the performance of GECA across a range of morphological complexities. [Table 4: Perplexities on low-resource language modeling in English (ENG), Kinyarwanda (KIN), Lao, Na, Pashto (PUS) and Tok Pisin (TOK). # train tokens: 2M / 62K / 10K / 28K / 2M / 30K; 5-MKN: 369 / 241 / 315 / 45.4 / 574 / 44.3; + GECA: 365 / 239 / 313 / 45.4 / 570 / 44.1.]", "For example, Kinyarwanda has a complex noun class system (Kimenyi, 1980) and Pashto has rich derivational morphology (Tegey and Robson, 1996), while Lao and Tok Pisin are comparatively simple morphologically (Enfield, 2008; Verhaar, 1995).", "Training datasets range from 10K to 2M tokens.", "Like Adams et al., we found that a 5-gram modified Kneser-Ney language model (Ney et al., 1994) outperformed several varieties of RNN language model, so we base our GECA experiments on the n-gram model instead.", "We use the implementation provided in KenLM (Heafield, 2011).", "We extract fragments with no gaps and a maximum size of 2 tokens, with the environment taken to be a 2-token window around the extracted fragment.", "New usages are generated only for fragments that occur fewer than 20 times in the data.", "In Kinyarwanda, the base dataset contains 3358 sentences.", "GECA generates an additional 913, using 913 distinct templates and 199 distinct fragments.", "Rather than training directly on the augmented dataset, as in preceding sections, we found that the best performance came from training one language model on the original dataset and one on the augmented dataset, then interpolating their final probabilities.", "The weight for this interpolation is determined on a validation dataset and chosen to be one of 0.05, 0.1 and 0.5.", "Results are shown in Table 4.",
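The interpolation just described mixes two probability estimates at scoring time. A minimal sketch, assuming KenLM's Python bindings (where Model.score returns a log10 probability); the file names are hypothetical, and which model receives the small weight lam is our own assumption:

```python
import math
import kenlm  # assumes KenLM's Python bindings are installed

# One model trained on the original corpus, one on the GECA-augmented corpus.
base_lm = kenlm.Model("original.arpa")   # hypothetical file names
geca_lm = kenlm.Model("augmented.arpa")

def interpolated_logprob(sentence, lam=0.1):
    """Mix the two models' probabilities; lam is tuned on validation data
    over {0.05, 0.1, 0.5} (which model gets lam is our guess)."""
    p_base = 10 ** base_lm.score(sentence)
    p_geca = 10 ** geca_lm.score(sentence)
    return math.log10((1 - lam) * p_base + lam * p_geca)
```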
"Improvements are not universal and are more modest than in preceding sections.", "However, GECA decreases perplexities across multiple languages and never increases them.", "These results suggest that the substitution principle underlying GECA is a useful mechanism for encouraging compositionality even outside conditional tasks and neural models.", "Analysis: examples and statistics. In language modeling, GECA functions as a smoothing scheme: its primary effect is to move mass toward n-grams that can appear in productive contexts.", "In this sense, GECA performs a similar role to the Kneser-Ney smoothing also used in all LM experiments.", "With GECA, in contrast to Kneser-Ney, the notion of context can look forward as well as backward, and capture longer-range interactions.", "Examples of synthesized sentences are shown in Figure 5. Most sentences are grammatical, and many of the substitutions preserve relevant semantic type information (substituting locations for locations, times for times, etc.).", "However, some ill-formed sentences are also generated.", "As in Section 5, we manually inspect 100 synthesized sentences.", "As before, sentences are evaluated for grammaticality; here, since no explicit semantics were provided, they are instead evaluated for generic semantic acceptability.", "In this case, only 51% of synthesized sentences are semantically acceptable, but 79% are grammatical.", "We introduced GECA, a simple data augmentation scheme based on identifying local phrase substitutions that are licensed by common contexts, and demonstrated that extra training examples generated with GECA lead to substantial improvements on both diagnostic and natural datasets for semantic parsing and language modeling.", "While the approach is surprisingly effective in its current form, we view these results primarily as an invitation to consider more carefully the role played by representations of sentence fragments in larger questions about compositionality in black-box sequence models.", "The procedure detailed in this paper relies on exact string matching to identify common context; future work might take advantage of learned representations of spans and their environments (Mikolov et al., 2013; Peters et al., 2018).", "Further improvements are likely obtainable by constraining the extracted fragments to respect constituent boundaries when syntactic information is available.", "The experiments presented here focus on rewriting sentences using evidence within a dataset to encourage generalization to new outputs.", "An alternative line of work on paraphrase-based data augmentation (Ganitkevitch et al., 2013; Iyyer et al., 2018) uses external, text-only resources to encourage robust interpretation of new inputs corresponding to known outputs.", "The two lines of work could be combined, e.g. by using GECA-identified fragments to indicate productive locations for sub-sentential paraphrasing.", "More generally, the present results underline the extent to which current models fail to learn simple, context-independent notions of reuse, but also how easy it is to make progress towards addressing this problem without fundamental changes in model architecture.", "Code for all experiments in this paper may be found at github.com/jacobandreas/geca.", "Thanks to Oliver Adams for assistance with the language modeling experiments, and to the anonymous reviewers for suggestions in the analysis sections." ]
[ "objective", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "method", "objective", "objective", "objective", "objective", "objective", "objective", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "result", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "abstain", "other", "other", "other", "other", "objective", "other", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other" ]
[ "Neural sequence-to-sequence networks with attention have achieved remarkable performance for machine translation.", "One of the reasons for their effectiveness is their ability to capture relevant source-side contextual information at each time-step prediction through an attention mechanism.", "However, the target-side context is solely based on the sequence model which, in practice, is prone to a recency bias and lacks the ability to capture effectively nonsequential dependencies among words.", "To address this limitation, we propose a target-side-attentive residual recurrent network for decoding, where attention over previous words contributes directly to the prediction of the next word.", "The residual learning facilitates the flow of information from the distant past and is able to emphasize any of the previously translated words, hence it gains access to a wider context.", "The proposed model outperforms a neural MT baseline as well as a memory and self-attention network on three language pairs.", "The analysis of the attention learned by the decoder con-firms that it emphasizes a wider context, and that it captures syntactic-like structures.", "Neural machine translation (NMT) has recently become the state-of-the-art approach to machine translation (Bojar et al., 2016).", "Several architectures have been proposed for this task (Kalchbren-ner and Blunsom, 2013; Sutskever et al., 2014; Cho et al., 2014; Gehring et al., 2017; Vaswani et al., 2017), but the attention-based NMT model designed by Bahdanau et al. (2015) is still considered the de-facto baseline.", "This architecture is composed of two recurrent neural networks (RNNs), an encoder and a decoder, and an attention mechanism between them for modeling a", "(a) Baseline NMT decoder", "(b) Self-attentive residual dec.", "soft word-alignment.", "First, the model encodes the complete source sentence, and then decodes one word at a time.", "The decoder has access to all the context on the source side through the attention mechanism.", "However, on the target side, the contextual information is represented only through a fixed-length vector, namely the hidden state of the decoder.", "As observed by Bahdanau et al. (2015), this creates a bottleneck which hinders the ability of the sequential model to learn longer-term information effectively.", "As pointed out by Cheng et al. 
(2016), sequential models present two main problems for natural language processing.", "First, the memory of the encoder is shared across multiple words and is prone to bias towards the recent past.", "Second, such models do not fully capture the structural composition of language.", "To address these limitations, several recent models have been proposed, namely memory networks (Cheng et al., 2016; Tran et al., 2016; Wang et al., 2016) and self-attention networks (Daniluk et al., 2016; Liu and Lapata, 2018).", "We experimented with these methods, applying them to NMT: memory RNN (Cheng et al., 2016) and self-attentive RNN (Daniluk et al., 2016).", "However, we observed no significant gains in performance over the baseline architecture.", "In this paper, we propose a self-attentive residual recurrent decoder, presented in Figure 1b, which, if unfolded over time, represents a densely-connected residual network.", "The self-attentive residual connections focus selectively on previously translated words and propagate useful information to the output of the decoder, within an attention-based NMT architecture.", "The attention paid to the previously predicted words is analogous to a read-only memory operation, and enables the learning of syntactic-like structures which are useful for the translation task.", "Our evaluation on three language pairs shows that the proposed model improves over several baselines, with only a small increase in computational overhead.", "In contrast, other similar approaches have lower scores but a higher computational overhead.", "The contributions of this paper can be summarized as follows: We propose and compare several options for using self-attentive residual learning within a standard decoder, which facilitates the flow of contextual information on the target side.", "We demonstrate consistent improvements over a standard baseline, and two advanced variants, which make use of memory and self-attention on three language pairs (English-to-Chinese, Spanish-to-English, and English-to-German).", "We perform an ablation study and analyze the learned attention function, providing additional insights on its actual contributions.", "Several studies have been proposed to enhance sequential models by capturing longer contexts.", "Long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997) is the most commonly used recurrent neural network (RNN), because its internal memory allows it to retain information from a more distant past than a vanilla RNN.", "Several studies attempt to increase the memory capacity of LSTMs by using memory networks (Weston et al., 2015; Sukhbaatar et al., 2015).", "For instance, Cheng et al. (2016) incorporate different memory cells for each previous output representation, which are later accessed by an attention mechanism.", "Tran et al. (2016) include a memory block to access recent input words in a selective manner.", "Both methods show improvements on language modeling.", "For NMT, Wang et al.
(2016) presented a decoder enhanced with an external shared memory.", "Memory networks extend the capacity of the network and have the potential to read, write, and forget information.", "Our method, which attends over previously predicted words, can be seen as a read-only memory, which is simpler but computationally more efficient because it does not require additional memory space.", "Other studies aim to improve the modeling of source-side contextual information, for example through a context-aware encoder using self-attention (Zhang et al., 2017), or a recurrent attention NMT (Yang et al., 2017) that is aware of previously attended words on the source side in order to better predict which words will be attended in future.", "Additionally, variational NMT (Zhang et al., 2016a) introduces a latent variable to model the underlying semantics of source sentences.", "In contrast to these studies, we focus instead on the contextual information on the target side.", "The application of self-attention mechanisms to RNNs has been previously studied, and in general, they seem to capture syntactic dependencies among distant words (Liu and Lapata, 2018; Soltani and Jiang, 2016; Lee et al., 2017; Lin et al., 2017).", "Daniluk et al. (2016) explore different approaches to self-attention for language modeling, leading to improvements over a baseline LSTM and over memory-augmented methods.", "However, the methods do not fully utilize a longer context.", "The main difference with our approach is that we apply attention on the output embeddings rather than the hidden states.", "Thus, the connections are independent of the recurrent layer representations, which is beneficial to NMT, as we show below.", "Our model relies on residual connections, which have been shown to improve the learning process of deep neural networks by addressing the vanishing gradient problem (He et al., 2016).", "These connections create a direct path from previous layers, helping the transmission of information.", "Recently, several architectures using residual connections with LSTMs have been proposed for sequence prediction (Zhang et al., 2016b; Kim et al., 2017; Zilly et al., 2017; Wang and Tian, 2016).", "To our knowledge, our study is the first one to use self-attentive residual connections within residual RNNs for NMT.", "In parallel to our study, a similar method was recently proposed for sentiment analysis (Wang, 2017).", "Neural machine translation aims to compute the conditional distribution of emitting a sentence in a target language given a sentence in a source language, denoted by $p_\theta(y|x)$, where $\theta$ is the set of parameters of the neural model, and $y = \{y_1, ..., y_n\}$ and $x = \{x_1, ..., x_m\}$ are the representations of the target and source sentences, respectively, as sequences of words.", "The parameters are learned by training a sequence-to-sequence neural model on a corpus of parallel sentences.", "In particular, the learning objective is to maximize the following conditional log-likelihood: $\max_\theta \frac{1}{N} \sum_{n=1}^{N} \log p_\theta(y^{(n)} | x^{(n)})$ (1).", "The models typically use gated recurrent units (GRUs) (Cho et al., 2014) or LSTMs (Hochreiter and Schmidhuber, 1997).", "Their architecture has three main components: an encoder, a decoder, and an attention mechanism.", "The goal of the encoder is to build meaningful representations of the source sentences.",
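Eq. (1) is simply the average conditional log-likelihood over the N training pairs; as a one-function sketch (log_p stands in for the model's $\log p_\theta(y|x)$, which training maximizes over $\theta$):

```python
# Sketch of the training objective in Eq. (1). `pairs` is a list of (x, y)
# parallel sentences and `log_p` a stand-in for the model's log p_theta(y | x).
def conditional_log_likelihood(pairs, log_p):
    return sum(log_p(y, x) for x, y in pairs) / len(pairs)
```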
"It consists of a bidirectional RNN which includes contextual information from past and future words into the vector representation $h_i$ of a particular word vector $x_i$, formally defined as follows: $h_i = [\overrightarrow{h_i}, \overleftarrow{h_i}]$ (2). Here, $\overrightarrow{h_i} = f(x_i, \overrightarrow{h}_{i-1})$ and $\overleftarrow{h_i} = f(x_i, \overleftarrow{h}_{i+1})$ are the hidden states of the forward and backward passes of the bidirectional RNN respectively, and $f$ is a non-linear function.", "The decoder (see Figure 1a) is in essence a recurrent language model.", "At each time step, it predicts a target word $y_t$ conditioned over the previous words and the information from the encoder using the following posterior probability: $p(y_t | y_1, ..., y_{t-1}, c_t) \propto g(s_t, y_{t-1}, c_t)$ (3), where $g$ is a non-linear multilayer function, $s_t$ is the hidden state of the decoder, $s_t = f(s_{t-1}, y_{t-1}, c_t)$ (4), and $c_t$ is the context vector", "computed by the attention mechanism.", "The attention mechanism allows the decoder to select which parts of the source sentence are more useful to predict the next output word.", "This goal is achieved by considering a weighted sum over all hidden states of the encoder as follows: $c_t = \sum_{i=1}^{m} \alpha_{ti} h_i$ (5), where $\alpha_{ti}$ is a weight calculated using a normalized exponential function $a$, also known as an alignment function, which computes how good the match is between the input at position $i \in \{1, ..., m\}$ and the output at position $t$: $\alpha_{ti} = \mathrm{softmax}(e_{ti})$ (6), $e_{ti} = a(s_{t-1}, h_i)$ (7). Different types of alignment functions have been used for NMT, as investigated by Luong et al. (2015).", "Here, we use the one originally defined by Bahdanau et al. (2015).", "The decoder of the attention-based NMT model uses a skip connection from the previously predicted word to the output classifier in order to enhance the performance of translation.", "As we can see in Eq. (3), the probability of a particular word is calculated by a function $g$ which takes as input the hidden state of the recurrent layer $s_t$, the representation of the previously predicted word $y_{t-1}$, and the context vector $c_t$.", "Within $g$, these quantities are typically summed up after going through simple linear transformations, hence the addition of $y_{t-1}$ is indeed a skip connection as in residual networks (He et al., 2016).", "In theory, $s_t$ should be sufficient for predicting the next word given that it is dependent on the other two local-context components according to Eq. (4).", "However, the $y_{t-1}$ quantity makes the model emphasize the last predicted word for generating the next word.", "How can we make the model consider a broader context?", "To answer this question, we propose to include into the decoder's formula skip connections not only from the previous time step $y_{t-1}$, but from all previous time steps from $y_0$ to $y_{t-1}$.", "This defines a residual recurrent network which, unfolded over time, can be seen as a densely connected residual network.", "These connections are applied to all previously predicted words, and reinforce the memory of the recurrent layer towards what has been translated so far.", "At each time step, the model decides which of the previously predicted words should be emphasized to predict the next one.", "In order to deal with the dynamic length of this new input, we use a target-side summary vector $d_t$ that can be interpreted as the representation of the decoded sentence until time $t$ in the word embedding space.", "We therefore modify Eq. (3), replacing $y_{t-1}$ with $d_t$: $p(y_t | y_1, ..., y_{t-1}, c_t) \propto g(s_t, d_t, c_t)$ (8).", "The replacement of $y_{t-1}$ with $d_t$ means that the number of parameters added to the model depends only on the calculation of $d_t$.", "Figure 1b illustrates the change made to the decoder.",
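As a concrete illustration of Eqs. (3)-(7) and of where the $y_{t-1}$ skip connection enters, here is a NumPy sketch of one decoder step; f_rnn, g_out, and the alignment model a are stand-ins for the GRU transition, the output layer, and Bahdanau's scoring MLP, whose internals we omit:

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def decoder_step(s_prev, y_prev, H, f_rnn, g_out, a):
    """One step of the baseline attentional decoder (sketch).

    H holds the encoder states h_1..h_m as rows."""
    e = np.array([a(s_prev, h_i) for h_i in H])  # Eq. 7: alignment scores
    alpha = softmax(e)                           # Eq. 6: attention weights
    c = alpha @ H                                # Eq. 5: source context vector
    s = f_rnn(s_prev, y_prev, c)                 # Eq. 4: recurrent update
    return s, g_out(s, y_prev, c)                # Eq. 3: y_prev is the skip connection
```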
"We define two methods for summarizing the context into $d_t$, which are described in the following sections.", "One simple way to aggregate information from multiple word embeddings is by averaging them.", "This average can be seen as the sentence representation until time $t$.", "We hypothesize that this representation is more informative than using only the embedding of the previous word.", "Formally: $d_t^{avg} = \frac{1}{t-1} \sum_{i=1}^{t-1} y_i$ (9). 4.2 Self-Attentive Residual Connections: Averaging is a simple and cheap way to aggregate information from multiple words, but may not be sufficient for all kinds of dependencies.", "Instead, we propose a dynamic way to aggregate information in each sentence, such that different words have different importance according to their relation with the prediction of the next word.", "We propose to use a shared self-attention mechanism to obtain a summary representation of the translation, i.e. a weighted average representation of the words translated from $y_0$ to $y_{t-1}$.", "This mechanism aims to model, in part, important non-sequential dependencies among words, and serves as a complementary memory to the recurrent layer.", "The weights of the attention model are computed by a scoring function $e_{ti}$ that predicts how important each previous word ($y_0$, ..., or $y_{t-1}$) is for the current prediction $y_t$.", "The summary is $d_t = \sum_{i=1}^{t-1} \alpha_{ti} y_i$ with $\alpha_{ti} = \mathrm{softmax}(e_{ti})$, where $e_{ti} = v^\top \tanh(W_y y_i + W_s s_{t-1})$ for the content+scope variant and $e_{ti} = v^\top \tanh(W_y y_i)$ for the content variant (12); here $v \in \mathbb{R}^e$, $W_y \in \mathbb{R}^{e \times e}$, and $W_s \in \mathbb{R}^{e \times d}$ are weight matrices, and $e$ and $d$ are the dimensions of the embeddings and hidden states respectively.", "Firstly, we study the scoring function noted content+scope, as proposed by Bahdanau et al. (2015) for NMT.", "Secondly, we explore a scoring function noted as content, which is calculated based only on the previous word representations, as proposed by Pappas and Popescu-Belis (2017).", "In contrast to the first attention function, which makes use of the hidden vector $s_t$, the second one is based only on the previous word representations; therefore, it is independent of the current prediction representation.", "However, the normalization of this function still depends on $t$.", "To compare our approach with similar studies, we adapted two representative self-attentive networks for application to NMT.", "The Memory RNN decoder is based on the proposal by Cheng et al. (2016) to modify an LSTM layer to include a memory with different cells for each previous output representation.", "Thus at each time step, the hidden layer can select past information dynamically from the memory.", "To adapt it to our framework, we modify Eq. (4) as: $s_t = f(\tilde{s}_t, y_{t-1}, c_t)$ (14), where $\tilde{s}_t = \sum_{i=1}^{t-1} \tilde{\alpha}_{ti} s_i$ (15), $\tilde{\alpha}_{ti} = \mathrm{softmax}(\tilde{e}_{ti})$ (16), and $\tilde{e}_{ti} = a(h_i, y_{t-1}, s_{t-1})$ (17). 5.2 Self-Attentive RNN: The Self-Attentive RNN is the simplest one proposed by Daniluk et al. (2016), and incorporates a summary vector from past predictions calculated with an attention mechanism.", "Here, the attention is applied over previous hidden states.", "This decoder is formulated as follows: $p(y_t | y_1, ..., y_{t-1}, c_t) \propto g(s_t, y_{t-1}, c_t, \tilde{s}_t)$ (18), where $\tilde{s}_t = \sum_{i=1}^{t-1} \alpha_{ti} s_i$ (19), $\alpha_{ti} = \mathrm{softmax}(e_{ti})$ (20), and $e_{ti} = a(s_i, s_t)$ (21).", "Additional details of the formulations in Sections 3, 4, and 5 are described in Appendix A.",
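A NumPy sketch of the proposed summary vector under the content scoring variant. The tanh scoring form follows our reconstruction of the garbled equations above, so treat it as an assumption; Y stacks the embeddings of the words predicted so far:

```python
import numpy as np

def summary_vector(Y, v, W_y):
    """Self-attentive residual summary d_t (content variant, sketch).

    Y: (t-1) x e matrix of previously predicted word embeddings y_1..y_{t-1}.
    Returns d_t, which replaces y_{t-1} in the output layer g(s_t, d_t, c_t)."""
    e = np.array([v @ np.tanh(W_y @ y_i) for y_i in Y])  # content scores e_ti
    alpha = np.exp(e - e.max())
    alpha /= alpha.sum()               # softmax over the previous positions
    return alpha @ Y                   # weighted average of past embeddings

# The mean residual connection of Eq. (9) is the special case of uniform
# weights: d_avg = Y.mean(axis=0).
```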
6 Experimental Settings. 6.1 Datasets: To evaluate the proposed MT models in different conditions, we select three language pairs with increasing amounts of training data: English-Chinese (0.5M sentence pairs), Spanish-English (2.1M), and English-German (4.5M).", "For English-to-Chinese, we use a subset of the UN parallel corpus (Rafalovitch and Dale, 2009; http://www.uncorpora.org/), with 0.5M sentence pairs for training, 2K for development, and 2K for testing.", "For training Spanish-to-English MT, we use a subset of WMT 2013 (Bojar et al., 2013), corresponding to Europarl v7 and News Commentary v11 with ca.", "2.1M sentence pairs.", "Newstest2012 and Newstest2013 were used for development and testing respectively.", "Finally, we use the complete English-to-German set from WMT 2016 (Bojar et al., 2016) with a total of ca.", "4.5M sentence pairs.", "The development set is Newstest2013, and the testing set is Newstest2014.", "Additionally, we include as testing sets Newstest2015 and Newstest2016, for comparison with the state of the art.", "We report translation quality using", "(a) BLEU over tokenized and truecased texts, and", "(b) NIST BLEU over detokenized and detruecased texts, using scripts from the Moses toolkit (Koehn et al., 2007): multi-bleu for BLEU, mteval-v13a.pl for NIST BLEU, tokenizer.perl, and truecase.perl.", "We use the implementation of the attention-based NMT baseline provided in dl4mt-tutorial, developed in Python using Theano (Theano Development Team, 2016).", "The system implements an attention-based NMT model, described above, using one layer of GRUs (Cho et al., 2014).", "The vocabulary size is 25K for English-to-Chinese NMT, and 50K for Spanish-to-English and English-German.", "We use the byte pair encoding (BPE) strategy for out-of-vocabulary words (Sennrich et al., 2016b).", "For all cases, the maximum sentence length of the training samples is 50, the dimension of the word embeddings is 500, and the dimension of the hidden layers is 1,024.", "We use dropout with a probability of 0.5 after each layer.", "The parameters of the models are initialized randomly from a standard normal distribution scaled by a factor of 0.01.", "The loss function is optimized using Adadelta (Zeiler, 2012) with $\epsilon = 10^{-6}$ and $\rho = 0.95$ as in the original paper.", "The systems were trained in 7-12 days for each model on a Tesla K40 GPU at a speed of about 1,000 words/sec.", "Table 1 shows the BLEU scores and the number of parameters used by the different NMT models.", "Along with the NMT baseline, we included a statistical machine translation (SMT) model based on Moses (Koehn et al., 2007) with the same training/tuning/test data as the NMT.", "The performance of the memory RNN is similar to the baseline and, as confirmed later, its focus of attention is mainly on the prediction at $t-1$.", "The self-attentive RNN method is inferior to the baseline, which can be attributed to the overhead on the hidden vectors that have to learn the recurrent representations and the attention simultaneously.", "The proposed models outperform the baseline, and the best scores are obtained by the NMT model with self-attentive residual connections.", "Despite their simplicity, the mean residual connections already improve the translation, without increasing the number of parameters.", "Tables 2 and 3 show further experiments with the proposed methods on various English-German test sets, compared to several previous systems.", "Table 2 shows BLEU values calculated by multi-bleu, and includes the NMT system
proposed by Luong et al. (2015), which replaces unknown predicted words with the most strongly aligned word in the source sentence.", "Also, the table includes other systems described in Section", "2. Additionally, Table 3 shows values calculated by the NIST BLEU scorer, as well as results reported by the winning WMT systems for each test set respectively: UEDIN-SYNTAX (Williams et al., 2014), UEDIN-SYNTAX (Williams et al., 2015), and UEDIN-NMT (Sennrich et al., 2016a).", "Also, we include the results reported by Sennrich et al. (2016b) for a baseline encoder-decoder NMT with BPE for unknown words similar to our configuration, and finally the system proposed by Nadejde et al. (2017), an explicit syntax-aware NMT that introduces combinatory categorial grammar (CCG) supertags on the target side by predicting words and tags alternately.", "The comparison with this work is relevant for the analysis described [Table 4 near here: BLEU scores for two scoring variants of the attention function of the proposed decoder; content+scope: 23.1 (En-Zh), 25.6 (Es-En); content: 24.0 (En-Zh), 26.3 (Es-En)]", "later in Section 8.2.", "The results confirm that the self-attentive residual connections significantly improve the translations.", "To evaluate the significance of the improvements against the NMT baseline, we performed a one-tailed paired t-test.", "We now examine the two scoring functions that can be used for the self-attentive residual connections model presented in Eq.", "(12), considering English-to-Chinese and Spanish-to-English.", "The BLEU scores are presented in Table 4: the best option is the content matching function, which depends only on the word embeddings.", "The content+scope function, which depends additionally on the hidden representation of the current prediction, is better than the baseline but scores lower than content.", "The idea that the importance of the context depends on the current prediction is appealing, because it can be interpreted as learning internal dependencies among words.", "However, the experimental results show that it does not necessarily lead to the best translation.", "On the contrary, the content attention function may be extracting representations of the whole sentence which are easier to learn and generalize.", "Manual evaluation on samples of 50 sentences for each language pair helped to corroborate the conclusions obtained from the BLEU scores, and to provide a qualitative understanding of the improvements brought by our model.", "For each language, we employed one evaluator who was a native speaker of the target language and had good knowledge of the source language.", "The evaluators ranked three translations of the same source sentence (one from each of our models: baseline, mean residual connections, and self-attentive residual connections) according to their translation quality.", "The three translations were presented in a random order, so that the system that had generated them could not be identified.", "To integrate [Table 5 near here: Human evaluation of sentence-level translation quality on three language pairs, as % ranked higher/equal/lower; Mean vs. Baseline: En-Zh 26/56/18, Es-En 20/64/16, En-De 28/58/24; Self-attentive vs. Baseline: En-Zh 28/60/12, Es-En 28/56/16, En-De 32/54/14; Self-attentive vs. Mean: En-Zh 24/62/14, Es-En 28/58/14, En-De 32/56/12]", "the judgments, we proceed in pairs, and count the number of times each system was ranked higher, equal to, or lower than another competing system.", "The results shown in Table 5 indicate that the self-attentive residual connections model outperforms the one with mean residual connections, and both outperform the baseline, for all three language pairs.", "The rankings are thus identical to those obtained using BLEU in Tables 1 and", "3. 7.3 Performance on Language Modeling To examine whether language modeling (LM) can benefit from the proposed method, we incorporate the residual connections into a neural LM.", "We use the same setting as Daniluk et al. (2016) for a corpus of Wikipedia articles (22.5M words), and we compare with two methods proposed in the same paper, namely attention LSTM and 4-gram LSTM.", "As shown in Table 6, the proposed models outperform the LSTM baseline as well as the self-attention model, but not the 4-gram LSTM.", "Experiments using 4-gram LSTM for NMT showed poor performance (13.9 BLEU points for English-Chinese), which can be attributed to the difference between the LM and NMT tasks.", "Both tasks predict one word at a time conditioned on previous words; however, in NMT the previous target-word inputs are not given: they have to be generated by the decoder.", "Thus, the output could be conditioned over previous erroneous predictions [Figure 2 near here; caption: Percentage of words that received maximum attention at a given relative position, ranging from 1 to 50 (maximum length)],", "affecting in higher proportion the 4-gram LSTM model.", "This shows that even if a model improves language modeling, it does not necessarily improve machine translation.", "Figure 2 shows a comparison of the distribution of attention of the different self-attentive models described in this paper, on Spanish-to-English NMT (the other two language pairs exhibit similar distributions).", "The values correspond to the number of words which received maximal attention for each relative position (x-axis).", "We selected, at each prediction, the preceding word with maximal weight, and counted its relative position.", "We normalized the count by the number of previous words at the time of each prediction.", "We observe that the memory RNN almost always selects the immediately previous word (t-1) and ignores the rest of the context.", "On the contrary, the other two models distribute attention more evenly among all previous words.", "In particular, the self-attentive RNN uses a longer context than the self-attentive residual connections, but, as the BLEU scores show, this fact does not necessarily mean better translation.", "Figure 3 shows the attention to previous words generated by each model for one sentence translated from Spanish to English.", "The matrices present the target-side attention weights, with the vertical axis indicating the previous words, and the color shades at each position (cell) representing the attention weights.", "The weights of the memory RNN are concentrated on the diagonal, indicating that the attention is generally located on", "the previous word, which makes the model almost equivalent to the baseline.", "The weights of the self-attentive RNN show that attention is more distributed towards the distant past, and they vary for each word because the attention function depends on the current prediction.", "This model tries to find
dependencies among words, although complex relations seem difficult to learn.", "On the contrary, the proposed self-attentive residual connections model strongly focuses on particular words, and we present a wider analysis of it in the following section.", "When visualizing the matrix of attention weights generated by our model (Figure 3c), we observed the formation of sub-phrases which are grouped depending on their attention to previous words.", "To build the sub-phrases in a deterministic fashion, we implemented Algorithm 1, which iteratively splits the sentence into two sub-phrases every time the focus of attention changes to a new word, from left to right.", "The results are binary tree structures containing the sub-phrases, exemplified in Figure 4 [caption: Examples of hypothesized syntactic structures obtained with Algorithm 1].", "We formally evaluate the syntactic properties of the binary tree structures by comparing them with the results of an automatic constituent parser (Manning et al., 2014), using the ParsEval approach (Black et al., 1991), i.e. by counting the precision and recall of constituents, excluding single words.", "Our model reaches a precision of 0.56, which is better than the precision of 0.45 obtained by a trivial right-branched tree model (footnote 4: a model constructed by dividing iteratively one word and the rest of the sentence, from left to right).", "Note that these structures were neither optimized for parsing nor learned using part-of-speech tagging as most parsers do.", "Our interpretation of the results is that they are syntactic-like structures.", "However, given the simplicity of the model, they could [Table 7 examples near here]", "Better than baseline S: Estudiantes y profesores se están tomando a la ligera la fecha.", "R: Students and teachers are taking the date lightly.", "B: Students and teachers are being taken lightly to the date.", "O: Students and teachers are taking the date lightly.", "S: No porque compartiera su ideología, sino porque para él los Derechos Humanos son indivisibles.", "R: Not because he shared their world view, but because for him, human rights are indivisible.", "B: Not because I share his ideology, but because he is indivisible by human rights.", "O: Not because he shared his ideology, but because for him human rights are indivisible.", "Worse than baseline S: El gobierno intenta que no se construyan tantas casas pequeñas.", "R: The Government is trying not to build so many small houses.", "B: The government is trying not to build so many small houses.", "O: The government is trying to ensure that so many small houses are not built.", "S: Otras personas pueden tener niños.", "R: Other people can have children.", "B: Other people can have children.", "O: Others may have children.", "also be viewed as more limited structures, similar to sentence chunks.", "Table 7 (the S/R/B/O examples above: source, reference, baseline output, and our model's output) shows examples of translations produced with the baseline and the self-attentive residual connections model.", "The first part shows examples for which the proposed model reached a higher BLEU score than the baseline.", "Here, the structure of the sentences, or at least the word order, is improved.", "The second part contains examples where the baseline achieved a better BLEU score than our model.", "In the first example, the structure of the sentence is different but the content and quality are similar, while in the second one lexical choices differ from the reference.", "We presented a novel decoder which uses self-attentive residual connections to previously translated words in order to enrich the target-side
contextual information in NMT.", "To cope with the variable lengths of previous predictions, we proposed two methods for context summarization: mean residual connections and self-attentive residual connections.", "Additionally, we showed how similar previous proposals, designed for language modeling, can be adapted to NMT.", "We evaluated the methods over three language pairs: English-to-Chinese, Spanish-to-English, and English-to-German.", "In each case, we improved the BLEU score compared to the NMT baseline and two variants with memory-augmented decoders.", "A manual evaluation over a small set of sentences for each language pair confirmed the improvement.", "Finally, a qualitative analysis showed that the proposed model distributes weights throughout an entire sentence, and learns structures resembling syntactic ones.", "As future work, we plan to enrich the present attention mechanism with the key-value-prediction technique (Daniluk et al., 2016; Miller et al., 2016), which was shown to be useful for language modeling.", "Moreover, we will incorporate relative positional information into the attention function.", "To encourage further research in self-attentive residual connections for NMT and other similar tasks, our code is made publicly available.", "We are grateful to the European Union for support under the Horizon 2020 SUMMA project (grant no. 688139, see www.summa-project.eu).", "We would also like to thank James Henderson for his valuable feedback and suggestions." ]
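The decoder's content scoring function (the stronger of the two variants in Table 4) is described above only in words. As a rough illustration, here is a minimal NumPy sketch of a self-attentive residual connection over previously generated target words; the tanh scorer, the parameter names, and the way the summary vector is consumed are our assumptions, not the authors' released implementation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def content_attention_summary(prev_embeddings, v, W):
    """Summarize previously generated target words with self-attention.
    The 'content' variant scores each past word from its embedding alone,
    independently of the current decoder state."""
    scores = np.array([v @ np.tanh(W @ e) for e in prev_embeddings])
    alpha = softmax(scores)            # attention weights over the past words
    return alpha @ prev_embeddings     # weighted summary of past embeddings

# toy usage: 4 previously generated words, embedding size 8
rng = np.random.default_rng(0)
E = rng.normal(size=(4, 8))                           # embeddings of y_1..y_4
v, W = rng.normal(size=16), rng.normal(size=(16, 8))  # untrained parameters
summary = content_attention_summary(E, v, W)
residual_input = E[-1] + summary   # residual connection into the next step
print(residual_input.shape)        # (8,)
```

Because the scores depend only on the embeddings, the same summary mechanism also covers the mean residual connections variant as a special case (uniform weights instead of learned ones).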
[ "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "objective", "abstain", "abstain", "objective", "abstain", "objective", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "objective", "other", "other", "other", "method", "abstain", "abstain", "other", "other", "objective", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "abstain", "other", "other", "other", "other", "other", "method", "objective", "other", "other", "other", "objective", "method", "other", "other", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "other", "abstain", "other", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "result", "method", "objective", "abstain", "abstain", "result", "abstain", "method", "other", "other" ]
[ "Most low resource language technology development is premised on the need to collect data for training statistical models.", "When we follow the typical process of recording and transcribing text for small Indigenous languages, we hit up against the so-called transcription bot-tleneck.", "Therefore it is worth exploring new ways of engaging with speakers which generate data while avoiding the transcription bottleneck.", "We have deployed a prototype app for speakers to use for confirming system guesses in an approach to transcription based on word spotting.", "However, in the process of testing the app we encountered many new problems for engagement with speakers.", "This paper presents a close-up study of the process of deploying data capture technology on the ground in an Australian Aboriginal community.", "We reflect on our interactions with participants and draw lessons that apply to anyone seeking to develop methods for language data collection in an Indigenous community.", "For decades, the work of collecting data for Indigenous languages has been the province of documentary and descriptive linguistics (Bouquiaux and Thomas, 1992; Vaux and Cooper, 1999; Meakins et al., 2018).", "This work has involved various kinds of elicitation, e.g. of word lists, phrases, etc, to support description of the phonology, morphosyntax, and grammar of the language.", "It has also involved the collection of unrestricted text, through recording and transcription.", "In most cases, the result is audio with aligned text.", "Many software tools have been developed for supporting these activities (Boersma, 2001; Clark et al., 2008; Hatton, 2013; Sloetjes et al., 2013).", "Within the field of natural language processing, established practice is to support the linguist's work (Michaud et al., 2018; Seifart et al., 2018; Foley et al., 2018; Cox et al., 2019).", "In some cases, this includes the participation of speakers in activities using apps controlled by linguists (Bird et al., 2014; Hanke, 2017; Bettinson and Bird, 2017).", "However, the premise is basically the same: obtain a substantial quantity of audio and transcribe it, or post-edit the output of an automatic transcription system.", "We believe that these approaches do not adequately address a fundamental reality of small languages: they are oral .", "There may be an official orthography, but it has no place in the local language ecology where any written business takes place in a language of wider communication.", "As a result, local people are usually not confident in the orthography of the language.", "Furthermore, there may be low confidence in using computers and text editors, and inadequate support for the language in terms of keyboarding and spelling correction.", "Add to all this the fact that the whole space of rendering an oral language into standardised orthography can be alienating (Dobrin et al., 2009; Hermes and Engman, 2017).", "There is no particular reason for NLP approaches to Indigenous languages to follow the long-established practices of linguists.", "After all, there is an equally long history of algorithmic approaches being profoundly different to the human tasks they replicate.", "For instance, a human sorting a hand of cards may use insertion sort, but a machine might use Quicksort, with better average-case complexity (Levitin, 1999).", "Computational approaches may be inspired by analogy, e.g. 
simulated annealing, genetic algorithms, neural networks, but they are not required to adhere to the human defined process.", "Accordingly, we can ask, what is an idiomatic computational approach to collecting data for Indigenous languages that is a better fit to the capabilities of human participants?", "In the case of associating text and speech, we believe that the answer might be keyword spotting.", "This is because, in our experience, speakers and learners are attuned to identifying whole words, rather than obsessing 4988 about the idiosyncratic phonetic makeup of individual tokens as required for phone transcription (cf. Bird, 2020b, 718f).", "Accordingly, we investigate an approach to transcription based on word spotting known as sparse transcription (Bird, 2020b).", "This would seem to be an easier, less specialised task than direct, contiguous transcription.", "If more people can participate, we can hope to establish a virtuous circle with more data, better models, less correction, even more data, and so on.", "The idea is that transcription can be accelerated by identifying the tokens of high-frequency terms all at once, then playing them back in quick succession for confirmation by participants.", "This paper reports on the deployment of a lexical confirmation app which supports human confirmation of system hypotheses.", "We begin by describing the background to this work (Sec. 2), including related work on designing technology for use in Indigenous places.", "We also describe the site where we work and the design of the lexical verification app.", "Next, we report what happened when we deployed the app in two field tests, including detailed accounts of interactions with participants (Sec. 3).", "In the discussion section, we reflect on the field experience from a variety of perspectives, trying to draw out lessons that may be applicable to other places where NLP researchers seek to design technologies for language data collection (Sec. 
4).", "The paper concludes with a summary and prospects for further research.", "Designing in the Indigenous space is a small but growing area within the field of Human-Computer Interaction (HCI).", "Projects in this space often begin with ethnographic research to identify local priorities.", "Co-design is advocated as a way to establish a culturally-tailored, culturally-enriched and trustworthy environment for participation (Peters et al., 2018).", "The focus of this work includes traditional knowledge (Verran, 2007), language revitalisation (Hardy et al., 2016) or media sharing (Soro et al., 2017).", "Recent research mentioned the need to involve stakeholders in a system design (Lynch and Gregor, 2004) highlighting the challenges related to the transparency of the mechanism of a given system, specifically when machine learning is involved (Loi et al., 2019) and the difficulty to explain to the users such mechanism (Abdul et al., 2018).", "The lack of published accounts of experiences collecting language data in Indigenous contexts, specifically in the intersection of NLP and documentary linguistics, makes it difficult for newcomers like us to devise approaches that are likely to work.", "We address this shortcoming by reporting and reflecting on our field experience.", "Deploying speech technologies in remote Aboriginal communities is challenging, not primarily because of low technological literacy on the part of local people, but because of low interactional literacy on the part of NLP researchers who enter indigenous places to gather data.", "Our work is grounded in Bininj country in Arnhem land in the north of Australia.", "The biggest town is Gunbalanya with 1,100 inhabitants where we can find primary and secondary schools in which teaching is done in English.", "A few remote satellite communities, or outstations, can be found throughout this country in which education of young people takes place in a bi-cultural environment both in Kunwok and English.", "Kunwok (ISO gup) is the main language of communication here, and Kunwinjku is the prevalent dialect.", "It is spoken by some 2,500 people and is one of the few Australian languages which is gaining speakers (Evans et al., 2003).", "While a standard orthography exists, most community members do not write at all.", "When pressed, some of them are able to leverage their knowledge of English literacy in order to decode Kunwok texts (cf. 
Feinauer et al., 2013; August et al., 2009).", "In prior work in Bininj country, we discussed our work with traditional owners (heirs of a given tract of Aboriginal land and leaders of the community).", "We described and demonstrated prior work involving transcription, and how it can be used to transcribe Kunwok.", "They raised their concerns about intergenerational knowledge preservation and transmission and access to the resources created by westerners.", "While it is not clear to us that the nature of our work had been thoroughly understood, we could identify through this interaction topics which are addressed by current speech processing and HCI research projects (San et al., 2021; Taylor et al., 2020).", "Our work took place in Gunbalanya and Manmoyi, a remote community situated a five-hour drive from Gunbalanya.", "Australian Aboriginal communities are far from uniform.", "The experiences and challenges we describe here may be relevant for the Australian Top End, but they cannot be directly applied to Indigenous communities in other places.", "The app was built following the design of Bettinson and Bird (2017).", "We focused on a simple design without any textual component besides the transcription of the query term.", "The idea is to first load into a web app the query/utterance pairs generated by our spoken term detection system.", "We then ask speakers of the target language to confirm for each pair if the query word (i.e. the term we are trying to retrieve) is pronounced in the search utterance (i.e. the sentence in the speech collection in which the query term was detected).", "The participants have six buttons available to perform the task.", "They have two play buttons at the bottom left: one to play the query term, the other to play the search utterance.", "Once the two audio files have been listened to, two feedback buttons appear at the bottom right to allow the user to confirm if the query term is included or not in the utterance.", "We also added two arrows on each side of the top of the screen to allow the user to jump to the previous or the next example.", "When a new example is displayed on the screen, the query term is played automatically. When the utterance is played, the transcription of the query term is highlighted around the timestamps in which the query term was detected.", "The terms are spotted in the utterances beforehand following the parameters of the sparse transcription simulation proposed by Le Ferrand and Bird (2020).", "Because of the challenges posed by the remote Aboriginal context, such as the lack of reception or proper working facilities (e.g.
a table), we needed to find solutions in terms of data storage and activity design.", "Based on the work of Bettinson and Bird (2021), we stored the query/utterance pairs output by our spoken term detection system in a JSON file and loaded them onto a Raspberry Pi with the app.", "The Pi acts as a WiFi hotspot to which any device can connect.", "We can then connect a tablet to the Pi and, in doing so, the feedback provided by the participant can be stored directly in the associated database.", "We tested our approach with two trials in two Aboriginal towns, with three people in each place.", "While the number of participants seems small, larger trials are difficult to arrange in Aboriginal contexts due to the small number of speakers.", "At the beginning of each elicitation session, the first author explained our intention to teach a machine to transcribe the language automatically, and that we wanted help to correct system guesses.", "There is actually no direct translation of transcription in Kunwok and the concept is usually given by the formulation karribimbun kure djurra, we're drawing on paper.", "In both places, we recruited the participants with the support of two local institutions: the art centre in Gunbalanya and the ranger organisation in Manmoyi.", "At the start of our trips, the first author introduced himself to the communities and explained that he was looking for people to support him for language work.", "The people who were interested then came to find him throughout the day.", "Each session lasted approximately 15 minutes and was part of other language work including recordings or language learning.", "Each participant was paid at the regular rate for language work.", "For our first trial, we recorded source audio from a three-hour guided tour of a local site.", "We transcribed a few minutes of this recording and used this transcription to build a lexicon.", "We used voice activity detection to segment the recording into breath groups.", "Finally, we automatically spotted terms from the lexicon in these breath groups.", "Since the speaker of the lexicon and the speech collection overlap, most of the terms spotted by the system were correctly retrieved.", "In the data presented to participants, the query term was present in the supplied phrase in 57% of the instances.", "This configuration was tested with three Gunbalanya residents: SB (20s), TM (30s), and RB (40s).", "This last participant was also the speaker of the recordings.", "SB appeared nervous and said little in response to our explanations and questions.", "When an audio clip was played, he translated, even though this was not the instruction.", "It was as if he projected his assumption about the purpose of the task, namely for the researchers to understand the content.", "At one point he respoke the query term and the target phrase in a single utterance, before explaining his knowledge about the associated place.", "The interface itself was not legible to him: faced with a choice of two play buttons (one for the query term and one for the phrase) he was never clear which one to press.", "He never used the thumbs up/down feedback buttons.", "Here is an example of the confusing situation set up by our approach (we use App to indicate audio produced by the app, along with speaker initials, and ELF for the first author.
Play1 refers to the button that plays the query term and play2 the utterance).", "ELF <press play1> App manyilk ELF <press play2> App menekke mandjewk karuy ELF manyilk?", "larrh.", "Because he says mandjewk SB manyilk, first <press play1> App manyilk Notice that the query term manyilk (grass) is not contained in the utterance menekke mandjewk karuy (this wet season he dug it).", "When we demonstrate the use of the app by giving the expected response of larrh (no), SB asserts that manyilk is present, contradicting us.", "He presses the query term play button to show us.", "The following day, when we spoke with another participant, we heard that SB thought that our task was an attempt to test his memory.", "RB was more confident than SB.", "He seemed intrigued at hearing his own voice on the device.", "For each audio segment we played, RB gave an interpretation of the content.", "We offered the device to him to control, but he declined.", "After we pressed the two play buttons, he waited, and we had to follow up with overt questions: does he say <query term>?, or can you hear <query term> in this sentence?", "He answered as expected, with: yes, <query term> or no, he doesn't say <query term>.", "Consider the following example: ELF <press play1> App marnbom (he made) ELF <press play2> App kumekke artist marnbom kadi ELF do you hear marnbom?", "RB marnbom that's painting making the painting ELF but do you hear marnbom in the sentence?", "RB yeah Unlike SB and RB, TM readily took the device and used the controls.", "Sometimes, when the query term was not contained in the utterance, he not only translated the audio, but he also offered an example sentence containing the query term.", "In the following example, confirm refers to one of the feedback buttons, which automatically displays the next example and plays the query term: TM <press confirm> App karrikadjung (we follow it) TM karrikadjung, (we are following) <press play2> App karrikadjuy road (we followed the road) TM he says karrikadjuy, it means we went this way road, he should have say we are following this one, karrikadjung", "In this case, the difference between the query term karri-kadju-ng (we-follow-PRES) and the utterance karri-kadju-y (we-follow-PAST) is only in verb tense.", "The whole query term appears in the sentence, except for the tense marker.", "Should the speaker say yes or no?", "This points to a shortcoming of the task definition.", "When the term was correctly retrieved, TM would respeak the audio and press the thumbs-up button.", "When the term was not correctly retrieved, TM offered extensive explanations.", "For the second trial, we visited the Manmoyi outstation.", "We used five short audio recordings from previous fieldwork, including guided tours and traditional stories.", "One of the recordings was transcribed and we extracted the words to use as our lexicon.", "As before, we segmented the source audio into breath groups and ran word spotting against this set.", "Since the speaker of the lexicon and those of the rest of the collection did not overlap, there was much lower precision; often a query term matched noise or mumbling.", "In the data presented to participants, only in about 10% of cases was the query term present in the supplied phrase.", "This configuration was tested with three residents of Manmoyi: LY (60s), LB (50s), RG (50s).", "LB and LY participated together, with LB taking an active role and LY only participating by talking to LB during the task.", "With each round, LB listened to the query term
and the utterance then appeared to associate them as a single linguistic event, and he would recount a story that included both the term and the utterance.", "After this, he would give feedback (thumbs up or down) depending on how easy he found it to link the two semantically: LB <press play1> App wirrihmi (dislike/wrong) LB that's wrong one LB <press play2> App wanjh manjbekkan manmanjmak LY it tasted sweet LB it tasted like you know this, it might have been a little bit funny or something like that LB yeah like for us they say: no I can't eat because he tasted it and they say try it and they gave it, and he says aah yeah it tasted nice LB yoh, that's the one, that's good, kamak LB often interpreted the audio segment.", "At one point, he recognised the speaker for the queries, and he told us about her and began to recount the same story: ELF <press play1> App nawernwarre (big brother) ELF <press play2> App birribonguni birri... (they were drinking, they...) LY nawernwarre LB yoh, nawernwarre LY nawernwarre, or manekke might be... lonely boy (story) LB lonely boy yoh that's the lonely boy (story) Towards the end of the session, we asked about LB's understanding of the task: ELF Can you tell me in English what do you think I am trying to do?", "LB You are trying to... you are making like Kunwok and English translating, but if you are making straight like Kunwok you're making straight and English making straight, that's the all same.", "ELF well, not really LB no it's real, we are talking, we know everything.", "Not all these, we've seen these people, they don't know anything about it, myself and LY we know everything about it.", "LB understood this to be a translation activity.", "When we disagreed, he re-asserted his standing as a knowledge authority.", "Later, we explained our ultimate purpose of automatically transcribing the language.", "LB rephrased transcription as make it together.", "We realised afterwards that LB may have been referring to his semantic linking process.", "RG was our final participant, and this session revealed many issues.", "Given the low number of correct query-utterance pairs, we found ourselves needing to manually skip over utterances that were too hard to understand out of context.", "Each time we abandoned a round and moved on to the following round, the next query term played automatically (this feature was added before any testing with the assumption that it would speed up the verification process).", "Such automation turned out to be confusing for RG.", "For a few instances, RG responded yes when the query term was not literally present in the utterance, maybe because the query term was morphologically related to a term that was present, e.g. birri-m-h-ni (they-towards-immediate-were; the query) and birri-ni (they-were).", "Another interpretation of this behaviour is that RG was focussing on meanings, not forms.", "In this and other cases, it seems that RG was not clear about what we were asking for.", "RG The old woman is talking about country and the young fellow is talking about what creation was.", "RG It's all a bit confusing.", "They are not even saying kunred it means home, the young other fellow is talking about dreamtime story, so it is not, well it's connect but it is not pronouncing.", "Sometimes, RG asked about the speakers and the overall context of the out-of-context audio segment, asking, e.g. Is this <name> speaking?
I don't know what they're talking about here.", "In this section, we analyse the above interactions and try to identify some principles to inform NLP elicitation methodology, hoping to avoid such problems occurring in future.", "Task motivation.", "SB, RB and LB understood us to be interested in interpreting the content.", "SB thought we were testing his memory.", "TM offered detailed explanations.", "LB said things that we interpret as asserting authority.", "It appears that our attempt to explain our purpose in automatic transcription, and the activity of confirming or refuting system guesses, was unsuccessful.", "Task definition.", "Participants were not clear about what we were asking of them.", "The notion of word was not clearly defined, and there were a variety of responses when the query term was not identical yet morphologically or semantically similar to a word in the corresponding utterance.", "Naturalness of the task.", "When it comes to collaboration with western language workers, Aboriginal people in these communities are accustomed to participating in interviews, recordings, transcription, and translation activities.", "This may explain people's readiness to respeak or interpret the content or supply additional cultural information.", "We entered with a different task, one where the overt activity of human confirmation/rejection of system guesses was not transparently related to a recognisable transcription task.", "We explained and demonstrated the activity, but TM was the only participant to instantly grasp this task.", "Even so, he provided extensive explanations when the system guess was wrong in an effort to teach us.", "Utterance context.", "From our perspective, the components of the device were clear.", "We have a query term that needs to be detected, and an utterance that should contain the query term.", "From this, we just need two feedback buttons to confirm whether the query term is included in the utterance.", "However, to the participant listening to the audio produced by the app and not following our use of the controls, the query term and utterance may be perceived as a single utterance.", "Everything put into the aural space appears to be concatenated by listeners, and our non-conventional metalinguistic context is not interpretable.", "When endeavouring to explain the task in Kunwok, we were hampered by the lack of words for word and sentence.", "Teaching.", "The participants generally provided much more information than the simple yes/no response we requested.", "Each instance was another opportunity to teach us about the language or the country.", "The design of the task only limited the space for this style of participation.", "The activity itself was not particularly engaging, taking utterances out of context and asking for a mechanical response to a seemingly pointless question.", "It seems to be a kind of resilience that participants made the most of the opportunity to pursue their own ends of educating newcomers.", "Further discussion with community members highlighted their concerns about knowledge preservation, access to archival recordings, and learning literacy.", "Knowledge transmission.", "George et al. 
(2010) explain that the way in which westerners and Australian Aboriginal people transmit their knowledge varies in that one extracts, identifies, and categorizes, while the other needs the information to be embedded in a system of kinship relationships.", "For example, in Bininj country, every individual has a kinship relationship to every other individual, and they address each other accordingly (Glowczewski, 1989).", "Stories do not exist in isolation but are connected to an individual who tells them, and the country they come from.", "We ran up against this when participants needed to connect isolated utterances back to their rightful cultural context, not just consider them as arbitrary linguistic material for which they can answer an unmotivated question: does this utterance contain this word?", "We can see this in Trial 2 where LB ignores the utterance and uses his knowledge of the speaker of the query term to link the content back to the story.", "Yarning.", "Recent fieldwork methods research has shown that adopting Aboriginal-led approaches leads to more culturally appropriate practices and better feedback from Aboriginal consultants (Louro and Collard, 2021).", "Yarning has been described as a research method and the traditional way for Aboriginal people in Australia to pass on knowledge.", "It can be defined as a conversational process that involves listening to storytelling that creates new knowledge and understanding (Terare and Rawsthorne, 2020).", "Adopting this to engage with participants could lead to better participation and a more appropriate way to collaborate.", "Here, the Aboriginal consultant would occupy a teaching role and the function of the technology would be to capture, support, and organise natural ways of transmitting knowledge.", "Spoken term detection performance.", "The spoken term detection method delivered markedly different results in the two trials.", "Presenting data with 50% accuracy (first trial) makes the user's task seem most worthwhile; otherwise, the user is mostly confirming or refuting system guesses (refuting in 90% of cases in the second trial).", "If this reasoning is correct, then we predict that a trial involving 90% accuracy would also be challenging to motivate and teach.", "The low accuracy of the system probably contributed to the challenges encountered during the second trial.", "However, similar behaviour was observed in both trials (e.g. the systematic translation after an audio clip was played, or the semantic linkage process), which makes us think that system performance alone is not the main source of the misinterpretation of the task.", "App design.", "The design of the app was based on preliminary thinking about how collection could proceed fluidly.", "We did not consider the confusion that might be caused by having two play buttons on the screen (one for the query term, and one for the corresponding utterance).", "In the interests of efficiency, with each new round, the query term was played automatically.", "It was as if the thumbs up/down button from the previous round caused playback, and this turned out to be confusing.", "When we wanted to skip forward by a few examples using the right or left arrow keys at the top of the display (Fig.
1), the app would play a series of seemingly random words.", "Such automation should have been avoided, specifically in the early stage of our work when there was a lot of uncertainty regarding people's reactions towards our activity.", "Design improvements.", "Besides the elements we already mentioned, a few paths can be explored to address the challenges we have faced.", "Removing the query play button could have the effect of reducing the number of contexts and avoiding the linkage process we have observed with LB and SB.", "Limiting the activity to a single story and playing the utterances in chronological order can make the context clear, so that the participant would not need to clarify it.", "Using bottleneck features instead of MFCCs to spot words could improve the precision of the system (Menon et al., 2019).", "Such modifications, however, cannot address the biggest flaw of our proposed task: it does not respond directly to people's agenda in terms of language work, but simply tries to leverage people's skills to respond to westerners' expectations.", "Pushing the proposed pipeline through several iterations would risk alienating our participants and compromising further collaboration.", "We believe that a complete reshaping of our method is necessary to enable a sustainable and community-based model for language and knowledge documentation.", "Our first attempt in this space was unsuccessful on many levels.", "Most superficial were the issues with the task definition and the app interface.", "The task focused on the notion of word and on deciding whether a given word occurred in a given utterance.", "Yet the notion of word was not established; as an oral language, there was no a priori shared understanding between the participant's notion of spoken word and our notion of orthographic word.", "Throughout our interactions with participants, our attempts to explain the method and the purpose were unsuccessful.", "Local perception was fixed on the idea that we had entered the community to learn the language and culture, and that the purpose of participating in the study was to teach us and to interpret the texts for us.", "Consequently, the narrow focus of our activity on eliciting a binary, thumbs up/down response was unsuccessful.", "This is hardly surprising, as many people have noted that engaging Aboriginal people with direct questions requiring a yes or no response is seen as testing people's knowledge or memory, and potentially irritating (Maar et al., 2011; Ober, 2017).", "We observed this ourselves, when SB reported that he felt like he was being tested, or when LB responded as if his authority was being questioned.", "Clearly, our style of engagement was not the expected kind of collaboration on a linguistic task.", "Aside from one participant (TM), no one would participate in the abstract and apparently pointless task of confirming whether a word was present in a sentence.", "Instead, all participants sought to create meaning from any language fragments they were presented with.", "On the basis of an isolated word, a person, place or story would be detected, and people would seek to teach us about these aspects of their lifeworld.", "This took various forms: repeating, paraphrasing, translating, interpreting, or offering extensive cultural commentaries.", "In retrospect, this response to our approach comes across as resilient and generous.", "In comparison, our narrow focus on data collection, and on getting across the specialised task of lexical confirmation, may have come across
as disconnected from local interests, and potentially disrespectful.", "Of course, we can hope to recruit more people like TM.", "However, the story about scalable creation of language resources involves working with whoever is available.", "The tasks need to be locally comprehensible and motivating.", "In moving forward, we believe it is necessary to rethink the collaborative transcription task.", "The starting point is to understand local participants as teachers and cultural guides, occupied with their own knowledge practices and with passing these on.", "Special focus needs to be given to the creation of a third space between the several stakeholders of a project, with benefits that serve both Indigenous participants and external actors (Bird, 2020a).", "Could we view the task of putting an audio recording into textual form as a way to help a newcomer make progress with the language and culture, and with getting the pronunciations and meanings correct?", "The answer to this question depends on further research.", "Outside the major languages, the development of language technologies is considered to be held up by the general lack of data (Krauwer, 2003).", "In the case of the world's small, oral languages, the usual approach has been to follow the long-established practice of linguists and record and transcribe audio and elicit wordlists and paradigms.", "Many computational tools were developed to support this approach.", "However, algorithmic approaches to working with small languages do not need to be limited by these past practices, and so we believe it is worth considering other approaches to data collection that might simultaneously support computational methods while engaging effectively with members of the speech community.", "Accordingly, we took a recently proposed approach to transcription based on keyword spotting, and developed an app for confirming system guesses.", "We anticipated that this app would be more accessible to local participants than the conventional linguist-driven tasks.", "We ran trials in two Aboriginal towns, with speakers of the Kunwok language.", "In this paper, we describe the interactions we had with locals around a lexical verification activity.", "We present the many challenges we encountered, including a reflection on the technical and cultural issues of the task design, and the flaws in our approach in terms of collaborative language work.", "For the present, we offer our findings as a candid report on the experience of deploying data capture technology in an Indigenous community, in the hope that others will succeed where we have failed.", "We hope others will also follow our lead and share their own experiences of data collection, and make visible more of the real work of NLP (cf.
Star, 2007).", "Perhaps it is possible for an externally-defined task such as transcription to be aligned to local agendas.", "Just as often, we expect that it will be necessary to let go of such tasks and do something different.", "Something that makes sense locally.", "This research was covered by a research permit from the Northern Land Council, and ethics approved from Charles Darwin University.", "We are grateful to the Australian government for a PhD scholarship to the first author, and for grants from the Australian Research Council and the Indigenous Language and Arts Program to the second author.", "The recruitment of participants was done with the support of the local organisation: Injalak Arts and Craft in Gunbalanya, and Warddeken Land Management in Manmoyi.", "The shape and purpose of the work was explained in English and oral consent has been obtained by all the participants.", "Additional approval has been given by Manmoyi traditional owners, concerning the collection and use of the data.", "All the participants have been paid at the regular rate for Aboriginal people consultancy.", "We would like to thank Mat Bettinson for his involvement in the design of the lexical verification App and Joshua Yang for the video recording of the trials." ]
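The storage setup described above (query/utterance pairs in a JSON file on a Raspberry Pi, with participant thumbs-up/down feedback written to an associated database) can be sketched roughly as follows. The field names, file names, and the use of a flat JSON file for the feedback log are our assumptions; the paper does not specify the record layout.

```python
import json
from pathlib import Path

PAIRS = Path("pairs.json")        # hypothetical file names
FEEDBACK = Path("feedback.json")

def load_pairs(path=PAIRS):
    """Each pair holds the query term, its audio, and the utterance region
    in which the spoken term detector fired."""
    return json.loads(path.read_text(encoding="utf-8"))

def record_feedback(pair_id, confirmed, path=FEEDBACK):
    """Append one thumbs-up/down judgement (confirmed=True/False)."""
    log = json.loads(path.read_text(encoding="utf-8")) if path.exists() else []
    log.append({"pair_id": pair_id, "confirmed": confirmed})
    path.write_text(json.dumps(log, indent=2), encoding="utf-8")

if __name__ == "__main__":
    PAIRS.write_text(json.dumps([{
        "pair_id": 0,
        "query": "manyilk",          # transcription of the query term
        "query_audio": "q0.wav",
        "utterance_audio": "u17.wav",
        "match_start": 1.24,         # detection timestamps, in seconds
        "match_end": 1.71,
    }]), encoding="utf-8")
    for pair in load_pairs():
        record_feedback(pair["pair_id"], confirmed=False)  # larrh: term absent
```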
[ "abstain", "method", "abstain", "method", "objective", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "other", "other", "other", "abstain", "other", "method", "abstain", "other", "abstain", "abstain", "other", "other", "other", "other", "other", "abstain", "objective", "other", "method", "method", "other", "method", "other", "method", "objective", "method", "other", "other", "other", "method", "other", "other", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other" ]
[ "The main motivation for developing context-sensitive lemmatizers is to improve performance on unseen and ambiguous words.", "Yet previous systems have not carefully evaluated whether the use of context actually helps in these cases.", "We introduce Lematus, a lemmatizer based on a standard encoder-decoder architecture, which incorporates character-level sentence context.", "We evaluate its lemmatization accuracy across 20 languages in both a full data setting and a lower-resource setting with 10k training examples in each language.", "In both settings, we show that including context significantly improves results against a context-free version of the model.", "Context helps more for ambiguous words than for unseen words, though the latter has a greater effect on overall performance differences between languages.", "We also compare to three previous context-sensitive lemmatization systems, which all use pre-extracted edit trees as well as hand-selected features and/or additional sources of information such as tagged training data.", "Without using any of these, our context-sensitive model outperforms the best competitor system (Lemming) in the full-data setting, and performs on par in the lower-resource setting.", "Lemmatization is the process of determining the dictionary form of a word (e.g. swim ) given one of its inflected variants (e.g. swims , swimming , swam , swum ).", "Data-driven lemmatizers face two main challenges: first, to generalize beyond the training data in order to lemmatize unseen words; and second, to disambiguate ambiguous wordforms from their sentence context.", "In Latvian, for example, the wordform celu is ambiguous when considered in isolation: it could be an inflected variant of the verb celt ( to lift ) or the nouns celis ( knee ) or cels ( road ); without context, the lemmatizer can only guess.", "By definition, sentence context (or latent information derived from it, such as the target word's morphosyntactic tags) is needed in order to correctly lemmatize ambiguous forms such as the example above.", "Previous researchers have also assumed that context should help in lemmatizing unseen words (Chrupaa, 2006; Muller et al., 2015)i.e., that the context contains useful features above and beyond those in the wordform itself.", "Nevertheless, we are not aware of any previous work that has attempted to quantify how much (or even whether) context actually helps in both of these cases.", "Several previous papers on context-sensitive lemmatization have reported results on unseen words (Chrupaa, 2006; Chrupaa et al., 2008; Muller et al., 2015; Chakrabarty et al., 2017), and some have compared versions of their systems that use context in different ways (Muller et al., 2015; Chakrabarty et al., 2017), but there are few if any direct comparisons between context-sensitive and context-free systems, nor have results been reported on ambiguous forms.", "This paper presents Lematus a system that adapts the neural machine translation framework of Sennrich et al. 
(2017) to learn context-sensitive lemmatization using an encoder-decoder model.", "Context is represented simply using the character contexts of each form to be lemmatized, meaning that our system requires fewer training resources than previous systems: only a corpus with its lemmatized forms, without the need for POS tags (Chrupała et al., 2008; Müller et al., 2015) or word embeddings trained on a much larger corpus (Chakrabarty et al., 2017).", "We evaluate Lematus on data from 20 typologically varied languages, both using the full training data from the Universal Dependencies project (Nivre et al., 2017), as well as a lower-resource scenario with only 10k training tokens per language.", "We compare results to three previous systems and to a context-free version of our own system, including results on both unseen and ambiguous words.", "We also examine the extent to which the rate of unseen and ambiguous words in a language can predict lemmatization performance.", "On average across the 20 languages, the context-sensitive version of Lematus achieves significantly higher lemmatization accuracy than its context-free counterpart in both the low-resource and full-data settings.", "It also outperforms the best competitor system (Lemming; Müller et al., 2015) in the full-data setting, and does as well as Lemming in the low-resource setting.", "Thus, even without explicitly training on or predicting POS tags, Lematus seems able to implicitly learn similar information from the raw character context.", "Analysis of our full-data results shows that including context in the model improves its accuracy more on ambiguous words (from 88.8% to 92.4% on average) than on unseen words (from 83.6% to 84.3% on average).", "This suggests that, to the extent that unseen words can be correctly lemmatized at all, the wordform itself provides much of the information needed to do so, and Lematus effectively exploits that information; indeed, Lematus without context outperforms all previous context-sensitive models on lemmatizing unseen words.", "Finally, our cross-linguistic analysis indicates that the proportions of unseen words and ambiguous words in a language are anti-correlated.", "Altogether, then, our results suggest that context-free neural lemmatization is surprisingly effective, and may be a reasonable option if the language contains many unseen words but few ambiguous ones.", "Context is likely to help in most languages, but the main boost is for languages with higher ambiguity.", "Early work on context-sensitive lemmatization focused on disambiguation: given a set of analyses produced by a hand-built morphological analyzer (typically including both lemmas and morphosyntactic tags), choose the best one in context (Oflazer and Kuruöz, 1994; Ezeiza et al., 1998; Hakkani-Tür et al., 2002).", "Here, we focus on systems learning to generate the lemmas and tags without a pre-existing analyzer (Erjavec and Džeroski, 2004; Chrupała, 2006).", "The three systems we use as baselines follow Chrupała (2006) in treating the task as a classification problem, where the system learns to choose which of a set of edit scripts or edit trees (previously induced from the aligned wordform-lemma pairs) should be applied to transform each wordform into the correct lemma.", "Two of our baselines, Morfette (Chrupała et al., 2008; https://sites.google.com/site/morfetteweb/) and Lemming (Müller et al., 2015; http://cistern.cis.lmu.de/lemming), learn from morphologically annotated corpora to jointly tag each word and lemmatize it by choosing an edit script.", "Morfette consists of two log-linear
classifiers, one for lemmatization and one for tagging, which are combined using beam search to find the best sequence of lemma-tag pairs for all words in the input sentence.", "Lemming (which proves to be the strongest baseline) also consists of two log-linear components (a classifier for lemmatization and a sequence model for tagging), which are combined either using a pipeline (first tag, then lemmatize) or through joint inference.", "The lemmatization model uses a variety of features from the edit trees, alignments, orthography of the lemma, and morphosyntactic tags.", "In experiments on six languages, Müller et al. (2015) showed that the joint Lemming model worked better than the pipelined model, and that adding morphosyntactic features helped.", "They also demonstrated improvements over an earlier context-free baseline model (Jiampojamarn et al., 2008).", "However, they did not evaluate on ambiguous forms, nor directly compare context-sensitive and context-free versions of their own model.", "Our third baseline, Ch-2017 (https://github.com/onkarpandit00786/neural-lemmatizer; Chakrabarty et al., 2017), uses a neural network rather than a log-linear model, but still treats lemmatization as a classification task to choose the correct edit tree.", "(Like our model, Ch-2017 does not perform morphological tagging.)", "The model composes syntactic and semantic information using two successive bidirectional GRU networks.", "The first bidirectional GRU network is similar to the character-to-word model by Ling et al. (2015) and learns syntactic information.", "The semantic information comes from word embeddings pre-trained on much larger corpora.", "The second GRU uses a composition of the semantic and syntactic embeddings for the edit tree classification task.", "Rather than treating lemmatization as classification, our own model is inspired by recent work on morphological reinflection.", "As defined by two recent Shared Tasks (Cotterell et al., 2016, 2017), a morphological reinflection system gets as input some inflected wordform (and possibly its morphosyntactic tags) along with a set of target tags.", "The system must produce the correct inflected form for the target tags.", "In the 2016 SIGMORPHON Shared Task, various neural sequence-to-sequence models gave the best results (Aharoni et al., 2016; Kann and Schütze, 2016; Östling, 2016).", "We base our work closely on one of these (Kann and Schütze, 2016), which also won one of the 2017 tasks (Bergmanis et al., 2017).", "Our lemmatization task can be viewed as a specific type of reinflection, but instead of assuming that tags are given in the input (or that the system simply has to guess the tags from the wordform itself, as in some of the Shared Tasks), we investigate whether the information available from the tags can instead be inferred from sentence context.", "Our model is based on the network architecture proposed by Sennrich et al. (2017), which implements an attentional encoder-decoder architecture similar to that of Bahdanau et al.
(2015).", "Namely, our model is a deep attentional encoder-decoder with a 2-layer bidirectional encoder with a gated recurrent unit (GRU) (Cho et al., 2014) and a 2-layer decoder with a conditional GRU (Sennrich et al., 2017) in the first layer followed by a GRU in the second layer.", "For more architectural details see (Sennrich et al., 2017).", "A default implementation of this architecture is available in the Nematus toolkit, 4 which we used as our starting point.", "However, Sennrich et al. (2017) used their model for machine translation, while we work on lemmatization.", "Since our task is closer to the problem of morphological reinflection described above, we changed some of the default model parameters to follow those used in systems that performed well in the 2016 and 2017 SIGMORPHON Shared Tasks (Kann and Schutze, 2016; Bergmanis et al., 2017).", "Specifically, we reduced the number of hidden units to 100 and the encoder and decoder embedding size to 300.", "The input sequence is a space-separated character representation of a word in its N -character left and right sentence context.", "For example, with N = 15 , the Latvian word celu (the genitive plural 4 https://github.com/EdinburghNLP/ nematus of the noun cels , meaning road ) could be input as: s a k a <s> p a s v a l d b u <lc> c e l u <rc> u n <s> i e l u <s> r e g i s t r where <s> , <lc> , <rc> stand for word boundary, left and right context markers respectively.", "The target output is a sequence of characters forming the lemma of the word: c e l s 4 Datasets We contend that the difficulty of the lemmatization task largely depends on three factors: morphological productivity, lexical ambiguity and morphological regularity.", "One aim of our work is to investigate the extent to which it is possible to predict lemmatization performance for a particular language by operationalizing and measuring these properties.", "Therefore in this section we provide statistics and some analysis of the datasets used in our experiments.", "We use the standard splits of the Universal Dependency Treebank (UDT) v2.0 5 (Nivre et al., 2017) datasets for 20 languages: Arabic, Basque, Croatian, Dutch 6 , Estonian, Finnish, German, Greek, Hindi, Hungarian, Italian, Latvian, Polish, Portuguese, Romanian, Russian, Slovak, Slovene, Turkish and Urdu.", "See Figure 1 for training and development data sizes.", "Because the amount of training data varies widely between languages, we perform some of our language analysis (and later, system evaluation) on a subset of the data, where we use only the first 10k tokens in each language for training.", "The 10k setting provides a clearer comparison between languages in terms of their productivity, ambiguity, and regularity, and also gives a sense of how much training data is needed to achieve good performance.", "One of the main purposes of data-driven lemmatization is to handle unseen words at test time, yet languages with differing morphological productivity will have very different proportions of unseen words.", "Figure 2 shows the percentage of tokens in the development sets of each language that are not seen in training.", "Two conditions are given: the full training/development sets, and train/dev sets that are controlled in size across languages.", "For 5 UTD v2.0 datasets are archived at http://hdl.", "Figure 1 : Training and development set sizes each language, in thousands.", "Figure 2 : Percent of tokens unseen in training.", "Dev (yellow): within full development sets with respect to the full 
training sets.", "Dev 3k (green): within the first 3k tokens of development sets with respect to the first 10k tokens of training sets.", "the languages with large data sets, the percentage of unseen words is (unsurprisingly) higher when training data is reduced to 10k.", "However, these differences are often small compared to the differences between languages, suggesting that productivity is likely to affect lemmatization performance as much as training data size.", "Lexical ambiguity is the other major motivation for context-sensitive lemmatization.", "To quantify how frequently lemmatizers have to rely on context, Figure 3 shows the percentage of ambiguous tokens in each language, in either the full or reduced training sets.", "We define ambiguity empirically: ambiguous tokens are wordforms occurring with more than one lemma within the training set.", "Overall, the level of measured ambiguity tends to be lower than the proportion of unseen tokens.", "Many of the languages with high productivity (e.g., Russian, Slovak, Slovene, Turkish) have low levels of ambiguity, while others (Arabic, Urdu) trend the opposite way.", "Indeed, across all 20 languages, Figure 3 : Percent of ambiguous tokens within the first 10k tokens of training sets and full training sets.", "Ambiguous tokens are word forms occurring with more than one lemma in the training set.", "the levels of productivity and ambiguity are negatively correlated, with a rank correlation of -0.57 after controlling for training data size.", "7 This is not surprising, since given a set of morphosyntactic functions, they must either be expressed using distinct forms (leading to higher productivity) or non-distinct forms (leading to higher ambiguity).", "The final characteristic that we would expect to make some languages easier than others is morphological regularity, but it is unclear how to measure this property directly without an in-depth understanding of the morphophonological rules of a language.", "Nevertheless, the presence of many irregular forms, or other phenomena such as vowel harmony or spelling changes, complicates lemmatization and will likely affect accuracy.", "Training Parameters 8 We use a mini batch size of 60 and a maximum sequence length of 75.", "For training we use stochastic gradient descent, Adadelta (Zeiler, 2012), with a gradient clipping threshold of 1.0, recurrent Bayesian dropout probability 0.2 (Gal and Ghahramani, 2016) and weight normalization (Salimans and Kingma, 2016).", "We use early stopping with patience 10 (Prechelt, 1998).", "We use the first 10 epochs as a burn-in period, after which at the end of every second epoch 7 That is, the correlation is computed between the values in Figure 2 Dev 3k (unseen words wrt the first 10k training tokens for each language) and Figure 3 Train 10k (ambiguous words in the first 10k training tokens for each language).", "The correlation is significantly different from zero with p < 0 .", "01 .", "8 Training parameters were tunned/verified on the standard splits of UDT training and development sets for Spanish and Catalan, therefore the results on these languages are not included in our evaluation.", "we evaluate the current model's lemmatization exact match accuracy on the development set and keep this model if it performs better than the previous best model.", "When making predictions we use beam-search decoding with a beam of size 12.", "Baselines To train models we use the default settings for Morfette and Lemming.", "Ch-2017 requires word embeddings, for which 
we use fastText (https://github.com/facebookresearch/fastText/blob/master/pretrained-vectors; Bojanowski et al., 2017).", "For Ch-2017 we set the number of training epochs to 100 and implement early stopping with patience 10.", "We leave the remaining model parameters as suggested by Chakrabarty et al. (2017).", "We also use a lookup-based baseline (Baseline).", "For words that have been observed in training, it outputs the most frequent lemma (or the first observed lemma, if the options are equally frequent).", "For unseen words it outputs the wordform itself as the hypothesized lemma.", "Context Representation: We aim to use a context representation that works well across multiple languages, rather than to tune the context individually to each language.", "In preliminary experiments, we explored several different context representations: words, sub-word units, and N surrounding characters, for different values of N.", "These experiments were carried out on only six languages.", "Three of these (Latvian, Polish and Turkish) were also used in our main experiments, while three (Bulgarian, Hebrew, and Persian) were not, due to problems getting all the baseline systems to run on those languages.", "For the word-level context representation (Words), we use all words in the left and the right sentence contexts.", "For the character-level context representations (N-Ch) we experiment with N = 0, 5, 10, 15, 20, or 25 characters of left and right contexts.", "For the sub-word unit context representation, we use byte pair encoding (BPE) (Gage, 1994), which has shown good results for neural machine translation (Sennrich et al., 2016).", "BPE is a data compression algorithm that iteratively replaces the most frequent pair of symbols (here, characters) in a sequence with a single new symbol.", "BPE has a single parameter,", "the number of merge operations.", "Suitable values for this parameter depend on the application and vary from 10k in language modeling (Vania and Lopez, 2017) to 50k in machine translation (Sennrich et al., 2016).", "We aim to use BPE to extract a few salient and frequently occurring strings, such as affixes; therefore we set the number of BPE merge operations to 500.", "We use BPE-encoded left and right sentence contexts that amount to up to 20 characters of the original text.", "Since we hoped to use context to help with ambiguous words, we looked specifically at ambiguous word performance in choosing the best context representation.", "Table 1 summarizes Lematus' performance on ambiguous tokens using different sentence context representations.", "There is no context representation that works best for all six languages, but the 20-Ch system seems to work reasonably well in all cases, and is the best on average.", "We therefore use the 20-Ch context in our main experiments.", "Note that this choice was based on a relatively small number of experiments and it is quite possible that further tuning the BPE parameter, or the number of BPE units or words of context (or tuning separately for each language), could lead to better overall results.", "Evaluation: To evaluate models, we use test and development set lemmatization exact match accuracy.", "When calculating lemmatization accuracy we ignore the casing of tokens and omit punctuation tokens and those tokens that contain digits or any of the following characters: @ + .", "/ .", "Results on Complete Datasets: Development set accuracies for all languages and systems in the full data setting are provided in Figure 4a, with
results on unseen and ambiguous words in Figures 4b and 4c.", "Overall, Lematus 20-Ch outperforms the previous systems, Morfette, Lemming and Ch-2017, on 20, 15 and 20 languages respectively.", "In addition, Figure 4 makes it clear that the major benefit of all the systems over the baseline is for unseen words: in fact, for ambiguous words, the baseline even outperforms some of the systems in a few languages.", "The percentage of ambiguous tokens in the training sets of Bulgarian, Hebrew and Persian is 8.4%, 16.6% and 7.6% respectively; for the other languages, see Figure 3.", "Table 1: Lemmatization exact match accuracy on ambiguous tokens of dev sets, for the baseline and for Lematus using various context representations: N characters, Byte Pair Encoding units, or words.", "Table 2: Lemmatization exact match accuracy, averaged across all 20 languages.", "In the full training scenario (first five columns) results are given for All, Unseen, Ambiguous, and Seen Unambiguous tokens.", "(Note that ambiguity is empirical: is a type seen with more than one lemma in training?)", "We compare Lematus with/without context (20-Ch/0-Ch), the most frequent lemma baseline, and three previous systems.", "The numerically highest score in each column is bold; superscript markers indicate statistically significant improvements over Lemming, Lematus 0-Ch and 20-Ch, respectively (all p < 0.05; see text for details).", "Comparing the two versions of Lematus, we can see that Lematus 20-Ch does consistently better on ambiguous tokens than Lematus 0-Ch, whereas their performance on unseen tokens (and thus, overall) is much more similar.", "In fact, on unseen words, Lematus 0-Ch outperforms the context-sensitive baselines Morfette, Lemming and Ch-2017 on 18, 12 and 17 languages respectively.", "These results suggest that a good context-free model can do surprisingly well on unseen words, and the added model complexity and annotation requirements of earlier context-sensitive models are not always justified.", "As further evidence of these claims, we summarize in Table 2 each system's average performance over all languages for both the development and test sets.", "In addition to the performance breakdown into unseen and ambiguous words we also report each system's performance on tokens that were both seen and unambiguous in training.", "No system achieves 100% accuracy on seen unambiguous tokens; even the lookup baseline achieves only 99%, indicating that about 1% of tokens that appeared unambiguous in training occur with a previously unseen lemma in the development set.", "In principle, context-based systems could outperform the baseline on these words, but in practice none of them do.", "Indeed, switching to a dictionary lookup baseline for seen unambiguous words would slightly improve the performance of all models (though it would not change the overall ranking of the systems).", "We tested for statistically significant differences between the results of Lemming (the numerically best-performing competitor system) and our two systems (Lematus 0-Ch and Lematus 20-Ch) using a Monte Carlo method: for each comparison (say, between 0-Ch and 20-Ch on unseen words), we generated 10000 random samples, where each sample randomly swapped the two systems' results for each language with probability 0.5.", "We then obtained a p-value by computing the proportion of samples for which the difference in average results was at least as large as the difference observed in our experiments.", "Because the results of 0-Ch and 20-Ch are highly
correlated across languages, all differences between these systems, except for results on seen unambiguous tokens, are significant", "(p < 0.01 for dev set All,", "p < 0.05 for Unseen,", "p < 0.001 for Ambig,", "and p < 0.01 for test set All;", "p > 0.1 for", "Figure 5: Lemmatization accuracy of Lematus 20-Ch on all dev set tokens vs. percent of unseen tokens (left) or percent of ambiguous tokens (middle); accuracy on unseen tokens vs. training set size (right).", "dev set SeenUA).", "Lemming does as well as Lematus 20-Ch on ambiguous and SeenUA words, but its accuracy on unseen words is lower (p < 0.001), leading to worse performance overall (p < 0.01 on both dev and test).", "Interestingly, even Lematus 0-Ch does better than Lemming on unseen words (p < 0.02), and performs on par overall (p = 0.28).", "So, although including context clearly can help (compare Lematus 20-Ch vs 0-Ch), and Lemming exploits this advantage for ambiguous words, a good context-free model can still do very well.", "Overall, our models do as well as or better than the earlier ones, without the added model complexity and annotation requirements.", "On the other hand, although our context-sensitive model does improve somewhat over its context-free counterpart, there is still some way to go, since average performance on unseen and ambiguous words is still 84% and 92% respectively.", "Results on 10k Datasets: Figure 4d shows the results on all tokens for each language in the 10k training setting, with averages in Table 2.", "On average, limiting training data to the first 10k examples resulted in an 82% reduction of training sets, and we see an average drop in test set performance of 5.6-6.8 percentage points for all systems except Ch-2017, which drops by about 10 percent.", "When comparing the 0-Ch and 20-Ch versions of Lematus we found the same pattern of significances as in the full data setting (p < 0.01); however, the two best systems (Lematus 20-Ch and Lemming) are statistically equivalent on the test sets, as are Lemming and Lematus 0-Ch.", "Patterns Across Languages: In Section 4, we hypothesized that the success of data-driven lemmatization depends on a language's productivity, ambiguity, and regularity.", "We now explore the extent to which our results support this hypothesis.", "First, we examine the correlation between the overall performance of our best system on each language and the percentage of unseen (Figure 5, left) or ambiguous words (Figure 5, middle) in that language.", "As expected, there is a strong negative correlation between the percentage of unseen words and the accuracy of Lematus 20-Ch: the rank correlation is R = -0.73", "(p < 0.001; we use rank correlation because it is less sensitive to outliers than linear correlation, and the plot clearly shows several outliers.) In contrast to our original prediction, however, Lematus 20-Ch is actually more accurate for languages with greater ambiguity (R = 0.44, p =
05 ).", "The most likely explanation is that ambiguity is negatively correlated with productivity.", "Since there tend to be more unseen than ambiguous words, and since accuracy is typically lower for unseen than ambiguous words, higher ambiguity (which implies fewer unseen words) can actually lead to higher overall accuracy.", "Our earlier results also suggested that the main benefit of Lematus 20-Ch over Lematus 0-Ch is for ambiguous words.", "To confirm this, we looked at the extent to which the difference in performance between the two systems correlates with the percentage of unseen or ambiguous words in a language.", "As expected, this analysis suggests that including context in the model helps more for languages with more ambiguity ( R = 0 . 67 , p < 0 . 001 ).", "In contrast, Lematus 20-Ch provides less benefit over Lematus 0-Ch for the languages with more unseen words ( R = 0 . 75 , p < 0 . 0001 ).", "Again, we assume the latter result is due to the negative correlation between ambiguity and productivity.", "So far, our results and analysis show a clear relationship between productivity and ambiguity, and also suggest that using context for lemmatization may be unnecessary (or at least less beneficial) for languages with many unseen words but low am-1398 biguity.", "However, there are remaining differences between languages that are more difficult to explain.", "For example, one might expect that for languages with more training data, the system would learn better generalizations and lemmatization accuracy on unseen words would be higher.", "However, Figure 5 (right), which plots accuracy on unseen words in each language as a function of training data size, illustrates that there is no significant correlation between the two variables ( R = 0 . 32 , p = 0 . 
16 ).", "In some languages (e.g., Hungarian, in the top left) Lematus performs very well on unseen words even with little training data, while in others (e.g., Arabic, along the bottom) it performs poorly despite relatively large training data.", "We assume that regularity (and perhaps the nonconcatenative nature of Arabic) must be playing an important role here, but we leave for future work the question of how to operationalize and measure regularity in order to further test this hypothesis.", "We presented Lematus, a simple sequence-to-sequence neural model for lemmatization that uses character-level context.", "On average across 20 languages, we showed that even without using context, this model performs as well or better than three previous systems that treated lemmatization as an edit tree classification problem and required POS tags (Chrupaa et al., 2008; Muller et al., 2015) or word embeddings trained on a much larger corpus (Chakrabarty et al., 2017).", "We also showed that with both larger and smaller training datasets, including context boosts performance further by improving accuracy on both unseen and (especially) ambiguous words.", "Finally, our analysis suggests that lemmatization accuracy tends to be higher for languages with low productivity (as measured by the proportion of unseen words at test time), but more surprisingly also for languages with high ambiguityperhaps because high ambiguity is also associated with low productivity.", "We also found that the amount of training data available for each language is not a good predictor of performance on unseen words, suggesting that morphological regularity or other language-specific characteristics are playing an important role.", "Understanding the causes of these differences is likely to be important for further improving neural lemmatization.", "This work was supported in part by the James S McDonnell Foundation (Scholar Award #220020374)." ]
[ "abstain", "abstain", "abstain", "method", "result", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "method", "method", "result", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "other", "method", "method", "method", "other", "other", "other", "other", "other", "other", "method", "method", "other", "other", "other", "other", "abstain", "other", "other", "other", "abstain", "objective", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "other", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "other", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "result", "method", "result", "abstain", "abstain" ]
[ "We present a resource for the task of FrameNet semantic frame disambiguation of over 5,000 word-sentence pairs from the Wikipedia corpus.", "The annotations were collected using a novel crowdsourcing approach with multiple workers per sentence to capture inter-annotator disagreement .", "In contrast to the typical approach of attributing the best single frame to each word, we provide a list of frames with disagreement-based scores that express the confidence with which each frame applies to the word.", "This is based on the idea that inter-annotator disagreement is at least partly caused by ambiguity that is inherent to the text and frames.", "We have found many examples where the semantics of individual frames overlap sufficiently to make them acceptable alternatives for interpreting a sentence.", "We have argued that ignoring this ambiguity creates an overly arbitrary target for training and evaluating natural language processing systems if humans cannot agree, why would we expect the correct answer from a machine to be any different?", "To process this data we also utilized an expanded lemma-set provided by the Framester system, which merges FN with WordNet to enhance coverage.", "Our dataset includes annotations of 1,000 sentence-word pairs whose lemmas are not part of FN.", "Finally we present metrics for evaluating frame disambiguation systems that account for ambiguity.", "Crowdsourcing has been a popular method to collect corpora for a variety of natural language processing tasks (Snow et al., 2008), although one of its downsides is the crowd's lack of domain knowledge that is helpful in solving some tasks.", "Semantic frame disambiguation is an example of a complex natural language processing task that is usually performed by linguistic experts, subjected to strict annotation guidelines and quality control (Baker, 2012).", "The theory of frame semantics (J Fillmore, 1982) defines a frame as an abstract representation of a word sense, describing a type of entity, relation, or event, together with the associated roles implied by the frame.", "The FrameNet (FN) corpus (Baker et al., 1998) is a collection of semantic frames, together with a corpus of documents annotated with these frames.", "Similarly to word-sense disambiguation, frame disambiguation is the task of obtaining the correct frame for each word, since many words have multiple possible meanings.", "Using domain experts for frame disambiguation is expensive and time consuming, resulting in small corpora for this task that do not scale well for modern machine learning methods FN version 1.7, the latest one at the time of writing, contains only about 10,000 sentences annotated with frames.", "Furthermore, only using one expert to perform the annotation makes it difficult to capture any diversity of perspectives.", "There have been a number of small-scale attempts at using crowdsourcing for frame disambiguation in sentences, showing that the crowd has comparable performance to the FN domain experts (Hong and Baker, 2011), and that the crowd can be used to correct wrong examples that have been collected automatically (Pavlick et al., 2015).", "Crowd performance can be improved by combining frame role identification with disambiguation (Fossati et al., 2013), or by asking crowd workers to give each other feedback and then letting them change their answer (Chang et al., 2015).", "Crowdsourcing has also been useful to identify the ambiguity in frame disambiguation (Jurgens, 2013).", "Previously, we have shown (Dumitrache et 
al., 2018a) that while the crowd and the FN expert mostly agree over frame disambiguation, disagreement cases are often caused by ambiguity, such as vague or overlapping frame definitions, or incomplete information in the sentence.", "Because of these issues with the input data, the approach of selecting one single correct frame for every word, and ignoring alternative interpretations, often results in arbitrary, incomplete ground truth corpora.", "In order to aggregate annotated data while preserving disagreement, we use the CrowdTruth method (http://crowdtruth.org; Aroyo and Welty, 2014), which encourages using multiple crowd annotators to perform the same work, and processes the disagreement between them to signal low-quality workers, sentences, and frames.", "This paper presents a crowdsourced FN frame disambiguation corpus of 5,042 sentence-word pairs (which has since grown to over 9,000 since the submission of this paper).", "More than 1,000 of these are lexical units (LUs) not part of FN.", "To our knowledge, it is the largest corpus of this type outside of FN.", "In addition, we applied the CrowdTruth method, in which each sentence and lexical item is accompanied by a list of multiple frames with scores that express the confidence with which each frame applies to the word.", "This allows us to demonstrate that ambiguity is a prominent feature of frame disambiguation, with many cases where more than one possible frame can apply to the same word.", "Finally, we present an evaluation of several frame disambiguation models using evaluation metrics that leverage the multiple answers and their confidence scores, and show that even a model that always predicts the top crowd answer will not always have the best performance.", "Our corpus (available at https://github.com/CrowdTruth/FrameDisambiguation) consists of 5,042 candidate word-sentence pairs from Wikipedia (which has since grown to over 9,000 since the submission of this paper) and a candidate list of frames for the word, with 742 unique frames and 1,705 unique lexical units (LUs).", "The sentences have been randomly selected, based on these criteria: The candidate word has no more than 25 candidate frames, so as not to overwhelm the annotators.", "The part of speech of the word is a verb.", "The distribution of candidate frames was optimized for maximum diversity using a greedy approach.", "To gather the candidate frames for each word, we gathered the candidate frames associated with the LU from FN 1.7.", "Next we completed the candidate list using Framester (Gangemi et al., 2016), which maps FN semantic frames to synonym sets from WordNet (Miller, 1995).", "The sentences were processed with tokenization, sentence splitting, lemmatization and part-of-speech tagging.", "Then each word with a frame attached to it was matched with all of its possible synonym sets from WordNet, while making sure that the part-of-speech constraint of the synonym set is fulfilled.", "Using the WordNet mapping, we constructed the list of additional candidate frames for each word.", "Framester disambiguation used release 1.5 of FN, and some frames changed names in version 1.7, so we manually mapped these frames from Framester to their latest version.", "Framester disambiguation was also used to collect a subset of our corpus consisting of 1,000 sentence-word pairs with LUs that are not part of the FN corpus.", "For simplicity, we refer to the sentence-word pairs as sentences in the rest of the paper.", "We ran the task on Amazon Mechanical Turk (https://mturk.com/), where the workers were asked to select all frames that fit the sense of the highlighted
word in a sentence from the multiple-choice candidate list, or to indicate that none of the frames is correct.", "We used 15 workers per sentence, who were paid $0.05 for each judgment, for a total cost of $1.35 per sentence (after factoring in the additional AMT costs).", "To aggregate the results of the crowd while also capturing inter-annotator disagreement, we use the CrowdTruth metrics (https://github.com/CrowdTruth/CrowdTruth-core; Dumitrache et al., 2018b), replicating the setup from our previous work (Dumitrache et al., 2018a).", "The choices of frames of one worker over one sentence are aggregated into a worker vector: a binary vector with n + 1 components, where n is the number of frames shown together with the sentence, and where the decision to pick each of the frames (or none) corresponds to a component in the vector.", "The vectors are used to calculate quality scores for workers, sentences,", "and frames.", "Although we make all quality scores available as part of the corpus, in this paper we focus on: frame-sentence score (FSS): the degree to which a frame matches the sense of the word in the sentence.", "It is the ratio of workers that picked the frame to all the workers that read the sentence, weighted by the worker quality.", "A high FSS means the frame is clearly expressed in a sentence.", "sentence quality (SQS): the overall worker agreement over one sentence.", "It is the average cosine similarity over all worker vectors for one sentence, weighted by the worker quality and frame quality.", "A high SQS indicates a clear sentence.", "An analysis of the corpus found many examples of inter-annotator disagreement, of which a few examples are shown in Table 1.", "
For 720 sentences, a majority of the workers picked at least 2 frames (examples 1-3 in Table 1).", "And for 1,514 sentences, no single frame was picked by a majority of the workers (examples 4-7 in Table 1).", "Disagreement is also more prominent in the sentences where the LU is not a part of FN (Figure 1).", "The disagreement comes from a variety of causes: a parent-child relation between the frames (statement and communication in #3), an overlap in the definition of the frames (accomplishment and successful action in #2), the meaning of the word being expressed by a composition of frames (in #7, straightening of the knee is a combination of reshaping the form of the knee, arranging the knee in the right position, and body movement), and combinations of all of these reasons (in #4, slices is a combination of part piece and cause harm, and the other frames are their children).", "More example sentences for each type of disagreement are available in the appendix.", "The sentences themselves are not difficult to understand, and it can be argued that all of them have one frame that applies best for the word.", "The goal of this corpus is to show that next to this best frame for the word, there are other frames that apply to a lesser degree, or capture a different part of the meaning.", "When evaluating a model for frame disambiguation, it seems unfair to penalize misclassifications of frames that still apply to the word, but with less clarity, in the same way we would penalize a frame that captures a wrong meaning.", "Also, we argue that models should take into account that annotators do not agree over some examples, and treat them differently than clear expressions of frames.", "Disagreement can also be caused by worker mistakes (in #6, dimension refers to the size of the object, not the act of measuring the size).", "While we try to mitigate this by weighting confidence scores with the worker quality, the mistakes still appear in the corpus.", "This type of disagreement could be useful in future work to identify examples that workers need to be trained on.", "As an example usage of our corpus, we used it to evaluate these frame disambiguation models:", "1. OS: The Open-Sesame (Swayamdipta et al., 2017) classifier, pre-trained on the FN corpus (release 1.7).", "Given a word-sentence pair, OS uses a BiLSTM model with a softmax final layer to predict a single frame for the word.", "If the LU is not in FN, it cannot make a prediction.", "2. OS+: We modified the OS classifier to perform multi-label classification.", "To calculate the confidence score for a candidate frame f, we removed the softmax layer and passed the BiLSTM output logit for f through the following transformation: c(f) = [1 + tanh(logit(f))] / 2.", "This gave a score c(f) in [0, 1] expressing the confidence that frame f is expressed in the sentence.", "3.
FS: Framester includes a tool for rule-based multi-class multi-label frame disambiguation (Gangemi et al., 2016).", "While for the dataset pre-processing (Sec.", "2) we considered the frames for all synsets a word is part of, FS performs an additional word-sense disambiguation step to return a more precise list of frames.", "We used the tool with profile T as it was shown to have the best overall performance.", "FS can only predict FN frames from the 1.5 release, which is missing 202 frames from version 1.7.", "While OS+ produces confidence scores, the other methods produce binary labels for each frame-sentence pair.", "These models do not have state-of-the-art performance (Hermann et al., 2014; FitzGerald et al., 2015); we picked them because they were accessible and allowed testing on a novel corpus.", "Finally, we evaluate the quality of the TC corpus, containing only the top frame picked by the crowd for every sentence.", "This test shows what is the best possible performance over our corpus that can be expected from a system such as OS that selects a single frame per sentence.", "Instead of traditional evaluation metrics that require binary labels, we propose an evaluation methodology that is able to consider multiple candidate frames for each sentence and their quality scores.", "We use Kendall's τ list ranking coefficient (Kendall, 1938) and cosine similarity to calculate the distance between the list of frames produced by the crowd, labeled with the FSS, and the frames predicted by the baselines in each sentence.", "Whereas Kendall's τ only accounts for the ranking of the FSS for each frame, cosine similarity uses the actual FSS values in the calculation of the similarity.", "Both metrics compute a score per sentence (Kendall's τ in [-1, 1], and cosine similarity in [0, 1]).", "Using these metrics, we produce two aggregate statistics over our test corpus: (1) the area-under-curve (AUC) for each metric, normalized by the corpus size, and (2) the SQS-weighted average of each metric (w-avg), which also accounts for the ambiguity of the sentence as expressed by the SQS.", "We evaluate on two versions of the corpus: (1) the restricted set (R-SET) of 4,000 sentences with LUs from the FN corpus, and (2) the full set (F-SET) of 5,042 sentences.", "The results (Figure 2 & Table", "2) show that OS+ performs best out of all the models, even taking into account sentences with LUs not in FN for which OS+ cannot disambiguate.", "FS performs the worst out of all models on the R-SET, because it cannot find newly added frames from the latest FN release, but improves on the F-SET (FS can find candidate frames for LUs not in FN).", "The scores on the F-SET were lower for all baselines, suggesting that sentences with LUs not in FN are more difficult to classify; this could be because FN is missing frames that can express the full meaning of these LUs.", "TC has good performance, but is far from being unbeatable: when measuring Kendall's τ over the R-SET, OS+ performs better than TC.", "We described a FrameNet frame disambiguation resource of 5,042 sentence-word pairs, and 1,000 LUs that are new to FN, making it the largest corpus of this type outside of FN.", "Since the submission of this paper, the corpus has grown to over 9,000 sentence-word pairs.", "We also provide confidence scores for each candidate frame that are based on inter-worker disagreement.", "We made a case for this kind of disagreement reflecting genuine cases of ambiguity in FrameNet frames, caused by: child-parent relations between frames,
frames with overlapping definitions, or compositions of frames making up the meaning of a word.", "The evaluation method we proposed uses the scores for multiple frames, and is thus able to differentiate between frames that still apply to the word, but with less clarity, and frames that capture the wrong meaning.", "Our goal was to build a resource that recognizes different levels of ambiguity in the expression of the frames in the text, and allows a fairer evaluation of the performance of frame disambiguation systems.", "We would like to thank Luigi Asprino, Valentina Presutti and Aldo Gangemi for their assistance with using the Framester corpus, as well as their advice in better understanding the task of frame disambiguation.", "We would also like to thank the anonymous crowd workers for their contributions to our crowdsourcing tasks." ]
[ "method", "abstain", "objective", "abstain", "result", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "method", "abstain", "abstain", "method", "objective", "result", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "abstain", "objective", "objective", "other", "other" ]
[ "Language models that use additional latent structures (e.g., syntax trees, coreference chains, and knowledge graph links) provide several advantages over traditional language models.", "However, likelihood-based evaluation of these models is often intractable as it requires marginalizing over the latent space.", "Existing methods avoid this issue by using importance sampling.", "Although this approach has asymptotic guarantees, analysis is rarely conducted on the e ect of decisions such as sample size, granularity of sample aggregation, and the proposal distribution on the reported estimates.", "In this paper, we measure the e ect these factors have on perplexity estimates for three di erent latent language models.", "In addition, we elucidate subtle di erences in how importance sampling is applied, which can have substantial e ects on the final estimates, as well as provide theoretical results that reinforce the validity of importance sampling for evaluating latent language models.", "Latent language models are generative models of text that jointly represent the text and the latent structure underlying it, such as: the syntactic parse, coreference chains between entity mentions, or links of entities and relations mentioned in the text to an external knowledge graph.", "The benefits of modeling such structure include interpretability (Hayashi et al., 2020), better performance on tasks requiring structure (Dyer et al., 2016; Ji et al., 2017), and improved ability to generate consistent mentions of entities (Clark et al., 2018) and factually accurate text (Logan et al., 2019).", "Unfortunately, demonstrating that these models provide better performance than traditional language models by evaluating their likelihood on benchmark data can be di cult, as exact computation requires marginalizing over all possible latent structures.", "Existing approaches evaluate their models by estimating likelihoods using importance sampling, i.e. 
a weighted average over latent states sampled from a proposal distribution.", "Although convergence of importance sampled estimates is asymptotically guaranteed, results are typically produced using a small number of samples for which this guarantee does not necessarily apply.", "Furthermore, these works employ a variety of heuristics, such as sampling from proposal distributions that are conditioned on future gold tokens the model is being evaluated on, and changing the temperature of the proposal distribution, without providing measurements of the effect these decisions have on estimated perplexity, and often omitting details crucial to replicating their results.", "In this paper, we seek to fill in this missing knowledge, and put this practice on more rigorous footing.", "First, we review the theory of importance sampling, providing proof that importance sampled perplexity estimates are stochastic upper bounds of the true perplexity, a previously unnoted justification for this evaluation technique.", "In addition, we compile a list of common practices used in three previous works, RNNG (Dyer et al., 2016), EntityNLM (Ji et al., 2017) and KGLM (Logan et al., 2019), and uncover a difference in the granularity at which importance samples are aggregated in these works that has a substantial effect on the final estimates.", "We also investigate a direct marginalization alternative to importance sampling based on beam search that produces strict bounds, and in some cases, has similar performance.", "Last, we perform experiments to measure the effect of varying sample size, aggregation method, and choice of proposal distribution for these models, an analysis that is missing from previous work.", "From these results we conclude a set of best practices to be used in future work.", "In this section, we provide an overview of importance sampling-based inference in latent language", "models, as well as some key theoretical results.", "Latent LMs: A latent language model is a generative model which estimates the joint distribution p(x, z) of a sequence of text x = (x_1, ...
, x_T) and its underlying latent structure z.", "In this paper, we focus on three models: RNNG (Dyer et al., 2016) which models syntactic structure, EntityNLM (Ji et al., 2017) which models coreference chains, and KGLM (Logan et al., 2019) which models links to an external knowledge graph.", "Example latent states for EntityNLM and KGLM are depicted in Figure 1, showing latent coreference chains and links to the knowledge graph.", "Other notable latent language models include the NKLM (Ahn et al., 2016) and LRLM (Hayashi et al., 2020); we do not study them since they use alternatives to importance sampling (e.g., the forward-backward algorithm).", "Perplexity: The standard evaluation metric for language models is perplexity: PPL = exp(-(1/T) sum_{t=1}^{T} log p(x_t | x_{<t})), (1) where p(x_t | x_{<t}) is the marginal likelihood of the token x_t conditioned on the previous tokens x_{<t}.", "By the chain rule of probabilities, p(x) = prod_{t=1}^{T} p(x_t | x_{<t}).", "Perplexity can be intractable to compute for latent language models since it requires marginalizing out the latent variable (e.g., p(x) = sum_z p(x, z)) whose state space is often exponential in the length of the text.", "instead use importance sampling (Kahn, 1950) to estimate an approximate marginal probability: p̂(x) = (1/K) sum_{k=1}^{K} p(x, z_k) / q(z_k | x), with z_k sampled from q(z | x). (2)", "In other words, importance sampled estimates of a model's perplexity are stochastic upper bounds of the true perplexity.", "This property has not been stated in prior work on latent language modeling, yet is an important consideration since it implies that importance sampled perplexities can be reliably used to compare against existing baselines.", "Limiting Behavior: Another important observation is that importance sampled estimates of perplexity are consistent, i.e., they will converge as the number of samples approaches infinity.", "To prove this, we first observe that p̂(x) is consistent, which is a well-known consequence of the strong law of large numbers (Geweke, 1989).", "Accordingly, log p̂(x) is also consistent due to the continuous mapping theorem (Van der Vaart, 2000).", "Implementing importance sampling for evaluating latent language models involves a number of decisions that need to be made.", "We need to select the number of samples, choose the proposal distribution, and decide whether to aggregate importance sampled estimates at the instance or corpus level.", "We list the practices used in previous work.", "Sample Size: Typically, only 100 samples are used for computing the perplexity.", "A notable exception is Kim et al. (2019)'s follow-up to RNNG that uses 1000 samples.", "Proposal Distribution: Previous work uses proposal distributions q(z | x) that are essentially discriminative versions of the generative model (e.g., they are models that predict the latent state conditioned on the text), with one key distinction: they are conditioned not only on the sequence of tokens that have been observed so far, but also on future tokens that the model will be evaluated on (a trait we will refer to as peeking).", "This conditioning behavior does not contradict any of the assumptions in Eqns (3) and (4), and is useful in preventing generation of invalid structures (for instance, parse trees with more leaves than there are words in the text), or ones that are inconsistent with future tokens.", "Dyer et al. (2016) and Kim et al. (2019) also increase the entropy of the proposal distribution by dividing logits by a temperature parameter τ (respectively using τ = 1.25 and τ =
0).", "Aggregation An oft-overlooked fact (unnoted in previous work) is that Eqn (2) can be substituted into Eqn (1) in multiple ways.", "Letting x C = { x 1 , . . . x N } denote a corpus of evaluation data comprised of instances (token sequences) x n , estimates can be formed at the instance level : (cid:100) PPLI = exp 1 TN (cid:88) n = 1 log p ( x n ) , (5) or at the corpus level : (cid:100) PPLC = exp (cid:32) 1 T log p ( x C ) (cid:33) , (6) i.e., average is either over each instance or the whole corpus.", "2 RNNG and E ntity NLM perform instance-level aggregation, whereas KGLM performs corpus-level aggregation.", "Note that these 1 Based both on the cited papers and available source code.", "Thus far, research has neglected to measure the e ectiveness of the practices detailed in Section 3.", "In the following section, we perform experiments to determine whether reporting estimates obtained from small sample sizes is warranted, as well as better understand the consequences of peeking and scaling the temperature of the proposal distribution.", "Setup For our experiments, we use Kim et al. (2019)'s RNNG implementation 3 , and Logan et al. (2019)'s E ntity NLM and KGLM implementations 4 .", "For RNNG and KGLM we use the pre-3 https://github.com/harvardnlp/urnng 4 https://github.com/rloganiv/kglm-model trained model weights.", "For E ntity NLM we train the model from scratch following the procedure described by Ji et al. (2017); results may not be directly comparable due to di erences in data preprocessing and hyperparameters.", "We evaluate models on the datasets used in their original papers: RNNG is evaluated on the Penn Treebank corpus (Marcus et al., 1993), E ntity NLM is evaluated on English data from the CoNLL 2012 shared task (Pradhan et al., 2014), and KGLM is evaluated on the Linked WikiText-2 corpus (Logan et al., 2019).", "Experiments For E ntity NLM and KGLM, we experiment with two kinds of proposal distributions: (1) the standard peeking proposal distribution that conditions on future evaluation data, and (2) a non-peeking variant that is conditioned only on the data observed by the model (this is akin to estimating perplexity by ancestral sampling).", "For RNNG we only experiment with peeking proposals, since a non-peeking variant generates invalid parse trees.", "For the peeking proposal distribution, we experiment with applying temperatures [0 . 5 , 0 . 9 , 1 . 0 , 1 . 1 , 2 . 0 , 5 . 0].", "We report both corpus-level and instance-level estimates, as well as bounds produced using a direct, beam marginalization method we describe later.", "Sample Size We plot instance-level perplexity estimates as sample size is varied in Figures 2 and 3.", "We observe that the curves are monotonically decreasing in all settings.", "Consistent with our observation that importance sampled estimates of perplexity are a stochastic upper bound, this demonstrates that the bound is improved as sample size increases.", "Furthermore, none of the curves exhibit any signs of convergence even after drawing orders of magnitude more samples (Figure 3); the estimated model perplexities continue to improve.", "Thus, the performance of these models is likely better than the originally reported estimates.", "Aggregation Final estimates of perplexity computed using both corpusand instance-level estimates are provided in Table", "1. 
We note that instance-level estimates are uniformly lower by a wide margin.", "For example, using a temperature of τ = 1.1,", "the estimated KGLM perplexity is approximately 10 nats lower using instance-level estimates.", "This is substantially better than the perplexity of 43 nats reported by Logan et al. (2019).", "Proposal Distribution: These results also appear to indicate that choice of proposal distribution has a substantial effect on estimated perplexity.", "However,", "it could also be the case that the observed differences in performance across proposal distributions are due to random chance.", "We investigate whether this is the case for EntityNLM by examining the approximate density of perplexity estimates after drawing 100 importance samples (shown in Figure 4).", "Our results illustrate that the estimates are relatively stable; although there is some overlap between the better performing temperature values, the order of the modes matches the order reported in Table 1, and there is clear separation from the estimates produced when τ = 0.5", "or by the non-peeking proposal distribution.", "Due to the relative cost of sampling we did not replicate this experiment for RNNG and KGLM.", "(The densities were obtained by Monte Carlo sampling 100 times.)", "In general, we observe that the peeking proposal distributions produce better estimates, and that better performance is obtained using temperatures that slightly increase the entropy of the proposal distribution (e.g., τ in [1.1, 2.0]), although the ideal amount varies across models.", "We also observe that the relative performance of proposal distributions is mostly preserved as the number of samples is increased.", "This suggests that good temperature parameters can be quickly identified by running many experiments with a small number of samples.", "An alternative to importance sampling is to directly marginalize over a subset of z values where we expect p(x | z) is large.", "Specifically, we propose using the top-k most likely values of z identified by performing beam search using the proposal distribution q(z | x).", "We will refer to this as beam marginalization.", "Because marginalization is only performed over a subset of the space, this method produces a strict upper bound of the true perplexity.", "Perplexity bounds obtained using beam marginalization are reported in Table 2.", "
This method produces bounds close to the instance-level importance sampled estimates for RNNG, but does not perform well for the other models.", "This is likely due to the fact that the latent space of RNNG (which operates on sentences and parse trees) is much smaller than that of EntityNLM and KGLM (which operate on documents and coreference chains / knowledge graphs).", "Figure 4: Approximate density of EntityNLM perplexity estimates after drawing 100 importance samples (colors same as Figure 3). Best Practices: From these results we recommend the following practices for future work utilizing importance sampling: (1) aggregate importance samples at the instance level, (2) condition on all", "available information when designing proposals, (3) try increased temperatures when generating samples from the proposal distribution (good temperatures can be identified using relatively few samples), and (4) utilize as many samples as possible.", "In addition, consider using beam marginalization in applications where strict upper bounds are needed.", "We investigate the application of importance sampling to evaluating latent language models.", "Our contributions include: (1) showing that importance sampling produces stochastic upper bounds of perplexity, thereby justifying the use of such estimates for comparing language model performance, (2) a concise description of (sometimes unstated) common practices used in applying this technique, (3) a simple direct marginalization-based alternative to importance sampling, and (4) experimental results demonstrating the effect of sample size, sampling distribution, and granularity on estimates.", "While this work helps clarify and validate existing results, we also observe that none of the estimates appear to converge even after drawing large numbers of samples.", "Thus, we encourage future research into obtaining tighter bounds on latent LM perplexity, possibly by using more powerful proposal distributions that consider entire documents as context, or by considering methods such as annealed importance sampling.", "We would like to thank Alex Boyd for helpful discussions.", "This work was funded in part by the Allen Institute for Artificial Intelligence, the NSF award #IIS-1817183, and in part by the DARPA MCS program under contract No.", "N660011924033 with the United States Office of Naval Research." ]
[ "abstain", "abstain", "abstain", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "method", "result", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "other", "result", "result", "other", "other", "other" ]
[ "Despite the continuing efforts to improve the engagingness and consistency of chit-chat dialogue systems, the majority of current work simply focus on mimicking human-like responses, leaving understudied the aspects of modeling understanding between interlocutors.", "The research in cognitive science, instead, suggests that understanding is an essential signal for a high-quality chit-chat conversation.", "Motivated by this, we propose P 2 BOT , a transmitter-receiver based framework with the aim of explicitly modeling understanding.", "Specifically, P 2 BOT incorporates mutual persona perception to enhance the quality of personalized dialogue generation.", "Experiments on a large public dataset, PERSONA-CHAT , demonstrate the effectiveness of our approach, with a considerable boost over the state-of-the-art baselines across both automatic metrics and human evaluations.", "Thanks to the advance in neural models and the accessibility of massive datasets, open-domain dialogue (i.e. chit-chat) systems have made great progress towards mimicking human-like responses.", "Nevertheless, there still exist some serious challenges in building personalized chatbots that can deliver engaging conversations and gain user trust (Song et al., 2019).", "For example, current chit-chat systems tend to generate uninformative responses (Li et al., 2016b).", "Moreover, they are usually lack of coherent personality traits due to the fact that training dialogues actually come from a diverse set of speakers (Zhang et al., 2018b).", "Several attempts have been made to alleviate the above issues.", "Methods like special reward shaping to reduce generic responses (Li et al., 2016b) and representing the speakers with latent variables (Li et al., 2016a) were introduced to improve the engagingness of chit-chat systems.", "A more straightforward approach, which equips chit-chat systems with predefined personas, was proposed accompanied by a novel dataset, PERSONA-CHAT (Zhang et al., 2018b).", "Figure 1 shows a clipped dialogue from PERSONA-CHAT .", "Two interlocutors meet for the first time and are having a conversation in order to get to know each other.", "What makes PERSONACHAT unique is that personas of both interlocutors are explicitly described using several profile sentences, facilitating the training of chatbots with configurable and persistent personalities.", "PERSONA-CHAT has fueled a growing interest in developing methods for personalized dialogue I weight 300 pounds.", "generation.", "Mazare et al. (2018) incorporated additional data from Reddit to train the model.", "Wolf et al. (2019b) fine-tuned pretrained language model (Radford et al., 2018) to improve the dialogue generation.", "Although both works demonstrate promising results, they focus more on mimicking the style of human-like responses, leaving understudied the aspects of explicitly modeling understanding between interlocutors.", "Our work, instead, takes the perspective of understanding modeling.", "According to the research in cognitive science, effective communication creates similar activation maps in the brains of both interlocutors (Hasson et al., 2012), suggesting that understanding between interlocutors is an essential signal for a high-quality chit-chat conversation.", "For instance, in the conversation shown in Figure 1, the two interlocutors foster understanding either by raising persona-related topics, Seen any good movies lately? , or by revealing their own personas through answering questions, I don't watch movies more of a writer. 
.", "The efforts to build understanding keep the conversation flowing.", "Taking into account the above, we propose Persona Perception Bot ( P 2 BOT ), explicitly modeling the understanding between interlocutors with a transmitter-receiver framework.", "Distinguished from traditional methods, P 2 BOT highlights a novel concept, mutual persona perception , which is better suited to describe the information exchange process that empowers the interlocutors to get to know each other.", "In order to train P 2 BOT for personalized dialogue generation, we employ supervised training and self-play fine-tuning piloted by reward signals characterizing mutual persona perception.", "Experiments on the PERSONACHAT dataset demonstrate the superiority of our approach over the baselines in both automatic metrics and human evaluations 1 .", "The central idea of P 2 BOT is to explicitly model understanding between interlocutors and enhance dialogue generation via mutual persona perception.", "It comprises two components, Transmitter and Receiver , respectively responsible for dialogue generation and mutual persona perception.", "Figure 2 gives an overview of P 2 BOT : interlocutor A has a persona w A , described with L profile sentences { w A 1 , , w A L } .", "When she first meets the other interlocutor B , they are going to know each other through a N -turn dialogue ( x A 1 , x B 1 , , x A N , x B N ) , where x A n denotes the utterance that A says in n th turn and N denotes the number of total turns.", "Given the entire dialogue history up to n -th turn h A n = ( x A 1 , , x B n 1 ) , Transmitter generates x A n according to the distribution p ( x A n | w A , h A n ) , and transmits it to B .", "The same process applies to B , keeping the conversation flowing.", "As the conversation goes on, impressions are gradually built via utterances.", "For example, when A says I don't watch movies more of a writer. , the impression that A is a writer. is left on B 's mind.", "As mentioned above, a successful conversation helps interlocutors know each other, which means B 's impression of A should correspond to A 's persona and vice versa.", "Receiver aims to measure the proximity between the built impressions and the actual personas.", "Specifically, as demonstrated by the dashed black lines in Figure 2, Receiver first projects impressions and personas into a latent space, and then measures the relevance between them based on the impression encoding (e.g. HA , B 's impression on A , projected from A 's 1 Our code is available at https://github.com/ SivilTaram/Persona-Dialogue-Generation Transmitter Block Block Block Block Block [PS] I [SOS] .", "utterances x A ), and persona encoding (e.g. WA , projected from A 's persona w A ) 2 .", "The relevance scores serve as mutual persona perception rewards, and are further incorporated into the training of Transmitter.", "Details of the two components are presented in Section 3 and 4.", "Following previous work (Li et al., 2016b; Zhang et al., 2018b), we treat dialogue generation as a sequence generation problem.", "Concretely, we employ the pretraining transformer language model introduced in Radford et al. (2018) (i.e. 
GPT) to initialize Transmitter.", "The entire training procedure consists of two steps: (1) Supervised Dialogue Generation.", "We optimize Transmitter via maximum likelihood estimation (MLE) on the supervised dialogue generation task.", "(2) Self-play Model Fine-tuning.", "We simulate dialogues between two randomly paired interlocutors, encouraging Transmitter to learn a policy that maximizes reward signals via reinforcement learning (RL) (Sutton et al., 1999).", "The design of the reward function considers both language modeling and our proposed mutual persona perception.", "As illustrated in Figure 3, Transmitter follows the overall architecture of 12 stacked transformer layers to encode the context and generate the response.", "Here, the context contains the persona $w^A$, the dialogue [Footnote 2: we take A as an example, and everything is similar for B.]", "history $h^A_n$, and several special tokens (e.g., [PS], which indicates the start of the persona).", "Given a training instance $(w^A, h^A_n, x^A_n)$, the training objective of MLE is to maximize the conditional log-likelihood: $\mathcal{L}_{mle} = \sum_t \log p(x^A_{n,t} \mid w^A, h^A_n, x^A_{n,<t})$, (1) where $\theta$ is the parameter of Transmitter.", "$x^A_{n,t}$ means the $t$-th token in $x^A_n$, and $x^A_{n,<t}$ indicates the token sequence before the $t$-th token.", "Equation 1, hereafter simplified as $\log p(x^A_n \mid w^A, h^A_n)$, applies to both A and B; we mention A for the sake of brevity (the same as below).", "During inference, beam search is applied to store top-ranked response candidates $\{x^A_n\}$, and Transmitter subsequently chooses as its prediction the one that maximizes the length-normalized score: $x^A_n = \arg\max_{x^A_n} \frac{\log p(x^A_n \mid w^A, h^A_n)}{|x^A_n|}$. (2)", "Besides the sequence generation task, inspired by Wolf et al. (2019b), we set up an auxiliary task, Next Utterance Prediction.", "Apart from training Transmitter to generate responses, we also train it to discriminate whether a response is the next utterance of the given context.", "Concretely, we append a special token [CLS] to the tail of the generated tokens.", "A classifier is built on top of the token's hidden state in the last transformer layer, as indicated by the red rounded rectangle in Figure 3.", "In training, for each response, we randomly sample a distractor and train the classifier to give a higher score to the response than to the distractor.", "In inference, the classifier is used to rank response candidates together with Equation 2.", "Denoting by $y_n = 1$ the signal indicating that the generated response $x^A_n$ is predicted as the next utterance, Equation 2 is extended as: $x^A_n = \arg\max_{x^A_n} \big( \alpha \frac{\log p(x^A_n \mid w^A, h^A_n)}{|x^A_n|} + (1 - \alpha) \log p(y_n = 1 \mid w^A, h^A_n, x^A_n) \big)$, (3) where $\alpha$ is a hyper-parameter.", "Although supervised dialogue generation alone can be used to mimic human-like responses, it does not inherently target understanding.", "Therefore, we [Figure 4: the illustration of the self-play procedure.]", "further fine-tune Transmitter using reinforcement learning with the goal of maximizing mutual persona perception.", "Analogous to Lewis et al. 
(2017), we apply self-play to simulate the communication between two Transmitters, both of which have been trained as described in Section 3.1.", "Specifically, we have the two Transmitters communicate with each other for several turns.", "One Transmitter serves as a user with its parameters frozen, while the other is a learnable agent.", "The parameter of the learnable agent, $\theta$, is fine-tuned during the self-play.", "Without loss of generality, in our experiments, we let interlocutor A, who starts a conversation, be the user, and correspondingly B be the learnable agent.", "Here we introduce some necessary formulations for modeling our problem with reinforcement learning.", "A state contains the persona and the dialogue history.", "For example, the state for B at turn $n$ is defined as $s^B_n = \{w^B, h^B_n\}$.", "An action $a^B_n$ is the response to be generated.", "The action space is infinitely large, as the response can be arbitrarily long.", "Taking $s^B_n$ as input, the parameter $\theta$ defines a policy $p_\theta(a^B_n \mid s^B_n)$, through which the learnable agent generates its response.", "As illustrated in Figure 4, when it is B's turn to speak, B receives $s^B_n$ and picks $a^B_n$ according to the policy $p_\theta$.", "As for A, it receives $s^A_n$ and generates the response $x^A_n$ to simulate a user.", "A and B alternately produce responses until the number of turns exceeds the given limit.", "Once a complete dialogue is generated, the reward is collected to optimize $\theta$ using policy gradient (Sutton et al., 1999).", "Denoting by $R(a^B_n)$ the reward B gets at turn $n$ (more details are provided later), we can optimize $\theta$ by maximizing the following objective: $\mathcal{L}_{rl} = \mathbb{E}_{a^B_n \sim p_\theta(a^B_n \mid s^B_n)}[R(a^B_n)]$. (4)", "Applying the likelihood ratio trick, $\theta$ is updated by ascending the following gradient: $\nabla_\theta \mathcal{L}_{rl} = \mathbb{E}_{a^B_n \sim p_\theta(a^B_n \mid s^B_n)}[R(a^B_n)\, \nabla_\theta \log p_\theta(a^B_n \mid s^B_n)]$. (5)", "As aforementioned, the space of actions $a^B_n$ is infinite.", "In practice, the REINFORCE algorithm (Williams, 1992) is leveraged to approximate Equation 5 by sampling $a^B_n$ from the policy $p_\theta(a^B_n \mid s^B_n)$.", "Furthermore, subtracting a baseline (Weaver and Tao, 2001), here the mean reward of a mini-batch, is applied to $R(a^B_n)$ to reduce variance.", "The agent samples tokens one by one through multinomial sampling over the output distribution of B, until the special token [EOS] is sampled or the maximum allowed decoding step (e.g., 32) is exceeded.", "Compared to beam search sampling, multinomial sampling provides more diversity.", "As described in Section 1, we believe that a high-quality chit-chat conversation should highlight both human language modeling and mutual persona perception.", "Bearing this in mind, we design three rewards to address language style, discourse coherence, and mutual persona perception, respectively.", "RS.1", "Language Style: The generated responses should conform to human language styles, which we believe can be evaluated by a pretrained language model (i.e. 
GPT).", "After length normalization, the score for a B n is given as: R 1 ( a B n ) = 1 | a B n | (cid:88) t log p lm ( a B n,t | a B n,<t ) , (6) where a B n,t and a B n,<t have similar denotation as the previously mentioned x A n,t and x A n,<t .", "RS.2", "Discourse Coherence The language score is evaluated individually, without considering the discourse coherence.", "However, a reasonable response should establish links in meaning with context, which is also an important aspect of humanlike responses.", "To take into account the discourse coherence, we employ the well-trained Next Utterance Predictor (mentioned in Section 3.1).", "The reward is given by the log probability of a B n being the next utterance of s B n : R 2 ( a B n ) = log p ( y n = 1 | a B n , s B n ) .", "RS.3", "Mutual Persona Perception RS.1 and RS.2 only steer the agent training process towards human-like responding.", "They do not explicitly encourage understanding between interlocutors.", "Therefore, we meticulously design the reward to characterize mutual persona perception.", "Contrast from RS.1 and RS.2, mutual persona perception is a long-term goal throughout the whole dialogue, meaning that the effect of current action might only play out some time later.", "For instance, receiving what are your hobbies? from B , it is highly likely that A 's response is relevant to A 's hobbies.", "This suggests that, not only A 's response but also B 's initial question contributes to mutual persona perception.", "Denoting as the discount factor indicating how far ahead B looks, the reward of mutual persona perception for a B n is defined as: R 3 ( a B n )= r ( a B n )+ N (cid:88) k = n +1 (cid:16) 2( k n ) 1 r ( x A k ) + 2( k n ) r ( a B k ) (cid:17) , (8) where r ( a B n ) is the persona perception score that B obtains in n -th turn, and r ( x A k ) is defined likewise.", "r ( a B n ) can be computed using a score function: r ( a B n ) = score ( a B n , w B ) .", "In P 2 BOT , the score function comes from Receiver, which will be elaborated in Section 4.", "The final reward R ( a B n ) for a B n is a weighted sum of the rewards listed above: R = 1 R 1 + 2 R 2 + 3 R 3 , (10) where 1 , 2 and 3 are hyper-parameters.", "Receiver is devised to measure the proximity between the built impressions and the actual personas, implemented by negative sampling.", "Specifically, in training, we randomly sample a persona distractor w Z .", "Receiver is trained to identify the real persona w A from { w A , w Z } .", "In inference, for each utterance, Receiver is responsible for providing a reasonable relevance score, to model our proposed mutual persona perception.", "The score subsequently joins the self-play fine-tuning on Transmitter as part of the rewards, as in Equation 8.", "As illustrated in Figure 5, Receiver contains two different encoders for impression and persona respectively.", "Initialized by BERT (Devlin et al., 2019), both encoders provide deep contextualized representations for each token.", "Then we average all the representations, yielding a fixed d -dimensional vector for one sentence.", "In this way, feeding ( x A 1 , x A 2 , , x A N ) into the impression encoder consecutively, we obtain the impression encoding HA RN d .", "The persona encoding W RL d is produced likewise, where {A , Z} .", "The relevance score matrix U is computed via the scaled dot product (Vaswani et al., 2017): U = HA ( W ) (cid:62) d , RN L .", "In essence, Receiver is expected to capture fine-grained correlations between the persona and the 
dialogue.", "However, we do not have access to the golden fine-grained correlations.", "The only thing we know is that, compared with WZ , HA is more correlated to WA .", "Since the comparison is at a coarse granularity, we gather U into the cumulative score c through an aggregate function Agg , as shown in Figure 5.", "To encourage c A while at the same time depress c Z , we design a marginal loss L rec , which makes c A larger than c Z by a margin m .", "Moreover, considering that an utterance generally relates to zero or one profile, L 1 regularization is enforced to make U sparse.", "Combining all of these, the training loss for Receiver is: L rec = max(0 , m + c Z c A ) + | U | 1 , (12) where is a hyper-parameter for penalty.", "As for Agg , one straightforward way is to average over all positions of U .", "However, it maximizes every entry in UA , including all those that Category Model Original Revised Hits@1(%) ppl F1(%) Hits@1(%) ppl F1(%) Retrieval KV Profile Memory 54 .", "should not be activated (e.g. relevance scores between unrelated profile sentences and utterances), introducing unnecessary noise into the training of Transmitter.", "To alleviate the problem, we choose to implement Agg as a controllable weighted function, which summarizes U n, : as: Agg ( U n, : ) = (cid:80) Lk =1 exp( U n,k / ) U n,k (cid:80) Lk =1 exp( U n,k / ) , (13) where temperature > 0 is a tunable parameter (Hinton et al., 2015) controlling the evolution of Agg .", "In the beginning, Agg behaves close to average pooling.", "As anneals, Agg gradually focuses more on the highest relevance score.", "In this way, noise reduces as training goes on.", "Finally, c is given by: c = 1 NN (cid:88) n =1 Agg ( U n, : ) .", "Given x A n and w A , Receiver employs the following function to obtain x A n 's persona perception score, further modeling mutual persona perception as in Equation 9:", "We conducted experiments on the dataset PERSONA-CHAT , assessing P 2 BOT using both automatic metrics and human evaluations.", "To verify the effectiveness of our proposed mutual persona perception, we perform a thorough model analysis in Section 5.3.", "Finally, we probe Receiver's capability on perceiving persona in Section 5.4.", "PERSONA-CHAT dataset contains 8,939 / 1,000 multi-turn dialogues conditioned on 1,155 / 100 personas for train / dev.", "Each persona is described with at least 5 profile sentences.", "To make it more challenging, PERSONA-CHAT also provides revised personas by rephrasing, generalizing or specializing the original ones.", "For example, I am over-weight. is revised from I weight 300 pounds. 
.", "Our implementation was based on PyTorch (Paszke et al., 2019), ParlAI (Miller et al., 2017), and HuggingFace's transformers library (Wolf et al., 2019a).", "We used Adam (Kingma and Ba, 2015) optimizer with a learning rate of 6.25e-5 for both Receiver and Transmitter in supervised learning.", "In the training of Receiver, reduced linearly from 10 to 0.5.", "In the self-play phase of Transmitter, the learning rate was set as 1e-6.", "The hyper-parameters m , , , , 1 , 2 and 3 were set as 0.4, 0.1, 1e-4, 0.5, 0.4, 0.1 and 0.5 respectively.", "The supervised training of Transmitter lasted for 2 epochs, and the self-play fine-tuning comprised 2000 dialogues, where the number of turns was 3.", "The beam search size was set as 2.", "Our baselines fall into three categories: retrieval-based, generative-based and pretrain-finetune-based models.", "Among the retrieval-based baselines, KV Profile Memory (Zhang et al., 2018b) was the official baseline which employed the memory network along with profile information, and Model 1(%) 2(%) 3(%) 4(%) Avg Lost In Conversation 26 .", "Dually Interactive Matching Network (Gu et al., 2019) proposed a dual matching architecture to match between the responses and their corresponding contexts.", "Language Model , Generative Profile Memory (Zhang et al., 2018b) and SEQ 2S EQ with attention mechanism (Bahdanau et al., 2015) were implemented as generative baselines for dialogue generation.", "The remaining methods were all pretrain-finetune-based.", "Transfertransfo (Wolf et al., 2019b) 3 achieved the state-of-the-art performance on automatic metrics, while Lost In Conversation 4 topped the human evaluations (Dinan et al., 2019).", "Analogous to our approach, they employed the pretrained language model GPT to initialize their models, and then fine-tuned it on the dataset.", "Table 1 shows the experimental results on automatic metrics.", "Following Zhang et al. (2018b), we reported the official automatic metrics to evaluate the methods: Hits@1 , Perplexity (ppl) and F1 .", "Given 20 response candidates, Hits@1 is the probability that the real response ranks the highest according to the model.", "Perplexity measures the negative log likelihood of the correct sequence output by the model, lower values indicating better performance.", "F1 is the harmonic mean of word-level precision and recall.", "As observed, our approach outperforms almost all baselines and achieves new state-of-the-art performance on ppl and F1, with highly competitive performance on Hits@1.", "In the revised mode, our approach still achieves the best performance, obtaining a relative improvement of 13 .", "4% on F1 against the strongest baseline.", "It is worth noting that we also tried to employ F1 as the reward, but the result is far from satisfactory.", "As mentioned in Dinan et al. (2019), no automatic metric is perfect for evaluating such an open-domain task.", "Hence, we also performed crowd-sourced human evaluations on the state-of-the-art baselines (i.e. 
Transfertransfo & Lost In Conversation) and our proposed P 2 BOT .", "Concretely, on the original dev set, we randomly sampled 200 responses generated by these methods and asked each worker to rate them.", "The rating ranges from 1 3 http://github.com/huggingface/transfer-learning-conv-ai 4 http://github.com/atselousov/transformer chatbot Variant Hits@1(%) F1(%) BLEU(%) P 2 BOT-S 68 .", "to 4 .", "1 means the response is good only in terms of grammar and sentence structure; 2 means in addition to valid grammar, the response is also coherent with the context; 3 means the coherent response is meanwhile interesting and informative, instead of just a simple response like Yes; And 4 means the response is consistent with the persona of the interlocutor, which is of extreme importance for the task of reflecting whether the model can effectively utilize the persona information.", "As shown in Table 2, the results are consistent with the automatic evaluation results, demonstrating the superiority of P 2 BOT against the baselines.", "We also conducted Wilcoxon signed-rank tests between our method and the baselines and the results show the improvements are significant with p < 0 .", "05 .", "Variant Analysis We conducted variant analysis on P 2 BOT to investigate the influence of RS.1, RS.2 and RS.3.", "Another metric BLEU (Papineni et al., 2002), which evaluates the quality of response, was introduced to make the analysis more comprehensive.", "We show the variant analysis results in Table 3, where P 2 BOT-S is the variant of P 2 BOT which is trained only in the supervised setting.", "As expected, the results on Hits@1 validate the important role of the auxiliary task.", "Across all the variants, the gains in BLEU and F1 are very small, revealing the difficulty in improving them.", "Nevertheless, solely by adding RS.3, we obtained a 25% relative improvement on BLEU, indicating the effectiveness of our proposed mutual persona PERSONA i.", "perception.", "Similar conclusions can be drawn from the trend of F1.", "Case Study For a more comprehensive comparison, we show in Table 4 some randomly sampled responses of different methods.", "The results suggest the responses generated by our approach are more human-like.", "As observed, benefiting from our proposed mutual persona perception, the responses of P 2 BOT are more consistent, engaging and informative.", "For instance, in the last example in Table 4, the response I'm busy with my robot project explicates why the speaker does not exercise, meanwhile revealing that he is working on the robot, as depicted in his persona.", "Error Analysis Though our approach works well in most cases, we observed that the self-play simulation might fall into repeated cycles after rounds of training, as the challenge mentioned by Li et al. (2016b).", "Another issue is that the bots sometimes ask redundant questions in our approach, which might be due to inappropriate hyper-parameters in reward shaping.", "Receiver plays an important role in our approach, and we are interested in its capability on perceiving persona.", "Therefore, we conducted experiI enjoy death metal I volunteer at the local pool in India , that is where I'm from I'm learning about computers It is very basic but helpful I find myself to be I'm a student I listen to punk I love being in water I'm not from the U.S. 
[Figure 6: visualization of the relevance scores between a sampled dialogue (e.g., "I volunteer at the local pool") and its corresponding revised persona (e.g., "I love being in water").]", "experiments on a synthesized dataset.", "We constructed the dataset by sampling 31 persona distractors for each dialogue in PERSONA-CHAT.", "Two widely used ranking metrics were used to evaluate the performance: Hits@1 and Mean Reciprocal Rank (MRR).", "Hits@1 is the same metric as the one mentioned in Section 5.2, except that the candidate size is 32.", "Given a dialogue and the complete set of profile sentences, MRR is the average reciprocal rank of the dialogue-relevant profile sentences.", "Two simple baselines, Random and IR (Sordoni et al., 2015), were chosen for comparison.", "Table 5 shows the experimental results of the different methods on the synthesized dataset.", "As observed, our approach achieved excellent results in both the original and revised modes.", "For example, compared with the IR baseline, our approach achieved an absolute improvement of 26.3%", "on Hits@1 in the original mode.", "In addition, the surprising results in the revised mode further demonstrate Receiver's capability to perceive rephrased personas.", "We visualize the relevance scores between a sampled dialogue and its corresponding revised persona in Figure 6.", "As illustrated, the relevance scores between related profile sentences and dialogue utterances are significantly higher.", "For example, the utterance "I volunteer at the local pool" from the interlocutor implies the profile "I love being in the water", and our Receiver successfully captures the relevance between them.", "Methods to build open-domain dialogue systems generally fall into two major categories: retrieval-based and generative-based.", "Retrieval-based methods retrieve response candidates and rank them based on their matching scores with the dialogue (Sordoni et al., 2015; Wu et al., 2017; Gu et al., 2019).", "Generative-based methods typically use the SEQ2SEQ model as the backbone (Sutskever et al., 2014; Bahdanau et al., 2015; Serban et al., 2017; Wolf et al., 2019b), where the encoder extracts the information in an utterance and the decoder generates the response.", "Our work adopts a similar architecture.", "Besides supervised learning, researchers have also explored reinforcement learning based methods.", "Lewis et al. (2017) applied reinforcement learning to negotiation dialogues and showed it outperforms supervised learning when negotiating with humans.", "Yang et al. (2018) proposed to generate dialogue responses by dual-learning based domain adaptation.", "Zhang et al. (2018a) built a coherence model to provide the reward signal for penalizing dull responses.", "Liu et al. (2019) employed reinforcement learning to learn an intermediate structure span.", "Our approach differs from this line of work in that we focus on improving personalized dialogues via mutual persona perception, which has not been explored before.", "More recently, under the topic of dialogue personalization, Zemlyanskiy and Sha (2018) proposed a post-processing method to re-rank candidates generated by beam search, while Olabiyi et al. (2019) employed adversarial approaches to solve the consistency problem of interlocutors' names.", "Madotto et al. (2019) applied meta-learning to quickly adapt to new speakers, and Tigunova et al. 
(2019) extracted user attributes from daily dialogues.", "Compared with them, our work enhances persona-based dialogue generation from a novel perspective.", "Furthermore, researchers have explored generating diverse responses conditioned on personas (Song et al., 2019, 2020).", "Personalization in goal-oriented dialogue systems has also received some attention (Joshi et al., 2017; Luo et al., 2019).", "These studies focus more on making goal-oriented bots adjust the response according to different user profiles, while we aim to endow bots with persistent personalities.", "We propose P2BOT, a transmitter-receiver framework which explicitly models understanding between interlocutors.", "Under this framework, mutual persona perception is incorporated as a reward signal to achieve personalized dialogue generation.", "Experiments on a large public dataset, PERSONA-CHAT, demonstrate the effectiveness of our approach.", "For future work, we would like to extend Receiver to conversational recommender systems.", "After turns of chatting, the agent should be able to infer the user's persona, based on which personalized content can be recommended.", "We thank all the anonymous reviewers for their valuable comments.", "This work was supported in part by the National Natural Science Foundation of China (U1736217 and 61932003) and the National Key R&D Program of China (2019YFF0302902)." ]
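The reward shaping of Eqs. 6-10 above lends itself to a compact sketch. This is an illustrative reading, not the released implementation: the per-turn Receiver scores are assumed to be precomputed, and the default weights follow the hyper-parameters reported in Section 5.1 (gamma = 0.1, lambda_1/lambda_2/lambda_3 = 0.4/0.1/0.5).

```python
def mutual_persona_rewards(r_agent, r_user, gamma=0.1):
    """Discounted mutual persona perception reward R3 per turn (cf. Eq. 8).

    r_agent[n] is the Receiver score r(a^B_n) for the agent's utterance at
    turn n; r_user[n] is r(x^A_n) for the user's utterance at turn n.
    """
    N = len(r_agent)
    rewards = []
    for n in range(N):
        r3 = r_agent[n]
        for k in range(n + 1, N):
            r3 += gamma ** (2 * (k - n) - 1) * r_user[k]
            r3 += gamma ** (2 * (k - n)) * r_agent[k]
        rewards.append(r3)
    return rewards

def total_reward(r1, r2, r3, lambdas=(0.4, 0.1, 0.5)):
    """Weighted sum of language style, coherence, and persona rewards (Eq. 10)."""
    l1, l2, l3 = lambdas
    return l1 * r1 + l2 * r2 + l3 * r3
```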
[ "abstain", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "objective", "abstain", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "result", "abstain", "result", "abstain", "method", "method", "other", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "method", "other", "other", "other", "other", "other", "objective", "other", "other", "objective", "other", "other", "objective", "objective", "abstain", "objective", "objective", "abstain", "other", "other" ]
[ "Hierarchical multi-label text classification (HMTC) aims to tag each document with a set of classes from a class hierarchy.", "Most existing HMTC methods train classifiers using massive human-labeled documents, which are often too costly to obtain in real-world applications.", "In this paper, we explore to conduct HMTC based on only class surface names as supervision signals.", "We observe that to perform HMTC, human experts typically first pinpoint a few most essential classes for the document as its core classes, and then check core classes' ancestor classes to ensure the coverage.", "To mimic human experts, we propose a novel HMTC framework, named TaxoClass.", "Specifi-cally, TaxoClass (1) calculates document-class similarities using a textual entailment model, (2) identifies a document's core classes and utilizes confident core classes to train a taxonomy-enhanced classifier, and (3) generalizes the classifier via multi-label self-training.", "Our experiments on two challenging datasets show TaxoClass can achieve around 0.71 Example-F1 using only class names, outperforming the best previous method by 25%.", "Hierarchical multi-label text classification (HMTC) aims to assign each text document to a set of relevant classes from a class taxonomy.", "As a fundamental task in NLP, HMTC has many applications such as product categorization (Goumy and Mejri, 2018), semantic indexing (Li et al., 2019), and fine-grained entity typing (Xu and Barbosa, 2018).", "Most existing methods address HMTC in a supervised fashion they first ask humans to provide many labeled documents and then train a text classifier for prediction.", "Many classifiers have been developed with different deep learning architectures such as CNN (Kim, 2014), RNN (You et al., 2019), Attention Network (Huang et al., 2019), and achieved decent performance when trained on massive human-labeled documents.", "Despite such a Document : When our son was about 4 months old, our doctor said we could give him crafted cereal.", "success, people find that applying these methods to many real-world scenarios remains challenging as the human labeling process is often too time-consuming and expensive.", "Recently, more studies have been developed to address text classification using smaller amount of labeled data.", "First, several semi-supervised methods (Gururangan et al., 2019; Berthelot et al., 2019) propose to use abundant unlabeled documents to assist model training on labeled dataset.", "Although mitigating the human annotation burden, these methods still require a labeled dataset that covers all classes, which could be too expensive to obtain when we have a large number of classes in HMTC.", "Second, some weakly-supervised models exploit class indicative keywords (Meng et al., 2018; Zeng et al., 2019; Mekala and Shang, 2020) or class surface names (Meng et al., 2020; Wang et al., 2020) to derive pseudo-labeled data for model training.", "Nevertheless, these models all assume each document has only one class and all class surface names (or class indicative keywords) must appear in the corpus, which are too restrictive for HMTC.", "In this paper, we study the problem of weakly-supervised hierarchical multi-label text classification where only class surface names, a class taxonomy, and an unlabeled corpus are available for model training.", "This setting is closer to how humans resolve the HMTC problem we perform classification by understanding each class from its surface name rather than learning from labeled documents.", "We observe 
that when asked to assign multiple classes to a document, humans will first pinpoint the most essential core classes and then check whether their ancestor classes in the taxonomy should also be tagged.", "Taking the document in Fig. 1 as an example, humans can quickly identify that this review text is clearly about "baby cereal" and "crafted cereal", which are the core classes.", "After assigning these two most essential classes to the document, people continue to check the core classes' ancestor classes and find that "feeding" as well as "baby food" should be tagged.", "Motivated by the above human labeling process, we propose TaxoClass, a weakly-supervised HMTC framework including four major steps.", "First, we calculate the document-class similarity using a pre-trained textual entailment model (Yin et al., 2019).", "Second, we identify each document's core classes by (1) selecting candidate core classes that are most similar to the document at each level in a top-down fashion, and (2) choosing ⟨document, candidate core class⟩ pairs that are salient across the whole unlabeled corpus.", "Third, we derive training data from the document core classes and use them to train a text classifier.", "This classifier includes a document encoder based on pre-trained BERT (Devlin et al., 2019), a class encoder capturing the class taxonomy structure, and a text matching network computing the probability of a document being tagged with each class.", "Finally, we generalize this text classifier using multi-label self-training on all unlabeled documents.", "Contributions.", "To summarize, our major contributions are as follows: (1) We propose a weakly-supervised framework, TaxoClass, that only requires class surface names to perform hierarchical multi-label text classification.", "To the best of our knowledge, TaxoClass is the first weakly-supervised HMTC method.", "(2) We develop an unsupervised method to identify document core classes, based on which a text classifier can be learned.", "(3) We conduct extensive experiments to verify the effectiveness of TaxoClass on two real-world datasets.", "In this section, we introduce the notations and present our task definition.", "Notations.", "A corpus $\mathcal{D} = \{D_1, \ldots, D_N\}$ is a text collection where each document $D_i \in \mathcal{D}$ is a sequence of words.", "A class taxonomy $\mathcal{T} = (\mathcal{C}, \mathcal{R})$ is a directed acyclic graph where each node represents a class $c_j$ and each directed edge $\langle c_m, c_n \rangle \in \mathcal{R}$ indicates that the parent class $c_m$ is more general than the child class $c_n$.", "In this work, we assume each class $c_j$ has a surface name $s_j$ (either a word or a phrase) that serves as the weak supervision signal.", "Task Definition.", "Given an unlabeled corpus $\mathcal{D}$, a class hierarchy $\mathcal{T} = (\mathcal{C}, \mathcal{R})$, and class surface names $S = \{s_j\}_{j=1}^{|\mathcal{C}|}$, our task is to learn a text classifier $f(\cdot)$ that maps a new document $D_{new}$ to its target $y = [y_1, \ldots, y_{|\mathcal{C}|}] \in \mathcal{Y} = \{0, 1\}^{|\mathcal{C}|}$, where $y_j$ equals 1 if this document is categorized with class $c_j$ and 0 otherwise.", "Discussion.", "When the number of classes $|\mathcal{C}|$ is large (as it is in many HMTC applications), we can no longer assume all class surface names in $S$ will explicitly appear in the given corpus $\mathcal{D}$, as done in most previous studies (Meng et al., 2019; Li et al., 2019; Wang et al., 2020).", "This is because many class names are actually summarizing phrases provided by humans (e.g., "grocery & gourmet food" in Fig. 
1).", "As a result, we need to design a method that works under such a scenario.", "Our TaxoClass framework consists of four major steps: (1) document-class similarity calculation, (2) document core class mining, (3) core class guided classifier training, and (4) multi-label self-training.", "Fig. 2 shows our framework overview and below sections discuss each step in more details.", "We take a textual entailment approach (Yin et al., 2019) to calculate the semantic similarity between each (cid:104) document, class (cid:105) pair.", "This approach imitates how humans determine whether a document is similar to a class or not we read this document, create a hypothesis by filling the class name into a template ( e.g. , this document is about ), and ask ourselves to what extent this hypothesis is correct, given the context document.", "premise, a template filled with a class name s j as the hypothesis, and outputs a probability of how likely this premise can entail the hypothesis.", "We treat this probability P ( D i c j ) as the document-class similarity sim ( D i , c j ) .", "More specifically, we use Roberta-Large-MNLI 1 as our textual entailment model which utilizes the pre-trained Roberta-Large as its backbone and is fine-tuned on the MNLI dataset.", "When asked to tag a document with a set of classes from a class taxonomy, humans will first pinpoint a few classes that are most essential to this document.", "We refer to those most essential classes as the core classes and identify them in below two steps.", "We observe that on average each document is tagged with a small set of classes from the entire class taxonomy.", "Therefore, we first reduce the search space of core classes using a top-down approach (c.f. Fig. 3).", "Given a document D , we start with the Root class at level l = 0 , find its two children classes that have the highest similarity with D , and add them into a queue.", "Then, for each class at level l in the queue, we select l + 2 classes from its children classes that are most similar to D .", "After all level l classes are processed, we aggregate all selected children classes and choose ( l + 1) 2 classes (at level l + 1 ) with the highest path score 1 https://huggingface.co/ roberta-large-mnli ( ps ) defined below: ps ( Root ) = 1 , ps ( c j ) = max c k Par ( c j ) { ps ( c k ) sim ( c j , D ) } , (1) where P ar ( c j ) is class c j 's parent class set.", "All chosen classes (at level l + 1 ) will be pushed into the queue and we stop this process when no class in the queue has further children.", "Finally, all classes that have entered the queue, except for the Root class, consist of the core class candidate set.", "We use C candi to denote the candidate core class set of document D i .", "For each document, we identify its core classes from the above selected candidate set based on two observations.", "First, a document usually has higher similarity with its core class c than with the parent and sibling classes of c .", "Take the document D 2 in Fig. 
2 as an example: the similarity between $D_2$ and its core class "crib" is 0.95, much higher than the similarity between $D_2$ and the core class's parent class "nursery" (0.6), as well as the core class's sibling classes.", "Based on this observation, we define the confidence score of a candidate core class $c$ for a document $D$ as below: $\mathrm{conf}(D, c) = \mathrm{sim}(D, c) - \max_{c' \in Par(c) \cup Sib(c)} \{\mathrm{sim}(D, c')\}$, (2) where $Sib(c)$ represents the sibling class set of $c$.", "Our second observation is that the similarity between a document $D$ and its core class $c$ is salient from a corpus-wise perspective.", "Namely, if a class $c$ is a document $D$'s core class, the confidence score [Figure 3: top-down core class candidate selection over taxonomy levels $l = 0, 1, 2$ (Root; "baby product", "health & personal care"; "diapering", "nursery", "feeding", "nutrition wellness", "child safety", "sexual wellness"); e.g., $ps(\text{diapering}) = ps(\text{baby product}) \times \mathrm{sim}(\text{diapering}, D) = 0.8 \times 0.3 = 0.24$, and the most likely classes are selected for expansion at the next level.]", "$\mathrm{conf}(D, c)$ is higher than the median confidence score [Footnote 2: we have also tried using "average" but empirically found that using the median is better and more robust to outliers.] between class $c$ and all documents tagged with $c$ (denoted as $D(c)$).", "Formally, we have: $\mathrm{conf}(D, c) \geq \mathrm{median}\{\mathrm{conf}(D', c) \mid D' \in D(c)\}$.", "(3) According to this observation, we check each class in document $D_i$'s candidate core set $C^{cand}_i$ and add the classes that satisfy the above criteria into the final core class set $\mathcal{C}_i$.", "Note that this core class set $\mathcal{C}_i$ could be empty when document $D_i$ does not have any confident core class.", "Based on the identified document core classes, we train one classifier for hierarchical multi-label text classification.", "Below we first introduce our classifier architecture and then present our training method.", "We design our classifier to have a dual-encoder architecture: one document encoder maps document $D_i$ to its representation $\mathbf{D}_i$, one class encoder learns class $c_j$'s representation $\mathbf{c}_j$, and one matching network returns the probability of document $D_i$ being tagged with class $c_j$.", "Document Encoder.", "In this work, we instantiate our document encoder $g_{doc}(\cdot)$ as a pre-trained BERT-base-uncased model (Devlin et al., 2019) and follow previous work (Chang et al., 2019; Meng et al., 2020) in using the [CLS] token representation as the document representation.", "Class Encoder.", "For the class encoder $g_{class}(\cdot)$, we follow Shen et al. (2020) and use a graph neural network (GNN) (Kipf and Welling, 2017) to model the class taxonomy structure.", "This taxonomy-enhanced class encoder can capture both the textual information from class surface names and the structural information from the class taxonomy.", "Given a class $c_j$, we first obtain its ego network that includes its parent and children classes in the class taxonomy, as shown in Fig. 
4.", "Then, we input this ego network to a GNN that propagates 2 We have also tried using \"average\" but empirically found that using median is better and more robust to outliers.", "node features over the network structure.", "The node features are initialized with the pre-trained word embeddings of class surface names 3 .", "The propagation mechanism updates the feature of a node u by iteratively aggregating representations of its neighbors and itself.", "Formally, we define a GNN with L -layers as follows: h ( l ) u = ReLU (cid:88) v N ( u ) ( l 1) uv W ( l 1) h ( l 1) v , (4) where l { 1 , . . . , L } , N ( u ) includes node u 's neighbors and itself, ( l 1) uv = 1 | N ( u ) || N ( v ) | is a normalization constant (same for all layers), and W ( l 1) are learnable parameters.", "After obtaining individual node features, we combine them into a vector representing the whole ego network G as follows: h G = 1 | G | (cid:88) u G h ( L ) u .", "As this ego network is centered on class c j and encodes its both textual and structural information, we treat this final graph representation as the class representation c j .", "Text Matching Network.", "Based on the document representation D i and the class representation c j , we use a log-bilinear text matching model to compute the probability of document D i being tagged with class c j as follows: p ij = P ( y j = 1 | D i ) = (exp( c Tj BD i )) , (6) where ( ) is the sigmoid function and B is a learnable interaction matrix.", "We use our discovered document confident core classes to train a text classifier.", "One intuitive strategy is to treat each document's core classes as positive classes and all the remaining classes as negative classes.", "However, this strategy has a high false 3 For multi-gram class names, we use their averaged word embeddings.", "negative rate because some non-core classes could still be relevant to the document (c.f. Fig. 
1).", "We observe a document's multiple labeled classes usually have some ancestor-descendent relations in the class hierarchy T = ( C , R ) .", "This implies that given a document's core class, its parent class and some of its children classes are also likely to be tagged with this document.", "Therefore, we introduce all core classes' parent classes into the positive class set and exclude their children classes from the negative class set.", "Formally, given a document D i with its core class set C i , we define its positive and negative class set as follows: C posi = (cid:91) c j C i Par ( c j ) C i , C negi = C C posi (cid:91) c j C i Chd ( c j ) , (7) where Chd ( c j ) is class c j 's children class set.", "Finally, we train our classification model using the below binary cross entropy (BCE) loss: L = |D| (cid:88) i =1 C i (cid:54) = ( (cid:88) c j C posi log p ij + (cid:88) c j C negi log(1 p ij )) , (8) where indicates an empty set and we exclude the documents without any confident core class from the loss calculation.", "After training the text classifier based on document core classes, we propose to further refine the model via self-training on the entire unlabeled corpus D for better generalization.", "The idea of self-training (ST) (Xie et al., 2016) is to iteratively use the model's current prediction P to compute a Dataset # Train # Test # Classes Amazon-531 29,487 19,685 531 DBPedia-298 196,665 49,167 298 Table 1: Dataset statistics.", "target distribution Q which guides the model for re-finement.", "In general, the ST objective is expressed with the KL divergence loss as below: LST = KL ( Q || P ) = |D| (cid:88) i =1 |C| (cid:88) j =1 q ij log q ij p ij .", "Different from the previous studies (Meng et al., 2018; Yu et al., 2020), our target distribution Q can be applied to multi-label classification problem as it normalizes the current predictions P for each individual class.", "Intuitively, this equation can enhance high-confidence predictions while down-weighting low-confidence predictions.", "This is because if example i is more confidently labeled with class j than other examples, we will have a large p ij that dominates the (cid:80) i p ij term.", "Consequently, Eq 10 computes a large q ij , which further pushes the model to predict class j for example i .", "In practice, instead of updating the target distribution Q for every training example, we update it every 25 batches 4 and train the model with Eq.", "(9), which makes the self-training process more efficient and robust.", "We summarize our TaxoClass framework in Algorithm 1.", "We use two public datasets from different domains to evaluate our method: (1) Amazon-531 (McAuley and Leskovec, 2013) contains 49,145 product reviews and a three-level class taxonomy consisting of 531 classes; and (2) DBPedia-298 (Lehmann et al., 2015) includes 245,832", "Wikipedia articles and a three-level class taxonomy with 298 classes.", "Documents in both datasets are lower-cased and truncated to has maximum 500 tokens.", "We list the data statistics in Table 1.", "To the best of our knowledge, we are the first to study weakly-supervised HMTC problem and there is no directly comparable baseline under the exact same setting as ours.", "Therefore, we choose a wide range of representative methods that are most related to TaxoClass and adapt them to our problem setting, described as follows.", "Hier-doc2vec (Le and Mikolov, 2014) 5 : This weakly-supervised method first embeds documents and classes into a shared semantic space, and then 
recursively selects the class of the highest embedding similarity with the document in a top-down fashion.", "We set the embedding dimensionality to be 100 and use the default value for all other hyper-parameters.", "6 WeSHClass (Meng et al., 2019) 7 : Another weakly-supervised method that generates pseudo documents to pre-train a text classifier and bootstraps the pre-trained classifier on unlabeled documents with self-training.", "The class surface names are treated as the class-related keywords in this method.", "For the pseudo document generation step, we use its internal LSTM language model.", "We treat all classes in its returned class path as the output classes.", "SS-PCEM (Xiao et al., 2019) 8 : This semi-supervised method uses a generative model to generate documents based on a class path sampled from the class taxonomy.", "Both labeled and unlabeled documents are used to fit this generative model via the EM algorithm.", "Finally, it uses the posterior probability of a test document to predict its labeled classes.", "Among different base classifiers, we choose their author reported best variant PCEM in this study.", "We use 30% of labeled training documents for this method.", "5 https://radimrehurek.com/gensim/ models/doc2vec.html 6 We also test the Flat-doc2vec variant which directly ranks all classes in the taxonomy and returns top ranked classes.", "Its performance is significantly worse than Hier-doc2vec and thus we only report Hier-doc2vec results.", "", "Hier-0Shot-TC (Yin et al., 2019) 9 : This zero-shot method uses a pre-trained textual entailment model to predict to what extent a document (as the premise text) can entail a template filled with the class name (as the hypothesis text).", "Similar to Hier-doc2vec , we select the class with the highest entailment score at each level in a top-down recursive fashion.", "For fair comparison, we change its internal BERT-base-uncased model to RoBERTa-large-mnli model as is used in our method.", "TaxoClass 10 : Our proposed weakly-supervised framework that identifies document core classes, leverages core classes to train a taxonomy-enhanced text classifier, and generalizes the classifier using multi-label self-training.", "We also evaluate two ablations: TaxoClass-NoST which removes the multi-label self-training step, and TaxoClass-NoGNN which replaces the GNN-based class encoder with a simple embedding layer initialized with pre-trained word embeddings (c.f. Sect. 
3.3.1).", "We follow previous studies (Partalas et al., 2015; Prabhu et al., 2018) and evaluate the multi-label classification results from different aspects using various metrics.", "The first metric is Example-F1 11 which calculates the average F1 scores for all documents as follows: Example-F1 = 1 NN (cid:88) i =1 2 | C truei C predi | | C truei | + | C predi | , where C truei ( C predi ) is the true (model predicted) class set of document D i .", "Moreover, as many applications formalize the HMTC as a class ranking problem (Jain et al., 2016; Guo et al., 2019), we convert predicted class set C predi into a rank list R predi based on each class's model predicted probability and calculate Precision at k ( P @ k ) as follows: P @ k = 1 NN (cid:88) i =1 | C truei R predi, 1: k | min ( k, | C truei | ) , 9 https://github.com/yinwenpeng/ BenchmarkingZeroShot 10 https://github.com/mickeystroller/ TaxoClass 11 This metric is also called micro-Dice coefficient.", "where R predi, 1: k is each method predicted top k most likely classes for D i .", "Finally, for methods able to return the probability of a document being tagged with each class in the taxonomy, we calculate their Mean Reciprocal Rank (MRR) as follows: MRR = 1 NN (cid:88) i =1 1 | C truei | (cid:88) c j C truei 1 R ij , where R ij is the rank of document D j 's true class c j in model predicted rank list (over all classes).", "For all baseline methods except Hier-doc2vec, we use the public implementations from their authors and leave the hyper-parameters unchanged.", "For both Hier-0Shot-TC and our method, we adopt the same public Roberta-Large-MNLI model as the textual entailment model and use the same hypothesis template: this product is about . for Amazon-531 dataset and this example is . for DBPedia-298 dataset.", "We use AdamW optimizer to train our model with batch size 64, learning rate 5e-5 for all parameters in BERT document encoder and learning rate 4e-3 for all remaining parameters.", "During the multi-label self-training stage (c.f. Sect. 
3.4), we use learning rate 1e-6 for all parameters in the BERT document encoder and 5e-4 for all remaining parameters.", "We run all experiments on a single cluster with 80 CPU cores and a Quadro RTX 8000 GPU.", "All deep learning models are moved to the GPU for faster inference speed.", "With batch size 64, the TaxoClass framework consumes about 10GB GPU memory.", "In principle, all methods should be runnable on CPU.", "Table 2 presents the overall results of all compared methods.", "First, we find most weakly-supervised and zero-shot method can outperform the semi-supervised method SS-PCEM even the later has access to 30% of labeled documents.", "Second, we can see that TaxoClass has the overall best performance across all the metrics and defeats the second best method by a large margin.", "Comparing TaxoClass with TaxoClass-NoGNN, we show the importance of incorporating taxonomy structure into the class encoder.", "Moreover, the improvement of TaxoClass over TaxoClass-NoST demonstrates the effectiveness of our multi-label self-training.", "We evaluate the effectiveness of our core class mining method as follows.", "First, we define a set of rival methods and use them to generate various sets of core classes.", "Then, we derive pseudo-training data for each generated core class set and use it to learn a text classifier with the same architecture as the one in TaxoClass.", "Finally, we report each model's performance on the test set.", "Note here we skip the self-training step to ensure the core class based pseudo-training data is the only variable.", "Table 3 lists all the results.", "First, we find that the Explicit Mention method, which treats all classes with names explicitly appear in the corpus as the core classes, does not perform well for our HMTC problem.", "One reason could be many class names are human-curated summarizing phrases that do not appear in the corpus naturally.", "Second, the 0Shot method views the output classes of baseline method Hier-0Shot-TC as the core classes and trains a new classifier.", "Interestingly, this new classifier performs better than the original Hier-0Shot-TC classifier, which shows that transferring knowledge from a general zero-shot classifier to a domain-specific classifier is a possible and promising direction.", "Finally, we compare variants of our own methods.", "The Ours-NoCS method removes the candidate core class selection step (c.f. Sect. 3.2.1) and treats all classes with high confidence scores as core classes.", "The Ours-NoConf method skips the confident core class identification step (c.f. Sect. 3.2.2) and views all candidate core classes as the final output core classes.", "We can see a significant performance drop on both ablations, which shows the importance of our two core class mining steps.", "We study whether we can use the identified document core classes to train other text classifiers with different architectures such as fastText (Joulin et al., 2016) and TextCNN (Kim, 2014).", "As shown in Table 4, both methods achieve reasonable performance.", "We can also see that TaxoClass with and without GNN-enhanced class encoder can outperform both methods.", "This shows the effectiveness of our dual-encoder style classifier architecture.", "We vary the percentage of labeled documents on Amazon-531 dataset for training a supervised fastText classifier and present its corresponding performance in Fig. 
5.", "We can see the performance of our TaxoClass framework is equivalent to that of supervised fastText learned on roughly 70% of E x a m p l e -F 1 0.15 0.0 0.30 0.60 0.75 0 40 60 80 100 Percentage of Labeled Documents 20 0.45 FastText TaxoClass (60, 0.581)(80, 0.619) MRR 0.15 0.0 0.30 0.60 0.75 0 40 60 80 100 20 0.45 (60, 0.602) Percentage of Labeled Documents (80, 0.644) 0.593 0.633 FastText TaxoClass Figure 5: Comparison between TaxoClass and supervised fastText method on Amazon-531 dataset.", "Weakly-supervised Text Classification.", "There exist some previous studies that leverage a few labeled documents or class-indicative keywords as weak supervision signals for text classification.", "A pioneering method is dataless classification (Chang et al., 2008; Song and Roth, 2014) which embeds documents and classes into the same semantic space of Wikipedia concepts and performs classification using the embedding similarity.", "Li et al. (2018, 2019) extend this idea by mining concepts directly from the corpus rather than using the external Wikipedia.", "Along another line, Chen et al. (2015) and Li et al. (2016) propose to apply a seed-guided topic model to infer class-specific top-ics from class-indicative keywords and to predict document classes from posterior class-topic assignments.", "Compared with these methods, our TaxoClass framework neither restricts document and class embeddings to live in the same semantic space nor imposes strong statistical assumptions.", "Recently, neural models are applied to weakly-supervised text classification.", "Meng et al. (2018, 2019) propose a pretrain-and-refine paradigm which first generates pseudo documents to pretrain a neural classifier and then refine this classifier via self-training.", "Mekala and Shang (2020); Meng et al. (2020); Wang et al. (2020) improve the above methods by introducing contextualized weak supervision and using a pre-trained language model to obtain better text representations.", "While achieving inspiring performance, these methods all assume each document has only one class and all class names (or class-indicative keywords) must appear in the corpus for pseudo training data generation.", "In this paper, we relax these assumptions and develop a new method for weakly-supervised hierarchical multi-label text classification task.", "Zero-shot Text Classification.", "Zero-shot text classification learns a text classifier based on training documents belonging to seen classes and applies the learned classifier to predict testing documents belonging to unseen classes (Wang et al., 2019).", "Nam et al. (2016) jointly embed documents and classes into a shared semantic space where knowledge from seen classes can be transferred to unseen classes.", "Such an idea is further developed in (Rios and Kavuluru, 2018; Srivastava et al., 2018; Yin et al., 2019; Chu et al., 2020) where external resources ( e.g. 
"Such an idea is further developed in (Rios and Kavuluru, 2018; Srivastava et al., 2018; Yin et al., 2019; Chu et al., 2020), where external resources (e.g., knowledge graphs, natural language explanations of unseen classes, and open domain data) are introduced to help learn a better shared semantic space.", "Compared with these methods, our TaxoClass framework does not require labeled data for a set of seen classes.", "Hierarchical Text Classification.", "Hierarchical text classification leverages a class hierarchy to improve standard text classification performance.", "Typical methods can be divided into two categories: (1) local approaches, which learn a text classifier per class (Banerjee et al., 2019), per parent class (Liu et al., 2005), or per level (Wehrmann et al., 2018), and (2) global approaches, which incorporate taxonomy structure information into one single classifier through recursive regularization (Gopal and Yang, 2013) or a graph neural network (GNN) based encoder (Peng et al., 2018; Huang et al., 2019; Zhou et al., 2020).", "Our TaxoClass framework adopts the second, global approach and uses a GNN-based encoder to obtain each class's representation.", "This paper studies the hierarchical multi-label text classification problem when only class surface names, instead of massive labeled documents, are given.", "We propose a novel TaxoClass framework which leverages the class taxonomy structure to derive document core classes and learns a taxonomy-enhanced text classifier for prediction.", "Extensive experiments demonstrate the effectiveness of TaxoClass on two real-world datasets from different domains.", "In the future, we plan to explore how the TaxoClass framework can be integrated with semi-supervised methods and data augmentation methods when some class surface names are too ambiguous to indicate class semantics.", "Moreover, we consider extending our multi-label self-training method to other related NLP tasks such as fine-grained entity typing.", "As text classification is a standard task in NLP, we do not see any significant ethical concerns.", "The expected usage of our work is to classify documents such as news articles and scientific literature.", "Research was sponsored in part by US DARPA SocialSim Program No.", "W911NF-17-C0099, NSF IIS 16-18481, IIS 17-04532, and IIS 17-41317, and DTRA HDTRA11810026.", "Any opinions, findings or recommendations expressed herein are those of the authors and should not be interpreted as necessarily representing the views, either expressed or implied, of DARPA or the U.S. Government.", "We thank anonymous reviewers for valuable and insightful feedback." ]
[ "abstain", "abstain", "objective", "objective", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "method", "method", "objective", "abstain", "abstain", "objective", "abstain", "method", "method", "abstain", "method", "abstain", "objective", "objective", "objective", "result", "method", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "other", "result", "abstain", "abstain", "method", "method", "method", "method", "result", "abstain", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "abstain", "result", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "abstain", "other", "other", "other", "abstain", "method", "objective", "abstain", "objective", "objective", "result", "abstain", "other", "other", "other", "other" ]
[ "Extracting information from full documents is an important problem in many domains, but most previous work focus on identifying relationships within a sentence or a paragraph.", "It is challenging to create a large-scale information extraction (IE) dataset at the document level since it requires an understanding of the whole document to annotate entities and their document-level relationships that usually span beyond sentences or even sections.", "In this paper, we introduce SCIREX, a document level IE dataset that encompasses multiple IE tasks, including salient entity identification and document level N -ary relation identification from scientific articles.", "We annotate our dataset by integrating automatic and human annotations, leveraging existing scientific knowledge resources.", "We develop a neural model as a strong baseline that extends previous state-of-the-art IE models to document-level IE.", "Analyzing the model performance shows a significant gap between human performance and current baselines, inviting the community to use our dataset as a challenge to develop document-level IE models.", "Our data and code are publicly available at https: //github.com/allenai/SciREX 1 Introduction Extracting information about entities and their relationships from unstructured text is an important problem in NLP.", "Conventional datasets and methods for information extraction (IE) focus on within-sentence relations from general Newswire text (Zhang et al., 2017).", "However, recent work started studying the development of full IE models and datasets for short paragraphs (e.g., information extraction from abstracts of scientific articles as in SCIERC (Luan et al., 2018)), or only extracting Work done while at AI2 We evaluate our model on the task of question answering using Section : Dataset SQuAD is a machine comprehension dataset on a large set of Wikipedia articles ,", "relations (given ground truth entities) on long documents (e.g. Jia et al. 
(2019)).", "While these tasks provide a reasonable testbed for developing IE models, a significant amount of information can only be gleaned from analyzing the full document.", "To this end, not much work has been done on developing full IE datasets and models for long documents.", "Creating datasets for information extraction at the document level is challenging because it requires domain expertise and considerable annotation effort to comprehensively annotate a full document for multiple IE tasks.", "In addition to local relationships between entities, it requires identifying document-level relationships that go beyond sentences and even sections.", "Figure 1 shows an example of such a document-level relation (Dataset: SQuAD, Metric: EM, Method: BiDAF, Task: machine comprehension).", "In this paper, we introduce SCIREX, a new comprehensive dataset for information extraction from scientific articles.", "Our dataset focuses on the task of identifying the main results of a scientific article as a tuple (Dataset, Metric, Task, Method) from raw text.", "It consists of three major subtasks: identifying individual entities, their document-level relationships, and predicting their saliency in the document (i.e., entities that take part in the results of the article and are not merely, for example, mentioned in Related Work).", "Our dataset is fully annotated with entities, their mentions, their coreferences, and their document-level relations.", "To overcome the annotation challenges for large documents, we perform both automatic and manual annotations, leveraging external scientific knowledge bases.", "An automatic annotation stage identifies candidate mentions of entities with high recall, then an expert annotator corrects these extracted mentions by referring to the text of the article and an external knowledge base.", "This strategy significantly reduces the time necessary to fully annotate large documents for multiple IE tasks.", "In addition, we introduce a neural model as a strong baseline to perform this task end-to-end.", "Our model identifies mentions, their saliency, and their coreference links.", "It then clusters salient mentions into entities and identifies document-level relations.", "We did not find other models that can perform the full task, so we evaluated existing state-of-the-art models on subtasks, and found our baseline model to outperform them.", "Experiments also show that our end-to-end document-level IE task is challenging, with the most challenging subtasks being identifying salient entities, and to a lesser extent, discovering document-level relations.", "The contributions of our paper are as follows:", "1. we introduce SCIREX, a dataset that evaluates a comprehensive list of IE tasks, including N-ary relations that span long documents.", "This is a unique setting compared to prior work that focuses on short paragraphs or a single IE task.", "2.
We develop a baseline model that, to the best of our knowledge, is the first attempt toward neural full-document IE.", "Our analysis emphasizes the need for better IE models that can overcome the new challenges posed by our dataset.", "We invite the research community to focus on this important, challenging task.", "Scientific IE In recent years, there have been multiple attempts to automatically extract structured information from scientific articles.", "(Footnote: Papers with Code: paperswithcode.com)", "These types of extractions include citation analysis (Jurgens et al., 2018; Cohan et al., 2019), identifying entities and relations (Augenstein et al., 2017; Luan et al., 2019, 2017), and unsupervised detection of entities and their coreference information (Tsai et al., 2013).", "Most structured extraction tasks among these have revolved around extraction from sentences or abstracts of the articles.", "A recent example is SCIERC (Luan et al., 2018), a dataset of 500 richly annotated scientific abstracts containing mention spans and their types, coreference information between mentions, and binary relation annotations.", "We use SCIERC to bootstrap our data annotation procedure (Section 3.2).", "There has been a lack of comprehensive IE datasets annotated at the document level.", "Recent work by Hou et al. (2019); Jia et al. (2019) tried to rectify this by using distant supervision annotations to build datasets for document-level relation extraction.", "In both datasets, the task of relation extraction is formulated as a binary classification to check whether a triplet of ground-truth entities is expressed in the document or not.", "Instead, our work focuses on a comprehensive list of information extraction tasks from scratch, where the input is the raw document.", "This makes the IE model more interesting as it is required to perform entity extraction, coreference resolution, and saliency detection in addition to the relation extraction.", "General IE Most work in general-domain IE focuses on sentence-level information extraction (Stanovsky et al., 2018; Qin et al., 2018; Jie and Lu, 2019).", "Recently, however, Yao et al. (2019) introduced DocRED, a dataset of cross-sentence relation extractions on Wikipedia paragraphs.", "The paragraphs are of a comparable length to that of SCIERC, which is significantly shorter than documents in our dataset.", "Previous IE work in the TAC KBP competitions (Ellis et al., 2017; Getman et al., 2018) comprises multiple knowledge base population tasks.", "Our task can be considered a variant of the TAC KBP cold start task that discovers new entities and entity attributes (slot filling) from scratch.", "(Footnote: Another approach is to perform entity extraction and then use the binary classification approach with a list of all possible combinations of relation tuples.", "This might work for short documents, but it is intractable for long documents because of the large number of entities.)", "Two aspects of our task make it more interesting: 1) our model needs to be able to extract facts that are mentioned once or twice rather than rely on the redundancy of information in their documents (e.g., Rahman et al. (2016)), and 2) TAC KBP relations are usually sentence-level binary relations between a query entity and an attribute (e.g., Angeli et al.
(2015)), while our relations are 4-ary, span the whole document, and can't be split into multiple binary relations, as discussed in Section 3.1.", "End-to-End Neural IE models With neural networks, a few end-to-end models have been proposed that perform multiple IE tasks jointly (Miwa and Bansal, 2016; Luan et al., 2018; Wadden et al., 2019).", "The closest to our work is DyGIE++ (Wadden et al., 2019), which does named entity recognition, binary relation extraction, and event extraction in one model.", "DyGIE++ is a span-enumeration based model which works well for short paragraphs but does not scale well to long documents.", "Instead, we use a CRF sequence tagger, which scales well.", "Our model also extracts 4-ary relations between salient entity clusters, which requires a more global view of the document than that needed to extract binary relations between all pairs of entity mentions.", "Our goal is to extend sentence-level IE to documents and construct a dataset for document-level information extraction from scientific articles.", "This section defines the IE tasks we address and describes the details of building our SCIREX dataset.", "Entity Recognition Our entities are abstract objects of type Method, Task, Metric, or Dataset that appear as text in a scientific article.", "We define mentions (or spans) as a specific instantiation of the entity in the text; this could be the actual name of the entity, its abbreviation, etc.", "The entity recognition task is to identify entity mentions and classify them with their types.", "Salient Entity Identification Entities appearing in a scientific article are not equally important.", "For example, a task mentioned in the related work section is less important than the main task of the article.", "In our case, salient entity identification refers to finding whether an entity is taking part in the article's evaluation.", "Salient Datasets, Metrics, Tasks, and Methods are those needed to describe the article's results.", "For the rest of this paper, we will use the term salient to refer to entities that belong to a result relation tuple.", "Coreference is the task of identifying a cluster of mentions of an entity (or a salient entity) that are coreferred in a single document.", "Relation Extraction is the task of extracting N-ary relations between entities in a scientific article.", "We are interested in discovering binary, 3-ary, and 4-ary relations between a collection of entities of type (Dataset, Method, Metric, and Task).", "It is important to note that this 4-ary relation can't be split into multiple binary relations because, e.g., a dataset might have multiple tasks, and each one has its own metric, so the metric cannot be decided solely based on the dataset or the task.", "Document-level information extraction requires a global understanding of the full document to annotate entities, their relations, and their saliency.", "However, annotating a scientific article is time-consuming and requires expert annotators.", "This section explains our method for building our SCIREX dataset with little annotation effort.", "It combines distant supervision from an existing KB and noisy automatic labeling to provide a much simpler annotation task.", "Existing KB: Papers with Code Papers with Code (PwC) is a publicly available corpus of 1,170 articles published in ML conferences annotated with result five-tuples of (Dataset, Metric, Method, Task, Score).", "The PwC curators collected this data from public leaderboards, previously curated results by other
people, manual annotations, and from authors submitting results of their work.", "This dataset provides us with a distant supervision signal for a task that requires document-level understanding: extracting result tuples.", "The signal is distant (Riedel et al., 2010) because, while we know that the PwC result tuple exists in the article, we don't know where exactly it is mentioned (PwC does not provide entity spans, and PwC entity names may or may not appear exactly in the document).", "PDF preprocessing PwC provides arXiv IDs for their papers.", "To extract raw text and section information, we use LaTeXML (https://dlmf.nist.gov/LaTeXML/) for papers with latex source (all 438 annotated papers), or use Grobid (GRO, 2008-2020) for papers in PDF format (only 10% of the remaining papers did not have latex source).", "LaTeXML allowed us to extract clean document text with no figures / tables / equations.", "We leave it as future work to augment our dataset with these structured fields.", "To extract tokens and sentences, we use the SpaCy (https://spacy.io/) library.", "Automatic Labeling Given that the length of a document is on the order of 5K tokens, we simplify the human annotation task by automatically labeling the data with noisy labels; an expert annotator then only needs to fix the labeling mistakes.", "One possible way to augment the distant supervision provided by PwC is finding mention spans of PwC entities.", "Initial experiments showed that this did not work well because it does not provide enough span-level annotations that the model can use to learn to recognize mention spans.", "To get denser span-level information, we want to label salient (corresponding to PwC entities) and also non-salient spans.", "We train a standard BERT+CRF sequence labeling model on the SCIERC dataset (described in Section 2).", "We run this model on each of the documents in the PwC corpus, and it provides us with automatic (but noisy) predictions for mention span identification.", "The next step is to find mention spans that correspond to PwC entities.", "For each mention predicted by our SCIERC-trained model, we compute a Jaccard similarity with each of the PwC entities.", "Each mention is linked to the entity if the similarity exceeds a certain threshold ε.", "To determine ε, two expert annotators manually went through 10 documents to mark identified mentions with entity names, and ε was chosen such that the probability of this assignment is maximized.", "We use this threshold to determine a mapping for the remaining 1,170 documents.", "Given that Jaccard similarity is a coarse measure of similarity, this step favors high recall over precision.", "Human Annotation Given this noisily labeled data, we ask our annotator to perform the necessary corrections to generate high-quality annotations.", "Annotators are provided with a list of Papers-with-Code entities that they need to find in the document, making their annotations deliberate (as opposed to not knowing which entities to annotate).", "Our annotator deleted and modified types of spans for salient entities (belonging to a PwC result tuple) and non-salient entities, while only adding missed spans for salient ones.", "Also, if a mention was linked to a wrong PwC entity, then our annotator was also asked to correct it.", "Full annotation instructions are provided in Appendix B.",
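The mention-to-entity linking step described above can be sketched as follows; the word-level tokenization and the threshold value are illustrative assumptions, not the paper's exact choices.

```python
# Sketch: link each predicted mention to the most similar PwC entity name
# by token-level Jaccard similarity, if it clears a tuned threshold.
def jaccard(a: str, b: str) -> float:
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

EPSILON = 0.5  # hypothetical value; the paper tunes it on 10 annotated documents

def link_mention(mention: str, pwc_entities: list[str]):
    best = max(pwc_entities, key=lambda e: jaccard(mention, e))
    return best if jaccard(mention, best) >= EPSILON else None
```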
"Dataset statistics and Cross-section Relations Using the annotation procedure mentioned above, we build a dataset of 438 fully annotated documents.", "Table 1 provides dataset statistics and shows the proportion of relations in our dataset that require reasoning across sentences/sections.", "It shows that the majority of the relations, especially 4-ary relations, span multiple sentences or even multiple sections.", "An example of such cross-section reasoning can be found in Figure 1.", "Corrections Table 2 provides information about the average number of changes made during the human annotation.", "It shows that 83% (sum of the diagonal) are correct automatic labels, 15% (sum of the bottom row) are newly added spans, 2% are type changes, and a negligible percentage are deleted entities (sum of the last column).", "Also, on average, 12% (not in the table) of the final mentions in the document had wrong PwC links and needed to be corrected, with the majority of changes being removing links from Method spans.", "Inter-annotator agreement We also asked four experts (Ph.D. students in the ML/NLP field) to annotate five documents to compute the inter-annotator agreement.", "For mention classification, we achieve 95% average Cohen's κ scores between each pair of experts and our main annotator.", "Annotation Speed To measure whether automatic labeling makes the human annotation faster, we also asked our annotator to perform annotations on five documents without automatic labeling.", "We compute the difference in time between these two forms of annotation per entity annotated.", "Note that here, we only ask our annotator to annotate salient mentions.", "With the automatic labeling, annotation speed is 1.34 sec per entity vs. 2.48 sec per entity on documents without automatic labeling (a 1.85x speedup).", "We also observe a 24% improvement in recall of salient mentions by including non-salient mentions, further showing the utility of this approach.", "We develop a neural model that performs document-level IE tasks jointly in an end-to-end fashion (with the exception of coreference resolution).", "This section details our model design (also summarized in Figure 2).", "Document Representation An input document D is represented as a list of sections [s_1, ..., s_{|S|}].", "We encode the document in two steps: section-level, then document-level.", "We use pretrained contextualized token encodings using SciBERT (Beltagy et al., 2019) over each section separately to get embeddings for the tokens in that section (if a section is longer than 512 tokens, the SciBERT limit, it is broken into 512-token subsections, and each subsection is encoded separately).", "To allow document-level information flow, we concatenate the section-level token embeddings and add a BiLSTM on top of them.", "This allows the model to take into account cross-section dependencies.", "Thus, for each token w_i in the document, this step outputs an embedding e_i.",
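A minimal sketch of this two-step document encoder, assuming the Hugging Face transformers API and omitting the 512-token chunking for brevity; module names and sizes are illustrative.

```python
# Sketch: SciBERT per section, then a document-level BiLSTM over all tokens.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
scibert = AutoModel.from_pretrained("allenai/scibert_scivocab_uncased")
bilstm = nn.LSTM(768, 384, bidirectional=True, batch_first=True)  # 2*384 = 768 out

def encode_document(sections: list[str]) -> torch.Tensor:
    per_section = []
    for text in sections:
        batch = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
        per_section.append(scibert(**batch).last_hidden_state[0])  # [tokens, 768]
    tokens = torch.cat(per_section, dim=0).unsqueeze(0)  # [1, all_tokens, 768]
    out, _ = bilstm(tokens)  # cross-section information flow
    return out[0]            # one embedding e_i per token w_i
```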
"Mention Identification and Classification Given the token embeddings, our model applies a sequence tagger that identifies mentions and classifies their types.", "We train a BIOUL-based CRF tagger on top of the BERT-BiLSTM embeddings of words to predict mention spans m_j and their corresponding types.", "Mention Representation Given the words {w_{j1}, ..., w_{jN}} of a mention m_j, our model learns a mention embedding me_j of the mention, which will be used in the later saliency identification and relation classification steps.", "The mention embedding is the concatenation of the first token embedding e_{j1}, the last token embedding e_{jN}, and an attention-weighted average of all embeddings in the mention span, $\sum_{k=1}^{N} \alpha_{jk} e_{jk}$, where $e_{jk}$ is the embedding of word $w_{jk}$ and the $\alpha_{jk}$ are scalars computed by passing the token embedding through an additive attention layer (Bahdanau et al., 2015).", "We concatenate these embeddings with additional features: the span's relative position in the document, an indicator showing whether the sentence containing the mention also contains marker words like 'experiment' or 'dataset', and the mention type.", "Salient Mention Classification Each mention m_j is classified as being salient or not (i.e., should it belong in a relation tuple) by passing its span embedding me_j through a feedforward layer.", "Because saliency is a property of entities, not mentions, this mention saliency score is just an input to the salient entity cluster identification.", "Pairwise Coreference Resolution The coreference step is given a list of all pairs of identified mentions, and it decides which pairs are coreferring.", "This component is separate from the end-to-end model.", "It concatenates the surface forms of two spans m_i and m_j, embeds them using SciBERT, then uses a linear classification layer on top of the [CLS] embedding to compute the pairwise coreference score c_{ij}.", "We also tried integrating it into our model, where we classify pairs of span embeddings (not the surface forms), but found the separate model that uses surface forms to work much better.", "Mention clustering Given a list of span pairs m_i and m_j and their pairwise coreference scores c_{ij}, they are grouped into clusters that can be thought of as representing a single entity.", "We generate a coreference score matrix for all pairs and perform agglomerative hierarchical clustering (Ward, 1963) on top of it to get the actual clusters.", "The number of clusters is selected based on the silhouette score (Rousseeuw, 1987), which optimizes for the cohesion and separation of clusters and does not depend on having gold-standard cluster labels.",
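The clustering step might be sketched as follows with scikit-learn; converting scores to distances as 1 - c_ij and using average linkage are assumptions (the paper cites Ward (1963), but scikit-learn's Ward linkage does not accept precomputed distances).

```python
# Sketch: agglomerative clustering over pairwise coreference scores, picking
# the number of clusters by silhouette score (no gold labels needed).
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import silhouette_score

def cluster_mentions(coref_scores: np.ndarray):
    dist = 1.0 - coref_scores  # higher coreference score -> smaller distance
    best_labels, best_sil = None, -1.0
    for n in range(2, len(dist)):
        labels = AgglomerativeClustering(
            n_clusters=n, metric="precomputed", linkage="average"
        ).fit_predict(dist)  # older scikit-learn versions name `metric` as `affinity`
        sil = silhouette_score(dist, labels, metric="precomputed")
        if sil > best_sil:
            best_labels, best_sil = labels, sil
    return best_labels
```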
"Salient Entity Cluster Identification This step filters out clusters from the previous step and only keeps salient clusters for the final relation task.", "To do so, we take a simple approach that identifies a salient cluster as one in which there is at least one salient mention (as determined previously).", "The output of this step is a set of clusters C_1, ..., C_L where each cluster C_i is a set of mentions {m_{i1}, ..., m_{ij}} of the same type.", "Relation Extraction Given all the clusters of mentions identified in a document in the previous step, our task now is to determine which of these belong together in a relation.", "To that end, we follow the methodology of Jia et al. (2019).", "We consider all candidate binary and 4-tuples of clusters and classify them as expressed or not expressed in the document.", "Here we describe the classification of 4-ary relations.", "For binary relations, the method is similar.", "Consider such a candidate relation (a 4-tuple of clusters) R = (C_1, C_2, C_3, C_4) where each C_i is a set of mentions {m_{i1}, ..., m_{ij}} in the document representing the same entity.", "We encode this relation into a single vector by following a two-step procedure: constructing section embeddings and aggregating them to generate a document-level embedding.", "For each section s of the document, we create a section embedding $E^s_R$ for this relation as follows.", "For each cluster $C_i \in R$, we construct its section embedding $E^s_i$ by max-pooling the span embeddings of the mentions of $C_i$ that occur in section s (along with a learned bias vector b in case no mentions of $C_i$ appear in section s).", "Then the section-s embedding of tuple R is $E^s_R = \text{FFN}([E^s_1; E^s_2; E^s_3; E^s_4])$, where ; denotes concatenation and FFN is a feedforward network.", "We then construct a document-level embedding of R, $E_R$, as the mean of the section embeddings: $E_R = \frac{1}{|S|} \sum_{s=1}^{|S|} E^s_R$.", "The final classification for the relationship is done by passing $E_R$ through another FFN, which returns the probability of this tuple expressing a relation in this document.",
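A sketch of this relation encoder and classifier in PyTorch; the dimensions and module structure are illustrative, not the authors' exact implementation.

```python
# Sketch: per-section max-pooling of each cluster's mention embeddings,
# concatenation over the 4 clusters, an FFN per section, mean over sections,
# and a final FFN producing the relation probability.
import torch
import torch.nn as nn

class RelationClassifier(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.bias = nn.Parameter(torch.zeros(dim))  # learned vector b for empty sections
        self.section_ffn = nn.Sequential(nn.Linear(4 * dim, dim), nn.ReLU())
        self.out = nn.Sequential(nn.Linear(dim, 1), nn.Sigmoid())

    def forward(self, mentions_per_section):
        # mentions_per_section[s][i]: tensor [n_mentions, dim] of cluster i's
        # mention embeddings occurring in section s (possibly empty)
        section_embs = []
        for clusters in mentions_per_section:
            pooled = [m.max(dim=0).values if len(m) else self.bias for m in clusters]
            section_embs.append(self.section_ffn(torch.cat(pooled)))  # E^s_R
        e_r = torch.stack(section_embs).mean(dim=0)  # document-level embedding E_R
        return self.out(e_r)  # probability that the 4-tuple is expressed
```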
"Training Procedure While mention identification, span saliency classification, and relation extraction share the base document and span representations from BERT + BiLSTM and are trained jointly, each of these subparts is trained on ground-truth input.", "Note that we require the saliency classification and relation extraction to be independent of the mention identification task, since the output of this task (essentially the span of mention text) is nondifferentiable (it is conceivable that mixing the gold mention spans with predicted mention spans might give an improvement in performance; we leave this as future work).", "The model jointly optimizes three losses: negative log-likelihood for mention identification, binary cross-entropy for saliency classification, and binary cross-entropy for relation extraction, with all three losses weighted equally.", "We compare our model with other recently introduced models.", "Since we cannot apply previous models directly to our task, we evaluate on subtasks of our dataset and also evaluate on SCIERC (Section 5.2).", "The other goal of the evaluation is to establish a baseline performance on our dataset and to provide insights into the difficulty of each subtask.", "To that end, we evaluate the performance of each component separately (Section 5.3), and in the overall end-to-end system (Section 5.4).", "In addition, we perform diagnostic experiments to identify the bottlenecks in the model performance.", "We report the experimental setup and hyperparameters in Appendix A.", "Mention Identification is a sequence labeling task, which we evaluate using the standard macro-average F1 score of exact matches over all mention types.", "Salient Entity Clustering evaluation relies on some mapping between the set of predicted clusters and gold clusters.", "Given a predicted cluster P and a gold cluster G, we consider P to match G if more than 50% of P's mentions belong to G, that is, $|P \cap G| / |P| > 0.5$.", "The 0.5 threshold enjoys the property that, assuming all predicted clusters are disjoint from each other (which is the case by construction) and gold clusters are disjoint from each other (which is the case for 98.5% of them), a single predicted cluster can be assigned to at most one gold cluster.", "This maps the set of predicted clusters to gold clusters, and given the mapping, it is straightforward to use the F1 score to evaluate predictions.", "This procedure optimizes for identifying all gold clusters even if they are broken into multiple predicted clusters.", "Relation Extraction evaluation relies on the same mapping used in the evaluation of salient entity clustering.", "Under such a mapping, each predicted N-ary relation can be compared with gold relations to decide whether they match or not.", "This becomes a binary classification task that we evaluate with the positive-class F1 score.", "We report F1 scores for binary and 4-ary relation tuples.", "We get binary relations by splitting each 4-ary relation into six binary ones.",
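The cluster-matching evaluation can be sketched as follows; the edge-case handling is an assumption, and clusters are represented as hypothetical Python sets of mention ids.

```python
# Sketch: map predicted clusters to gold clusters via the >0.5 overlap rule,
# then score the mapping with a standard F1.
def match_clusters(predicted, gold):
    mapping = {}
    for pi, P in enumerate(predicted):
        for gi, G in enumerate(gold):
            if len(P & G) / len(P) > 0.5:
                mapping[pi] = gi  # at most one gold cluster can pass the threshold
                break
    return mapping

def cluster_f1(predicted, gold):
    mapping = match_clusters(predicted, gold)
    tp = len(set(mapping.values()))  # distinct gold clusters recovered
    precision = len(mapping) / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```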
"We compare our model with DyGIE++ (Wadden et al., 2019) and DocTAET (Hou et al., 2019) on subtasks of our SCIREX dataset and on the SCIERC dataset wherever they apply.", "Our results show that only our model can perform all the subtasks in an end-to-end fashion, and it performs better than or on par with these baselines on the respective subtasks.", "DyGIE++ (Wadden et al., 2019) is an end-to-end model for entity and binary relation extraction (see Section 2 for details).", "Being a span-enumeration type model, DyGIE++ only works on paragraph-level texts and extracts relations between mentions in the same sentence only.", "Therefore, we subdivide SCIREX documents into sections and formulate each section as a single training example.", "We assume all entities in relations returned by DyGIE++ are salient.", "We map each binary mention-level relation returned to the entity level by mapping the span to its gold cluster label if it appears in one.", "We consider three training configurations of DyGIE++:", "1. trained only on the abstracts in our dataset,", "2. trained on all sections of the documents in our dataset,", "3. trained on the SCIERC dataset (still evaluated on our dataset).", "At test time, we evaluate the model on all sections of the documents in the test set.", "Results in Table 3 show that we perform generally better than DyGIE++.", "The performance on end-to-end binary relations shows the utility of incorporating a document-level model for cross-section relations, rather than predicting on individual sections.", "Specifically, we observe a large difference in recall, which agrees with the fact that 55% of binary relations occur across the sentence level.", "DyGIE++ (All sections) was not able to identify any binary relations because 80% of training examples have no sentence-level binary relations, pushing the model towards predicting very few relations.", "[Table 3: Evaluating state-of-the-art models on subtasks of the SCIREX dataset, because we did not find an existing model that can perform the end-to-end task (P / R / F1). Mention identification: DyGIE++ 0.703 / 0.676 / 0.678, Our Model 0.707 / 0.717 / 0.712. End-to-end binary relations: DyGIE++ (Abstracts Only) 0.003 / 0.001 / 0.002, DyGIE++ (All sections) 0.000 / 0.000 / 0.000, DyGIE++ (SCIERC) 0.029 / 0.128 / 0.038, Our Model 0.065 / 0.411 / 0.096. 4-ary relation extraction only: DocTAET 0.477 / 0.885 / 0.619, Our Model 0.531 / 0.718 / 0.611.]", "DocTAET (Hou et al., 2019) is a document-level relation classification model that is given a document and a relation tuple to classify whether the relation is expressed in the document.", "It is formulated as an entailment task with the information encoded as [CLS] document [SEP] relation in a BERT-style model.", "This is equivalent to the last step of our model, but with gold salient entity clusters as input.", "Table 3 shows the result on this subtask, and it shows that our relation model gives comparable performance (in terms of positive-class F1 score) to that of DocTAET.", "Table 4 summarizes the results of evaluating our model and DyGIE++ on the SCIERC dataset.", "For mention identification, our model performance is a bit worse, mostly because SCIERC has overlapping entities that a CRF-based model like ours cannot handle.", "For the task of identifying coreference clusters, we perform significantly worse than DyGIE++'s end-to-end model.", "This provides future avenues towards improving coreference resolution for SCIREX by incorporating it in an end-to-end fashion.", "The main contribution of our model is to connect multiple components to perform our end-to-end task.", "This section evaluates each step of our model separately from all other components.", "To do so, we feed each component with gold inputs and evaluate the output.", "This gives us a good picture of the performance of each component without the accumulation of errors.", "The first block of Table 5 summarizes the results of this evaluation setting.", "We know from Tables 3 and 4 that our mention identification and relation identification components are working well.", "For pairwise coreference resolution, we know from Table 4 that it needs to be improved, but it is performing well on our dataset, likely because the majority of coreferences in our dataset can be resolved using only the surface form of the mentions (for example, abbreviation reference).", "The worst performing component is identifying salient mentions, which requires information to be aggregated from across the document, something the current neural models lack.", "(The performance of Salient Entity Clusters in Table 5 is close to 1.0 because it is a deterministic algorithm (clustering followed by filtering) that gives perfect output given gold input; the reason the recall is not 1.0 as well is small inconsistencies in the gold annotations, where two distinct entities are merged into one.)", "Evaluation with Predicted Input.",
"The second block in Table 5 gives results for the end-to-end performance of our model in predicting salient entity clusters, binary relations, and 4-ary relations.", "We noticed that there is quite a drop in the end-to-end performance compared to the component-wise performance.", "This is particularly clear with relations; even though the relation extraction component performance is reasonably good in isolation, its end-to-end performance is quite low because of the accumulation of errors in previous steps.", "Through manual error analysis, we found that the identification of salient clusters is the most problematic step in our model.", "The third block in Table 5 quantifies this.", "In this setting, we run our end-to-end model but with gold cluster saliency information.", "In particular, we predict clusters of mentions using our model (mention identification, pairwise coreference, and mention clustering).", "Then, instead of filtering clusters using our mention saliency score, we keep only those clusters that have any overlap with at least one gold cluster.", "Predicted clusters that match the same gold cluster are then combined.", "Finally, we feed those to the relation extraction step of our model.", "Under this setting, we found that the performance of 4-ary relations improves considerably, by more than 10x.", "This confirms our hypothesis that identifying salient clusters is the key bottleneck in the end-to-end system performance.", "This is also consistent with the component-wise results that show low performance for salient mention identification.", "Error Analysis for Identifying Salient Clusters.", "Our error analysis shows that the average number of mentions in a salient cluster classified correctly is 15 mentions, whereas for the misclassified ones it is six mentions.", "This indicates that our model judges the saliency of an entity strongly based on how frequently it is mentioned in the document.", "While this is a perfectly reasonable signal to rely on, the model seems to trust it more than the context of the entity mention.",
"For example, in the following snippet, '... For each model, we report the test perplexity, the computational budget, the parameter counts, the value of DropProb, and the computational efficiency ...', the entity 'the parameter counts' is misclassified as non-salient, as it only appears twice in the document.", "One possible way to address this issue with salient entity identification is to replace its simple filtering step with a trained model that can do a better job of aggregating evidence from multiple mentions.", "Identifying salient entities is a challenging task.", "It requires careful document-level analysis, and getting it right is crucial for the performance of an end-to-end document-level IE model.", "Also, the difference between the results in the third block and the component-wise results indicates that the whole model can benefit from incremental improvements to each component.", "We introduce SCIREX, a comprehensive and challenging dataset for information extraction on full documents.", "We also develop a baseline model for our dataset, which, to the best of our knowledge, is the first attempt toward neural document-level IE that can perform all the necessary subtasks in an end-to-end manner.", "We show that using a document-level model gave a significant improvement in terms of recall, compared to existing paragraph-level approaches.", "This task poses multiple technical and modeling challenges, including", "1. the use of transformer-based models on long documents and related device memory issues,", "2. aggregating coreference information from across documents in an end-to-end manner,", "3. identifying salient entities in a document, and", "4. performing N-ary relation extraction of these entities.", "Each of these tasks challenges existing methodologies in the information extraction domain, which, by and large, focus on short text sequences.", "An analysis of the performance of our model emphasizes the need for better document-level models that can overcome the new challenges posed by our dataset.", "As our research community moves towards document-level IE and discourse modeling, we position this dataset as a testing ground to focus on this important and challenging task.", "This research was supported by the ONR MURI N00014-18-1-2670, ONR N00014-18-1-2826, DARPA N66001-19-2-4031, and an Allen Distinguished Investigator Award.", "We thank the Semantic Scholar team at AI2, UW NLP, and anonymous reviewers for their insightful comments.", "We are especially grateful to Kyle Lo for help with the Grobid parser, the complete Papers With Code team for making their data publicly available, and Dan Weld and Robert Stojnic for helpful discussion and feedback." ]
[ "abstain", "abstain", "abstain", "abstain", "objective", "objective", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "result", "result", "objective", "method", "abstain", "objective", "objective", "method", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "method", "other", "other", "other", "abstain", "other", "objective", "method", "other", "method", "other", "abstain", "other", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "result", "result", "other", "other", "abstain", "method", "objective", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "result", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "method", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "other", "other", "other" ]
[ "Recently, neural machine translation (NMT) has emerged as a powerful alternative to conventional statistical approaches.", "However, its performance drops considerably in the presence of morphologically rich languages (MRLs).", "Neural engines usually fail to tackle the large vocabulary and high out-of-vocabulary (OOV) word rate of MRLs.", "Therefore, it is not suitable to exploit existing word-based models to translate this set of languages.", "In this paper, we propose an extension to the state-of-the-art model of Chung et al. (2016), which works at the character level and boosts the decoder with target-side morphological information.", "In our architecture, an additional morphology table is plugged into the model.", "Each time the decoder samples from a target vocabulary, the table sends auxiliary signals from the most relevant affixes in order to enrich the decoder's current state and constrain it to provide better predictions.", "We evaluated our model to translate English into German, Russian, and Turkish as three MRLs and observed significant improvements.", "Morphologically complex words (MCWs) are multi-layer structures which consist of different subunits, each of which carries semantic information and has a specific syntactic role.", "Table 1 gives a Turkish example to show this type of complexity.", "This example is a clear indication that word-based models are not suitable to process such complex languages.", "Accordingly, when translating MRLs, it might not be a good idea to treat words as atomic units as it demands a large vocabulary that imposes extra overhead.", "Since MCWs can appear in various forms we require a very large vocabulary to i ) cover as many morphological forms and words as we can, and ii ) reduce the number of OOVs.", "Neural models by their nature are complex, and we do not want to make them more complicated by working with large vocabularies.", "Furthermore, even if we have quite a large vocabulary set, clearly some words would remain uncovered by that.", "This means that a large vocabulary not only complicates the entire process, but also does not necessarily mitigate the OOV problem.", "For these reasons we propose an NMT engine which works at the character level.", "In this paper, we focus on translating into MRLs and issues associated with word formation on the target side.", "To provide a better translation we do not necessarily need a large target lexicon, as an MCW can be gradually formed during decoding by means of its subunits, similar to the solution proposed in character-based decoding models (Chung et al., 2016).", "Generating a complex word character-by-character is a better approach compared to word-level sampling, but it has other disadvantages.", "One character can co-occur with another with almost no constraint, but a particular word or morpheme can only collocate with a very limited number of other constituents.", "Unlike words, characters are not meaning-bearing units and do not preserve syntactic information, so (in the extreme case) the 58 chance of sampling each character by the decoder is almost equal to the others, but this situation is less likely for words.", "The only constraint that prioritize which character should be sampled is information stored in the decoder, which we believe is insufficient to cope with all ambiguities.", "Furthermore, when everything is segmented into characters the target sentence with a limited number of words is changed to a very long sequence of characters, which clearly makes it harder for the 
"Furthermore, when everything is segmented into characters, the target sentence with a limited number of words is changed to a very long sequence of characters, which clearly makes it harder for the decoder to remember such a long history.", "Accordingly, character-based information flows in the decoder may not be as informative as word- or morpheme-based information.", "In the character-based NMT model everything is almost the same as its word-based counterpart except the target vocabulary, whose size is considerably reduced from thousands of words to just hundreds of characters.", "If we consider the decoder as a classifier, it should in principle be able to perform much better over hundreds of classes (characters) rather than thousands (words), but the performance of character-based models is almost the same as or slightly better than their word-based versions.", "This underlines the fact that the character-based decoder is perhaps not fed with sufficient information to provide improved performance compared to word-based models.", "Character-level decoding limits the search space by dramatically reducing the size of the target vocabulary, but at the same time widens the search space by working with characters whose sampling seems to be harder than words.", "The freedom in the selection and sampling of characters can mislead the decoder, which prevents us from taking maximum advantage of character-level decoding.", "If we can control the selection process with other constraints, we may obtain further benefit from restricting the vocabulary set, which is the main goal followed in this paper.", "In order to address the aforementioned problems we redesign the neural decoder in three different scenarios.", "In the first scenario we equip the decoder with an additional morphology table including target-side affixes.", "We place an attention module on top of the table which is controlled by the decoder.", "At each step, as the decoder samples a character, it searches the table to find the most relevant information which can enrich its state.", "Signals sent from the table can be interpreted as additional constraints.", "In the second scenario we share the decoder between two output channels.", "The first one samples the target character and the other one predicts the morphological annotation of the character.", "This multi-tasking approach forces the decoder to send morphology-aware information to the final layer, which results in better predictions.", "In the third scenario we combine these two models.", "Section 3 provides more details on our models.", "Together with different findings that will be discussed in the next sections, there are two main contributions in this paper.", "We redesigned and tuned the NMT framework for translating into MRLs.", "It is quite challenging to show the impact of external knowledge such as morphological information in neural models, especially in the presence of large parallel corpora.", "However, our models are able to incorporate morphological information into decoding and boost its quality.", "We inject the decoder with morphological properties of the target language.", "Furthermore, the novel architecture proposed here is not limited to morphological information alone and is flexible enough to provide other types of information for the decoder.", "There are several models for NMT of MRLs which are designed to deal with morphological complexities.", "García-Martínez et al. (2016) and Sennrich and Haddow (2016) adapted the factored machine translation approach to neural models.", "Morphological annotations can be treated as extra factors in such models.", "Jean et al. (2015) proposed a model to handle very large vocabularies.",
"Luong et al. (2015) addressed the problem of rare words and OOVs with the help of a post-translation phase to exchange unknown tokens with their potential translations.", "Sennrich et al. (2016) used subword units for NMT.", "The model relies on frequent subword units instead of words.", "Costa-jussà and Fonollosa (2016) designed a model for translating from MRLs.", "The model encodes source words with a convolutional module proposed by Kim et al. (2016).", "Each word is represented by a convolutional combination of its characters.", "Luong and Manning (2016) used a hybrid model for representing words.", "In their model, unseen and complex words are encoded with a character-based representation, with other words encoded via the usual surface-form embeddings.", "Vylomova et al. (2016) compared different representation models (word-, morpheme-, and character-level models) which try to capture complexities on the source side, for the task of translating from MRLs.", "Chung et al. (2016) proposed an architecture which benefits from different segmentation schemes.", "On the encoder side, words are segmented into subunits with the byte-pair segmentation model (bpe) (Sennrich et al., 2016), and on the decoder side, one target character is produced at each time step.", "Accordingly, the target sequence is treated as a long chain of characters without explicit segmentation.", "Grönroos et al. (2017) focused on translating from English into Finnish and implicitly incorporated morphological information into NMT through multi-task learning.", "Passban (2018) comprehensively studied the problem of translating MRLs and addressed potential challenges in the field.", "Among all the models reviewed in this section, the network proposed by Chung et al. (2016) could be seen as the best alternative for translating into MRLs, as it works at the character level on the decoder side and it was evaluated in different settings on different languages.", "Consequently, we consider it as a baseline model in our experiments.", "We propose a compatible neural architecture for translating into MRLs.", "The model benefits from subword- and character-level information and improves upon the state-of-the-art model of Chung et al. (2016).", "We manipulated the model to incorporate morphological information and developed three new extensions, which are discussed in Sections 3.1, 3.2, and 3.3.", "In the first extension an additional table containing the morphological information of the target language is plugged into the decoder to assist with word formation.", "Each time the decoder samples from the target vocabulary, it searches the morphology table to find the most relevant affixes given its current state.", "Items selected from the table act as guiding signals to help the decoder sample a better character.", "Our base model is an encoder-decoder model with attention (Bahdanau et al., 2014), implemented using gated recurrent units (GRUs) (Cho et al., 2014).", "We use a four-layer model in our experiments.",
"Similar to Chung et al. (2016) and Wu et al. (2016), we use bidirectional units to encode the source sequence.", "Bidirectional GRUs are placed only at the input layer.", "The forward GRU reads the input sequence in its original order and the backward GRU reads the input in the reverse order.", "Each hidden state of the encoder in one time step is a concatenation of the forward and backward states at the same time step.", "This type of bidirectional processing provides a richer representation of the input sequence.", "On the decoder side, one target character is sampled from a target vocabulary at each time step.", "In the original encoder-decoder model, the probability of predicting the next token y_i is estimated based on i) the current hidden state of the decoder, ii) the last predicted token, and iii) the context vector.", "This process can be formulated as $p(y_i | y_1, \ldots, y_{i-1}, x) = g(h_i, y_{i-1}, c_i)$, where $g(\cdot)$ is a softmax function, $y_i$ is the target token (to be predicted), $x$ is the representation of the input sequence, $h_i$ is the decoder's hidden state at the i-th time step, and $c_i$ indicates the context vector, which is a weighted summary of the input sequence generated by the attention module.", "$c_i$ is generated via the procedure shown in (1): $c_i = \sum_{j=1}^{n} \alpha_{ij} s_j$, $\alpha_{ij} = \frac{\exp(e_{ij})}{\sum_{k=1}^{n} \exp(e_{ik})}$, $e_{ij} = a(s_j, h_{i-1})$ (1), where $\alpha_{ij}$ denotes the weight of the j-th hidden state of the encoder ($s_j$) when the decoder predicts the i-th target token, and $a(\cdot)$ shows a combinatorial function which can be modeled through a simple feed-forward connection.", "$n$ is the length of the input sequence.", "In our first extension, the prediction probability is conditioned on one more constraint in addition to those three existing ones, as in $p(y_i | y_1, \ldots, y_{i-1}, x) = g(h_i, y_{i-1}, c_i, c^m_i)$, where $c^m_i$ is the morphological context vector and carries information from those useful affixes which can enrich the decoder's information.", "$c^m_i$ is generated via an attention module over the morphology table which works in a similar manner to the word-based attention model.", "[Figure 1: The target label that each output channel is supposed to predict when generating the Turkish sequence 'bu1 terbiyesizlik2 için3', meaning 'because3 of3 this1 rudeness2'; for each step i, channel one emits the character y_i while channel two emits the matching annotation (stem-C, siz-C, lik-C, or w-space-C).]", "The attention procedure for generating $c^m_i$ is formulated as in (2): $c^m_i = \sum_{u=1}^{|A|} \beta_{iu} f_u$, $\beta_{iu} = \frac{\exp(e^m_{iu})}{\sum_{v=1}^{|A|} \exp(e^m_{iv})}$, $e^m_{iu} = a_m(f_u, h_{i-1})$ (2), where $f_u$ represents the embedding of the u-th affix (u-th column) in the morphology/affix table $A$, $\beta_{iu}$ is the weight assigned to $f_u$ when predicting the i-th target token, and $a_m$ is a feed-forward connection between the morphology table and the decoder.",
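A sketch of the morphology-table attention of Eq. (2) in PyTorch; the layer sizes and the additive scorer are illustrative of, not identical to, the authors' implementation.

```python
# Sketch: additive attention over affix embeddings f_u, conditioned on the
# previous decoder state h_{i-1}, producing the morphological context c^m_i.
import torch
import torch.nn as nn

class MorphologyTableAttention(nn.Module):
    def __init__(self, num_affixes, affix_dim, hidden_dim):
        super().__init__()
        self.table = nn.Parameter(torch.randn(num_affixes, affix_dim))  # table A
        self.w_f = nn.Linear(affix_dim, hidden_dim, bias=False)
        self.w_h = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.v = nn.Linear(hidden_dim, 1, bias=False)

    def forward(self, h_prev):
        # e^m_{iu} = a_m(f_u, h_{i-1}) via a feed-forward (additive) scorer
        scores = self.v(torch.tanh(self.w_f(self.table) + self.w_h(h_prev))).squeeze(-1)
        beta = torch.softmax(scores, dim=-1)  # attention weights over all affixes
        return beta @ self.table              # morphological context vector c^m_i
```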
"The attention module in general can be considered as a search mechanism, e.g. in the original encoder-decoder architecture the basic attention module finds the most relevant input words to make the prediction.", "In multi-modal NMT (Huang et al., 2016; Calixto et al., 2017) an extra attention module is added to the basic one in order to search the image input to find the most relevant image segments.", "In our case we have a similar additional attention module which searches the morphology table.", "In this scenario, the morphology table, including the target language's affixes, can be considered as an external knowledge repository that sends auxiliary signals which accompany the main input sequence at all time steps.", "Such a table certainly includes useful information for the decoder.", "As we are not sure which affix preserves those pieces of useful information, we use an attention module to search for the best match.", "The attention module over the table works as a filter which excludes irrelevant affixes and amplifies the impact of relevant ones by assigning different weights ($\beta$ values).", "In the first scenario, we embedded a morphology table into the decoder in the hope that it can enrich sampling information.", "Mathematically speaking, such an architecture establishes an extra constraint for sampling and can control the decoder's predictions.", "However, this is not the only way of constraining the decoder.", "In the second scenario, we add extra supervision to the network via another predictor (output channel).", "The first channel is responsible for generating translations and predicts one character at each time step, and the other one tries to understand the morphological status of the decoder by predicting the morphological annotation ($l_i$) of the target character.", "The approach in the second scenario yields a multi-task learning architecture, in which one task learns translations and the other learns morphological annotations.", "Therefore, all network modules, especially the last hidden layer just before the predictors, should provide information which is useful enough to make correct predictions in both channels, i.e. the decoder should preserve translation as well as morphological knowledge.", "Since we are translating into MRLs, this type of mixed information (morphology+translation) can be quite useful.", "In our setting, the morphological annotation $l_i$ predicted via the second channel shows to which part of the word or morpheme the target character belongs, i.e.
the label for the character is the morpheme that includes it.", "We clarify the prediction procedure via an example from our training set (see Section 4).", "When the Turkish word 'terbiyesizlik' is generated, the first channel is supposed to predict t, e, r, up to k, one after another.", "For the same word, the second channel is supposed to predict stem-C for the first 7 steps, as the first 7 characters 'terbiye' belong to the stem of the word.", "The C sign indicates that stem-C is a class label.", "The second channel should also predict siz-C when the first channel predicts s (eighth character), i (ninth character), and z (tenth character), and lik-C when the first channel samples the last three characters.", "Clearly, the second channel is a classifier which works over the {stem-C, siz-C, lik-C, ...} classes.", "Figure 1 illustrates a segment of a sentence including this Turkish word and explains which class tags should be predicted by each channel.", "To implement the second scenario we require a single-source double-target training corpus: [source sentence] → [sequence of target characters & sequence of morphological annotations] (see Section 4).", "The objective function should also be manipulated accordingly.", "Given a training set $\{x_t, y_t, m_t\}_{t=1}^{T}$, the goal is to maximize the joint loss function shown in (3): $\lambda \sum_{t=1}^{T} \log P(y_t \mid x_t; \theta) + (1-\lambda) \sum_{t=1}^{T} \log P(m_t \mid x_t; \theta)$ (3), where $x_t$ is the $t$-th input sentence whose translation is a sequence of target characters denoted by $y_t$.", "$m_t$ is the sequence of morphological annotations and $T$ is the size of the training set.", "$\theta$ is the set of network parameters and $\lambda$ is a scalar to balance the contribution of each cost function.", "$\lambda$ is adjusted on the development set during training.", "In the first scenario, we aim to provide the decoder with useful information about morphological properties of the target language, but we are not sure whether the signals sent from the table are what we really need.", "They might be helpful or even harmful, so there should be a mechanism to control their quality.", "In the second scenario we have a similar problem, as the last layer requires some information to predict the correct morphological class through the second channel, but there is no guarantee that the information in the decoder is sufficient for this sort of prediction.", "In order to address these problems, in the third extension we combine both scenarios, as they are complementary and can potentially help each other.", "The morphology table acts as an additional useful source of knowledge as it already consists of affixes, but its content should be adapted according to the decoder and its actual needs.", "Accordingly, we need a trainer to update the table properly.", "The extra prediction channel plays this role for us as it forces the network to predict the target language's affixes at the output layer.", "The error computed in the second channel is back-propagated to the network, including the morphology table, and updates its affix information into what the decoder actually needs for its prediction.", "Therefore, the second output channel helps us train better affix embeddings.", "The morphology table also helps the second predictor.", "Without considering the table, the last layer only includes information about the input sequence and previously predicted outputs, which is not directly related to morphological information.",
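To ground the second scenario, the sketch below constructs the per-character class labels for 'terbiyesizlik' and evaluates the joint objective in (3). The segmentation is the one given in the running example; the λ value and the helper names are illustrative assumptions.

```python
def char_labels(segments):
    """segments: list of (morpheme, class_label) pairs covering the word
    left to right; every character gets the class of its morpheme."""
    return [cls for morph, cls in segments for _ in morph]

segments = [("terbiye", "stem-C"), ("siz", "siz-C"), ("lik", "lik-C")]
labels = char_labels(segments)
assert len(labels) == len("terbiyesizlik")   # one class label per target character

def joint_loss(log_p_chars, log_p_morphs, lam=0.5):
    """Negative of Eq. (3): lam * sum log P(y|x) + (1 - lam) * sum log P(m|x).
    lam is tuned on the development set; 0.5 here is only a placeholder."""
    return -(lam * sum(log_p_chars) + (1.0 - lam) * sum(log_p_morphs))
```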
"The second attention module retrieves useful affixes from the morphology table and concatenates them to the last layer, which means the decoder is explicitly fed with morphological information.", "Therefore, these two modules mutually help each other.", "The external channel helps update the morphology table with high-quality affixes (backward pass) and the table sends its high-quality signals to the prediction layer (forward pass).", "The relation between these modules and the NMT architecture is illustrated in Figure 2.", "As previously reviewed, different models try to capture complexities on the encoder side, but to the best of our knowledge the only model which proposes a technique to deal with complex constituents on the decoder side is that of Chung et al. (2016), which should therefore be an appropriate baseline for our comparisons.", "Moreover, it outperforms other existing NMT models, so we prefer to compare our network to the best existing model.", "This model is referred to as CDNMT in our experiments.", "In the next sections we first explain our experimental setting, corpora, and how we build the morphology table (Section 4.1), and then report experimental results (Section 4.2).", "In order to make our work comparable we try to follow the same experimental setting used in CDNMT, where the GRU size is 1024, the affix and word embedding size is 512, and the beam width is 20.", "Our models are trained using stochastic gradient descent with Adam (Kingma and Ba, 2015).", "Chung et al. (2016) and Sennrich et al. (2016) demonstrated that bpe boosts NMT, so similar to CDNMT we also preprocess the source side of our corpora using bpe.", "We use the WMT-15 corpora¹ to train the models, newstest-2013 for tuning and newstest-2015 as the test sets.", "For English–Turkish (En–Tr) we use the OpenSubtitle2016 collection (Lison and Tiedemann, 2016).", "The training sides of the English–German (En–De), English–Russian (En–Ru), and En–Tr corpora include 4.5, 2.1, and 4 million parallel sentences, respectively.", "We randomly select 3K sentences for each of the development and test sets for En–Tr.", "For all language pairs we keep the 400 most frequent characters as the target-side character set and replace the remainder (infrequent characters) with a specific character.", "One of the key modules in our architecture is the morphology table.", "In order to implement it we use a look-up table whose columns include embeddings for the target language's affixes (each column represents one affix), which are updated during training.", "As previously mentioned, the table is intended to provide useful morphological information, so it should be initialized properly, for which we use a morphology-aware embedding-learning model.", "To this end, we use the neural language model of Botha and Blunsom (2014), in which each word is represented via a linear combination of the embeddings of its surface form and subunits, e.g.
$e(\textit{terbiyesizlik}) = e_s(\textit{terbiyesizlik}) + e(\textit{terbiye}) + e(\textit{siz}) + e(\textit{lik})$, where $e_s$ denotes the surface-form embedding.", "Given a sequence of words, the neural language model tries to predict the next word, so it learns sentence-level dependencies as well as intra-word relations.", "The model trains surface-form and subword-level embeddings, which provides us with high-quality affix embeddings.", "Our neural language model is a recurrent network with a single 1000-dimensional GRU layer, which is trained on the target sides of our parallel corpora.", "The embedding size is 512 and we use a batch size of 100 to train the model.", "¹http://www.statmt.org/wmt15/", "Before training the neural language model, we need to manipulate the training corpus to decompose words into morphemes, for which we use Morfessor (Smit et al., 2014), an unsupervised morphological analyzer.", "Using Morfessor, each word is segmented into different subunits, where we consider the longest part as the stem of each word; what appears before the stem is taken as a member of the set of prefixes (there might be one or more prefixes) and what follows the stem is considered as a member of the set of suffixes.", "Since Morfessor is an unsupervised analyzer, in order to minimize segmentation errors and avoid noisy results we filter its output and exclude subunits which occur fewer than 500 times.²", "After decomposing, filtering, and separating stems from affixes, we extracted several affixes, which are reported in Table 2.", "We emphasize that there might be wrong segmentations in Morfessor's output; e.g. Turkish is a suffix-based language, so there are no prefixes in this language, but based on what Morfessor generated we extracted 11 different types of prefixes.", "We do not post-process Morfessor's outputs.", "Using the neural language model we train word, stem, and affix embeddings, and initialize the look-up table (but not other parts) of the decoder using those affixes.", "The look-up table includes high-quality affixes trained on the target side of the parallel corpus by which we train the translation model.", "Clearly, such an affix table is an additional knowledge source for the decoder.", "It preserves information which is very close to what the decoder actually needs.", "However, there might be some missing pieces of information or some incompatibility between the decoder and the table, so we do not freeze the morphology table during training, but let the decoder update it with respect to its needs in the forward and backward passes.", "²This threshold may seem a little high, but for a corpus with more than 115M words it is not a strict threshold in practice.", "Table 3 summarizes our experimental results.", "We report results for the bpe→char setting, which means the source token is a bpe unit and the decoder samples a character at each time step.", "CDNMT is the baseline model.", "Table 3 includes scores reported from the original CDNMT model (Chung et al., 2016) as well as the scores from our reimplementation.",
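Gathering the preprocessing steps just described into one place, the sketch below takes Morfessor-style segmentations (here as plain lists rather than Morfessor's real API), keeps subunits occurring at least 500 times, and splits each word into prefixes, a stem (the longest subunit), and suffixes; all function names are ours.

```python
from collections import Counter

def split_stem(subunits):
    """The longest subunit is the stem; earlier units are prefixes, later ones suffixes."""
    k = max(range(len(subunits)), key=lambda i: len(subunits[i]))
    return subunits[:k], subunits[k], subunits[k + 1:]

def extract_affixes(segmentations, min_count=500):
    counts = Counter(u for seg in segmentations for u in seg)
    prefixes, suffixes = set(), set()
    for seg in segmentations:
        kept = [u for u in seg if counts[u] >= min_count]   # frequency filter
        if len(kept) < 2:
            continue                                        # nothing besides a stem
        pre, _stem, suf = split_stem(kept)
        prefixes.update(pre)
        suffixes.update(suf)
    return prefixes, suffixes

# e.g. extract_affixes([["terbiye", "siz", "lik"], ...]) over the whole target corpus
```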
"To make our work comparable and show the impact of the new architecture, we tried to replicate CDNMT's results in our experimental setting: we kept everything (parameters, iterations, epochs, etc.) unchanged and evaluated the extended model in the same setting.", "Table 3 reports BLEU scores (Papineni et al., 2002) of our NMT models.", "Table 3 can be interpreted from different perspectives, but the main findings are summarized as follows: The morphology table yields significant improvements for all languages and settings.", "The morphology table boosts the En–Tr engine more than the others, and we think this is because of the nature of the language.", "Turkish is an agglutinative language in which morphemes are clearly separable from each other, but in German and Russian morphological transformations rely more on fusional operations rather than agglutination.", "It seems that there is a direct relation between the size of the morphology table and the gain provided for the decoder, because Russian and Turkish have bigger tables and benefit from the table more than German, which has fewer affixes.", "The auxiliary output channel is even more useful than the morphology table for all settings but En–Ru, and we think this is because of the morpheme-per-word ratio in Russian.", "The number of morphemes attached to a Russian word is usually higher than for German and Turkish words in our corpora, and this makes the prediction harder for the classifier (the more suffixes attached to a word, the harder the classification task).", "The combination of the morphology table and the extra output channel provides the best result for all languages.", "Figure 3 depicts the impact of the morphology table and the extra output channel for each language.", "To further study our models' behaviour and ensure that our extensions do not generate random improvements, we visualized some attention weights when generating 'terbiyesizlik'.", "In Figure 4, the upper part shows attention weights for all Turkish affixes, where the y axis shows different time steps and the x axis includes attention weights of all affixes (304 columns) for those time steps, e.g. the first row and the first column represents the attention weight assigned to the first Turkish affix when sampling t in 'terbiyesizlik'.", "While at first glance the figure may appear somewhat confusing, it provides some interesting insights, which we elaborate on next.", "In addition to the whole attention matrix we also visualized a subset of weights to show how the morphology table provides useful information.", "Figure 4: Visualizing the attention weights between the morphology table and the decoder when generating 'terbiyesizlik'.",
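The visualization itself amounts to a heatmap of the affix-attention matrix; a sketch of how such a figure can be produced, assuming `beta` holds the decoder-to-affix weights (one row per time step, one column per affix — random values stand in for the real weights here):

```python
import numpy as np
import matplotlib.pyplot as plt

word = "terbiyesizlik"
beta = np.random.rand(len(word), 304)          # stand-in for the real attention weights
beta /= beta.sum(axis=1, keepdims=True)        # each row is a softmax distribution

fig, ax = plt.subplots(figsize=(8, 3))
ax.imshow(beta, aspect="auto", interpolation="nearest")
ax.set_yticks(range(len(word)))
ax.set_yticklabels(list(word))                 # decoder time steps, labeled by character
ax.set_xlabel("affix (column of the morphology table)")
ax.set_ylabel("sampled character")
fig.tight_layout()
fig.savefig("affix_attention.png")
```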
"In the second figure we study the behaviour of the morphology table for the first ($t_1$), fifth ($i_5$), ninth ($i_9$), and twelfth ($i_{12}$) time steps when generating the same Turkish word $t_1$erb$i_5$yes$i_9$zl$i_{12}$k.", "$t_1$ is the first character of the word.", "We also have three i characters from different morphemes, where the first one is part of the stem, the second one belongs to the suffix 'siz', and the third one to 'lik'.", "It is interesting to see how the table reacts to the same character from different parts.", "For each time step we selected the top-10 affixes which have the highest attention weights.", "The set of top-10 affixes can be different for each step, so we made a union of those sets, which gives us 22 affixes.", "The bottom part of Figure 4 shows the attention weights for those 22 affixes at each time step.", "After analyzing the weights we observed interesting properties about the morphology table and the auxiliary attention module.³", "The main findings about the behaviour of the table are as follows: The model assigns high attention weights to stem-C for almost all time steps.", "However, the weights assigned to this class for $t_1$ and $i_5$ are much higher than those of affix characters (as they are part of the stem).", "The vertical lines in both figures confirm this feature (bad behaviour).", "For some unknown reason there are some affixes which have no direct relation to that particular time step but receive high attention, such as 'maz' at $t_{12}$ (bad behaviour).", "³Our observations are not based on this example alone, as we studied other random examples, and the table shows consistent behaviour for all of them.", "[...] to be selected, e.g. weights for ($i_5$, stem-C) or ($i_9$, siz-C) (good behaviour).", "The morphology table may send bad or good signals, but it is consistent for similar or co-occurring characters, e.g. for the last three time steps $l_{11}$, $i_{12}$, and $k_{13}$, almost the same set of affixes receives the highest attention weights.", "This consistency is exactly what we are looking for, as it can define a reliable external constraint for the decoder to guide it.", "Vertical lines on the figure also confirm this fact.", "They show that for a set of consecutive characters which belong to the same morpheme, the attention module sends a signal from a particular affix (good behaviour).", "There are some affixes which might not be directly related to that time step but receive high attention weights.", "This is because those affixes either include the same character which the decoder tries to predict (e.g. i-C for $i_4$, or t-C and tin-C for $t_1$), or frequently appear with that part of the word which includes the target character (e.g. mi-C has a high weight when predicting $t_1$ because $t_1$ belongs to 'terbiye', which frequently collocates with mi-C: terbiye+mi) (good behaviour).", "Finally, in order to complete our evaluation study we feed the English-to-German NMT model with the sentence 'Terms and conditions for sending contributions to the BBC', to show how the model behaves differently and generates a better target sentence.", "Translations generated by our models are illustrated in Table 4.", "Table 4: CDNMT: 'allgemeinen geschäftsbedingungen für die versendung von Beiträgen an die BBC'; CDNMT-mo: 'Geschäft s bedingungen für die versendung von Beiträgen zum BBC'.", "The table demonstrates that our architecture is able to control the decoder and limit its selections, e.g.
the word 'allgemeinen' generated by the baseline model is redundant.", "There is no constraint to inform the baseline model that this word should not be generated, whereas our proposed architecture controls the decoder in such situations.", "After analyzing our model, we realized that there are strong attention weights assigned to the w-space (white space character) and BOS (beginning of the sequence) columns of the affix table while sampling the first character of the word 'Geschäft', which shows that the decoder is informed about the start point of the sequence.", "Similar to the baseline model's decoder, our decoder can sample any character, including 'a' of 'allgemeinen' or 'G' of 'Geschäft'.", "Translation information stored in the baseline decoder is not sufficient for selecting the right character 'G', so the decoder wrongly starts with 'a' and continues along a wrong path, up to generating the whole word.", "However, our decoder's information is accompanied by signals from the affix table which force it to start with a better initial character, whose sampling leads to generating the correct target word.", "Another interesting feature about the table is the new structure 'Geschäft s bedingungen' generated by the improved model.", "As the reference translation shows, in the correct form these two structures should be glued together via 's', which can be considered as an infix.", "As our model is supposed to detect this sort of intra-word relation, it treats the whole structure as two compounds which are connected to one another via an infix.", "Although this is not a correct translation, and it would be trivial to post-edit it into the correct output form, it is interesting to see how our mechanism forces the decoder to pay attention to intra-word relations.", "Apart from these two interesting findings, the number of wrong character selections in the baseline model is considerably reduced in the improved model because of our enhanced architecture.", "In this paper we proposed a new architecture to incorporate morphological information into the NMT pipeline.", "We extended the state-of-the-art NMT model (Chung et al., 2016) with a morphology table.", "The table can be considered as an external knowledge source which is helpful as it increases the capacity of the model by increasing the number of network parameters.", "We tried to benefit from this advantage.", "Moreover, we managed to fill the table with morphological information to further boost the NMT model when translating into MRLs.", "Apart from the table, we also designed an additional output channel which forces the decoder to predict morphological annotations.", "The error signals coming from the second channel during training inform the decoder of morphological properties of the target language.", "Experimental results show that our techniques were useful for NMT of MRLs.", "For future work we plan to follow three main ideas.", "(i) We will try to find more efficient ways to supply morphological information to both the encoder and decoder.", "(ii) We plan to benefit from other types of information, such as syntactic and semantic annotations, to boost the decoder, as the table is not limited to morphological information alone and can preserve other sorts of information.", "(iii) Finally, we target sequence generation for fusional languages.", "Although our model showed significant improvements for both German and Russian, the proposed model is more suitable for generating sequences in agglutinative languages.", "We thank our
anonymous reviewers for their valuable feedback, as well as the Irish Centre for High-End Computing (www.ichec.ie) for providing computational infrastructure.", "This work has been supported by the ADAPT Centre for Digital Content Technology, which is funded under the SFI Research Centres Programme (Grant 13/RC/2106) and is co-funded under the European Regional Development Fund." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "objective", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "objective", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "other", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "objective", "abstain", "abstain", "abstain", "method", "method", "abstain", "result", "method", "result", "method", "abstain", "objective", "other", "other" ]
[ "Multi-hop reasoning question answering requires deep comprehension of relationships between various documents and queries.", "We propose a Bi-directional Attention Entity Graph Convolutional Network (BAG), leveraging relationships between nodes in an entity graph and attention information between a query and the entity graph, to solve this task.", "Graph convolutional networks are used to obtain a relation-aware representation of nodes for entity graphs built from documents with multi-level features.", "Bidirectional attention is then applied on graphs and queries to generate a query-aware nodes representation, which will be used for the final prediction.", "Experimental evaluation shows BAG achieves state-of-the-art accuracy performance on the QAngaroo WIKIHOP dataset.", "Question Answering (QA) and Machine Comprehension (MC) tasks have drawn significant attention during the past years.", "The proposal of large-scale single-document-based QA/MC datasets, such as SQuAD (Rajpurkar et al., 2016), CNN/Daily mail (Hermann et al., 2015), makes training available for end-to-end deep neural models, such as BiDAF (Seo et al., 2016), DCN (Xiong et al., 2016) and SAN (Liu et al., 2017).", "However, gaps still exist between these datasets and real-world applications.", "For example, reasoning is constrained to a single paragraph, or even part of it.", "Extended work was done to meet practical demand, such as DrQA (Chen et al., 2017) answering a SQuAD question based on the whole Wikipedia instead of single paragraph.", "Besides, latest large-scale datasets, e.g. TriviaQA (Joshi et al., 2017) and NarrativeQA (Kocisk`y et al., 2018), address this limitation by introducing multiple documents, ensuring reasoning cannot be done within local information.", "Although those datasets are fairly challenging, reasoning are within one document.", "In many scenarios, we need to comprehend the relationships of entities across documents before answering questions.", "Therefore, reading comprehension tasks with multiple hops were proposed to make it available for machine to tackle such problems, e.g. QAngaroo task (Welbl et al., 2018).", "Each sample in QAngaroo contains multiple supporting documents, and the goal is selecting the correct answer from a set of candidates for a query.", "Most queries cannot be answered depending on a single document, and multi-step reasoning chains across documents are needed.", "Therefore, it is possible that understanding a part of paragraphs loses effectiveness for multi-hop inference, which posts a huge challenge for previous models.", "Some baseline models, e.g. 
BiDAF (Seo et al., 2016) and FastQA (Weissenborn et al., 2017), which are popular for single-document QA, suffer a dramatic accuracy decline in this task.", "In this paper, we propose a new graph-based QA model, named Bi-directional Attention Entity Graph convolutional network (BAG).", "Documents are transformed into a graph in which nodes are entities and edges are relationships between them.", "The graph is then fed into graph convolutional networks (GCNs) to learn a relation-aware representation of nodes.", "Furthermore, we introduce a new bi-directional attention between the graph and a query with multi-level features to derive the mutual information for the final prediction.", "Experimental results demonstrate that BAG achieves state-of-the-art performance on the WIKIHOP dataset.", "Ablation tests also show that BAG benefits from the bi-directional attention, multi-level features and graph convolutional networks.", "Our contributions can be summarized as: Applying a bi-directional attention between graphs and queries to learn query-aware representations for reading comprehension.", "Multi-level features are involved to gain a comprehensive relationship representation for graph nodes during the processing of GCNs.", "Recently, coreference and graph-based models have been studied for multi-hop QA (Dhingra et al., 2018; Santoro et al., 2017).", "Coref-GRU (Dhingra et al., 2018) uses coreferences among tokens in documents.", "However, it is still limited by the long-distance relation propagation capability of RNNs.", "Besides, graphs have proven to be an efficient way to represent complex relationships among objects and derive relational information (Santoro et al., 2017).", "MHQA-GRN (Song et al., 2018) and Entity-GCN (De Cao et al., 2018) construct entity graphs based on documents to learn more compact representations for multi-hop reasoning and derive answers from graph networks.", "However, both of them pay little attention to input features and to the attention between queries and graph nodes.", "Attention has been proven to be an essential mechanism for promoting the performance of NLP tasks in previous work (Bahdanau et al., 2014; Sukhbaatar et al., 2015).", "In addition, bidirectional attention (Seo et al., 2016) shows its superiority to vanilla mutual attention because it provides complementary information for both contexts and queries.", "However, little work exploits the attention between graphs and queries.", "We first formally define the multi-hop QA task, taking the QAngaroo (Welbl et al., 2018) WIKIHOP data as an example.", "There is a set S containing N supporting documents, a query q with M tokens and a set of answer candidates C.", "Our goal is to find the correct answer index a.", "Consider a triple-style query q = (country, kepahiang), which means 'which country does Kepahiang belong to?'.", "Then answer candidates are provided, e.g. C = {Indonesia, Malaysia}.", "There are multiple supporting documents, but not all of them are related to the reasoning, e.g. 'Kepahiang is a regency in Bengkulu', 'Bengkulu is one of the provinces of Indonesia', 'Jambi is a province of Indonesia'.",
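For concreteness, one WIKIHOP-style sample can be pictured as the structure below, built directly from the running example; the field names are our own shorthand rather than the dataset's exact JSON keys.

```python
sample = {
    "query": ("country", "kepahiang"),            # relation + subject entity
    "candidates": ["Indonesia", "Malaysia"],       # the candidate set C
    "supports": [                                  # the document set S (N documents)
        "Kepahiang is a regency in Bengkulu.",
        "Bengkulu is one of the provinces of Indonesia.",
        "Jambi is a province of Indonesia.",
    ],
    "answer_index": 0,                             # a: 'Indonesia' via two reasoning hops
}
```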
"We can derive that the correct candidate is 'Indonesia', i.e. a = 0, based on reasoning hops in the former two documents.", "We show the proposed BAG model in Figure 1.", "It contains five modules: (1) entity graph construction, (2) multi-level feature layer, (3) GCN layer, (4) bi-directional attention and (5) output layer.", "We construct an entity graph based on Entity-GCN (De Cao et al., 2018), which means all mentions of candidates found in the documents are used as nodes in the graph.", "Undirected edges are defined according to positional properties of every node pair.", "There are two kinds of edges: 1) cross-document edges, for every node pair with the same entity string located in different documents; 2) within-document edges, for every node pair located in the same document.", "Nodes in an entity graph can be found via simple string matching.", "This approach simplifies computation and makes sure all relevant entities are included in the graph.", "Because they were picked out along possible reasoning chains during dataset generation (Welbl et al., 2018), the answer candidates contain all entities relevant for answering.", "Finally, we obtain a set of T nodes $\{n_i\}, 1 \leq i \leq T$, and the corresponding edges among these nodes via the above procedures.", "We represent both nodes and queries using multi-level features, as shown in Figure 1(2).", "We first use pretrained word embeddings, such as GLoVe (Pennington et al., 2014), to represent tokens, because nodes and queries are composed of tokens.", "Then a contextual-level feature is used to offset the deficiency of GLoVe.", "Note that only part of the tokens are retained during graph construction, because we only extract entities as nodes.", "Thus, contextual information around these entities in the original documents becomes essential for indicating relations between tokens, and we use higher-level information for nodes in addition to the token-level feature.", "We use ELMo (Peters et al., 2018) as contextualized word representations, modeling both complex word characteristics and contextual linguistic conditions.", "It should be noted that ELMo features for nodes are calculated based on the original documents, then truncated according to the position indices of the nodes.", "Token-level and context-level features are concatenated and encoded to allow further comprehension.", "Since a node may contain more than one token, we average features among tokens to generate a feature vector for each node before encoding it.", "It is then transformed into the encoded node feature via a 1-layer linear network.", "Different from nodes, we represent a query directly using a bidirectional LSTM (Bi-LSTM) whose output at each step is used as the encoded query feature.", "Both the linear network and the LSTM have the same output dimension d.", "In addition, we add two manual features to reflect the semantic properties of tokens: named-entity recognition (NER) and part-of-speech (POS) tags.", "The complete features $f^n \in \mathbb{R}^{T \times \hat{d}}$, $f^q \in \mathbb{R}^{M \times \hat{d}}$ for both nodes and queries are the concatenation of the corresponding encoded features, NER embedding and POS embedding, where $\hat{d} = d + d_{POS} + d_{NER}$.", "In order to realize multi-hop reasoning, we use a Relational Graph Convolutional Network (R-GCN) (Schlichtkrull et al., 2018) that can propagate messages across different entity nodes in the graph and generate transformed representations of the original ones.", "The R-GCN is employed to handle highly relational data and make use of different edge types.",
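A minimal sketch of this graph construction: nodes are candidate mentions located by lower-cased string matching, and the two edge types follow the definition above. All helper names are illustrative, not the authors' code.

```python
import re

def build_entity_graph(supports, candidates):
    nodes = []                                    # (doc_id, start, end, candidate)
    for doc_id, doc in enumerate(supports):
        text = doc.lower()
        for cand in candidates:
            for m in re.finditer(re.escape(cand.lower()), text):
                nodes.append((doc_id, m.start(), m.end(), cand))
    edges = []                                    # (i, j, edge_type), undirected
    for i in range(len(nodes)):
        for j in range(i + 1, len(nodes)):
            if nodes[i][0] == nodes[j][0]:
                edges.append((i, j, "within_document"))   # same document
            elif nodes[i][3] == nodes[j][3]:
                edges.append((i, j, "cross_document"))    # same string, different documents
    return nodes, edges
```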
"At the $l$-th layer, given the hidden state $h^l_i \in \mathbb{R}^d$ of node $i$, the hidden states $h^l_j \in \mathbb{R}^d, j \in N_i$, and the relations $R_{N_i}$ of all its neighbors ($d$ is the hidden state dimension), the hidden state in the next layer can be obtained via $h^{l+1}_i = \sigma\left(\sum_{r \in R_{N_i}} \sum_{j \in N_i} \frac{1}{c_{i,r}} W^l_r h^l_j + W^l_0 h^l_i\right)$ (1), where $c_{i,r}$ is a normalization constant $|N_i|$, $W^l_r \in \mathbb{R}^{d \times d}$ is a relation-specific weight matrix and $W^l_0 \in \mathbb{R}^{d \times d}$ is a general weight matrix.", "Similar to Entity-GCN (De Cao et al., 2018), we apply a gate on the update vector $u^l_i$ and hidden state $h^l_i$ of the current node via a linear transformation $f_s$: $w^l_i = \sigma(f_s(\mathrm{concat}(u^l_i, h^l_i)))$ (2), in which $u^l_i$ can be obtained via (1) without the sigmoid function.", "It is then used to weight the update of the hidden state $h^{l+1}_i$ of the same node in the next layer: $h^{l+1}_i = w^l_i \odot \tanh(u^l_i) + (1 - w^l_i) \odot h^l_i$ (3).", "We stack such networks for L layers, in which all parameters are shared.", "The information of each node will be propagated up to an L-node distance away, generating an L-hop-reasoning, relation-aware representation of nodes.", "The initial input is the multi-level node features $f^n = \{f^n_i\}, 0 \leq i \leq T$, and the edges $e = \{e_{ij}\}$ in the graph.", "Bi-directional attention is responsible for generating the mutual information between a graph and a query.", "In BiDAF (Seo et al., 2016), attention is applied to sequence data in QA tasks, such as supporting texts.", "However, we find it also works well between graph nodes and queries.", "It generates query-aware node representations that can provide more reasoning information for prediction.", "What differs in BAG is that attention is applied to graphs, as shown in Figure 1(4).", "The similarity matrix $S \in \mathbb{R}^{T \times M}$ is calculated via $S = \mathrm{avg}_{-1}\left(f_a(\mathrm{concat}(h^n, f^q, h^n \odot f^q))\right)$ (4), in which $h^n \in \mathbb{R}^{T \times d}$ is the node representation obtained from the last GCN layer, $f^q \in \mathbb{R}^{M \times d}$ is the query feature matrix after encoding, $d$ is the dimension of both the query feature and the transformed node representation, $f_a$ is a linear transformation, $\mathrm{avg}_{-1}$ stands for the average operation over the last dimension, and $\odot$ is element-wise multiplication.", "Unlike the context-to-query attention in BiDAF, we introduce a node-to-query attention $\tilde{a}^{n2q} \in \mathbb{R}^{T \times d}$, which signifies the query tokens that have the highest relevancy for each node, using $\tilde{a}^{n2q} = \mathrm{softmax}_{col}(S) \cdot f^q$ (5), where $\mathrm{softmax}_{col}$ means performing the softmax function across the column, and $\cdot$ stands for matrix multiplication.", "At the same time, we also design a query-to-node attention $\tilde{a}^{q2n} \in \mathbb{R}^{M \times d}$, which signifies the nodes that are most related to each token in the query, via $\tilde{a}^{q2n} = \mathrm{dup}(\mathrm{softmax}(\max_{col}(S)))^{\top} \cdot f^n$ (6), in which $\max_{col}$ is the maximum function applied across the column of a matrix, which transforms $S$ into $\mathbb{R}^{1 \times M}$.", "The function $\mathrm{dup}$ then duplicates it T times into shape $\mathbb{R}^{T \times M}$.", "$f^n \in \mathbb{R}^{T \times d}$ is the original node feature before the GCN layer.", "Our bi-directional attention layer outputs the concatenation of the original node features, the node-to-query attention, the element-wise multiplication of node features and node-to-query attention, and the multiplication of node features and query-to-node attention.", "It should be noted that the relation-aware node representation from the GCN layer is only used to calculate the similarity matrix, and the original node feature is used in the rest of the calculation to obtain more general complementary information between graph and query.", "Edges are not taken into account because they are discrete and are combined with nodes in the GCN layer.", "The output is defined as $\tilde{a} = \mathrm{concat}(f^n, \tilde{a}^{n2q}, f^n \odot \tilde{a}^{n2q}, f^n \odot \tilde{a}^{q2n})$.",
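A numpy sketch of one gated R-GCN layer following (1)–(3); batching, the per-relation normalization details, and parameter sharing across layers are simplified, and all names are ours rather than the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_rgcn_layer(H, neighbors, W_r, W0, W_gate):
    """H: (T, d) node states; neighbors[i]: list of (j, relation) pairs;
    W_r: dict relation -> (d, d); W0: (d, d); W_gate: (2d, d)."""
    H_next = np.empty_like(H)
    for i in range(len(H)):
        u = H[i] @ W0                                   # self term W_0^l h_i^l
        if neighbors[i]:
            c = len(neighbors[i])                       # normalization constant |N_i|
            for j, r in neighbors[i]:
                u += (H[j] @ W_r[r]) / c                # relation-specific messages
        # Eq. (2): gate from the concatenation of update vector and current state
        w = sigmoid(np.concatenate([u, H[i]]) @ W_gate)
        # Eq. (3): gated interpolation between tanh(u) and the old state
        H_next[i] = w * np.tanh(u) + (1.0 - w) * H[i]
    return H_next
```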
"A 2-layer fully connected feed-forward network is employed to generate the final prediction, with tanh as the activation function in each layer.", "Softmax is applied to the output.", "It uses the query-aware representation of nodes from the attention layer as input, and its output is regarded as the probability of each node being the answer.", "Since each candidate may appear several times in the graph, the probability of each candidate is the sum over all corresponding nodes.", "The loss function is defined as the cross entropy between the gold answer and its predicted probability.", "We used both the unmasked and masked versions of the QAngaroo WIKIHOP dataset (Welbl et al., 2018) and followed its basic setting, in which the masked version uses specific tokens such as MASK1 to replace the original candidate tokens in documents.", "There are 43,738, 5,129 and 2,451 examples in the training set, the development set and the test set respectively, and the test set is not public.", "In the implementation¹, we used standard ELMo with a 1024-dimension representation.", "¹Source code is available on https://github.com/caoyu1991/BAG.", "Besides, 300-dimension GLoVe embeddings pre-trained on 840B tokens of Web crawl data were used as token-level features.", "We used spaCy to provide the additional 8-dimension NER and POS features.", "The dimension of the 1-layer linear network for nodes in the multi-level feature module was 512, with tanh as the activation function.", "A 2-layer Bi-LSTM was employed for queries, whose hidden state size is 256.", "The feature dimension is then $\hat{d} = 512 + 8 + 8 = 528$.", "The GCN layer number L was set to 5.", "The unit number of the intermediate layer in the output layer was 256.", "In addition, the number of nodes and the query length were truncated to 500 and 25 respectively, for normalized computation.", "Dropout with rate 0.2 was applied before the GCN layer.", "The Adam optimizer was employed with initial learning rate $2 \times 10^{-4}$, which was halved every 5 epochs, with batch size 32.", "It took about 14 hours for 50-epoch training on two GTX1080Ti GPUs using pre-built and pre-processed graph data generated from the original corpus, which can significantly decrease the training time.", "We consider the following baseline models: FastQA (Weissenborn et al., 2017), BiDAF (Seo et al., 2016), Coref-GRU (Dhingra et al., 2018), MHQA-GRN (Song et al., 2018) and Entity-GCN (De Cao et al., 2018).", "The former three models are RNN-based models, with coreference relationships involved in Coref-GRU.", "The last two models are graph-based models specially designed for multi-hop QA tasks.", "As shown in Table 1, we collected three kinds of results.", "The dev and test results are for the original validation and test sets respectively, noting that the test set is not public.", "In addition, we divide the original validation set of the masked version into two parts evenly, one as a split validation set for tuning the model and the other as a split test set.", "The split-test results are for this split test set.", "Our model achieves the best performance on both unmasked and masked data², with accuracy 69.0% on the test set, which is 1.4% higher than the previous best model, Entity-GCN.", "It is significantly superior to FastQA and BiDAF due to leveraging the relationship information given by the graph and abandoning some of the distracting context in multiple documents.",
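Returning to the output layer: because one candidate can be mentioned by several nodes, the per-node probabilities are summed per candidate before the cross-entropy loss. A sketch under those assumptions (variable names are illustrative):

```python
import numpy as np

def candidate_probs(node_scores, node_to_cand, n_cands):
    """node_scores: (T,) outputs of the feed-forward layer;
    node_to_cand[i]: index of the candidate that node i mentions."""
    e = np.exp(node_scores - node_scores.max())
    p_node = e / e.sum()                      # softmax over all nodes
    p_cand = np.zeros(n_cands)
    for i, c in enumerate(node_to_cand):
        p_cand[c] += p_node[i]                # sum a candidate's node probabilities
    return p_cand

def loss(node_scores, node_to_cand, n_cands, gold_index):
    # cross entropy between the gold answer and its predicted probability
    p = candidate_probs(node_scores, node_to_cand, n_cands)
    return -np.log(p[gold_index] + 1e-12)
```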
"Although Coref-GRU extends GRU with coreference relationships, it is still not enough for multi-hop reasoning, because hop relationships are not limited to coreference; entities with the same strings also exist across documents and can be used for reasoning.", "Both MHQA-GRN and Entity-GCN utilize graph networks to resolve relations among entities in documents.", "However, the lack of attention and complementary features limits their performance.", "Therefore, our BAG model achieves the best performance under all data configurations.", "It is noticeable that BAG only obtains a small improvement on masked data.", "We argue that the reason is that the attention between masks and queries generates less useful information compared to unmasked tokens.", "²The paper was written in early Dec. 2018; at that time Entity-GCN was the best public model, and only one anonymous model was better than it.", "Moreover, ablation experimental results on the unmasked version of the WIKIHOP dev set are given in Table 2.", "Once we remove the bi-directional attention and feed the concatenation of nodes and queries directly into the output layer, the model shows a significant performance drop of more than 3%, proving the necessity of attention for reasoning in multi-hop QA.", "If we use the linear-transformation-based single attention $a = h^n W_a f^q$ given in Luong et al. (2015) instead of our bi-directional attention, the accuracy drops by 2%, which means the bi-directionality of the attention also contributes to the performance improvement.", "A similar drop appears if we remove the GCN and use raw nodes as input for the attention layer.", "In addition, if edge types are no longer considered, which degrades the R-GCN to a vanilla GCN, a noticeable accuracy loss of about 2% appears.", "The absence of multi-level features also causes degradation.", "The removal of semantic-level features, including the NER and POS features, causes a slight decline in performance.", "Further removal of the ELMo features causes a dramatic drop, which reflects the insufficiency of using only word embeddings as features for nodes, and shows that contextual information is very important.", "We propose a Bi-directional Attention entity Graph convolutional network (BAG) for multi-hop reasoning QA tasks.", "Regarding the task characteristics, graph convolutional networks (GCNs) are efficient for handling relationships among entities in documents.", "We demonstrate that both the bidirectional attention between nodes and queries and the multi-level features are necessary for such tasks.", "The former aims to obtain query-aware node representations for answering, while the latter provides contextual comprehension of isolated nodes in graphs.", "Our experimental results not only demonstrate the effectiveness of the two proposed modules, but also show BAG achieves state-of-the-art performance on the WIKIHOP dataset.", "This work was supported by Australian Research Council Projects under grants FL-170100117, DP-280103424." ]
[ "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "objective", "other" ]
[ "This paper studies the performance of a neural self-attentive parser on transcribed speech.", "Speech presents parsing challenges that do not appear in written text, such as the lack of punctuation and the presence of speech disfluencies (including filled pauses, repetitions, corrections, etc.).", "Disfluencies are especially problematic for conventional syntactic parsers, which typically fail to find any EDITED disfluency nodes at all.", "This motivated the development of special disfluency detection systems, and special mechanisms added to parsers specifically to handle disfluencies.", "However, we show here that neural parsers can find EDITED disfluency nodes, and the best neural parsers find them with an accuracy surpassing that of specialized disfluency detection systems, thus making these specialized mechanisms unnecessary.", "This paper also investigates a modified loss function that puts more weight on EDITED nodes.", "It also describes tree-transformations that simplify the disfluency detection task by providing alternative encodings of disfluencies and syntactic information.", "While a great deal of effort has been expended on parsing written text, parsing speech (either transcribed or ASR output) has received less attention.", "Parsing speech is important because speech is the easiest and most natural means of communication, it is increasingly used as an input modality in human-computer interactions.", "Speech presents parsing challenges that do not appear in written text, such as the lack of punctuation and sentence boundaries, speech recognition errors and the presence of speech disfluencies (including filled pauses, repetitions, corrections, etc.) (Kahn et al., 2005).", "Of the major challenges associated with transcribed speech, we focus here on speech disfluencies, which are frequent in spontaneous speech.", "Disfluencies include filled pauses (um, uh), parenthetical asides (you know, I mean), interjections (well, like) and partial words (wou-, oper-).", "One type of disfluency which is especially problematic for conventional syntactic parsers are speech repairs.", "Following the analysis of Shriberg (1994), a speech repair consists of three main parts; the reparandum , the interregnum and the repair .", "As illustrated in the following example, the reparandum we don't is the part of the utterance that is replaced or repaired, the interregnum uh I mean (which consists of a filled pause uh and a discourse marker I mean ) is an optional part of the disfluency, and the repair a lot of states don't replaces the reparandum.", "The fluent version is obtained by removing the reparandum and the interregnum.", "In the Switchboard treebank corpus (Mitchell et al., 1999) the reparanda, filled pauses and discourse markers are dominated by EDITED, INTJ and PRN nodes, respectively (see Figure 1).", "Of these disfluency nodes, EDITED nodes pose a major problem for conventional syntactic parsers, as the parsers typically fail to find any EDITED nodes at all.", "Conventional parsers mainly capture tree-structured dependencies between words, while the relation between reparandum and repair is quite different: the repair is often a rough copy of the reparandum, using the same or very similar words in roughly the same order (Char-niak and Johnson, 2001; Johnson and Charniak, 2004).", "The rough copy dependencies are strong evidence of a disfluency, but conventional syntactic parsers cannot capture them.", "Moreover, the reparandum and the repair do not form conventional syntactic phrases, 
as illustrated in Figure 1, which is an additional difficulty when integrating disfluency detection with syntactic parsing.", "This motivated the development of special disfluency detection systems which find and remove disfluent words from the input prior to parsing (Charniak and Johnson, 2001; Kahn et al., 2005; Lease and Johnson, 2006), and special mechanisms added to parsers specifically to handle disfluencies (Rasooli and Tetreault, 2013; Honnibal and Johnson, 2014; Yoshikawa et al., 2016; Tran et al., 2018).", "In this paper, we investigate the performance of a neural self-attentive constituency parser on speech transcripts.", "We show that an off-the-shelf self-attentive parser, unlike conventional parsers, can detect disfluent words with a performance that is competitive with or better than specialized disfluency detection systems.", "In summary, the main contributions of this paper are: We show that the self-attentive constituency parser sets a new state-of-the-art for syntactic parsing of transcribed speech.", "A neural constituency parser can detect EDITED words with an accuracy surpassing that of specialized disfluency detection models.", "We demonstrate that syntactic information helps neural syntactic parsers detect disfluent words more accurately.", "Replacing the constituent-based representation of disfluencies with a word-based representation of disfluencies improves the detection of disfluent words.", "Modifying the training loss function to put more weight on EDITED nodes during training also improves disfluency detection.", "Speech recognition errors, unknown sentence boundaries and disfluencies are three major problems addressed by previous work on parsing speech.", "In this work, we focus on the problem of disfluency detection when parsing human-transcribed speech, where we assume that sentence boundaries are given and there are no word recognition errors.", "This section reviews approaches that add special mechanisms to parsers to handle disfluencies, as well as specialized disfluency detection models.", "Many speech parsers adopt a transition-based dependency approach to (i) find the relationship between head words and the words modifying the heads, and (ii) detect and remove disfluent words and their dependencies from the sentence.", "Transition-based parsers can be augmented with new parse actions to specifically handle disfluent words (Rasooli and Tetreault, 2013; Honnibal and Johnson, 2014; Yoshikawa et al., 2016; Wu et al., 2015).", "A classifier is trained to choose between the standard and the augmented parse actions at each time step.", "Using pattern-match features in the classifier significantly improves disfluency detection (Honnibal and Johnson, 2014).", "This reflects the fact that parsing-based models use pattern-matching to capture the rough copy dependencies that are characteristic of speech disfluencies.", "Speech parsing models usually use lexical features.", "One recent approach (Tran et al., 2018) integrates lexical and prosodic cues in an encoder-decoder constituency parser.", "Prosodic cues result in a very small performance gain in both parsing and disfluency detection.", "Augmenting the parser with a location-aware attention mechanism is especially useful for detecting disfluencies (Tran et al., 2018).", "In general, parsing models are poor at detecting disfluencies, mainly due to rough copy dependencies in disfluent sentences, which are difficult for conventional parsers to detect.", "Disfluency detection models often use a sequence tagging technique to
assign a single label to each word of a sequence.", "Previous work shows that LSTMs and CNNs operating on words alone are poor at disfluency detection (Zayats et al., 2016; Wang et al., 2016; Jamshid Lou et al., 2018).", "The performance of state-of-the-art disfluency detection models depends heavily on hand-crafted pattern-match features, which are specifically designed to find rough copies.", "One recent paper (Jamshid Lou et al., 2018) augments a CNN model with a new kind of layer called an auto-correlational layer to capture rough copy dependencies.", "The model compares the input vectors of words within a window to find identical or similar words.", "The addition of the auto-correlational layer to a vanilla CNN significantly improves the performance over the baseline CNN model.", "The results are competitive with models using complex hand-crafted features or external information sources, indicating that the auto-correlational model learns rough copies.", "One recent paper (Wang et al., 2018) introduces a semi-supervised approach to disfluency detection.", "Their self-attentive model is the current state-of-the-art in disfluency detection.", "The common factor in Wang et al. (2018) and the approach presented here is the self-attentive transformer architecture, which suggests that this architecture is capable of detecting disfluencies with very high accuracy.", "The work we present goes beyond the work of Wang et al. (2018) in also studying the impact of jointly predicting syntactic structure and disfluencies (so it can be understood as a kind of multi-task learning).", "We also investigate the impact of different ways of representing disfluency information in the context of a syntactic parsing task.", "We use the self-attentive constituency parser introduced by Kitaev and Klein (2018) and train it on the Switchboard corpus of transcribed speech (we describe the training and evaluation conditions in more detail in Section 4).", "The self-attentive parser achieves state-of-the-art performance on WSJ data, which is why we selected it as the best off-the-shelf parsing model.", "The constituency parser uses a self-attentive transformer (Vaswani et al., 2017) as an encoder and a chart-based parser (Stern et al., 2017) as a decoder, as reviewed in the following sections.", "The encoder of a transformer is a stack of n identical layers, each of which consists of two stacked sublayers: a multi-head attention mechanism and a point-wise fully connected network.", "The inputs to the encoder first flow through a self-attention sublayer, which helps the encoder attend to several words in the sentence as it encodes a specific word.", "Because the model lacks recurrent layers, this sublayer is the only mechanism which propagates information between positions in the sentence.", "The self-attention maps the input to three vectors called query, key and value, and defines an attention function as mapping a query and a set of key-value pairs to an output vector.", "The output is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key.", "Each self-attention sublayer has several attention heads, where each head has its own query, key and value weight matrices.", "The multi-head attention allows the model to jointly attend to information from several different positions.", "The outputs of the self-attention layer are fed to a feed-forward neural network, which is applied to each position independently.",
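A compact sketch of the attention computation described above (one head, no masking), following the standard scaled dot-product formulation of Vaswani et al. (2017); it is illustrative, not the parser's actual code.

```python
import numpy as np

def self_attention_head(X, W_q, W_k, W_v):
    """X: (n, d_model) token representations; one attention head."""
    Q, K, V = X @ W_q, X @ W_k, X @ W_v       # queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # query-key compatibility
    scores -= scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over positions
    return weights @ V                         # weighted sum of values

# multi-head attention runs several such heads, each with its own weight
# matrices, and concatenates their outputs
```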
"For further detail, see Vaswani et al. (2017).", "We believe that the self-attention mechanism is especially useful for detecting disfluencies in a sentence.", "In pilot experiments we found that similar LSTM-based parsers, such as the AllenNLP parser (Gardner et al., 2018), were much worse at disfluency detection than the self-attentive parser.", "As shown in Figure 1, the rough copy similarity between the repair and the reparandum is a strong indicator of disfluency.", "Rough copies involve the same or very similar words in roughly the same word order; for example, in the Switchboard training data, over 60% of the words in the reparandum are exact copies of the words in the repair.", "Using the multi-head self-attention mechanism, the model can presumably learn to focus on rough copies when detecting a reparandum.", "A chart-based parser scores a tree as a sum of potentials on its labeled constituent spans as follows: $s(T) = \sum_{(i,j,l) \in T} s(i,j,l)$ (2), where $s(i,j,l)$ is the score of a constituent located between string positions $i$ and $j$ with label $l$.", "At test time, a modified CYK algorithm is used to find the highest scoring parse tree for a given sentence: $\hat{T} = \arg\max_{T} s(T)$ (3).", "Given the gold tagged tree $T^*$, we train the model by minimizing a hinge loss: $\max\left(0, \max_{T \neq T^*}[s(T) + \Delta(T, T^*)] - s(T^*)\right)$ (4), where $\Delta$ is the Hamming loss on labeled spans.", "For further detail, see Kitaev and Klein (2018) and Stern et al. (2017).", "Peters et al. (2018) have recently introduced a new approach to word representation called Embeddings from Language Models (ELMo), which has achieved state-of-the-art results in various NLP tasks.", "These embeddings are produced by an LSTM language model (LM) which inputs words and characters and generates a vector representation for each word of the sentence.", "The ELMo output is a concatenation of both the forward and backward LM hidden states.", "We found that using external ELMo embeddings as the only lexical representation used by the model leads to the highest EDITED word f-score.", "Following Kitaev and Klein (2018), we use a trainable weight matrix to project the ELMo pretrained weights of dimension 1024 to a 512-dimensional content representation.", "We tried different combinations of input, including predicted POS tags, a character LSTM and word embeddings together with ELMo, but the result was either worse or not significantly better than when using ELMo alone.", "The sole change we made to the self-attentive parser was to modify the loss function so it puts more weight onto EDITED nodes.", "We show below that this improves the model's ability to recover EDITED nodes.", "We modify the tree scoring in (2) as follows: $s(T) = \sum_{(i,j,l) \in T} w_l\, s(i,j,l)$ (5), where $w_l$ depends on the label $l$.", "We only used two different values of $w_l$ here, one for EDITED nodes and one for all other node labels.", "We treat these as hyperparameters, and tune them to maximize the EDITED node f-score (this is F(S_E) in Section 4.1 below).", "We evaluate the self-attentive parser on the Penn Treebank-3 Switchboard corpus (Mitchell et al., 1999).", "Following Charniak and Johnson (2001), we split the Switchboard corpus into training, dev and test sets as follows: the training data consists of the sw[23]*.mrg files, the dev data consists of the sw4[5-9]*.mrg files and the test data consists of the sw4[0-1]*.mrg files.",
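To make the label-dependent weighting of (5) and the hinge loss of (4) concrete, here is a sketch; `span_score` stands for the learned potential s(i, j, l), and the EDITED weight shown is only a placeholder for the tuned hyperparameter.

```python
def tree_score(spans, span_score, w_edited=2.0):
    """Eq. (5): s(T) = sum of w_l * s(i, j, l) over the tree's labeled spans."""
    return sum((w_edited if label == "EDITED" else 1.0) * span_score(i, j, label)
               for (i, j, label) in spans)

def hinge_loss(gold_spans, best_wrong_spans, hamming, span_score):
    """Eq. (4): max(0, max_{T != T*}[s(T) + Delta(T, T*)] - s(T*));
    best_wrong_spans is the argmax of the margin-augmented score, which the
    chart decoder finds with a modified CYK pass."""
    violation = (tree_score(best_wrong_spans, span_score) + hamming
                 - tree_score(gold_spans, span_score))
    return max(0.0, violation)
```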
"Except as explicitly noted below, we remove all partial words (words tagged XX and words ending in '-') and punctuation from the data, as they are not available in realistic ASR applications (Johnson and Charniak, 2004).", "We evaluate the self-attentive parser in terms of parsing accuracy and disfluency detection performance.", "We report precision (P), recall (R) and f-score (F) for both constituent spans (S) and word positions (W), treating each word position as labeled by all the constituents that contain that word.", "We also consider subsets of constituent spans and word positions; specifically:", "(i) S E , the set of constituent spans labeled EDITED,", "(ii) W E , the set of word positions dominated by one or more EDITED nodes, and", "(iii) W EIP , the set of word positions dominated by one or more EDITED, INTJ or PRN nodes.", "We demonstrate the evaluation metrics with an example here.", "Consider the gold and predicted parse trees illustrated in Figure 2.", "The constituency trees are viewed as a set of labeled spans over the words of the sentence, where constituent spans are pairs of string positions.", "As explained earlier, we ignore punctuation and partial words when calculating evaluation scores.", "To calculate the f-score for spans, i.e., F(S), the gold, predicted and correct labeled spans are counted.", "In this case, the number of predicted, gold and correctly predicted spans is 13, 14 and 12.", "Since a parse tree with EDITED nodes identifies certain words as EDITED, we can evaluate how accurately a parser classifies words as EDITED (i.e. F(W E )).", "Continuing with the example in Figure 2, the number of predicted, gold and correctly predicted EDITED words is 1, 3 and 1.", "Similarly, we can also measure how well the parser can identify all disfluency words, i.e., the words dominated by EDITED, INTJ or PRN nodes.", "Continuing with the example in Figure 2, the number of predicted, gold and correctly predicted disfluency words can be counted in the same way.", "[Figure 2: gold and predicted parse trees for the utterance I I've uh I mean I enjoy, with EDITED, INTJ and PRN nodes marking the disfluencies.]", "We use randomized search (Bergstra and Bengio, 2012) to tune the optimization and architecture parameters of the model on the dev set.", "We optimize the model for its performance on parsing EDITED nodes, F(S E ).", "The hyperparameters include the dimensionality of the model, learning rate, edited loss weight, dropout, and number of layers and heads, as shown in Table 1.", "All other hyperparameters not mentioned here are the same as in Kitaev and Klein (2018).", "Our best dev model (see Table 1) uses an edited loss that puts more weight on EDITED nodes and less weight on non-EDITED nodes.", "To explore the effect of the edited loss, we retrained the best model with an equally weighted loss.", "The results in Table 2 indicate that differential weighting improves parsing of EDITED nodes as well as EDITED word detection.", "It also rebalances the precision vs. recall trade-off and slightly increases overall parsing accuracy F(S).", "We investigate the effect of modifying the training data on the performance of the parser.", "We use different tree transformations to explore the effect of different amounts and encodings of disfluency and syntactic information on the performance of the model.", "Transformation NoSyntax: Deleting all non-disfluency nodes, as shown in Figure 5.", "[Figure 5: Transformation NoSyntax, where all non-disfluency nodes are deleted; the example tree covers the utterance There's there this topic is kind of mute.]", "Transformation PosDisfl+NoSyntax: Pushing disfluency nodes down to POS tags and deleting all non-disfluency nodes, as shown in Figure 6.", "[Figure 6: Transformation PosDisfl+NoSyntax, where disfluency nodes are pushed down to POS tags and all non-disfluency nodes are deleted.]", "We report the performance of the self-attentive parser in terms of EDITED word f-score and disfluency word f-score in Table 3.", "Since the transformations change the tree shapes, it is not meaningful to compare their parsing f-scores.", "As illustrated in Table 3, pushing disfluency nodes down to POS tags (i.e. Transformation PosDisfl) increases precision by about 2%, resulting in a 1% improvement in word f-score F(W E ).", "It also improves F(W EIP ) by 0.4%.", "In general, the model can take advantage of the simplified encoding of disfluency nodes (see Transformations PosDisfl and TopDisfl).", "Moreover, deleting all but the top-most disfluency nodes as in Transformation TopDisfl+NoSyntax significantly drops precision (by about 20%), resulting in a more than 13% decrease in EDITED word f-score.", "It also hurts detection of all types of disfluency (a more than 7% decrease in F(W EIP )).", "In general, removing syntactic structure dramatically degrades the performance of the model in terms of F(W E ) and F(W EIP ), as shown in Transformations NoSyntax, PosDisfl+NoSyntax and TopDisfl+NoSyntax.", "This indicates that syntactic information is important for detecting disfluencies.", "As mentioned before, speech recognition models generally do not produce punctuation and partial words in their outputs.", "Thus, prior work has removed them from the data to make the evaluation more realistic.", "However, it is interesting to see what information partial words and punctuation convey about syntactic structure in general and disfluencies in particular, so we ran an experiment to investigate the effect of including these in the training and test data.", "We use the best hyperparameter configuration on the Switchboard dev set and retrain the model on two versions of the data:", "(i) with partial words and", "(ii) with punctuation and partial words.", "As shown in Table 4, keeping punctuation and partial words in the training data increases EDITED word f-score by about 4%, indicating that punctuation and partial words greatly help disfluency detection.", "Punctuation leads to more gain in disfluency detection than partial words.", "Punctuation also improves the word f-score for all types of disfluencies by more than 1%.", "We selected our best model based on the dev set results (including the differentially weighted loss) and compared the results achieved with the tree transformation PosDisfl and with no tree transformation on the test set with previous work.", "Although most previous work has used the Switchboard corpus, it is sometimes difficult to compare systems directly due to different scoring metrics and differences in experimental setup, such as the use of partial words, punctuation, prosodic cues and so on.", "Since some studies report their results using partial words and/or punctuation, we divide prior work according to the setting they used and report the results of the self-attentive parser on the test data for each setting.", "Table 5 shows the test performance of the self-attentive constituency parser against previous parsing models of speech transcripts.", "The self-attentive parser outperforms all previous models in parsing accuracy.", "It also performs better than Kahn et al.
(2005) and Tran et al. (2018), who used acoustic/prosodic cues from speech waveform as well as the words in the transcript.", "We also compare the performance of the self-attentive parser with state-of-the-art disfluency detection methods in terms of EDITED word f-score.", "As shown in Table 6, the self-attentive parser (with PosDisfl Transformation ) achieves a new state-of-the-art for detecting EDITED words.", "Its performance is competitive with specialized disfluency detection models that directly optimize for disfluency detection.", "Using partial words increases edited word f-score for No Transformation mode by 0 .", "1% and for PosDisfl Transformation mode by 0 .", "6% , which is not surprising as the presence of partial words is strongly correlated with the presence of a disfluency.", "It is interesting to compare the self-attentive parser with the ACNN model presented in Jamshid Lou et al. (2018).", "They introduce a new ACNN layer which is able to learn the rough copy dependencies between words, for which previous models heavily relied on hand-crafted pattern-matching features.", "Rough copies are a strong indicator of disfluencies that can help the model detect reparanda (i.e. EDITED nodes).", "That the self-attentive parser is better than the ACNN model (Jamshid Lou et al., 2018) in detecting disfluencies may indicate that the self-attention mechanism can learn rough copy dependencies.", "We also compare the performance of the self-attentive parser with Wang et", "al.'s (2018) self-attentive disfluency detection model in terms of disfluency (i.e. EDITED, INTJ and PRN) word f-score.", "As shown in Table 7, the self-attentive parser outperforms this state-of-the-art specialized self-attentive disfluency detection model.", "This paper shows that using an off-the-shelf constituency parser achieves a new state-of-the-art in parsing transcribed speech.", "The self-attentive parser is effective in detecting disfluent words as it outperforms specialized disfluency detection models, suggesting that it is feasible to use standard neural architectures to perform disfluency detection as part of some other task, rather than requiring a separate disfluency detection pre-processing step.", "We also show that removing syntactic information hurts word f-score.", "That is, performing syntactic parsing and disfluency detection as a multi-task training objective yields higher disfluency detection accuracy than performing disfluency detection in isolation.", "Modifying encoding by indicating disfluencies at the word level leads to further improvements in disfluency detection.", "In future work we hope to integrate syntactic parsing more closely with automatic speech recognition.", "A first step is to develop parsing models that parse ASR output, rather than speech transcripts.", "It may also be possible to more directly integrate an attention-based syntactic parser with a speech recogniser, perhaps trained in an end-to-end fashion.", "This research was supported by a Google award through the Natural Language Understanding Focused Program, CRP 8201800363 from Data61/CSIRO, and under the Australian Research Councils Discovery Projects funding scheme (project number DP160102156).", "We also thank the anonymous reviewers for their valuable comments that helped to improve the paper." ]
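The span- and word-level metrics used throughout the evaluation above (F(S), F(W E ), F(W EIP )) reduce to set precision/recall/f-score; a minimal sketch follows, with toy span sets sized to mirror the 13/14/12 counts of the Figure 2 example rather than the actual trees.

def prf(pred, gold):
    # precision/recall/f-score over sets of labeled spans or word positions
    correct = len(pred & gold)
    p = correct / len(pred) if pred else 0.0
    r = correct / len(gold) if gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

# 13 predicted spans, 14 gold spans, 12 of them shared
shared = {("S", i, i + 1) for i in range(12)}
pred_spans = shared | {("NP", 0, 2)}
gold_spans = shared | {("EDITED", 0, 2), ("INTJ", 1, 3)}
print(prf(pred_spans, gold_spans))  # -> (0.923..., 0.857..., 0.888...)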
[ "method", "abstain", "abstain", "abstain", "result", "objective", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "objective", "result", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other" ]
[ "The task of named entity recognition (NER) is normally divided into nested NER and flat NER depending on whether named entities are nested or not.", "Models are usually separately developed for the two tasks, since sequence labeling models are only able to assign a single label to a particular token, which is unsuitable for nested NER where a token may be assigned several labels.", "In this paper, we propose a unified framework that is capable of handling both flat and nested NER tasks.", "Instead of treating the task of NER as a sequence labeling problem, we propose to formulate it as a machine reading comprehension (MRC) task.", "For example, extracting entities with the PER (PERSON ) label is formalized as extracting answer spans to the question which person is mentioned in the", "text.This formulation naturally tackles the entity overlapping issue in nested NER: the extraction of two overlapping entities with different categories requires answering two independent questions.", "Additionally, since the query encodes informative prior knowledge, this strategy facilitates the process of entity extraction, leading to better performances for not only nested NER, but flat NER.", "We conduct experiments on both nested and flat NER datasets.", "Experiment results demonstrate the effectiveness of the proposed formulation.", "We are able to achieve a vast amount of performance boost over current SOTA models on nested NER datasets, i.e., +1.28, +2.55, +5.44, +6.37,re-spectively on ACE04, ACE05, GENIA and KBP17, as well as flat NER datasets, i.e., +0.24, +1.95, +0.21, +1.49 respectively on English CoNLL 2003, English OntoNotes 5.0, Chinese MSRA and Chinese OntoNotes 4.0.", "The code and datasets can be found at https://github.com/ShannonAI/ mrc-for-flat-nested-ner .", "Named Entity Recognition (NER) refers to the task of detecting the span and the semantic category of entities from a chunk of text.", "The task can be further divided into two sub-categories, nested NER and flat NER, depending on whether entities are nested or not.", "Nested NER refers to a phenomenon that the spans of entities (mentions) are nested, as shown in Figure 1.", "Entity overlapping is a fairly common phenomenon in natural languages.", "The task of flat NER is commonly formalized as a sequence labeling task: a sequence labeling model (Chiu and Nichols, 2016; Ma and Hovy, 2016; Devlin et al., 2018) is trained to assign a single tagging class to each unit within a sequence of tokens.", "This formulation is unfortunately incapable of handling overlapping entities in nested NER (Huang et al., 2015; Chiu and Nichols, 2015), where multiple categories need to be assigned to a single token if the token participates in multiple entities.", "Many attempts have been made to reconcile sequence labeling models with nested NER (Alex et al., 2007; Byrne, 2007; Finkel and Manning, 2009; Lu and Roth, 2015; Katiyar and Cardie, 2018), mostly based on the pipelined systems.", "However, pipelined systems suffer from the disadvantages of error propagation, long running time and the intensiveness in developing hand-crafted features, etc.", "NLP problems as question answering tasks (Levy et al., 2017; McCann et al., 2018; Li et al., 2019), we propose a new framework that is capable of handling both flat and nested NER.", "Instead of treating the task of NER as a sequence labeling problem, we propose to formulate it as a SQuAD-style (Rajpurkar et al., 2016, 2018) machine reading comprehension (MRC) task.", "Each entity type is characterized by 
a natural language query, and entities are extracted by answering these queries given the contexts.", "For example, the task of assigning the PER( PERSON ) label to [Washington] was born into slavery on the farm of James Burroughs is formalized as answering the question which person is mentioned in the text? .", "This strategy naturally tackles the entity overlapping issue in nested NER: the extraction of two entities with different categories that overlap requires answering two independent questions.", "The MRC formulation also comes with another key advantage over the sequence labeling formulation.", "For the latter, golden NER categories are merely class indexes and lack for semantic prior information for entity categories.", "For example, the ORG( ORGANIZATION ) class is treated as a one-hot vector in sequence labeling training.", "This lack of clarity on what to extract leads to inferior performances.", "On the contrary, for the MRC formulation, the query encodes significant prior information about the entity category to extract.", "For example, the query find an organization such as company, agency and institution in the context encourages the model to link the word organization in the query to location entities in the context.", "Additionally, by encoding comprehensive descriptions (e.g., company, agency and in-stitution ) of tagging categories (e.g., ORG ), the model has the potential to disambiguate similar tagging classes.", "We conduct experiments on both nested and flat NER datasets to show the generality of our approach.", "Experimental results demonstrate its effectiveness.", "We are able to achieve a vast amount of performance boost over current SOTA models on nested NER datasets, i.e., +1.28, +2.55, +5.44, +6.37, respectively on ACE04, ACE05, GENIA and KBP17, as well as flat NER datasets, i.e., +0.24, +1.95, +0.21, +1.49 respectively on English CoNLL 2003, English OntoNotes 5.0, Chinese MSRA, Chinese OntoNotes 4.0.", "We wish that our work would inspire the introduction of new paradigms for the entity recognition task.", "Traditional sequence labeling models use CRFs (Lafferty et al., 2001; Sutton et al., 2007) as a backbone for NER.", "The first work using neural models for NER goes back to 2003, when Ham-merton (2003) attempted to solve the problem using unidirectional LSTMs.", "Collobert et al. (2011) presented a CNN-CRF structure, augmented with character embeddings by Santos and Guimaraes (2015).", "Lample et al. (2016) explored neural structures for NER, in which the bidirectional LSTMs are combined with CRFs with features based on character-based word representations and unsupervised word representations.", "Ma and Hovy (2016) and Chiu and Nichols (2016) used a character CNN to extract features from characters.", "Recent large-scale language model pretraining methods such as BERT (Devlin et al., 2018) and ELMo (Peters et al., 2018a) further enhanced the performance of NER, yielding state-of-the-art performances.", "The overlapping between entities (mentions) was first noticed by Kim et al. (2003), who developed handcrafted rules to identify overlapping mentions.", "Alex et al. 
(2007) proposed two multi-layer CRF models for nested NER.", "The first model is the inside-out model, in which the first CRF identifies the innermost entities, and the successive layer CRF is built over words and the innermost entities extracted from the previous CRF to identify second-level entities, etc.", "The other is the outside-in model, in which the first CRF identifies outermost entities, and then successive CRFs would identify increasingly nested entities.", "Finkel and Manning (2009) built a model to extract nested entity mentions based on parse trees.", "They made the assumption that one mention is fully contained by the other when they overlap.", "Lu and Roth (2015) proposed to use mention hyper-graphs for recognizing overlapping mentions.", "Xu et al. (2017) utilized a local classifier that runs on every possible span to detect overlapping mentions and Katiyar and Cardie (2018) used neural models to learn the hyper-graph representations for nested entities.", "Ju et al. (2018) dynamically stacked flat NER layers in a hierarchical manner.", "Lin et al. (2019a) proposed the Anchor-Region Networks (ARNs) architecture by modeling and leveraging the head-driven phrase structures of nested entity mentions.", "Luan et al. (2019) built a span enumeration approach by selecting the most confident entity spans and linking these nodes with confidence-weighted relation types and coreferences.", "Other works (Muis and Lu, 2017; Sohrab and Miwa, 2018; Zheng et al., 2019) also proposed various methods to tackle the nested NER problem.", "Recently, nested NER models are enriched with pre-trained contextual embeddings such as BERT (Devlin et al., 2018) and ELMo (Peters et al., 2018b).", "Fisher and Vlachos (2019) introduced a BERT-based model that first merges tokens and/or entities into entities, and then assigned labeled to these entities.", "Shibuya and Hovy (2019) provided inference model that extracts entities iteratively from outermost ones to inner ones.", "Strakova et al. (2019) viewed nested NER as a sequence-to-sequence generation problem, in which the input sequence is a list of tokens and the target sequence is a list of labels.", "MRC models (Seo et al., 2016; Wang et al., 2016; Wang and Jiang, 2016; Xiong et al., 2016, 2017; Wang et al., 2016; Shen et al., 2017; Chen et al., 2017) extract answer spans from a passage through a given question.", "The task can be formalized as two multi-class classification tasks, i.e., predicting the starting and ending positions of the answer spans.", "Over the past one or two years, there has been a trend of transforming NLP tasks to MRC question answering.", "For example, Levy et al. (2017) transformed the task of relation extraction to a QA task: each relation type R ( x, y ) can be parameterized as a question q ( x ) whose answer is y .", "For example, the relation EDUCATED-AT can be mapped to Where did x study? .", "Given a question q ( x ) , if a non-null answer y can be extracted from a sentence, it means the relation label for the current sentence is R .", "McCann et al. (2018) transformed NLP tasks such as summarization or sentiment analysis into question answering.", "For example, the task of summarization can be formalized as answering the question What is the summary? .", "Our work is significantly inspired by Li et al. (2019), which formalized the task of entity-relation extraction as a multi-turn question answering task.", "Different from this work, Li et al. 
(2019) focused on relation extraction rather than NER.", "Additionally, Li et al. (2019) utilized a template-based procedure for constructing queries to extract semantic relations between entities and their queries lack diversity.", "In this paper, more factual knowledge such as synonyms and examples are incorporated into queries, and we present an in-depth analysis of the impact of strategies of building queries.", "Given an input sequence X = { x 1 , x 2 , ..., x n } , where n denotes the length of the sequence, we need to find every entity in X , and then assign a label y Y to it, where Y is a predefined list of all possible tag types (e.g., PER, LOC, etc).", "Dataset Construction Firstly we need to transform the tagging-style annotated NER dataset to a set of ( QUESTION , ANSWER , CONTEXT ) triples.", "For each tag type y Y , it is associated with a natural language question q y = { q 1 , q 2 , ..., q m } , where m denotes the length of the generated query.", "An annotated entity x start,end = { x start , x start+1 , , x end-1 , x end } is a substring of X satisfying start end.", "Each entity is associated with a golden label y Y .", "By generating a natural language question q y based on the label y , we can obtain the triple ( q y , x start,end , X ), which is exactly the ( QUESTION , ANSWER , CONTEXT ) triple that we need.", "Note that we use the subscript start,end to denote the continuous tokens from index start' to end' in a sequence.", "The question generation procedure is important since queries encode prior knowledge about labels and have a significant influence on the final results.", "Different ways have been proposed for question generation, e.g., Li et al. (2019) utilized a template-based procedure for constructing queries to extract semantic relations between entities.", "In this paper, we take annotation guideline notes as references to construct queries.", "Annotation guideline notes are the guidelines provided to the annotators of the dataset by the dataset builder.", "They are descriptions of tag categories, which are described as generic and precise as possible so that Entity Natural Language Question Location Find locations in the text, including nongeographical locations, mountain ranges and bodies of water.", "human annotators can annotate the concepts or mentions in any text without running into ambiguity.", "Examples are shown in Table 1.", "Given the question q y , we need to extract the text span x start,end which is with type y from X under the MRC framework.", "We use BERT (Devlin et al., 2018) as the backbone.", "To be in line with BERT, the question q y and the passage X are concatenated, forming the combined string { [ CLS ] , q 1 , q 2 , ..., q m , [ SEP ] , x 1 , x 2 , ..., x n } , where [CLS] and [SEP] are special tokens.", "Then BERT receives the combined string and outputs a context representation matrix E R n d , where d is the vector dimension of the last layer of BERT and we simply drop the query representations.", "There are two strategies for span selection in MRC: the first strategy (Seo et al., 2016; Wang et al., 2016) is to have two n -class classifiers separately predict the start index and the end index, where n denotes the length of the context.", "Since the softmax function is put over all tokens in the context, this strategy has the disadvantage of only being able to output a single span given a query; the other strategy is to have two binary classifiers, one to predict whether each token is the start index or not, the other to predict 
whether each token is the end index or not.", "This strategy allows for outputting multiple start indexes and multiple end indexes for a given context and a specific query, and thus has the potential to extract all related entities according to $q_y$.", "We adopt the second strategy and describe the details below.", "Start Index Prediction: Given the representation matrix $E$ output from BERT, the model first predicts the probability of each token being a start index as follows: $P_{\text{start}} = \mathrm{softmax}_{\text{each row}}(E \cdot T_{\text{start}}) \in \mathbb{R}^{n \times 2}$ (1), where $T_{\text{start}} \in \mathbb{R}^{d \times 2}$ is the weight matrix to learn.", "Each row of $P_{\text{start}}$ represents the probability distribution of that index being the start position of an entity given the query.", "End Index Prediction: The end index prediction procedure is exactly the same, except that we have another matrix $T_{\text{end}}$ to obtain the probability matrix $P_{\text{end}} \in \mathbb{R}^{n \times 2}$.", "Start-End Matching: In the context $X$, there could be multiple entities of the same category.", "This means that multiple start indexes could be predicted from the start-index prediction model and multiple end indexes predicted from the end-index prediction model.", "The heuristic of matching the start index with its nearest end index does not work here since entities could overlap.", "We thus further need a method to match a predicted start index with its corresponding end index.", "Specifically, by applying argmax to each row of $P_{\text{start}}$ and $P_{\text{end}}$, we get the predicted indexes that might be the starting or ending positions, i.e., $I_{\text{start}}$ and $I_{\text{end}}$: $I_{\text{start}} = \{ i \mid \arg\max(P^{(i)}_{\text{start}}) = 1,\ i = 1, \ldots, n \}$ and $I_{\text{end}} = \{ j \mid \arg\max(P^{(j)}_{\text{end}}) = 1,\ j = 1, \ldots, n \}$ (2), where the superscript $(i)$ denotes the $i$-th row of a matrix.", "Given any start index $i_{\text{start}} \in I_{\text{start}}$ and end index $j_{\text{end}} \in I_{\text{end}}$, a binary classification model is trained to predict the probability that they should be matched, given as follows: $P_{i_{\text{start}}, j_{\text{end}}} = \mathrm{sigmoid}(m \cdot \mathrm{concat}(E_{i_{\text{start}}}, E_{j_{\text{end}}}))$ (3), where $m \in \mathbb{R}^{1 \times 2d}$ is the weight vector to learn.", "At training time, $X$ is paired with two label sequences $Y_{\text{start}}$ and $Y_{\text{end}}$ of length $n$ representing the ground-truth label of each token $x_i$ being the start index or end index of any entity.", "We therefore have the following two losses for start and end index predictions: $\mathcal{L}_{\text{start}} = \mathrm{CE}(P_{\text{start}}, Y_{\text{start}})$ and $\mathcal{L}_{\text{end}} = \mathrm{CE}(P_{\text{end}}, Y_{\text{end}})$ (4).", "Let $Y_{\text{start,end}}$ denote the golden labels for whether each start index should be matched with each end index.", "The start-end index matching loss is given as follows: $\mathcal{L}_{\text{span}} = \mathrm{CE}(P_{\text{start,end}}, Y_{\text{start,end}})$ (5).", "The overall training objective to be minimized is as follows: $\mathcal{L} = \alpha \mathcal{L}_{\text{start}} + \beta \mathcal{L}_{\text{end}} + \gamma \mathcal{L}_{\text{span}}$ (6), where $\alpha, \beta, \gamma \in [0, 1]$ are hyper-parameters controlling the contributions towards the overall training objective.", "The three losses are jointly trained in an end-to-end fashion, with parameters shared at the BERT layer.", "At test time, start and end indexes are first separately selected based on $I_{\text{start}}$ and $I_{\text{end}}$.", "Then the index matching model is used to align the extracted start indexes with end indexes, leading to the final extracted answers.", "For nested NER, experiments are conducted on the widely-used ACE 2004, ACE 2005, GENIA and KBP2017 datasets, which respectively contain 24%, 22%, 10% and 19% nested mentions.", "Hyperparameters are tuned on their corresponding development sets.", "For evaluation, we use span-level micro-averaged precision, recall and F1.", "ACE 2004 and ACE 2005 (Doddington et al., 2005; Christopher Walker and Maeda, 2006): The two datasets each contain 7 entity categories.", "For each entity type, there are annotations for both
the entity mentions and mention heads.", "For fair comparison, we exactly follow the data preprocessing strategy in Katiyar and Cardie (2018) and Lin et al. (2019b) by keeping files from bn , nw and wl , and splitting these files into train, dev and test sets by 8:1:1, respectively.", "GENIA (Ohta et al., 2002) For the GENIA dataset, we use GENIAcorpus3.02p.", "We follow the protocols in Katiyar and Cardie (2018).", "KBP2017 We follow Katiyar and Cardie (2018) and evaluate our model on the 2017 English evaluation dataset (LDC2017D55).", "Training set consists of RichERE annotated datasets, which include LDC2015E29, LDC2015E68, LDC2016E31 and LDC2017E02.", "We follow the dataset split strategy in Lin et al. (2019b).", "Hyper-Graph: Katiyar and Cardie (2018) proposes a hypergraph-based model based on LSTMs.", "Seg-Graph: Wang and Lu (2018) proposes a segmental hypergargh representation to model overlapping entity mentions.", "ARN: Lin et al. (2019a) proposes Anchor-Region Networks by modeling and levrag-ing the head-driven phrase structures of entity mentions.", "KBP17-Best: Ji et al. (2017) gives an overview of the Entity Discovery task at the Knowledge Base Population (KBP) track at TAC2017 and also reports previous best results for the task of nested NER.", "Seq2Seq-BERT: Strakova et al. (2019) views the nested NER as a sequence-to-sequence problem.", "Input to the model is word tokens and the output sequence consists of labels.", "Path-BERT: Shibuya and Hovy (2019) treats the tag sequence as the second best path within in the span of their parent entity based on BERT.", "Merge-BERT: Fisher and Vlachos (2019) proposes a merge and label method based on BERT.", "DYGIE: Luan et al. (2019) introduces a general framework that share span representations using dynamically constructed span graphs.", "Table 2 shows experimental results on nested NER datasets.", "We observe huge performance boosts on the nested NER datasets over previous state-of-the-art models, achieving F1 scores of 85.98%, 86.88%, 83.75% and 80.97% on ACE04, ACE05, GENIA and KBP-2017 datasets, which are +1.28%, +2.55%, +5.44% and +6.37% over previous SOTA performances, respectively.", "For flat NER, experiments are conducted on both English datasets i.e. CoNLL2003 and OntoNotes 5.0 and Chinese datasets i.e. 
OntoNotes 4.0 and MSRA.", "Hyperparameters are tuned on their corresponding development sets.", "We report span-level English ACE 2004 Model Precision Rrecall F1 Hyper-Graph (Katiyar and Cardie, 2018) 73.6 71.8 72.7 Seg-Graph (Wang and Lu, 2018) 78.0 72.4 75.1 Seq2seq-BERT (Strakova et al., 2019) -84.40 Path-BERT (Shibuya and Hovy, 2019) 83.73 81.91 82.81 DYGIE (Luan et al., 2019) -84.7 BERT-MRC 85.05 86.32 85.98(+1.28) English ACE 2005 Model Precision Recall F1 Hyper-Graph (Katiyar and Cardie, 2018) 70.6 70.4 70.5 Seg-Graph (Wang and Lu, 2018) 76.8 72.3 74.5 ARN (Lin et al., 2019a) 76.2 73.6 74.9 Path-BERT (Shibuya and Hovy, 2019) 82.98 82.42 82.70 Merge-BERT (Fisher and Vlachos, 2019) 82.7 82.1 82.4 DYGIE (Luan et al., 2019) -82.9 Seq2seq-BERT (Strakova et al., 2019) -84.33 BERT-MRC 87.16 86.59 86.88(+2.55) English GENIA Model Precision Recall F1 Hyper-Graph (Katiyar and Cardie, 2018) 77.7 71.8 74.6 ARN (Lin et al., 2019a) 75.8 73.9 74.8 Path-BERT (Shibuya and Hovy, 2019) 78.07 76.45 77.25 DYGIE (Luan et al., 2019) -76.2 Seq2seq-BERT (Strakova et al., 2019) -78.31 BERT-MRC 85.18 81.12 83.75(+5.44) English KBP 2017 Model Precision Recall F1 KBP17-Best (Ji et al., 2017) 76.2 73.0 72.8 ARN (Lin et al., 2019a) 77.7 71.8 74.6 BERT-MRC 82.33 77.61 80.97(+6.37) Table 2: Results for nested NER tasks.", "CoNLL2003 (Sang and Meulder, 2003) is an English dataset with four types of named entities: Location, Organization, Person and Miscellaneous.", "We followed data processing protocols in Ma and Hovy (2016).", "OntoNotes 5.0 (Pradhan et al., 2013) is an English dataset and consists of text from a wide variety of sources.", "The dataset includes 18 types of named entity, consisting of 11 types (Person, Organization, etc) and 7 values (Date, Percent, etc).", "MSRA (Levow, 2006) is a Chinese dataset and performs as a benchmark dataset.", "Data in MSRA is collected from news domain and is used as shared task on SIGNAN backoff 2006.", "There are three types of named entities.", "OntoNotes 4.0 (Pradhan et al., 2011) is a Chinese dataset and consists of text from news domain.", "OntoNotes 4.0 annotates 18 named entity types.", "In this paper, we take the same data split as Wu et al. (2019).", "For English datasets, we use the following models as baselines.", "BiLSTM-CRF from Ma and Hovy (2016).", "ELMo tagging model from Peters et al. (2018b).", "CVT from Clark et al. (2018), which uses Cross-View Training(CVT) to improve the representations of a Bi-LSTM encoder.", "Bert-Tagger from Devlin et al. (2018), which treats NER as a tagging task.", "Lattice-LSTM: Zhang and Yang (2018) constructs a word-character lattice.", "Bert-Tagger: Devlin et al. (2018) treats NER as a tagging task.", "Glyce-BERT: The current SOTA model in Chinese NER developed by Wu et al. 
(2019), which combines glyph information with BERT pretraining.", "Table 3 presents comparisons between the proposed model and baseline models.", "For English CoNLL 2003, our model outperforms the fine-tuned BERT tagging model by +0.24% in terms of F1, while for English OntoNotes 5.0, the pro-English OntoNotes 5.0 Model F1 LSTM tagger (Strubell et al., 2017) 86.84 BiDAF (Seo et al., 2017) 87.39 (+0.55) QAnet (Yu et al., 2018) 87.98 (+1.14) BERT-Tagger 89.16 BERT-MRC 91.11 (+1.95) Table 4: Results of different MRC models on English OntoNotes5.0.", "posed model achieves a huge gain of +1.95% improvement.", "The reason why greater performance boost is observed for OntoNotes is that OntoNotes contains more types of entities than CoNLL03 (18 vs 4), and some entity categories face the severe data sparsity problem.", "Since the query encodes significant prior knowledge for the entity type to extract, the MRC formulation is more immune to the tag sparsity issue, leading to more improvements on OntoNotes.", "The proposed method also achieves new state-of-the-art results on Chinese datasets.", "For Chinese MSRA, the proposed method outperforms the fine-tuned BERT tagging model by +0.95% in terms of F1.", "We also improve the F1 from 79.16% to 82.11% on Chinese OntoNotes4.0.", "For flat NER, it is not immediately clear which proportion is responsible for the improvement, the MRC formulation or BERT (Devlin et al., 2018).", "On one hand, the MRC formulation facilitates the entity extraction process by encoding prior knowledge in the query; on the other hand, the good performance might also come from the large-scale pre-training in BERT.", "To separate the influence from large-scale BERT pretraining, we compare the LSTM-CRF tagging model (Strubell et al., 2017) with other MRC based models such as QAnet (Yu et al., 2018) and BiDAF (Seo et al., 2017), which do not rely on large-scale pretraining.", "Results on English Ontonotes are shown in Table 5.", "As can be seen, though underperforming BERT-Tagger, the MRC based approaches QAnet and BiDAF still significantly outperform tagging models based on LSTM+CRF.", "This validates the importance of MRC formulation.", "The MRC formulation's bene-fits are also verified when comparing BERT-tagger English OntoNotes 5.0 Model F1 BERT-Tagger 89.16 Position index of labels 88.29 (-0.87) Keywords 89.74 (+0.58) Wikipedia 89.66 (+0.59) Rule-based template filling 89.30 (+0.14) Synonyms 89.92 (+0.76) Keywords+Synonyms 90.23 (+1.07) Annotation guideline notes 91.11 (+1.95) Table 5: Results of different types of queries.", "with BERT-MRC: the latter outperforms the former by +1.95%.", "We plot the attention matrices output from the BiDAF model between the query and the context sentence in Figure 2.", "As can be seen, the semantic similarity between tagging classes and the contexts are able to be captured in the attention matrix.", "In the examples, Flevland matches geographical , cities and state .", "How to construct query has a significant influence on the final results.", "In this subsection, we explore different ways to construct queries and their influence, including: Position index of labels: a query is constructed using the index of a tag to , i.e., one, two, three.", "Keyword: a query is the keyword describing the tag, e.g., the question query for tag ORG is organization .", "Rule-based template filling: generates questions using templates.", "The query for tag ORG is which organization is mentioned in the text .", "Wikipedia: a query is constructed using its 
wikipedia definition.", "The query for tag ORG is an organization is an entity comprising multiple people, such as an institution or an", "association. Synonyms: are words or phrases that mean exactly or nearly the same as the original keyword extracted using the Oxford Dictionary.", "The query for tag ORG is association .", "Keyword+Synonyms: the concatenation of a keyword and its synonym.", "Annotation guideline notes: is the method we use in this paper.", "The query for tag ORG is find organizations including companies, agencies and institutions .", "Table 5 shows the experimental results on EnFigure 2: An example of attention matrices between the query and the input sentence.", "glish OntoNotes 5.0.", "The BERT-MRC outperforms BERT-Tagger in all settings except Position Index of Labels .", "The model trained with the Annotation Guideline Notes achieves the highest F1 score.", "Explanations are as follows: for Position Index Dataset , queries are constructed using tag indexes and thus do not contain any meaningful information, leading to inferior performances; Wikipedia underperforms Annotation Guideline Notes because definitions from Wikipedia are relatively general and may not precisely describe the categories in a way tailored to data annotations.", "It would be interesting to test how well a model trained on one dataset is transferable to another, which is referred to as the zero-shot learning ability.", "We trained models on CoNLL 2003 and test them on OntoNotes5.0.", "OntoNotes5.0 contains 18 entity types, 3 shared with CoNLL03, and 15 unseen in CoNLL03.", "Table 6 presents the results.", "As can been seen, BERT-tagger does not have zero-shot learning ability, only obtaining an accuracy of 31.87%.", "This is in line with our expectation since it cannot predict labels unseen from the training set.", "The question-answering formal-Figure 3: Effect of varying percentage of training samples on Chinese OntoNotes 4.0.", "BERT-MRC can achieve the same F1-score comparing to BERT-Tagger with fewer training samples.", "ization in MRC framework, which predicts the answer to the given query, comes with more generalization capability and achieves acceptable results.", "Since the natural language query encodes significant prior knowledge, we expect that the proposed framework works better with less training data.", "Figure 3 verifies this point: on the Chinese OntoNotes 4.0 training set, the query-based BERT-MRC approach achieves comparable performance to BERT-tagger even with half amount of training data.", "In this paper, we reformalize the NER task as a MRC question answering task.", "This formalization comes with two key advantages: (1) being capable of addressing overlapping or nested entities; (2) the query encodes significant prior knowledge about the entity category to extract.", "The proposed method obtains SOTA results on both nested and flat NER datasets, which indicates its effectiveness.", "In the future, we would like to explore variants of the model architecture.", "We thank all anonymous reviewers, as well as Ji-awei Wu and Wei Wu for their comments and suggestions.", "The work is supported by the National Natural Science Foundation of China (NSFC No. 61625107 and 61751209)." ]
[ "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "method", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "other", "other" ]
[ "Fusion-in-decoder (FID ) (Izacard and Grave, 2021) is a generative question answering (QA) model that leverages passage retrieval with a pre-trained transformer and pushed the state of the art on single-hop QA.", "However, the complexity of multi-hop QA hinders the effectiveness of the generative QA approach.", "In this work, we propose a simple generative approach (PATHFID ) that extends the task beyond just answer generation by explicitly modeling the reasoning process to resolve the answer for multihop questions.", "By linearizing the hierarchical reasoning path of supporting passages, their key sentences, and finally the factoid answer, we cast the problem as a single sequence prediction task.", "To facilitate complex reasoning with multiple clues, we further extend the unified flat representation of multiple input documents by encoding cross-passage interactions.", "Our extensive experiments demonstrate that PATHFID leads to strong performance gains on two multihop QA datasets: HotpotQA and IIRC.", "Besides the performance gains, PATHFID is more interpretable, which in turn yields answers that are more faithfully grounded to the supporting passages and facts compared to the baseline FID model.", "Leveraging knowledge to make complex reasoning has been a fundamental problem of artificial intelligence.", "Open-domain question answering (QA) (Voorhees, 1999) is an integral part of such a line of research with impactful applications (Esteva et al., 2020; Zhang et al., 2020), where the task is to answer general domain questions by gathering evidence from a large collection of documents.", "While super-human level performance has been achieved on single-passage reading comprehension dataset like SQuAD (Rajpurkar et al., 2016), open-domain QA still has a long way to go, especially for questions requiring more complex reasoning.", "The main challenge in the task of complex QA, namely multihop QA , is that it requires a QA system to combine multiple pieces of evidence from multiple documents (Welbl et al., 2018; Talmor and Berant, 2018; Yang et al., 2018).", "Even for single-hop QA, it has been shown challenging for extractive QA models to effectively aggregate evidence from the combined pool of multiple passages, which has been the focus of recent work (Clark and Gardner, 2018; Min et al., 2019; Guu et al., 2020).", "Recent work (Lewis et al., 2020b; Min et al., 2020) has demonstrated the promise of a generative approach at combining evidences from multiple passages for answer generation.", "Thanks to large pre-trained transformers like T5 (Raffel et al., 2020), Izacard and Grave (2021) introduced fusion-in-decoder (FID ) that leverages passage retrieval with generative models for open-domain QA, achieving state-of-the-art scores across several single-hop QA benchmarks.", "However, we observe that the success of the FID model does not extend to multi-hop QA, which is corroborated by the findings in (Xiong et al., 2021).", "Further, the FID model is a rather opaque model in terms of interpretation of the answer generation process.", "This capability becomes especially important for multi-hop QA, which requires sequential reasoning across multiple evidences from the pool of retrieved passages.", "In this work, we propose PATHFID , a generative QA model that learns to generate an answer along with a reasoning path to improve its capability of multi-hop reasoning.", "PATHFID extends multi-hop QA beyond just answer generation by explicitly modeling the full reasoning path to resolve the answer 
with a generative sequence-to-sequence model.", "To this end, we cast the problem as a single sequence prediction task that simultaneously models reasoning path consisting of supporting passages and facts, and eventually the factoid answer.", "Furthermore, we extend PATHFID to allow for cross-passage interactions between the 974 Figure 1: An example of multi-hop question from HotpotQA dataset.", "retrieved passages to obtain more expressive representations from the encoder to facilitate modeling a complex reasoning chain by the decoder.", "Figure 1 shows an example of our task formulation, and Figure 2 shows an overview of our approach.", "We evaluate our proposed approach on two multihop QA datasets: HotpotQA (Yang et al., 2018) and IIRC (Ferguson et al., 2020).", "Our extensive experiments demonstrate that", "(i) PATHFID leads to significant performance gains over FID on answer generation,", "(ii) PATHFID is the first generative model unlocking the possibility of generating the reasoning path jointly with the answer while achieving competitive performance on supporting fact extraction metric as well.", "Besides the performance gains, PATHFID is able to expose the underlying reasoning process behind the answer generation, which allows us to conduct a much finer-grained qualitative and quantitative analysis on the model's behavior, providing insights into further improving and better understanding generative models for multi-hop QA.", "We first describe the multi-hop QA task in a general way.", "We assume that a collection of K passages are given for a question q : D q = { p 1 , p 2 , . . . , p K } , where D q can be a pre-defined set, or it can also be an output from a text retrieval system (e.g., DPR (Karpukhin et al., 2020) and MDR (Xiong et al., 2021)) in an open-domain QA setting.", "That is, in the case of the open-domain setting, D q is a subset of a large collection of passages, such as Wikipedia.", "The task is to generate an answer string a given q and D q .", "In addition, we aim at identifying which passages provide evidence, and which sentences in them are describing the evidence.", "Figure 1 shows a comprehensive example of the task definition, where we can see that some sentences (called supporting facts ) in the two paragraphs are crucial to answer the question.", "Moreover, there is a reasoning flow: the question the first paragraph the second paragraph, which is called a reasoning path in previous work (Asai et al., 2020).", "The overall task is then to predict the reasoning path along with the supporting facts, and the answer.", "Fusion-in-Decoder (FID ) is a generative reader based on a sequence-to-sequence architecture, initialized from pre-trained models such as T5 (Raf-fel et al., 2020) or BART (Lewis et al., 2020a).", "It consists of an encoder ( Enc ) and a decoder ( Dec ).", "First, it constructs a single block of text b n := question: q title: t n context: p n of concatenated evidence from each passage-title pair ( p n , t n ) together with the question ( q ).", "Then, each of the resulting evidence block b n is independently encoded into | b n | d -dimensional output representations, which are then concatenated to form a unified input representation X = [ Enc ( b 1 ); Enc ( b 2 ); . . . 
, Enc ( b N )] (1) of dimension ( (cid:80) n | b n | ) d where | b n | denotes the length of the n -th block b n in number of tokens.", "Note that, the motivation behind this strategy is to avoid the expensive quadratic self-attention computation on the encoder-side, effectively reducing the complexity from O (( (cid:80) | b n | ) 2 ) to O ( (cid:80) | b n | 2 ) .", "Then, the overall answer generation is modeled 975 as a conditional generation p ( a | X ) given X consuming the unified input representation X , where represents the set of all model parameters.", "The model is trained to minimize the cross-entropy loss for generating answer tokens on the decoder side.", "At inference time, FID first computes X based on the retrieved passages, and then decodes the answer token by token following p ( a i | a <i , X ) with the learned model parameters .", "In this section, we introduce a generative reader (PATHFID ) for K -hop QA that jointly generates an alternating sequence of passage-level and fact-level clues on the reasoning path by more explicit fusion of evidence from the pool of input passages to arrive at the correct answer.", "As illustrated in Figure 2, PATHFID employs a single sequence-to-sequence architecture that independently encodes the input passages after inserting special fact markers ( < f i > ) before the i -th sentence of each passage.", "Conditioning on the concatenation of token-level input representations per passage, its decoder then generates the linearized hierarchical reasoning path obtained by concatenating the sequence of passage titles and their corresponding supporting fact pointers followed by the answer.", "Each segment on the reasoning path is separated by special markers in a way that makes it possible to uniquely recover the individual segment predictions after decoding in the inference time.", "The opaqueness of the FID model, which makes understanding of the reasoning process more difficult, motivated our approach and its emphasis on exposing the reasoning path.", "Instead of only modeling answer generation, we propose to jointly model it with the full reasoning path in an hierarchical fashion to derive the answer in a unified way using multi-task maximum likelihood training .", "We utilize the core input encoding architecture from FID approach (Section 2.2) by introducing a new passage representation that will facilitate supporting fact generation on the reasoning path as illustrated in Figure", "2. To this end, we independently encode each input passage-title pair ( p n , t n ) along with the question q as a separate block b path n := question: q title: t n context: p path n where we redefine the context representation by inserting special tokens ( < f i > ) before each sentence of the passage as p path n := < f 1 > s (1) n < f 2 > s (2) n < f l n > s ( l n ) n (2) where s ( i ) n denotes the i -th sentence of passage p n , and l n is the number sentences it contains.", "Having redefined the input blocks ( b path n ) per passage, we then compute the global input representation similar to Eq.", "1 by X path q = [ Enc ( b path 1 ); Enc ( b path 2 ); . . . 
; Enc ( b path N )] (3) Note that sentence indicators ( < f i > ) are shared across all passages, encouraging a more hierarchical passage representation by explicitly breaking them down into sentence-level sub-blocks using the same indicator tokens.", "The hierarchical design of reasoning path is inspired by the human reasoning process for multihop QA task.", "More precisely, if a question q requires K -hop reasoning, then we process these K passages in a sequential order alternating between their passage-level and sentence-level evidence until we reach the answer.", "To this end, let R q = { p r 1 , p r 2 , . . . , p r K } with r i [1 , N ] denote the sequence of passages from the larger pool D q reflecting this reasoning process for locating the answer a for question q .", "As shown in Figure 2, we define the hierarchical reasoning path as a linearized sequence of blocks of passage titles and supporting facts followed by the answer block Y path q := [ T r 1 ; E r 1 ; T r 2 ; E r 2 ; ; TK ; E r K ; A ] (4) where T r i represents the i -th title block obtained by inserting a special token ( <title-i> ) before the title t r j and A denotes the answer block derived by prepending a special token ( <answer> ) to the answer a as illustrated in Figure", "2. On the other hand, i -th supporting fact block is defined as the sequence of fact indicators following <facts-i> token by E r i := <facts-i> < f j 1 > < f j 2 > < f j mi > (5) 976 Figure 2: PATHFID model overview.", "where { j 1 , j 2 , . . . , j m i } denote the indices of key sentences to leverage from passage p r i to transition to the next evidence on the reasoning process R q for question q , and 1 m i l r i denotes the number of supporting facts.", "Note that fact indicators < f i > are shared between the contexts p path n of input blocks (Eq. 2) and supporting fact blocks (Eq. 5) on the target reasoning path to allow the decoder to follow along the sequential reasoning R q by pointing to the facts E r i of passage p r i .", "PATHFID enables more explicit evidence fusion through the reasoning path to guide the model to towards correct answer in a structured way.", "However, it still relies on the decoder to combine all the clues together, which might still struggle due to lack of cross-passage interactions as input blocks are encoded independently.", "To address this potential limitation, we propose PATHFID +, where we further extend PATHFID in a way that enables cross-passage interaction by redefining the input block consisting of a pair of passages ( p n 1 , p n 2 ) as b path+ n 1 ,n 2 := question: q <title-1> t n 1 <context-1> p path n 1 <title-2> t n 2 <context-2> p path n 2 assuming that a set of passage pairs ( p n 1 , p n 2 ) are available for model to consume.", "derive a set of pairs of passages from the initial set D q by D + q = { ( p , p 1 ) , ( p , p 2 ) , . . . , ( p , p N ) } where p corresponds to the first passage that is possible to immediately hop to from question q , which may be determined by another model, or by executing the original PATHFID on D q in our case.", "Global input representation X path+ q is obtained similarly (Eq. 
3) by except encoding the new blocks b path+ n 1 ,n 2 allowing for cross-passage interactions, while the target reasoning path Y path+ q remains the same as Y path q .", "Note that <title-i> special markers are shared between new input block b path+ n 1 ,n 2 and target reasoning path Y path+ q to provide the model with additional clue regarding the first passage on the reasoning path while still relaying the complete evidence fusion to the decoder via information redundancy encoded in X path+ q .", "Having defined global input representation X path q , the decoder autoregressively generates the reasoning path Y path q per token at each step by following self-attention, cross-attention on the entire X path q , and feed-forward modules.", "So, the overall reasoning path generation is modeled as conditional generation p path ( Y path q | X path q ) .", "The model then is trained to minimize J ( path ) = (cid:80) | Y path q | i =1 log p ( y i | y <i , X path q ) with teacher forcing over a training set of { ( q, a, D q ) } .", "In the inference, the decoder consumes the input representation X path q computed by encoder, and generates the full reasoning path token by token.", "We then post-process the decoded sequence using the answer indicator ( <answer> ) to first obtain the answer, followed by recursively parsing the remaining sequence using the special separator tokens ( <title-k> , <facts-k> ) to reconstruct the title and retrieve its relevant sentences at each hop k .", "As illustrated in Figure 2, the final result of the inference can be summarized into a dictionary which maps each generated passage title to the list of sentence pointers as well as the final answer.", "answering datasets: HotpotQA and IIRC .", "HotpotQA (Yang et al., 2018) is a large-scale human-annotated dataset including 113K multihop questions.", "It focuses on using documents from Wikipedia as the source of information for answering questions rather than knowledge bases as in other multi-hop QA datasets (Welbl et al., 2018; Talmor and Berant, 2018).", "The questions in HotpotQA are not constrained by the fixed knowledge-base schema, hence they can cover more diverse topics.", "The answer for each question in HotpotQA is extracted from 10 paragraphs in the distractor setting, while it is allowed to use the entire Wikipedia for the full wiki setting.", "There are two main question types bridge (80%) and comparison (20%) in the corpus, where each question is designed in a way that extracting the correct answer requires reasoning over multiple evidence distributed across two passages.", "While comparison questions do not require the these passages to be processed in a particular order, bridge questions often require identifying the bridge entity in the first passage to correctly hop to the second one that contains the answer.", "Each question is also provided with the annotation of 2 supporting passages and up to 5 corresponding relevant sentences as their supporting facts.", "Since our proposed approach is a reader model that reasons over a given set of evidence documents, we primarily focus our experiments on the distractor setting 1 .", "IIRC (Ferguson et al., 2020) is a dataset of more than 13K human-written questions over paragraphs 1 See Appendix B for PATHFID results in open-domain setting using MDR (Xiong et al., 2021) as the retriever.", "from English Wikipedia, where crowdworkers had access only to initial paragraph and list of hyper-links to other relevant Wikipedia articles, with the missing information 
"We evaluate on two multi-hop question answering datasets: HotpotQA and IIRC.", "HotpotQA (Yang et al., 2018) is a large-scale human-annotated dataset including 113K multi-hop questions.", "It focuses on using documents from Wikipedia as the source of information for answering questions, rather than knowledge bases as in other multi-hop QA datasets (Welbl et al., 2018; Talmor and Berant, 2018).", "The questions in HotpotQA are not constrained by a fixed knowledge-base schema, hence they can cover more diverse topics.", "The answer for each question in HotpotQA is extracted from 10 paragraphs in the distractor setting, while it is allowed to use the entire Wikipedia for the full wiki setting.", "There are two main question types in the corpus, bridge (80%) and comparison (20%), where each question is designed in a way that extracting the correct answer requires reasoning over multiple pieces of evidence distributed across two passages.", "While comparison questions do not require these passages to be processed in a particular order, bridge questions often require identifying the bridge entity in the first passage to correctly hop to the second one that contains the answer.", "Each question is also provided with the annotation of 2 supporting passages and up to 5 corresponding relevant sentences as their supporting facts.", "Since our proposed approach is a reader model that reasons over a given set of evidence documents, we primarily focus our experiments on the distractor setting (see Appendix B for PATHFID results in the open-domain setting using MDR (Xiong et al., 2021) as the retriever).", "IIRC (Ferguson et al., 2020) is a dataset of more than 13K human-written questions over paragraphs from English Wikipedia, where crowdworkers had access only to the initial paragraph and a list of hyperlinks to other relevant Wikipedia articles, with the missing information occurring in one or more linked documents.", "This annotation design encouraged less lexical overlap between the questions and the contexts that actually contain the answer.", "This dataset presents unique challenges compared to HotpotQA because (1) it additionally requires discrete/numerical reasoning and identification of unanswerable questions, which adds up to 4 different possible answer types (span, binary, numerical, unanswerable), and (2) about 30% of its questions require reasoning over more than 2 passages, including the main passage.", "Evaluation Metrics.", "We use the standard exact-match (EM) and F1 scores for measuring the quality of predicted answers.", "For HotpotQA experiments, we are also able to evaluate PATHFID on supporting fact predictions using the official metrics (Support-EM, Support-F1), which measure the performance of the reader model in correctly identifying the supporting facts from the relevant passages.", "Note that this metric implicitly requires correctly identifying the relevant passages among the distractors as well.", "For our experiments on the IIRC dataset, similar to the baseline model constructed in the original work (Ferguson et al., 2020), we follow the evaluation methods used by DROP (Dua et al., 2019).", "Implementation Details.", "We use the pre-trained T5-large encoder-decoder (Raffel et al., 2020) to initialize the models in our experiments.", "We train the model with a batch size of 64 and a constant learning rate of 1e-4 for 10 epochs.", "We use a maximum length of 256 (resp. 512) tokens for the input blocks of PATHFID (resp. PATHFID+), while the maximum target sequence length is set to 64.", "For target sequences longer than 64 tokens, truncation is performed on the reasoning path, excluding the answer part.", "All the experiments are conducted on a machine with 4 or 8 40GB A100 GPUs.", "Our code is based on Huggingface Transformers (Wolf et al., 2019).", "Please see the Appendix for further details on the hyperparameter settings.",
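A minimal fine-tuning sketch under the stated settings (T5-large, batch size 64, constant learning rate of 1e-4); the optimizer choice, special-token spellings, and batch handling are assumptions, not details confirmed by the text:

```python
import torch
from transformers import T5ForConditionalGeneration, T5TokenizerFast

tokenizer = T5TokenizerFast.from_pretrained("t5-large")
# Assumed spellings for the PATHFID special markers:
tokenizer.add_tokens(["<title-1>", "<facts-1>", "<title-2>", "<facts-2>", "<answer>"])
model = T5ForConditionalGeneration.from_pretrained("t5-large")
model.resize_token_embeddings(len(tokenizer))
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)  # optimizer is assumed

def training_step(inputs, targets):
    # Note: the paper truncates the reasoning path but never the answer;
    # plain right-truncation is shown here for brevity.
    x = tokenizer(inputs, max_length=256, truncation=True,
                  padding=True, return_tensors="pt")         # PATHFID input blocks
    y = tokenizer(targets, max_length=64, truncation=True,
                  padding=True, return_tensors="pt")         # target reasoning paths
    labels = y.input_ids.masked_fill(y.input_ids == tokenizer.pad_token_id, -100)
    loss = model(input_ids=x.input_ids, attention_mask=x.attention_mask,
                 labels=labels).loss                         # teacher forcing
    loss.backward()  # gradient accumulation toward a batch of 64 omitted
    optimizer.step(); optimizer.zero_grad()
    return loss.item()
```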
"We present our main results on the HotpotQA distractor setting in Table 1.", "We report results on the HotpotQA development set in comparison with previously published methods.",

Table 1: Results on the development set of the HotpotQA distractor setting in comparison with previous work.

| Methods | Answer EM | Answer F1 | Support EM | Support F1 |
|---|---|---|---|---|
| Baseline (Yang et al., 2018) | 44.4 | 58.3 | 22.0 | 66.7 |
| DFGN (Qiu et al., 2019) | 55.4 | 69.2 | - | - |
| QFE (Nishida et al., 2019) | 53.7 | 68.7 | 58.8 | 84.7 |
| SAE (Tu et al., 2020) | 61.3 | 74.8 | 58.1 | 85.3 |
| SAE-large (Tu et al., 2020) | 67.7 | 80.8 | 63.3 | 87.4 |
| Graph Recurrent Retriever (Asai et al., 2020) (base) | 52.7 | 65.8 | 57.4 | 84.6 |
| Graph Recurrent Retriever (Asai et al., 2020) (wwm) | 68.0 | 81.2 | 58.6 | 85.2 |
| Gated Memory Flow (Shao et al., 2021) | 69.6 | 83.0 | 64.7 | 89.0 |
| FID* (Izacard and Grave, 2021) (this work) | 64.4 | 77.8 | - | - |
| PATHFID (this work) | 65.8 | 78.9 | 59.3 | 85.7 |
| PATHFID+ (this work) | 72.7 | 84.2 | 64.9 | 88.7 |

"The PATHFID reader provides a 1.4% absolute gain in answer EM score in comparison to the FID model.", "Moreover, it achieves competitive supporting fact predictions of 59.3% Support-EM and 85.7% Support-F1 as a result of path generation, compared to strong extractive models such as Asai et al. (2020).", "In summary, PATHFID establishes the usefulness of modeling the full reasoning path along with answer generation for multi-hop QA.", "More notably, PATHFID+ achieves quite a significant performance gain across all the central evaluation metrics, demonstrating the importance of cross-passage interactions.", "Overall, the results validate the effectiveness of the two central modeling contributions of our proposed method.", "Next, we present further analysis and discussion of the unique advantages of the PATHFID approach under a few central questions which motivated our research in the first place.", "How faithfully grounded are the generated answers on supporting facts?", "In Table 2, we present a detailed analysis comparing different models in terms of the faithfulness of their generated answers on both gold and predicted supporting facts.", "The first row focuses on passage-level answer grounding, computed as the percentage of answers found in one of the gold supporting passages, while the second row reports the same analysis at the sentence level.", "We can observe that PATHFID models significantly improve on how faithfully the generated answers are grounded on the supporting facts, at both passage-level and sentence-level granularities.", "The next two rows provide further insight into the quality of the supporting facts generated by PATHFID models, by measuring how often the gold answer can be found in them.", "This analysis shows that the generated supporting facts are of quite high quality, including the gold answer in more than 95.3% and 96.2% of cases at the sentence level and passage level, respectively.", "The last two rows measure the faithfulness of the generated answers on the model-generated supporting facts, which is not applicable to the FID model as it does not perform supporting fact prediction.", "We observe that the generated answers are quite faithfully grounded on the predicted supporting facts, showing that path generation not only improves the answer EM performance but also successfully grounds the answers on the evidence the model generates as part of the full reasoning path.", "It is important to emphasize here that extractive reader models can be guaranteed to output perfectly grounded answers simply by locating the answer in their predicted supporting facts.", "On the other hand, it is difficult for generative models to ensure 100% answer grounding, simply due to their generative nature.",

Table 3: Performance breakdown on Answer-EM and Support-EM by question type and the number of gold supporting facts (rows).

| # Supp Facts | Answer-EM Comparison (FID / PATHFID) | Answer-EM Bridge (FID / PATHFID) | Support-EM Comparison (FID / PATHFID) | Support-EM Bridge (FID / PATHFID) |
|---|---|---|---|---|
| 2 | 70.4 / 71.8 | 63.3 / 64.6 | - / 86.7 | - / 70.0 |
| 3 | 66.1 / 68.2 | 62.7 / 63.1 | - / 43.4 | - / 30.7 |
| 4 | 62.2 / 63.8 | 64.3 / 66.5 | - / 5.4 | - / 26.2 |
| >=5 | 83.3 / 87.5 | 60.0 / 65.0 | - / 0.0 | - / 3.8 |
"However, we are able to provide additional evidence validating that the answers generated by PATHFID are significantly grounded in the supporting facts it generates, which might implicitly indicate that the generated reasoning path tightly aligns with the model's underlying process for answer generation.", "Although this is strong evidence, it is still quite implicit in exposing the model's prediction process, so we see our approach as a step in the right direction rather than a complete solution.", "Performance breakdown by the number of supporting facts and question types.", "In Table 3, we compare the performance of the models by breaking it down based on the number of gold supporting sentences and the question type (e.g., bridge and comparison).", "Our first observation is that PATHFID provides a consistent improvement in answer-EM score over FID across both question types and different numbers of supporting facts required to answer the question.", "The high variance in the answer-EM score on comparison questions can be attributed to the strictness of the exact-match metric as well as the imbalanced nature of the dataset, where only 5% of the comparison questions have more than 3 supporting facts.", "Surprisingly, both FID and PATHFID models perform considerably well on comparison questions even when they require at least 5 supporting facts.", "A more important motivation behind the performance breakdown analysis was to understand how the supporting fact prediction of PATHFID would change as the number of gold supporting facts grows.", "Although it starts degrading on examples with more than 2 supporting facts, it still achieves more than 25% Support-EM for bridge questions with up to 4 supporting facts.", "Recalling that the average performance on the whole dataset is less than 60%, we conclude this result might be satisfactory enough, especially for a fully generative model.", "Figure 3: PATHFID model evolution on the HotpotQA Dev set during training.", "Analyzing the evolution of sub-tasks during joint training with PATHFID.", "In Figure 3, we present the evolution of the PATHFID model on the HotpotQA development set at every 500 training steps.", "We observe that while the model picks up the patterns for title generation more quickly, it takes much longer for it to reach a reasonable level of fact prediction.", "As one would expect, the general trend in the evolution of the different segments (title-1, facts-1, title-2, facts-2, answer) of the reasoning path mostly follows the difficulty of the corresponding sub-task, although all the sub-tasks are jointly formulated and trained in an end-to-end fashion.", "On the other hand, it seems counter-intuitive that the model reaches a better accuracy on predicting the facts of the second passage (F2-EM) on the reasoning path earlier, despite having a better accuracy on the first title (T1-EM).", "However, one can also interpret this as a result of the stronger feedback provided by the answer segment of the reasoning path, as most of the ground-truth answers are contained in the facts of the second passage.", "In addition to our main experiments, presented above in greater detail, we also conduct experiments on the IIRC dataset to verify the generalization of the proposed approach.",
"To this end, we closely follow the authors' model-free retrieval setting (referred to as Oracle L+C in Table-3) because the model checkpoints for the baseline retrieval model are not available in the public release.", "We use a python script 2 provided in the open-sourced repository to replicate the same setting for a fair comparison.", "In Table 5, we present the results on the development set for our proposed PATHFID and PATHFI D+ in comparison with the baseline reported in the original paper (Ferguson et al., 2020) and our implementation of the FiD (Izacard and Grave, 2021) baseline.", "FID model obtains a comparable F1 with IIRC baseline with a slightly worse exact-match performance.", "However, the proposed PATHFID approach is able to provide 1.3% and 1.4% improvement in F1 score over the two baselines.", "Furthermore, PATHFID + extension leads to the best performance achieving 4.7% and 4.2% EM score improvement in absolute value over the FID 2 https://github.com/jferguson144/ IIRC-baseline/blob/main/make_drop_style.py baseline and IIRC baseline, respectively.", "Our experimental results validate the benefit of the proposed approach on the IIRC dataset, suggesting strong evidence for the generalizability of our approach.", "4.4 Analyzing the Benefit of Joint Training In Table 4, we present the results of a case study where we analyze the benefit of multi-task training on the passage chain prediction.", "The first row of the table shows the results for training PATHFID only to predict the sequence of titles for the gold passages (i.e., [t1-t2] ), which is just a subsequence of the full reasoning path obtained by discarding facts and the answer.", "The second row is another variant, where we add the answer back to the linearized target sequence while still excluding the segments corresponding to the facts.", "The last row correspond to the full reasoning path generation, which is corresponding to the original formulation of PATHFID as described in Section 3 and illustrated in Figure", "2. 
"Comparing the first two rows in Table 4, we can immediately observe that including the answer segment in the target reasoning path (i.e., [t1-t2-answer]) boosts the performance across the board, although in principle it makes the task more complicated while utilizing the same underlying model capacity.", "Further including the segments corresponding to FACTS (sentences within supporting passages) in addition to the answer segment (i.e., the full reasoning path [t1-f1-t2-f2-answer]) boosts the title-EM even further, especially before applying the title reconstruction post-processing step.", "Although the objective of the first task (i.e., [t1-t2]) is perfectly aligned with the evaluation metric used in Table 4, the performance of the resulting model remains inferior to jointly modeling the same task with the answer (and/or supporting facts) prediction.", "These two observations provide compelling evidence for the benefit of jointly modeling the sub-tasks of multi-hop QA as a single sequence capturing the full reasoning path.", "Multi-hop question answering.", "Research on multi-hop QA aims to tackle complex questions that require reasoning across multiple pieces of evidence in multiple documents (Welbl et al., 2018; Yang et al., 2018; Ferguson et al., 2020).", "In particular, the HotpotQA dataset (Yang et al., 2018) provides both closed and open-domain settings to evaluate multi-hop reading comprehension models.", "Compared to single-hop QA, such complex questions pose additional challenges for both reader and retriever models, since they are required to capture relationships between documents instead of independently processing each document.", "This is challenging because the number of document combinations grows exponentially due to the sequential nature of the process.", "Two recent works (Nie et al., 2019; Asai et al., 2020) have tackled this challenge by leveraging the hyperlink structure of the underlying Wikipedia corpus, while Xiong et al. (2021) have taken a recursive approach to extend the dense retrieval process to handle sequential search.", "Most of the reading comprehension (RC) models in existing work (Xiong et al., 2019; Chen et al., 2019; Nishida et al., 2019; Qi et al., 2021; Li et al., 2020; Xiong et al., 2021) follow an extractive architecture (Devlin et al., 2019) for the selection of answer spans and their corresponding supporting evidence, with minor modifications such as initializing the backbone from stronger or larger pre-trained models (Clark et al., 2020).", "On the other hand, some recent works (Inoue et al., 2021) take a more abstractive approach and generate question-focused summaries of input paragraphs as concise explanations to be fed to the RC module.", "Generative question answering.", "Especially after the emergence of the SQuAD dataset (Rajpurkar et al., 2016), neural extractive QA models have been widely studied.", "An underlying assumption is that we can extract a short text span (or a phrase) as an answer, but this is not always the case in reality.", "Motivated by this, the generative QA approach has also been investigated (Hewlett et al., 2017; Fan et al., 2019).", "Recent advances in pre-trained transformers have pushed this direction further; for example, Lewis et al. (2020a) jointly trained a generative QA model along with a text retrieval model, and Roberts et al. (2020) explored an ambitious approach to directly generate an answer without any evidence documents.",
"We focused on the fusion-in-decoder model (Izacard and Grave, 2021); its authors claimed that the decoder might be good at aggregating information across multiple documents.", "However, we have shown that this is not trivial in the multi-hop reasoning task, and have pushed the model's ability by jointly learning to predict reasoning paths.", "Besides question answering, jointly learning multiple intrinsic capabilities required by the final objective with a generative approach has been shown useful in modeling other NLP tasks, such as task-oriented dialogues (Neelakantan et al., 2019; Hosseini-Asl et al., 2020; Peng et al., 2021).", "Open-domain question answering.", "Open-domain QA (Voorhees, 1999) is practically important; it requires a system to retrieve relevant documents to answer a given question.", "The task has recently been gaining much attention, thanks to the development of large-scale datasets like HotpotQA, SQuAD Open (Chen et al., 2017), Natural Questions Open (Kwiatkowski et al., 2019; Lee et al., 2019), etc.", "Pre-trained transformer models like BERT (Devlin et al., 2019) have accelerated the development of neural text retrievers (Lee et al., 2019; Karpukhin et al., 2020; Asai et al., 2020; Xiong et al., 2021; Liu et al., 2021) in the retriever-reader framework (Chen et al., 2017).", "We have investigated the effectiveness of our method in the multi-hop open-domain QA task (see Appendix B) using an existing external retriever component.", "In this work, we propose a generative question answering (QA) approach that models multi-hop QA as a single sequence prediction task.", "It learns to generate an answer along with a reasoning path to improve its capability of multi-hop reasoning.", "Our experiments on prominent multi-hop QA benchmarks, HotpotQA and IIRC, validate the promise and effectiveness of our proposed method PATHFID and its extension PATHFID+.", "Future work will explore (1) integrating our PATHFID approach more closely with text retrieval models in open-domain QA scenarios and (2) more explicit grounding on the input information to make our approach even more interpretable and controllable.", "The authors would like to thank the members of the Salesforce AI Research team for fruitful discussions, as well as the anonymous reviewers for their helpful feedback." ]
[ "abstain", "abstain", "objective", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "result", "abstain", "objective", "abstain", "abstain", "result", "objective", "method", "other", "other", "objective", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "method", "abstain", "other", "other", "other", "other", "other", "objective", "objective", "abstain", "objective", "objective", "other" ]
[ "Handing in a paper or exercise and merely receiving \"bad\" or \"incorrect\" as feedback is not very helpful when the goal is to improve.", "Unfortunately, this is currently the kind of feedback given by many Automatic Short Answer Grading (ASAG) systems.", "One of the reasons for this is a lack of content-focused elaborated feedback datasets.", "To encourage research on explainable and understandable feedback systems, we present the Short Answer Feedback dataset (SAF).", "Similar to other ASAG datasets, SAF contains learner responses and reference answers to German and English questions.", "However, instead of only assigning a label or score to the learners' answers, SAF also contains elaborated feedback explaining the given score.", "Thus, SAF enables supervised training of models that grade answers and explain where and why mistakes were made.", "This paper discusses the need for enhanced feedback models in real-world pedagogical scenarios, describes the dataset annotation process, gives a comprehensive analysis of SAF, and provides T5-based baselines for future comparison.", "1 1 Introduction Assessment and feedback are essential to high-quality education (Shute, 2008).", "They allow learners and teachers to discover misconceptions, gaps in knowledge, and improvement opportunities.", "However, manually assessing learners' knowledge and providing helpful feedback is time-consuming and requires pedagogical as well as domain expertise.", "Here, automatic assessment can free up teachers' time to focus on tutoring learners or adequately preparing classroom activities.", "Moreover, it can be an alternative to peer-grading when course participant numbers increase beyond the financial 1 Our code, scoring rubrics and dataset are available at https://github.com/SebOchs/SAF under an MIT license feasibility of manual grading (Kay et al., 2013), making it particularly useful for freely accessible online courses.", "Besides being costand time-efficient, automating assessment also offers unique teaching opportunities.", "As long as systems give individual, response-specific feedback, learners may retry or take additional assignments and receive instantaneous feedback as often as they need.", "Additionally, knowing that a system instead of one's teacher or professor will evaluate one's assignment can also reduce anxiety and help learners focus on their work instead of worrying about their reputation (Lipnevich and Smith, 2009).", "Therefore, it is unsurprising that automatic assessment has been an active research field over the past decades (Burrows et al., 2015; Ihantola et al., 2010; Ke and Ng, 2019; Xi, 2010).", "So far, significant progress has been made.", "In particular, Transformer models are approaching human experts' performance on specific datasets in the Automatic Short Answer Grading (ASAG) field (Sung et al., 2019; Camus and Filighera, 2020).", "These models are trained to evaluate whether natural language responses fully answer open knowledge questions and typically output a score or label indicating the response's correctness.", "This kind of feedback is also called verification (Shute, 2008).", "An example can be seen in Table 1. 
"However, merely providing a score or label for a learner's answer is generally not sufficient in real-world pedagogical scenarios.", "Firstly, learners must understand their feedback to use it effectively (Winstone et al., 2017).", "That may not be the case when learners only receive a score instead of a clear explanation of where and why they made mistakes.", "Secondly, the feedback's source needs to be trusted for learners to accept and engage with the given advice (Winstone et al., 2017).", "Especially assessments by automatic models may be questioned (Lipnevich and Smith, 2009; Filighera et al., 2020a,b).", "[Table 1 example question: What are the challenges of Mobile Routing compared to routing in fixed and wired networks?]", "Providing a response-specific, detailed explanation may establish the necessary trust in the system's predictions.", "This kind of explanation is also called elaborated feedback (Shute, 2008) and is shown in Table 1.", "In the Intelligent Tutoring Systems community, the need for elaborated feedback is well-known (Deeva et al., 2021; Hasan et al., 2020).", "Several researchers have incorporated feedback modules in their systems (VanLehn, 2011; Kulik and Fletcher, 2016; Mousavinasab et al., 2021).", "However, these approaches are typically constrained to structured answer formats, such as programming exercises (Keuning et al., 2018), focus on the response's language and style instead of the content (Hellman et al., 2020), or are hand-tailored to specific tasks (Dzikovska et al., 2014; Lu et al., 2008).", "A lack of public, content-centered elaborated feedback datasets may be one of the main reasons for these limitations.", "To narrow this gap, we provide the Short Answer Feedback dataset (SAF), a German and English collection of learner answers and feedback.", "In contrast to other ASAG datasets, SAF contains detailed elaborated feedback explaining the scores assigned to learner responses.", "This allows for automatic scoring and opens the new task of providing response-specific, elaborated feedback illustrating a given score.", "The dataset currently contains 4,519 submissions, corresponding scores, and response-specific elaborated feedback.", "Additionally, we provide T5 (Raffel et al., 2020) and mT5 (Xue et al., 2021) baselines for future comparison.", "While elaborated feedback datasets on language learning (Caines et al., 2020; Pilan et al., 2020; Stasaski et al., 2020) have appeared recently, they focus on linguistic mistakes, such as grammatical errors, instead of content.", "Our extensive literature review did not reveal datasets that include content-focused elaborated feedback on short answer responses.", "However, SAF's feedback can be viewed as a textual explanation of the assigned score.", "Therefore, comparable NLP datasets with textual explanations and publicly available ASAG datasets without explanations are discussed in the following sections.", "In recent years, the need for understandable, interpretable NLP models has been widely discussed (Adadi and Berrada, 2018; Alishahi et al., 2019; Danilevsky et al., 2020; Das and Rad, 2020).", "One of the possible approaches to making models explainable is to train them, or auxiliary models, to directly generate explanations of their predictions (Liu et al., 2019; Narang et al., 2020).", "For this purpose, multiple researchers have enhanced NLP datasets with textual explanations.", "Camburu et al. (2018) extended the Stanford Natural Language Inference dataset (SNLI) (Bowman et al., 2015) using Amazon Mechanical Turk.",
"The expanded dataset is called e-SNLI and contains textual, human-generated explanations for each of SNLI's entailment relation pairs.", "Rajani et al. (2019), also using Amazon Mechanical Turk, expanded COMMONSENSEQA (Talmor et al., 2019).", "The resulting Common Sense Explanations (CoS-E) dataset consists of commonsense reasoning questions with three possible answers and a textual explanation for every correct selection.", "Mostafazadeh et al. (2020) introduced GLUCOSE, a crowdsourced collection of semi-structured causal explanations related to sentences in stories.", "However, the datasets above do not have a pedagogical focus.", "This is detrimental to researchers aiming to employ their systems in educational contexts, where explanations should conform to pedagogical guidelines, such as avoiding harm to the learner's self-esteem or motivation.", "The closest to our research is the WorldTree V2 dataset.", "Here, Xie et al. (2020) used graphs of expert-engineered natural language facts to explain correct answers to multiple-choice science questions.", "The resulting explanations are essentially lists of scientific and world knowledge facts needed to answer the question correctly.", "Similarly, Ling et al. (2017) provide textual explanations for the correct solutions to math problems.", "Their multiple-choice questions, answers, and explanations are obtained via crowdsourcing and standardized tests, such as the GMAT.", "While both Ling et al. (2017)'s and Xie et al. (2020)'s work have an educational focus, they only explain the reference solution instead of mistakes made in incorrect or partially correct solutions.", "Some of the most well-known ASAG datasets stem from the SemEval 2013 challenge (Dzikovska et al., 2013).", "BEETLE contains 5,044 student answers to basic electricity questions labeled as correct, partially_correct_incomplete, contradictory, irrelevant or non_domain.", "SCIENTSBANK follows the same structure but also contains questions from various other domains, such as biology or geography.", "Basu et al. (2013) introduced Powergrading, a collection of 2,532 unique, crowdsourced answers to ten questions of a United States Citizenship Exam.", "Each was manually classified as correct or incorrect.", "In contrast to the previous datasets, answers in the ASAP-SAS dataset (https://www.kaggle.com/c/asap-sas/) are scored on a scale from 0 to 3.", "Additionally, this dataset is much larger, with 2,200 responses per question and 10 questions in total.", "All of the datasets above only include verification feedback.", "Mizumoto et al. (2019) released a Japanese dataset containing 12,600 student responses equally distributed across 6 questions.",
"The answers stem from a commercial achievement test for Japanese high school learners and are annotated with holistic scores and individual marks for manually defined scoring criteria.", "Additionally, each criterion links to the phrase in the student's answer expressing it.", "For example, for a criterion like \"2 points if the response mentions Western culture\", the phrase Western culture would be marked in the response, if present.", "This dataset enables elaborated feedback systems.", "However, the structured nature of the criteria and matching answer spans complicates an automatic translation to English.", "Additionally, the marking scheme is limited in its expressiveness, as it is hard to mark missing information in the answer.", "Lastly, structured collections of smaller and nonpublic datasets can be found in surveys by Roy et al. (2015) and Burrows et al. (2015).", "To remedy the lack of content-focused elaborated feedback datasets, we provide SAF, an English and German short answer dataset with explanations that serve as elaborated feedback.", "In total, the corpus contains 4,519 submissions similar to the example in Table 1.", "There are 22 English short answer questions with reference answers covering a range of college-level communication network topics, such as extension headers in IPv6 or frame bursting.", "Additionally, the dataset contains 8 German short answer questions used in micro-job training on the appJobber (https://appjobber.de/) crowd-worker platform.", "The data was collected and annotated between April 2020 and June 2021.", "While the German answers were given by individuals in the context of pre-job training, the English questions were answered in groups of up to three students in voluntary quizzes they could complete for extra points in the final exam.", "Each quiz consists of 3-4 questions regarding the same overarching topic, such as Internet protocols.", "All answers are annotated with a score, label, and feedback, as described in Table 2.",
"The dataset can be used for classical automatic short answer grading and elaborated feedback generation.", "We need reliable scoring and clear, detailed explanations to train understandable feedback models.", "Providing this is challenging for multiple reasons.", "Firstly, annotators need to have the necessary domain expertise and the pedagogical knowledge of how to provide understandable, well-received feedback.", "For instance, they should be aware of their feedback's emotional effect.", "[Table 2 excerpt: Field \"Score\": a numerical value between 0 and 1 indicating the answer's correctness and completeness.]", "At first glance, this may seem obvious, but it is easily overlooked in practice.", "An example of this became apparent during a pilot study we conducted to uncover pitfalls and train our annotators.", "Even though we provided guidelines on how to give feedback, questionable phrases like \"This response fails to ...\" were common, as the annotators did not consider that the word \"failing\" may trigger negative associations and emotions in learners.", "Secondly, a common ground truth must be established for each question, with clearly defined boundaries, because various sources may define concepts differently.", "For example, the network protocol TCP alone has at least five different variations, all with unique advantages and disadvantages, leading to multiple possible answers to TCP-related questions (Chaudhary and Kumar, 2017).", "In our pilot study, this manifested itself in a low inter-annotator agreement (Krippendorff's Alpha of 0.36), making the need for detailed scoring rubrics clear.", "We discuss our approaches to these challenges in the following section.", "To ensure the necessary domain expertise, we selected two graduate students who had completed the communication networks course themselves (the students' remuneration consisted of a paid research assistant position for one, and partial credit towards a master's thesis and co-authorship of this paper for the other) and two experienced appJobber employees for the crowd-worker platform's answers.", "For pedagogical training, a researcher first drafted a general annotation guideline.", "It explains the annotation files' structure and the annotation goals, and provides general recommendations for the formulation of feedback and the calculation of scores.", "For example, it asserts that praise, comparisons with other learners, or emotionally charged words like \"fail\" should be avoided when writing feedback.", "Additionally, it points out common biases annotators should be aware of, such as confirmation bias.", "For instance, answers that contain keywords found in many correct responses may still contain mistakes and should, therefore, still be carefully inspected.", "The general annotation guidelines were submitted to a psychology doctoral student with prior work in the feedback field for additional advice.", "Then the annotators applied their knowledge in the pilot study and received further feedback from the researchers.", "Finally, the guideline was updated to reflect any additional discussion points.", "As can be seen in Figure 1, the researcher drafted grading rubrics for each question.", "The rubric consists of the questions, reference answers with detailed grading information, and four example answers per question for illustration.", "As research suggests that a single author may not suffice to produce reliable and objective scoring rubrics (Carr, 2020), the draft is then discussed and refined with the annotators.",
"The discussion also mitigates the challenge of defining a common ground truth, as multiple sources and opinions can coalesce into a single, exhaustive rubric.", "Before the discussion, the answer annotation files are available to the annotators.", "The files contain the reference and students' answers.", "Subsequently, the annotators individually evaluated the answers using the scoring rubric and the general annotation guideline.", "All English answers were annotated twice, while only half of the German answers were doubly annotated, due to the prohibitive cost of the experienced employees.", "The first step of combining the independently annotated answer files into a cohesive gold standard involved discussing disagreements between the annotators and the researcher.", "[Figure 1: Schematic depiction of the annotation process (guideline generation, independent annotation by two annotators, discussion of disagreements, grammar and spell checking, gold standard).]", "Disagreements between the annotators were resolved by either choosing one of the annotations, compromising, or fusing them if both had merit.", "For example, one annotator may notice a missing fact A, while the second annotator may find a mistake in B's explanation.", "Finally, the English gold feedback was checked by Grammarly as well as an English native speaker.", "Grammar and spelling mistakes were corrected, and sentences were simplified when the same information could be expressed more concisely, for example, by using the possessive form.", "Learners' answers were not post-processed, because models would frequently encounter grammar and spelling mistakes in the wild.", "Therefore, this is a challenge approaches should overcome.", "The annotation process resulted in a corpus with the score and label distribution seen in Table 3.",
"Similar to the SemEval dataset BEETLE (Dzikovska et al., 2013), we split the data into training (64% of DE / 70% of EN), unseen answers (11% / 12%), and unseen questions (25% / 18%) test sets.", "While the unseen answers test split contains new answers to the training set's questions, the unseen questions split contains novel questions.", "This setup enables the investigation of models' ability to generalize to new questions without the need for priming with manually annotated answers first.", "Figure 2 shows the length, in tokens, of the questions, feedback, reference answers, and learner answers of the English training set.", "We used NLTK's word_tokenize (https://www.nltk.org/api/nltk.tokenize.html) to obtain the tokens, so their count can be seen as the sum of words and punctuation symbols in the text.", "[Table 3 excerpt, score 0.0: Train 216 DE / 234 EN; unseen answers 47 DE / 42 EN; unseen questions 49 DE / 87 EN.]", "The learners' answers were between 0 and 589 tokens long (average=82.2, median=68).", "We did not filter empty submissions (unless all of the group's submissions were empty) from the dataset, as models will encounter these in real-world applications.", "Since the reference answer and learner answer are typically combined as input for ASAG models, this dataset's sensible input sequence length may prove to be computationally expensive for large Transformer models.", "Feedback tends to be shorter, with 5-120 tokens (average=22.4, median=15).", "The distribution looks similar for the German half of the dataset, only that the answers and feedback tend to be slightly shorter.", "Details can be found in Appendix A.", "3.4 Annotation Quality", "To estimate our annotations' reliability, we rely on inter-annotator agreement measures.", "As the scores are interval scaled between 0 and 1, we report the percentage agreement and Krippendorff's Alpha.", "The annotators agreed in 89.46% of the cases on the English data, and α is 0.91 (N=2,112).", "On the German questions, the annotators agreed in 81.38% of the cases, and α is 0.78 (N=1,200).", "The high agreement on the overall dataset illustrates the effectiveness of our annotation process, especially when compared to the initially low agreement of α=0.36 achieved in our pilot study.", "We can assume the validity of our German data to be high, since our experienced annotators were also responsible for accepting or rejecting job results later on.", "Hence, their judgements should be consistent with the desired learning outcome.", "To estimate the validity of our English data, we assume that the end-of-term exam is a valid evaluation of students' knowledge.", "Of course, this is most likely not accurate in practice, since the exam was not formally validated and only provides a snapshot of students' performance in a single 120-minute time frame.", "However, most of the question pool and exam structure have been employed and refined over multiple years.", "For this reason, we deem it a sufficient approximation.", "Nevertheless, the following results should be viewed as an indication of validity rather than a fact.", "The Spearman's rank correlation between the points achieved in the exam and the quizzes is 0.438 (p < 0.0001) with a sample size of 186.",
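A minimal sketch of how these reliability figures can be computed, assuming the `krippendorff` PyPI package (any interval-level implementation of α would do):

```python
import numpy as np
import krippendorff  # pip install krippendorff

def reliability(scores_a, scores_b):
    """Two annotators' scores in [0, 1] for the same set of answers."""
    ratings = np.array([scores_a, scores_b], dtype=float)  # raters x items
    alpha = krippendorff.alpha(reliability_data=ratings,
                               level_of_measurement="interval")
    percent_agreement = float(np.mean(np.isclose(scores_a, scores_b)))
    return percent_agreement, alpha

# Toy example with four doubly-annotated answers:
percent, alpha = reliability([1.0, 0.5, 0.0, 1.0], [1.0, 0.5, 0.5, 1.0])
```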
"This is a moderate positive correlation between the exam and quiz results (Dancey and Reidy, 2007) and indicates that they may measure the same or a similar construct.", "In contrast to the quizzes, exams were not taken in groups, partly explaining the variance.", "It is our responsibility to be transparent in our data collection process and protect the privacy of our learners.", "Our first step in this regard was to inform our learners of the data collection process.", "We posted to the college course's online learning platform and in the description of the German job training.", "Both channels usually carry vital information for the learners.", "In our post, we detailed how we would use the learners' answers to research and develop automatic assessment models.", "We asked learners to refrain from including personal information in their answers, such as names or addresses.", "This was also checked during the annotation process.", "We gave them contact information in case they wanted their answers to be excluded from the data collection.", "We also clarified that this would not negatively impact them or their grades/access to jobs.", "None of the learners contacted us.", "We further clarified that we would only release anonymized data in our publications.", "We anonymized German answers by stripping identifying information and randomizing the order.", "To anonymize the English learners' answers, we randomly assigned each group an ID.", "The group-to-ID mapping was done locally on one computer and was deleted after the dataset construction.", "Keeping a consistent group ID allows us to identify responses with quizID.questionID.groupID and, thus, publish a dataset where the other answers of a group can be incorporated to refine an assessment model.", "For example, responses QuizA.1.3 and QuizB.2.3 are written by the group assigned the ID 3.",
"This characteristic is beneficial, as it allows for training models that provide personalized feedback, considering both the current answer and answers to related questions.", "Patterns of mistakes spanning multiple questions may be discovered in this setting.", "For example, if a group answered all performance evaluation questions incorrectly, they may not understand the probability theory underlying the questions.", "However, note that SAF's annotators only considered the current answer when constructing their feedback.", "The goals of our experiments are threefold.", "Firstly, we want to provide baselines for the dataset.", "For this reason, it makes sense to report a wide range of metrics future work may want to utilize.", "Secondly, we hypothesize that including the question in the model's input increases performance.", "Typically, only the student and reference answers are compared in ASAG (Lv et al., 2021), even though the question may contain additional important information.", "To investigate the question's effect on performance, we run each experiment in two settings: with a student and reference answer pair as model input, or with a question, student, and reference answer triplet.", "Finally, we want to explore the synergy between the ASAG scoring and classification tasks and feedback generation.", "We believe that grading and feedback should be trained jointly, since the feedback should match the assigned grade (Wiegreffe et al., 2021), and both tasks benefit from extracting the same information from the answers.", "For example, a span of tokens negatively impacting the grade should also affect the feedback accordingly.", "Our experiments investigate the hypothesis that feedback generation benefits more from being paired with the more informative ASAG scoring task (0-1) than with the verification feedback label classification (correct vs. incorrect vs. partially correct).",
"As baselines, we utilize HuggingFace's implementation of the T5-base and mT5-base models (Wolf et al., 2020).", "They are fine-tuned to predict the response's score or label and jointly explain it.", "For computational reasons, the input sequence is trimmed to 512 tokens when using T5 and 256 tokens when using mT5.", "When the sequence is longer, a part of the reference answer is truncated.", "While the complete learner answer is always relevant for grading, the reference answer may discuss details or additional aspects irrelevant to the particular response.", "The output is limited to 128 tokens and has the following format: \"label/score feedback: feedback\".", "We also enforce a minimum output sequence length of 11 tokens, since models tended to refrain from generating feedback otherwise.", "In all experiments, 10% of the training data was split off for manual hyperparameter tuning and model selection.", "All models use gradient accumulation and an Adafactor (Shazeer and Stern, 2018) optimizer with learning rate warm-up.", "We trained models for maximally 64 epochs, utilizing early stopping with a patience of 10, and selected the best performing model/epoch using a metric m that combines f with the averaged text similarity scores, where f is the macro-averaged F1 score during classification and 1 - MSE during scoring.", "We average SACREBLEU (https://pypi.org/project/sacrebleu/), ROUGE-2, and METEOR to compensate for the individual metrics' weaknesses when measuring the generated feedback's quality (Post, 2018; Banerjee and Lavie, 2005).", "Thus, m balances the feedback generation and labelling performance, such that success on both tasks is required.", "Each model trained for approximately 1-5 hours on 2 Nvidia RTX 2080 Ti cards with 11 GB of RAM.", "The mT5 models were trained on a single card, due to the memory overhead of parallelization.", "Table 4 shows T5's, a majority baseline's, and the average human performance on the English test sets.", "The majority baseline predicts the most common label/score in the training set, paired with the most common corresponding feedback.", "In both datasets, the majority class consists of entirely correct responses.", "In German, the most common matching feedback is \"Korrekt!\" (\"Correct!\"), and in English, \"The response answers the differences correctly.\" is predicted.",
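A minimal decoding sketch reflecting the constraints above (128-token cap, 11-token minimum so the model does not skip the feedback); the input serialization shown is a hypothetical placeholder, only the output format is taken from the text:

```python
from transformers import T5ForConditionalGeneration, T5TokenizerFast

tokenizer = T5TokenizerFast.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

# Hypothetical serialization of the (question, student, reference) triplet:
source = ("grade: question: <question> student: <student answer> "
          "reference: <reference answer>")
ids = tokenizer(source, max_length=512, truncation=True, return_tensors="pt")
out = model.generate(**ids, max_length=128, min_length=11)
decoded = tokenizer.decode(out[0], skip_special_tokens=True)
# Expected shape of the output: "<label or score> feedback: <feedback>"
label, _, feedback = decoded.partition("feedback:")
```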
"We report the accuracy and macro-averaged F1 score for classification, and the root-mean-squared error for scoring.", "Additionally, we compare the generated and annotated feedback to the gold standard using BERTScore (Zhang et al., 2020), in addition to the metrics used during validation.", "We can see that T5 provides a strong baseline for this task, outperforming the majority baseline significantly.", "However, there is still room for improvement compared to human performance, especially on unseen questions.", "A closer inspection of the generated feedback also revealed that the model would often, and often senselessly, copy common phrases it saw during training with minor modifications (see Appendix B).", "This indicates that elaborated feedback tasks can be challenging even for large language models.", "Simultaneously, the models' high text similarity scores indicate a need for new evaluation metrics that measure similarity on a content level instead of a lexical level, enforcing that a text not only sounds good but also makes sense.", "Contrary to our belief, providing the model with more detailed scores instead of only labels during training does not improve the feedback generation's performance.", "It even worsens performance slightly for most metrics.", "On the English data, we observed that the question provided only a marginal benefit for unseen answers and a larger benefit for unseen questions.", "Interestingly, this trend does not seem to extend to the German dataset, as depicted in Table 5, indicating that this effect may be language- or dataset-dependent.", "Additionally, we can see that generalizing to new questions is even less successful on the German than on the English data.", "This may be due to the distribution of questions and answers in the datasets.", "While both are of similar size, there are significantly fewer German questions, with more answers per question, than English ones.", "The divergent answers-to-questions ratio may also explain why mT5 on the German data outperforms T5 on the English data when classifying or scoring unseen answers.", "This paper introduces the elaborated feedback generation task.", "We provide a benchmarking dataset containing short answers, scores, and textual explanations of the given scores to kick off this task.", "As of yet, the dataset consists of 4,519 submissions to German and English questions.", "We demonstrate SAF's reliability with high inter-annotator agreements.", "In Section 3.3, we presented aspects of the dataset we plan to improve.", "While the dataset is sizable for a manually annotated task of this complexity, it is small compared to other NLP tasks' crawled, large-scale datasets.", "We plan to mitigate this by incorporating additional questions in future iterations of the dataset.", "The focus will be on more complex questions, to improve the class balance, and on questions from other domains and languages, to increase diversity.", "The models' ability to generalize to unseen questions may also benefit from a more diverse dataset.", "We also observed that common text similarity metrics can provide a valuable first impression of the feedback's quality but are not sufficient to fully capture it.", "Thus, we would recommend including humans in the evaluation loop.", "A possible evaluation setup could ask annotators whether the generated feedback expresses the same meaning as the reference feedback included in the dataset.", "We believe annotators could also carry out this task with limited background in the provided domains.",
"Nevertheless, we provide the detailed scoring rubrics utilized by our annotators along with the dataset, to support future human evaluations.", "Finally, the baselines presented in this paper can be improved.", "Considering the deep understanding human graders require for this task, we believe neuro-symbolic approaches to be an exciting avenue for future research.", "Current models may especially benefit from incorporating knowledge bases and other reference material.", "We would like to thank the wer denkt was GmbH for their cooperation in the German data collection, our annotators for their hard work and dedication, and Viktor Pfanschilling for his feedback and support.", "This research is funded by the Bundesministerium für Bildung und Forschung in the project Software Campus 2.0 (ZN 01|S17050), Microproject: DA-VBB." ]
[ "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "result", "abstain", "method", "abstain", "abstain", "objective", "method", "abstain", "method", "method", "result", "method", "abstain", "abstain", "abstain" ]
[ "Abstract NLP algorithms are increasingly used in computational social science to take linguistic observations and predict outcomes like human preferences or actions.", "Making these social models transparent and interpretable often requires identifying features in the input that predict outcomes while also controlling for potential confounds.", "We formalize this need as a new task: inducing a lexicon that is predictive of a set of target variables yet uncorrelated to a set of confounding variables.", "We introduce two deep learning algorithms for the task.", "The first uses a bifurcated architecture to separate the explanatory power of the text and confounds.", "The second uses an adversarial discriminator to force confound-invariant text encodings.", "Both elicit lexicons from learned weights and attentional scores.", "We use them to induce lexicons that are predictive of timely responses to consumer complaints (controlling for product), enrollment from course descriptions (controlling for subject), and sales from product descriptions (controlling for seller).", "In each domain our algorithms pick words that are associated with narrative persuasion ; more predictive and less confound-related than those of standard feature weighting and lexicon induction techniques like regression and log odds.", "Applications of NLP to computational social science and data science increasingly use lexical features (words, prefixes, etc) to help predict nonlinguistic outcomes like sales, stock prices, hospital readmissions, and other human actions or preferences.", "Lexical features are useful beyond predictive performance.", "They enhance interpretability in machine learning because practitioners know why their system works.", "Lexical features can also be used to understand the subjective properties of a text.", "For social models, we need to be able to select lexical features that predict the desired outcome(s) while also controlling for potential confounders.", "For example, we might want to know which words in a product description lead to greater sales, regardless of the item's price.", "Words in a description like luxury or bargain might increase sales but also interact with our confound (price).", "Such words don't reflect the unique part of text's effect on sales and should not be selected.", "Similarly, we might want to know which words in a consumer complaint lead to speedy administrative action, regardless of the product being complained about; which words in a course description lead to higher student enrollment, regardless of the course topic.", "These instances are associated with narrative persuasion : language that is responsible for altering cognitive responses or attitudes (Spence, 1983; Van Laer et al., 2013).", "In general, we want words which are predictive of their targets yet decorrelated from confounding information.", "The lexicons constituted by these words are useful in their own right (to develop causal domain theories or for linguistic analysis) but also as interpretable features for down-stream modeling.", "Such work could help widely in applications of NLP to tasks like linking text to sales figures (Ho and Wu, 1999), to voter preference (Luntz, 2007; Ansolabehere and Iyengar, 1995), to moral belief (Giles et al., 2008; Keele et al., 2009), to police respect (Voigt et al., 2017), to financial outlooks (Grinblatt and Keloharju, 2001; Chatelain and Ralf, 2012), to stock prices (Lee et al., 2014), and even to restaurant health inspections (Kang et al., 2013).", "Identifying 
"Indeed, it is essential for developing transparent and interpretable machine learning NLP models.", "Yet there is no generally accepted and rigorously evaluated procedure for the activity.", "Practitioners have conducted it on a largely ad-hoc basis, applying various forms of logistic and linear regression, confound-matching, or association quantifiers like mutual information or log-odds to achieve their aims, all of which have known drawbacks (Imai and Kim, 2016; Gelman and Loken, 2014; Wurm and Fisicaro, 2014; Estevez et al., 2009; Szumilas, 2010).", "We propose to overcome these drawbacks via two new algorithms that consider the causal structure of the problem.", "The first uses its architecture to learn the part of the text's effect which the confounds cannot explain.", "The second uses an adversarial objective function to match text encoding distributions regardless of confound treatment.", "Both elicit lexicons by considering learned weights or attentional scores.", "In summary, we:", "1. Formalize the problem into a new task.", "2. Propose a pair of well-performing neural network based algorithms.", "3. Conduct the first systematic comparison of algorithms in the space, spanning three domains: consumer complaints, course enrollments, and e-commerce product descriptions.", "The techniques presented in this paper will help scientists (1) better interpret the relationship between words and real-world phenomena, and (2) render their NLP models more interpretable (code, hyperparameters, and instructions for practitioners are online at https://nlp.stanford.edu/projects/deconfounded-lexicon-induction/).", "We begin by formalizing this language processing activity into a task.", "We have access to text(s) $T$, target variable(s) $Y$, and confounding variable(s) $C$.", "The goal is to pick a lexicon $L$ such that when words in $T$ belonging to $L$ are selected, the resulting set $L(T)$ is related to $Y$ but not $C$.", "There are two types of signal at play: the part of $Y$ that $T$ can explain, and the part explainable by $C$.", "These signals often overlap because language reflects circumstance, but we are interested in the part of $T$'s explanatory power which is unique to $T$, and hope to choose $L$ accordingly.", "So if $\mathrm{Var}[\mathbb{E}[Y \mid L(T), C]]$ is the information in $Y$ explainable by both $L(T)$ and $C$, then our goal is to choose $L$ such that this variance is maximized after $C$ has been fixed.", "With this in mind, we formalize the task of deconfounded lexicon induction as finding a lexicon $L$ that maximizes an informativeness coefficient, $I(L) = \mathbb{E}\big[\mathrm{Var}\big[\mathbb{E}[Y \mid L(T), C] \mid C\big]\big]$ (1), which measures the explanatory power of the lexicon beyond the information already contained in the confounders $C$.", "Thus, highly informative lexicons cannot simply collect words that reflect the confounds.", "Importantly, this coefficient is only valid for comparing different lexicons of the same size, because in terms of maximizing this criterion, using the entire text will trivially make for the best possible lexicon.", "Our coefficient $I(L)$ can also be motivated via connections to the causal inference literature: in Section 7, we show that, under assumptions often used to analyze causal effects in observational studies, the coefficient $I(L)$ can correspond exactly to the strength of $T$'s causal effects on $Y$.",
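As an illustration (an assumption, not the authors' evaluation code), I(L) can be estimated by plugging fitted regressions in for the conditional expectations: it is the drop in squared error when a model that already sees C also sees L(T), mirroring the ANOVA identity discussed next (Eq. 2):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def informativeness(L_T, C, Y):
    """L_T: (n, k) lexicon features; C: (n, m) confounds; Y: (n,) targets.
    Linear regression stands in for the optimal predictors E[Y | .]."""
    err_c = np.mean((Y - LinearRegression().fit(C, Y).predict(C)) ** 2)
    CL = np.hstack([C, L_T])
    err_cl = np.mean((Y - LinearRegression().fit(CL, Y).predict(CL)) ** 2)
    return err_c - err_cl  # > 0 means L(T) adds signal beyond the confounds
```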
"Our coefficient I(L) can also be motivated via connections to the causal inference literature: in Section 7, we show that, under assumptions often used to analyze causal effects in observational studies, the coefficient I(L) can correspond exactly to the strength of T's causal effects on Y.", "Finally, note that by expanding out an ANOVA decomposition for Y, we can re-write this criterion as I(L) = E[(Y - E[Y | C])^2] - E[(Y - E[Y | C, L(T)])^2], (2) i.e., I(L) measures the performance improvement L(T) affords to optimal predictive models that already have access to C.", "We use this fact for evaluation in Section 4.", "3 Proposed Algorithms. We continue by describing the pair of novel algorithms we are proposing for deconfounded lexicon induction problems.", "Motivation.", "Our first method is directly motivated by the setup from Section 2.", "Recall that I(L) measures the amount by which L(T) can improve predictions of Y made from the confounders C.", "We accordingly build a neural network architecture that first predicts Y directly from C as well as possible, and then seeks to fine-tune those predictions using T.", "Description.", "First we pass the confounds through a feed-forward neural network (FFNN) to obtain preliminary predictions Y'.", "Figure 1: The Deep Residualization (DR) selector.", "We also encode the text into a continuous vector e ∈ R^d via two alternative mechanisms:", "1. DR+ATTN: the text is converted into a sequence of embeddings and fed into Long Short-Term Memory (LSTM) cell(s) (Hochreiter and Schmidhuber, 1997) followed by an attention mechanism inspired by Bahdanau et al. (2015).", "If the words of a text have been embedded as vectors x_1, x_2, ..., x_n, then e is calculated as a weighted average of hidden states, where the weights are decided by a FFNN whose parameters are shared across timesteps: h_0 = 0, h_t = LSTM(x_t, h_{t-1}), l_t = ReLU(W_attn h_t) · v_attn, p_t = exp(l_t) / Σ_i exp(l_i), e = Σ_i p_i h_i.",
"2. DR+BOW: the text is converted into a vector of word frequencies, which is compressed with a two-layer feedforward neural network (FFNN): t = [freq_1, freq_2, ..., freq_k], h = ReLU(W_hidden t), e = ReLU(W_output h).", "We then concatenate e with Y' and feed the result through another neural network to generate final predictions Ŷ.", "If Y is continuous we compute loss with L_continuous = ||Y - Ŷ||^2, and if Y is categorical we compute loss with L_categorical = -Σ p log(p̂), where p̂ corresponds to the predicted probability of the correct class.", "The errors from Ŷ are propagated through the whole model, but the errors from Y' are only used to train its progenitor (Figure 1).", "Note the similarities between this model and the popular residualizing regression (RR) technique (Jaeger et al., 2009; Baayen et al., 2010, inter alia).", "Both use the text to improve an estimate generated from the confounds.", "RR treats this as two separate regression tasks, by regressing the confounds against the variables of interest, and then using the residuals as features, while our model introduces the capacity for nonlinear interactions by backpropagating between RR's steps.", "Lexicon Induction.", "We elicit lexicons from +ATTN style models by (1) running inference on a test set, but rather than saving those predictions, saving the attentional distribution over each source text, and (2) mapping each word to its average attentional score and selecting the k highest-scoring words.", "For +BOW style models, we take the matrix that compresses the text's word frequency vector, then score each word by computing the l1 norm of the column that multiplies it, with the intuition that important words are dotted with big vectors in order to be a large component of e.",
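As an illustration of the two elicitation rules just described, the sketch below scores a vocabulary (a) from attention distributions saved during inference and (b) from the l1 norms of the columns of a +BOW model's compression matrix; the array shapes and names are our own assumptions, not the authors' released implementation.

# Sketch of lexicon elicitation for +ATTN and +BOW style models.
import numpy as np

def lexicon_from_attention(attn, token_ids, vocab_size, k):
    # attn: list of (len_i,) attention distributions, one per test text
    # token_ids: list of (len_i,) int arrays mapping positions to vocab ids
    totals, counts = np.zeros(vocab_size), np.zeros(vocab_size)
    for p, ids in zip(attn, token_ids):
        np.add.at(totals, ids, p)
        np.add.at(counts, ids, 1)
    avg = totals / np.maximum(counts, 1)   # average attentional score per word
    return np.argsort(-avg)[:k]            # the k highest-scoring word ids

def lexicon_from_bow_weights(w_hidden, k):
    # w_hidden: (d_hidden, vocab_size) matrix compressing word frequencies;
    # important words are multiplied by columns with a large l1 norm
    scores = np.abs(w_hidden).sum(axis=0)
    return np.argsort(-scores)[:k]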
"Motivation.", "We begin by observing that a desirable L can explain Y, but is unrelated to C, which implies it should struggle to predict C.", "The Adversarial Selector draws inspiration from this.", "It learns adversarial encodings of T which are useful for predicting Y, but not useful for predicting C.", "It is depicted in Figure 2.", "Description.", "First, we encode T into e ∈ R^d via the same mechanisms as the Deep Residualizer of Section 3.1.", "e is then passed to a series of FFNNs (prediction heads) which are trained to predict each target and confound with the same loss functions as those of Section 3.1.", "As gradients back-propagate from the confound prediction heads to the encoder, we pass them through a gradient reversal layer in the style of Ganin et al. (2016) and Britz et al. (2017), which multiplies gradients by -1.", "If the cumulative loss of the target variables is L_t and that of the confounds is L_c, then the loss which is implicitly used to train the encoder is L_e = L_t - L_c, thereby encouraging the encoder to learn representations of the text which are not useful for predicting the confounds.", "Lexicons are elicited from this model via the same mechanism as the Deep Residualizer of Section 3.1.",
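The gradient reversal layer is a standard construction: identity in the forward pass, negation in the backward pass. Below is a minimal PyTorch sketch (our own choice of framework for illustration; the paper's models were implemented in TensorFlow).

# Minimal gradient reversal layer: identity forward, negated gradient backward.
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)      # identity in the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output      # multiply incoming gradients by -1

def grad_reverse(x):
    return GradReverse.apply(x)

# Usage: confound heads consume grad_reverse(e) while target heads consume e,
# so minimizing L_t + L_c implicitly trains the encoder on L_e = L_t - L_c.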
"We evaluate the approaches described in Sections 3 and 5 by generating and evaluating deconfounded lexicons in three domains: financial complaints, e-commerce product descriptions, and course descriptions.", "In each case the goal is to find words which can always help someone net a positive outcome (fulfillment, sales, enrollment), regardless of their situation.", "This involves finding words associated with narrative persuasion: predictive of human decisions or preferences but decorrelated from non-linguistic information which could also explain things.", "We analyze the resulting lexicons, especially with respect to the classic Aristotelian modes of persuasion: logos, pathos, and ethos.", "We compare the following algorithms: Regression (R), Regression with Confound features (RC), Mixed effects Regression (M), Residualizing Regressions (RR), Log-Odds Ratio (OR), Mutual Information (MI), and MI/OR with regression (R+MI and R+OR).", "See Section 5 for a discussion of these baselines, and the online supplementary information for implementation details.", "We also compare the proposed algorithms: Deep Residualization using word frequencies (DR+BOW) and embeddings (DR+ATTN), and Adversarial Selection using word frequencies (A+BOW) and embeddings (A+ATTN).", "In Section 2 we observed that I(L) measures the improvement in predictive power that L(T) affords a model already having access to C.", "Thus, we evaluate each algorithm by (1) regressing C on Y, (2) drawing a lexicon L, (3) regressing C + L(T) on Y, and (4) measuring the size of the gap in test prediction error between the models of steps (1) and (3).", "For classification problems, we measured error with cross-entropy (XE): XE = -Σ_i p_i log(p̂_i), with performance = XE_C - XE_{L(T),C}; for regression, we computed the mean squared error (MSE): MSE = (1/n) Σ_i (Y_i - Ŷ_i)^2, with performance = MSE_C - MSE_{L(T),C}.", "Because we fix lexicon size but vary lexicon content, lexicons with good words will score highly under this metric, yielding large performance improvements when combined with C.", "We also report the average strength of association between words in L and C.", "For categorical confounds, we measure Cramér's V (Cramer, 2016), and for continuous confounds, we use the point-biserial correlation coefficient (r_pb) (Glass and Hopkins, 1970).", "Note that r_pb is mathematically equivalent to Pearson correlation in bivariate settings.", "Here the best lexicons will score the lowest.", "We implemented neural models with the TensorFlow framework (Abadi et al., 2016) and optimized using Adam (Kingma and Ba, 2014).", "We implemented linear models with the scikit-learn package (Pedregosa et al., 2011).", "We implemented mixed models with the lme4 R package (Bates et al., 2014).", "We refer to the online supplementary materials for per-experiment hyperparameters.", "For each dataset, we constructed vocabularies from the 10,000 most frequently occurring tokens, and randomly selected 2,000 examples for evaluation.", "We then conducted a wide hyperparameter search and used lexicon performance on the evaluation set to select final model parameters.", "We then used these parameters to induce lexicons from 500 random train/test splits.", "Significance is estimated with a bootstrap procedure: we counted the number of trials each algorithm won (i.e. had the largest error_C - error_{L(T),C}).", "We also report the average performance and correlation of all the lexicons generated from each split.", "We ran these experiments using lexicon sizes of k = 50, 150, 250, and 500 and observed similar behavior.", "The results reported in the following sections are for k = 150, and the words in Tables 1, 2, and 3 are from randomly selected lexicons (other lexicons had similar characteristics).", "Setup.", "We consider 189,486 financial complaints publicly filed with the Consumer Financial Protection Bureau (CFPB); these data can be obtained from https://www.consumerfinance.gov/data-research/consumer-complaints/.", "The CFPB is a product of Dodd-Frank legislation which solicits and addresses complaints from consumers regarding a variety of financial products: mortgages, credit reports, etc.", "Some submissions are handled on a timely basis (< 15 days) while others languish.", "We are interested in identifying salient words which help push submissions through the bureaucracy and obtain timely responses, regardless of the specific nature of the complaint.", "Thus, our target variable is a binary indicator of whether the complaint obtained a timely response.", "Our confounds are twofold: (1) a categorical variable tracking the type of issue (131 categories), and (2) a categorical variable tracking the financial product (18 categories).", "For the proposed DR+BOW, DR+ATTN, A+BOW, and A+ATTN models, we set |e| to 1, 64, 1, and 256, respectively.", "Results.", "In general, this seems to be a tractable classification problem, and the confounds alone are moderately predictive of timely response (XE_C = 1.06).", "The proposed methods appear to perform the best, and DR+BOW achieved the largest performance/correlation ratio (Figure 3).", "We obtain further evidence upon examining the lexicons selected by four representative algorithms: proposed (DR+BOW), a well-performing baseline (RR), and two naive baselines (R, MI) (Table 1).", "MI's words appear unrelated to the confounds, but don't seem very persuasive, and our results corroborate this: these words failed to add predictive power over the confounds (Figure 3).", "On the opposite end of the spectrum, R's words appear somewhat predictive of the timely response, but are confound-related: they include the FDCPA (Fair Debt Collection Practices Act) and HIPAA (Health Insurance Portability and Accountability Act), which are directly related to the confound of financial product.", "The top-scoring words in RR's lexicon include numbers (6, 150.00) and words that suggest that the issue is ongoing (being, starting).", "On the other hand, the words of DR+BOW draw on the rhetorical devices of ethos by respecting the reader's authority (ma'am, honor), and logos by suggesting that the writer has been proactive about solving the issue (multiple, submitted, xx/xx/xxx, ago).", "These are narrative qualities that align with two of the persuasion literature's weapons of influence: reciprocation and commitment (Kenrick et al., 2005).", "Several algorithms implicitly favored longer (presumably more detailed) complaints by selecting common punctuation.",
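The evaluation protocol above is easy to state in code; below is a sketch for a classification target using scikit-learn (a paraphrase of the procedure, with model choices and names that are our own assumptions).

# Sketch of the lexicon evaluation gap: error(C) - error(C + L(T)).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

def lexicon_performance(c_feats, lt_feats, y, train, test):
    # (1) predict Y from the confounds alone
    base = LogisticRegression(max_iter=1000).fit(c_feats[train], y[train])
    xe_c = log_loss(y[test], base.predict_proba(c_feats[test]))
    # (3) predict Y from confounds plus lexicon word counts L(T)
    both = np.hstack([c_feats, lt_feats])
    full = LogisticRegression(max_iter=1000).fit(both[train], y[train])
    xe_lc = log_loss(y[test], full.predict_proba(both[test]))
    # (4) the gap: larger is better, since good lexicons add power beyond C
    return xe_c - xe_lc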
"Setup.", "We consider 141,753 undergraduate and graduate course offerings over a 6-year period (2010-2016) at Stanford University.", "We are interested in how the writing style of a description convinces students to enroll.", "We therefore choose log(enrollment) as our target variable and control for non-linguistic information which students also use when making enrollment decisions: course subject (227 categories), course level (26), number of requirements satisfied (7), whether there is a final (3), the start time, and the combination of days the class meets (26).", "All except start time are modeled as categorical variables.", "For the proposed DR+BOW, DR+ATTN, A+BOW, and A+ATTN models, we set |e| to 1, 100, 16, and 64, respectively.", "This appears to be a tractable regression problem; the confounds alone are highly predictive of course enrollment (MSE_C = 3.67) (Figure 4).", "Table 2: the ten highest-scoring words in lexicons generated by Adversarial + ATTN (A+ATTN), Regression (R), and Log-Odds Ratio (OR); A+ATTN: future, instructor, eating, or, doing, guest, sexual, culture, research, project; R: programming, required, prerequisites, computer, management, introduction, chemical, applications, you, clinical; OR: summer, interpretation, stability, attitude, optimization, completion, during, labor, production, background.", "A+ATTN performed the best, and in general, the proposed techniques produced the most-predictive and least-correlated lexicons.", "Interestingly, Residualization (RR) and Regression with Confounds (RC) appear to outperform the Deep Residualization selector.", "In Table 2 we observe stark differences between the highest-scoring words of a proposed technique (A+ATTN) and two baselines with opposing characteristics (R, OR).", "Words chosen via Regression (R) appear predictive of enrollment, but also related to the confounds of subject (programming, computer, management, chemical, clinical) and level (required, prerequisites, introduction).", "Log-Odds Ratio (OR) selected words which appear unrelated to both the confounds and enrollment.", "The Adversarial Selector (A+ATTN) selected words which are both confound-decorrelated and predictive of enrollment.", "Its words appeal to the concept of variety (or, guest), and to pathos, in the form of universal student interests (future, eating, sexual).", "Notably, the A+ATTN words are also shorter (mean length of 6.2) than those of R (9.3) and OR (9.0), which coincides with intuition (students often skim descriptions) and prior research (short words are known to be more persuasive in some settings (Pratkanis et al., 1988)).", "The lexicon also suggests that students prefer courses with research project components (research, project).", "Setup.", "We consider 59,487 health product listings on the Japanese e-commerce website Rakuten (these data can be obtained from https://rit.).", "These data originate from a December 2012 snapshot of the Rakuten marketplace.", "They were tokenized with the JUMAN morphological analyzer (Kurohashi and Nagao, 1999).", "We are interested in identifying words which advertisers could use to increase their sales, regardless of the nature of the product.", "Therefore, we set log(sales) as our target
variable, and control for an item's price (continuous) and seller (207 categories).", "The category of an item (i.e. toothbrush vs. supplement) is not included in these data.", "In practice, sellers specialize in particular product types, so this may be indirectly accounted for.", "For the proposed DR+BOW, DR+ATTN, A+BOW, and A+ATTN models, we set |e| to 4, ...", "Results.", "This appears to be a more difficult prediction task, and the confounds are only slightly predictive of sales (MSE_C = 116.34) (Figure 5).", "Again, lexicons obtained via the proposed methods were the most successful, achieving the highest performance with the lowest correlation (Table 3).", "Table 3: the ten highest-scoring words in lexicons generated by Adversarial Selection + BOW (A+BOW) and Residualization (RR), with transliterations and glosses including masu (polite suffix), purotein (protein), oh (polite prefix), nichiban (adhesive company), tsubu (grain), eiyo (nutrition), gun (group), go (polite prefix), saizu (size), haigo (formulation), sesshu (intake), dezato (dessert), mai (sheet), jo (tablet), kagaku (chemical), daizu (soy), and mini (mini).", "When comparing the words selected by A+BOW (proposed) and RR (widely used and well performing), we find that both draw on the rhetorical element of logos and demonstrate informativeness (nutrition, size, etc.).", "A+BOW also draws on ethos by identifying word stems associated with politeness.", "This quality draws on the authority of shared cultural values, and has been shown to appeal to Japanese shoppers (Pryzant et al., 2017).", "On the other hand, RR selected several numbers and failed to avoid brand indicators: nichiban, a large company which specializes in medical adhesives, is one of the highest-scoring words.", "This work relates to two bodies of prior literature, lexicon induction and causal inference, which we draw on.", "We address these in turn.", "Lexicon induction.", "Some work in lexicon induction is intended to help interpret the subjective properties of a text or make machine learning models more interpretable, i.e. so that practitioners can know why their system works.", "For example, Taboada et al. (2011); Hamilton et al. (2016) induce sentiment lexicons, and Mohammad and Turney (2010); Hu et al. (2009) induce emotion lexicons.", "Practitioners often get these words by considering the high-scoring features of regressions trained to predict an outcome (McFarland et al., 2013; Chahuneau et al., 2012; Ranganath et al., 2013; Kang et al., 2013).", "They account for confounds through manual inspection, residualizing (Jaeger et al., 2009; Baayen et al., 2010), hierarchical modeling (Bates, 2010; Gustarini, 2016; Schillebeeckx et al., 2016), log-odds (Szumilas, 2010; Monroe et al., 2008), mutual information (Berg, 2004), or matching (Tan et al., 2014; DiNardo, 2010).", "Many of these methods are manual processes or have known limitations, mostly due to multicollinearity (Imai and Kim, 2016; Chatelain and Ralf, 2012; Wurm and Fisicaro, 2014).", "Furthermore, these methods have not been tested in a comparative setting: this work is the first to offer an experimental analysis of their abilities.", "Causal inference.", "Our methods for lexicon induction have connections to recent advances in the causal inference literature.", "In particular, Johansson et al. (2016) and Shalit et al. (2016) propose an algorithm for counterfactual inference which bears similarities to our Adversarial Selector (Section 3.2), Imai et al. (2013) advocate a lasso-based method related to our Deep Residualization (DR) method (Section 3.1), and Egami et al.
(2017) explore how to make causal inferences from text through careful data splitting.", "Unlike us, these papers are largely unconcerned with the underlying features and algorithmic interpretability.", "Athey (2017) has a recent survey of machine learning problems where causal modeling is important.", "Our experiments also relate to the mechanism of persuasion, which has been widely studied.", "Most of this prior work uses lexical, syntactic, discourse, and dialog interactive features (Stab and Gurevych, 2014; Habernal and Gurevych, 2016; Wei et al., 2016), power dynamics (Rosenthal and Mckeown, 2017; Moore, 2012), or diction (Wei et al., 2016) to study discourse persuasion as manifested in argument.", "We study narrative persuasion as manifested in everyday decisions.", "This important mode of persuasion is understudied because researchers have struggled to isolate the active ingredient of persuasive narratives (Green, 2008; De Graaf et al., 2012), a problem that the formal framework of deconfounded lexicon induction (Section 2) may help alleviate.", "Computational social scientists frequently develop algorithms to find words that are related to some information but not other information.", "We encoded this problem into a formal task, proposed two novel methods for it, and conducted the first principled comparison of algorithms in the space.", "Our results suggest the proposed algorithms offer better performance than those which are currently in use.", "Upon linguistic analysis, we also find the proposed algorithms' words better reflect the classic Aristotelian modes of persuasion: logos, pathos, and ethos.", "This is a promising new direction for NLP research, one that we hope will help computational (and non-computational!) social scientists better interpret linguistic variables and their relation to outcomes.", "There are many directions for future work.", "This includes algorithmic innovation, theoretical bounds for performance, and investigating rich social questions with these powerful new techniques.", "Here, we discuss how, under standard (albeit strong) assumptions that are often made to identify causal effects in observational studies, we can interpret I(L) with L(T) = T as a measure of the strength of the text's causal effect on Y.", "Following the potential outcomes model of Rubin (1974), we start by imagining potential outcomes Y(t) corresponding to the outcome we would have observed given text t, for any possible text t ∈ T; then we actually observe Y = Y(T).", "With this formalism, the causal effect of the text is clear, e.g., the effect of using text t' versus t is simply Y(t') - Y(t).", "Suppose that T, our observed text, takes on values in T with a distribution that depends on C.", "Let's also assume that the observed text T is independent of the potential outcomes {Y(t)}_{t ∈ T}, conditioned on the confounders C (Rosenbaum and Rubin, 1983).", "So we know what would happen with any given text, but don't yet know which text will get selected (because T is a random variable).", "Now if we fix C and there is any variance remaining in Y(T) (i.e.
E[Var[Y(T) | C, {Y(t)}_{t ∈ T}]] > 0), then the text has a causal effect on Y.", "Now we assume that Y(t) = f_C(t) + ε, meaning that the difference in effects of one text t relative to another text t' is always the same given fixed confounders.", "For example, in a bag of words model, this would imply that switching from using the word eating versus homework in a course description would always have the same impact on enrollment (conditionally on confounders).", "With this assumption in hand, the causal effect of T, E[Var[Y(T) | C, {Y(t)}_{t ∈ T}]], matches I(L) as described in equation (1) (Imbens and Rubin, 2015).", "In other words, given the same assumptions often made in observational studies, the informativeness coefficient of the full, uncompressed text in fact corresponds to the amount of variation in Y due to the causal effects of T.", "We gratefully acknowledge support from NSF Award IIS-1514268.", "We thank Youngjoo Chung for her invaluable assistance, advice, and the Rakuten data.", "We also thank Will Hamilton for his advice and direction while writing." ]
[ "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "method", "objective", "abstain", "abstain", "abstain", "method", "other", "other", "abstain", "abstain", "objective", "method", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other", "other", "other", "other", "other", "other", "objective", "other", "abstain", "other", "abstain", "other", "other", "other", "method", "other", "abstain", "objective", "objective", "objective", "objective", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "other", "other" ]
[ "Transition-based parsers implemented with Pointer Networks have become the new state of the art in dependency parsing, excelling in producing labelled syntactic trees and outperforming graph-based models in this task.", "In order to further test the capabilities of these powerful neural networks on a harder NLP problem, we propose a transition system that, thanks to Pointer Networks, can straightforwardly produce labelled directed acyclic graphs and perform semantic dependency parsing.", "In addition, we enhance our approach with deep contextualized word embeddings extracted from BERT.", "The resulting system not only outperforms all existing transition-based models, but also matches the best fully-supervised accuracy to date on the SemEval 2015 Task 18 English datasets among previous state-of-the-art graph-based parsers.", "In dependency parsing , the syntactic structure of a sentence is represented by means of a labelled tree, where each word is forced to be attached exclusively to another that acts as its head.", "In contrast, semantic dependency parsing (SDP) (Oepen et al., 2014) aims to represent binary predicate-argument relations between words of a sentence, which requires producing a labelled directed acyclic graph (DAG): not only semantic predicates can have multiple or zero arguments, but words from the sentence can be attached as arguments to more than one head word (predicate), or they can be outside the SDP graph (being neither a predicate nor an argument) as shown in the examples in Figure 1.", "Since existing dependency parsers cannot be directly applied, most SDP research has focused on adapting them to deal with the absence of single-head and connectedness constraints and to produce an SDP graph instead.", "As in dependency parsing, we can find two main families of approaches to efficiently generate accurate SDP graphs.", "On the one hand, graph-based algorithms have drawn more attention since adapting them to this task is relatively straightforward.", "In particular, these globally optimized methods independently score arcs (or sets of them) and then search for a high-scoring graph by combining these scores.", "From one of the first graph-based DAG parsers proposed by McDonald and Pereira (2006) to the current state-of-the-art models (Wang et al., 2019; He and Choi, 2019), different graph-based SDP approaches have been presented, providing accuracies above their main competitors: transition-based DAG algorithms.", "A transition-based parser generates a sequence of actions to incrementally build a valid graph (usu-ally from left to right).", "This is typically done by local, greedy prediction and can efficiently parse a sentence in a linear or quadratic number of actions (transitions); however, the lack of global inference makes them more prone to suffer from error propagation: i.e. , since transitions are sequentially and locally predicted, an erroneous action can affect future predictions, having a significant impact in long sentences and being, to date, less appealing for SDP.", "In fact, in recent years only a few contributions, such as the system developed by Wang et al. 
(2018), present a purely transition-based SDP parser.", "It is more common to find hybrid systems that combine transition-based approaches with graph-based techniques to alleviate the impact of error propagation on accuracy (Du et al., 2015), but this penalizes the efficiency provided by transition-based algorithms.", "Away from the current mainstream, we present a purely transition-based parser that directly generates SDP graphs without the need of any additional techniques.", "We rely on Pointer Networks (Vinyals et al., 2015) to predict transitions that can attach multiple heads to the same word and incrementally build a labelled DAG.", "This kind of neural network provides an encoder-decoder architecture that is capable of capturing information from the whole sentence and previously created arcs, alleviating the impact of error propagation and already showing remarkable results in transition-based dependency parsing (Ma et al., 2018; Fernández-González and Gómez-Rodríguez, 2019).", "We further enhance our neural network with deep contextualized word embeddings extracted from the pre-trained language model BERT (Devlin et al., 2019).", "The proposed SDP parser (source code available at https://github.com/danifg/SemanticPointer) can process sentences in SDP treebanks (where structures are sparse DAGs with a low in-degree) in O(n^2 log n) time, or O(n^2) without cycle detection.", "This is more efficient than the current fully-supervised state-of-the-art system by Wang et al. (2019) (O(n^3) without cycle detection), while matching its accuracy on the SemEval 2015 Task 18 datasets (Oepen et al., 2015).", "In addition, we also prove that our novel transition-based model provides promising accuracies in the semi-supervised scenario, achieving some state-of-the-art results.", "An early approach to DAG parsing was implemented as a modification to a graph-based parser by McDonald and Pereira (2006).", "This produced DAGs using approximate inference by first finding a dependency tree, and then adding extra edges that would increase the graph's overall score.", "A few years later, this attempt was outperformed by the first transition-based DAG parser by Sagae and Tsujii (2008).", "They extended the existing transition system by Nivre (2003) to allow multiple heads per token.", "The resulting algorithm was not able to produce DAGs with crossing dependencies, requiring the pseudo-projective transformation by Nivre and Nilsson (2005) (plus a cycle removal procedure) as a post-processing stage.", "More recently, there has been a predominance of purely graph-based DAG models since the SemEval 2015 Task 18 (Oepen et al., 2015).", "Almeida and Martins (2015) adapted the pre-deep-learning dependency parser by Martins et al. (2013) to produce SDP graphs.", "This graph-based parser encodes higher-order information with hand-crafted features and employs the AD3 algorithm (Martins et al., 2011) to find valid DAGs during decoding.", "This was extended by Peng et al. (2017) with BiLSTM-based feature extraction and multitask learning: the three formalisms considered in the shared task were jointly learned to improve final accuracy.", "After the success of Dozat et al.
(2017) in graph-based dependency parsing, Dozat and Manning (2018) proposed minor adaptations to use this biaffine neural architecture to produce SDP graphs.", "To that end, they removed the maximum spanning tree algorithm (Chu and Liu, 1965; Edmonds, 1967) necessary for decoding well-formed dependency trees and simply kept those edges with a positive score.", "In addition, they trained the unlabelled parser with a sigmoid cross-entropy (instead of the original softmax one) in order to accept multiple heads.", "The parser by Dozat and Manning (2018) was recently improved by two contributions.", "Firstly, Wang et al. (2019) manage to add second-order information for score computation and then apply either mean field variational inference or loopy belief propagation to decode the highest-scoring SDP graph.", "While significantly boosting parsing accuracy, the original O(n^2) runtime complexity is modified to O(n^3) in the resulting SDP system.", "Secondly, He and Choi (2019) significantly improve the original parser's accuracy by not only using contextualized word embeddings extracted from BERT (Devlin et al., 2019), but also introducing contextual string embeddings (called Flair) (Akbik et al., 2018), which consist of a novel type of word vector representation based on character-level language modeling.", "Both extensions, (Wang et al., 2019) and (He and Choi, 2019), are currently the state of the art on the SemEval 2015 Task 18 in the fully-supervised and semi-supervised scenarios, respectively.", "Kurita and Søgaard (2019) have also recently proposed a complex approach that iteratively applies the syntactic dependency parser by Zhang et al. (2017), sequentially building a DAG structure.", "At each iteration, the graph-based parser selects the highest-scoring arcs, keeping the single-head constraint.", "The process ends when no arcs are added in the last iteration.", "The combination of partial parses results in an SDP graph.", "Since the graph is built in a sequential process, they use reinforcement learning to guide the model through more optimal paths.", "Following Peng et al. (2017), multi-task learning is also added to boost final accuracy.", "On the other hand, the use of transition-based algorithms in the SDP task had been less explored until very recently.", "Du et al. (2015) presented a voting-based ensemble of fourteen graph- and transition-based parsers.", "In their work, they noticed that individual graph-based models outperform transition-based algorithms, assigning, during voting, higher weights to them.", "Among the transition systems used, we can find the one developed by Titov et al. (2009), which is not able to cover all SDP graphs.", "We have to wait until the work by Wang et al.
(2018) to see that a purely transition-based SDP parser (enhanced with a simple model ensemble technique) can achieve competitive results.", "They simply modified the preconditions of the complex transition system by Choi and McCallum (2013) to produce unrestricted DAG structures.", "In addition, their system was implemented by means of stack-LSTMs (Dyer et al., 2015), enhanced with BiLSTMs and Tree-LSTMs for feature extraction.", "We are, to the best of our knowledge, the first to explore DAG parsing with Pointer Networks, proposing a purely transition-based algorithm that can be a competitive alternative to graph-based SDP models.", "Finally, during the reviewing process of this work, the proceedings of the CoNLL 2019 shared task (Oepen et al., 2019) were released.", "In that event, SDP parsers were evaluated on updated versions of the SemEval 2015 Task 18 datasets, as well as on datasets in other semantic formalisms such as Abstract Meaning Representation (AMR) (Banarescu et al., 2013) and Universal Conceptual Cognitive Annotation (UCCA) (Abend and Rappoport, 2013).", "Although graph-based parsers achieved better accuracy in the SDP track, several BERT-enhanced transition-based approaches were proposed.", "Among them we can find an extension (Che et al., 2019) of the system by Wang et al. (2018), several adaptations for SDP (Hershcovich and Arviv, 2019; Bai and Zhao, 2019) of the transition-based UCCA parser by Hershcovich et al. (2017), as well as an SDP variant (Lai et al., 2019) of the constituent transition system introduced by Fernández-González and Gómez-Rodríguez (2019).", "Also in parallel to the development of this research, Zhang et al. (2019) proposed a transition-based parser that, while it can be applied to SDP, was specifically designed for AMR and UCCA parsing (where graph nodes do not correspond with words and must be generated during the parsing process).", "In particular, this approach incrementally builds a graph by predicting at each step a semantic relation composed of the target and source nodes plus the arc label.", "While this can be seen as an extension of our approach for those tasks where nodes must be generated, its complexity penalizes accuracy in the SDP task.", "We design a novel transition system that is able to straightforwardly attach multiple heads to each word in a single pass, incrementally building, from left to right, a valid SDP graph: a labelled DAG.", "To implement it, we use Pointer Networks (Vinyals et al., 2015).", "These neural networks are able to learn the conditional probability of a sequence of discrete numbers that correspond to positions in an input sequence and, at decoding time, perform as a pointer that selects a position from the input.", "In other words, we can train this neural network to, given a word, point to the position of the sentence where its head (Fernández-González and Gómez-Rodríguez, 2019) or dependent words (Ma et al., 2018) are located, depending on what interpretation we use during training.", "In particular, (Fernández-González and Gómez-Rodríguez, 2019) proved to be more suitable for dependency parsing than (Ma et al., 2018), since it requires half as many steps to produce the same dependency parse, being not only faster, but also more accurate (as this mitigates the impact of error propagation).", "Inspired by Fernández-González and Gómez-Rodríguez (2019), we train a Pointer Network to point to the head of a given word and propose an algorithm that does not use any kind of data structures (stack or buffer, required in
classic transition-based parsers (Nivre, 2008)), but just a focus word pointer i for marking the word currently being processed.", "More in detail, given an input sentence of n words w_1, ..., w_n, the parsing process starts with i pointing at the first word w_1.", "At each time step, the current focus word w_i is used by the Pointer Network to return a position p from the input sentence (or 0, where the ROOT node is located).", "This information is used to choose between the two available transitions: if p ≠ i, then the pointed word w_p is considered as a semantic head word (predicate) of w_i and an Attach-p transition is applied, creating the directed arc w_p → w_i.", "The Attach-p transition is only permissible if the resulting predicate-argument arc neither exists nor generates a cycle in the already-built graph, in order to output a valid DAG.", "On the contrary, if p = i (i.e., the model points to the current focus word), then w_i is considered to have found all its head words, and a Shift transition is chosen to move i one position to the right to process the next word w_{i+1}.", "The parsing ends when the last word from the sentence is shifted, meaning that the input is completely processed.", "As stated by Ma et al. (2018) for attaching dependent words, it is necessary to fix the order in which (in our case, head) words are assigned in order to define a deterministic decoding.", "As the sentence is parsed in a left-to-right manner, we adopt the same order for head assignments.", "For instance, the SDP graph in Figure 1(a) is produced by the transition sequence described in Table 1.", "Table 1 (columns: focus word w_i, pointed position p, transition, added arc): The_1, p=1, Shift; results_2, p=1, Attach-1, arc The_1 → results_2; results_2, p=4, Attach-4, arc in_4 → results_2; results_2, p=2, Shift; were_3, p=3, Shift; in_4, p=0, Attach-0, arc ROOT_0 → in_4; in_4, p=6, Attach-6, arc with_6 → in_4; in_4, p=4, Shift; line_5, p=4, Attach-4, arc in_4 → line_5; line_5, p=5, Shift; with_6, p=6, Shift; analysts_7, p=7, Shift; '_8, p=8, Shift; expectations_9, p=6, Attach-6, arc with_6 → expectations_9; expectations_9, p=7, Attach-7, arc analysts_7 → expectations_9; expectations_9, p=9, Shift.", "We just need n Shift transitions to move the focus word pointer through the whole sentence and m Attach-p transitions to create the m arcs present in the SDP graph.", "It is worth mentioning that we manage to significantly reduce the amount of transitions necessary for generating DAGs in comparison to those proposed in the complex transition systems by Choi and McCallum (2013) and Titov et al. (2009), used in the SDP systems by Wang et al. (2018) and Du et al. (2015), respectively.", "In addition, the described multi-head transition system is able to directly produce any DAG structure without exception, while some transition systems, such as the mentioned (Sagae and Tsujii, 2008; Titov et al., 2009), are limited to a subset of DAGs.", "Finally, while the outcome of the proposed transition system is an SDP graph without cycles, in other research, such as (Kurita and Søgaard, 2019) and the state-of-the-art models by Dozat and Manning (2018) and Wang et al. (2019), the parser is not forced to produce well-formed DAGs, allowing the presence of cycles.",
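The decoding loop implied by this transition system is short; below is a minimal sketch (our own illustration: ranked_positions stands for the network's pointer scores sorted best-first, and arc_exists/creates_cycle for the validity checks, none of which are the authors' released code).

# Sketch of the multi-head transition system: Shift / Attach-p decoding.
def decode(n, ranked_positions, arc_exists, creates_cycle):
    # ranked_positions(i, arcs) yields input positions 0..n sorted by score
    arcs = []
    i = 1                                    # focus word (position 0 is ROOT)
    while i <= n:
        for p in ranked_positions(i, arcs):
            if p == i:                       # Shift: w_i has all of its heads
                i += 1
                break
            if not arc_exists(arcs, p, i) and not creates_cycle(arcs, p, i):
                arcs.append((p, i))          # Attach-p: create arc w_p -> w_i
                break
            # otherwise the action is forbidden: try the next-best position
    return arcs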
"Vinyals et al. (2015) introduced an encoder-decoder architecture, called Pointer Network, that uses a mechanism of neural attention (Bahdanau et al., 2014) to select positions from the input sequence, without requiring a fixed size of the output dictionary.", "This allows Pointer Networks to easily address those problems where the target classes considered at each step are variable and depend on the length of the input sequence.", "We prove that implementing the transition system previously defined on this neural network results in an accurate SDP system.", "We follow previous work in dependency parsing (Ma et al., 2018; Fernández-González and Gómez-Rodríguez, 2019) to design our neural architecture.", "Encoder.", "A BiLSTM-CNN architecture (Ma and Hovy, 2016) is used to encode the input sentence w_1, ..., w_n, word by word, into a sequence of encoder hidden states h_1, ..., h_n.", "CNNs with max pooling are used for extracting character-level representations of words and, then, each word w_i is represented by the concatenation of character (e_i^c), word (e_i^w), lemma (e_i^l) and POS tag (e_i^p) embeddings: x_i = e_i^c ⊕ e_i^w ⊕ e_i^l ⊕ e_i^p.", "After that, the x_i of each word w_i is fed one-by-one into a BiLSTM that captures context information in both directions and generates a vector representation h_i = h_i^l ⊕ h_i^r = BiLSTM(x_i).", "In addition, a special vector representation h_0, denoting the ROOT node, is prepended at the beginning of the sequence of encoder hidden states.", "Decoder.", "An LSTM is used to output, at each time step t, a decoder hidden state s_t.", "As input of the decoder, we use the encoder hidden state h_i of the current focus word w_i plus extra high-order features.", "In particular, we take into account the hidden state of the last head word (h_h) attached to w_i, which will be a co-parent of a future predicate assigned to w_i.", "Following Ma et al. (2018), we use element-wise sum to add this information without increasing the dimensionality of the input: r_i = h_i + h_h; s_t = LSTM(r_i).",
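A compact sketch of how these inputs are assembled (PyTorch is our own choice for illustration and the shapes are assumptions, not the authors' codebase):

# Sketch of the encoder word representation and the decoder input.
import torch

def word_representation(e_char, e_word, e_lemma, e_pos):
    # x_i: concatenation of character, word, lemma and POS tag embeddings
    return torch.cat([e_char, e_word, e_lemma, e_pos], dim=-1)

def decoder_input(h, i, last_head):
    # r_i = h_i + h_h: element-wise sum with the last attached head's state,
    # which adds the feature without increasing the input dimensionality
    r_i = h[i]
    if last_head is not None:
        r_i = r_i + h[last_head]
    return r_i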
"Note that feature information like this can be easily added in transition-based models without increasing the parser's runtime complexity, something that does not happen in graph-based models, where, for instance, the second-order features added by Wang et al. (2019) penalize runtime complexity.", "We experimented with other high-order features such as grandparent or sibling information of the current focus word w_i, but no significant improvements were obtained from their addition, so they were discarded for simplicity.", "Further feature exploration might improve parser performance, but we leave this for future work.", "Once s_t is generated, the attention vector a_t, which will work as a pointer over the input, must be computed in the pointer layer.", "First, following the previously cited work, the scores between s_t and each encoder hidden representation h_j from the input sentence are computed using this biaffine attention scoring function (Dozat and Manning, 2017): v_tj = score(s_t, h_j) = f1(s_t)^T W f2(h_j) + U^T f1(s_t) + V^T f2(h_j) + b, where parameter W is the weight matrix of the bilinear term, U and V are the weight tensors of the linear terms and b is the bias vector.", "In addition, f1(·) and f2(·) are two single-layer multilayer perceptrons (MLP) with ELU activation, proposed by (Dozat and Manning, 2017) for reducing dimensionality and minimizing overfitting.", "Then, a softmax is applied on the resulting score vector v_t to compute a probability distribution over the input words: a_t = softmax(v_t).", "The resulting attention vector a_t can now be used as a pointer to select the highest-scoring position p from the input.", "This information will be employed by the transition system to choose between the two available actions and create a predicate-argument relation between w_p and w_i (Attach-p) or move the focus word pointer to w_{i+1} (Shift).", "In case the chosen Attach-p is forbidden due to the acyclicity constraint, the next highest-scoring position in a_t is considered as output instead.", "Figure 2 depicts the neural architecture and the decoding procedure for the SDP structure in Figure 1(a).", "Figure 2: Neural network architecture and decoding steps to partially parse the SDP graph in Figure 1.", "Label prediction.", "We jointly train a multi-class classifier that scores every label for each pair of words.", "This shares the same encoder and uses the same biaffine attention function as the pointer: s_tp^l = score(s_t, h_p, l) = g1(s_t)^T W_l g2(h_p) + U_l^T g1(s_t) + V_l^T g2(h_p) + b_l, where a distinct weight matrix W_l, weight tensors U_l and V_l and bias b_l are used for each label l, with l ∈ {1, 2, ..., L} and L the number of labels.", "In addition, g1(·) and g2(·) are two single-layer MLPs with ELU activation.", "The scoring function is applied over each predicted arc between the dependent word w_i (represented by s_t) and the pointed head word w_p in position p (represented by h_p) to compute the score of each possible label and assign the highest-scoring one.",
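A sketch of the biaffine pointer scoring as code (a direct paraphrase of the formula above with assumed tensor shapes, not the released implementation):

# Sketch of biaffine attention between decoder state s_t and encoder states.
import torch

def biaffine_pointer(s_t, H, W, U, V, b, f1, f2):
    # s_t: (d,) decoder state; H: (n, d) encoder states h_0..h_n
    # f1, f2: single-layer MLPs with ELU that reduce dimensionality to d'
    q = f1(s_t)                      # (d',)
    K = f2(H)                        # (n, d')
    bilinear = K @ (W @ q)           # f1(s_t)^T W f2(h_j), for every j
    linear = K @ V + (U @ q)         # V^T f2(h_j) + U^T f1(s_t)
    v_t = bilinear + linear + b      # (n,) scores over input positions
    return torch.softmax(v_t, dim=0) # a_t: the pointer distribution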
"Training Objectives.", "The Pointer Network is trained to minimize the negative log likelihood (implemented as cross-entropy loss) of producing the correct SDP graph y for a given sentence x: P(y | x).", "Let y be a DAG for an input sentence x that is decomposed into a set of m directed arcs a_1, ..., a_m following a left-to-right order.", "This probability can be factorized as follows: P(y | x) = Π_{k=1}^{m} P(a_k | a_{<k}, x), where a_{<k} denotes previously predicted arcs.", "On the other hand, the labeler is trained with softmax cross-entropy to minimize the negative log likelihood of assigning the correct label l, given a dependency arc with head word w_h and dependent word w_i.", "The whole neural model is jointly trained by summing the parser and labeler losses prior to computing the gradients.", "In that way, model parameters are learned to minimize the sum of the cross-entropy loss objectives over the whole corpus.", "In order to further improve the accuracy of our approach, we augment our model with deep contextualized word embeddings provided by the widely-used pre-trained language model BERT (Devlin et al., 2019).", "Instead of including and training the whole BERT model as encoder of our system, we follow the common, greener and more cost-effective approach of leveraging the potential of BERT by extracting the weights of one or several layers as word-level embeddings.", "To that end, the pre-trained uncased BERT-Base model is used.", "Since BERT is trained on subwords (i.e., substrings of the original token), we take the 768-dimension vector of each subword of an input token and use the average embedding as the final representation e_i^BERT.", "Finally, this is directly concatenated to the resulting basic word representation before feeding the BiLSTM-based encoder: x'_i = x_i ⊕ e_i^BERT; h_i = BiLSTM(x'_i).", "Higher performances can be achieved by summing or concatenating (depending on the task) several layers of BERT; however, exploring these combinations is out of the scope of this paper and we simply use embeddings extracted from the second-to-last hidden layer (since the last layer is biased to the target objectives used to train BERT's language model).",
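A sketch of this frozen-feature extraction using the HuggingFace transformers library (our own tooling choice; the paper predates this exact API, so treat the calls and names as assumptions):

# Sketch: frozen BERT features, averaging subword vectors per input token.
import torch
from transformers import BertModel, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
bert.eval()

def bert_word_embeddings(words):
    enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
    with torch.no_grad():
        hidden = bert(**enc).hidden_states[-2]    # second-to-last layer
    word_ids = enc.word_ids()                     # subword -> word alignment
    vectors = []
    for w in range(len(words)):
        idx = [i for i, wid in enumerate(word_ids) if wid == w]
        vectors.append(hidden[0, idx].mean(dim=0))  # average over subwords
    return torch.stack(vectors)   # (n_words, 768), concatenated with x_i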
"In order to test the proposed approach, we conduct experiments on the SemEval 2015 Task 18 English datasets (Oepen et al., 2015), where all sentences are annotated with three different formalisms: DELPH-IN MRS (DM) (Flickinger et al., 2012), Predicate-Argument Structure (PAS) (Miyao and Tsujii, 2004) and Prague Semantic Dependencies (PSD) (Hajic et al., 2012).", "The standard split as in previous work (Almeida and Martins, 2015; Du et al., 2015) results in 33,964 training sentences from Sections 00-19 of the Wall Street Journal corpus (Marcus et al., 1993), 1,692 development sentences from Section 20, 1,410 sentences from Section 21 as in-domain test set, and 1,849 sentences sampled from the Brown Corpus (Francis and Kucera, 1982) as out-of-domain test data.", "Table 2 (model hyper-parameters): CNN window size 3; CNN number of filters 50; BiLSTM encoder layers 3; BiLSTM encoder size 512; LSTM decoder layers 1; LSTM decoder size 512; LSTM layers dropout 0.33; word/POS/character/lemma embedding dimension 100; BERT embedding dimension 768; embeddings dropout 0.33; MLP layers 1; MLP activation function ELU; arc MLP size 512; label MLP size 128; UNK replacement probability 0.5; Adam optimizer: initial learning rate 0.001, β1 = β2 = 0.9, batch size 32, decay rate 0.75, gradient clipping 5.0.", "For the evaluation, we use the official script, reporting labelled F-measure scores (LF1) (including ROOT arcs) on the in-domain (ID) and out-of-domain (OOD) test sets for each formalism, as well as the macro-average over the three of them.", "We use the Adam optimizer (Kingma and Ba, 2014) and follow (Ma et al., 2018; Dozat and Manning, 2017) for parameter optimization.", "We do not specifically perform hyper-parameter selection for SDP and just adopt those proposed by Ma et al. (2018) for syntactic dependency parsing (detailed in Table 2).", "For initializing word and lemma vectors, we use the pre-trained structured-skipgram embeddings developed by Ling et al. (2015).", "POS tag and character embeddings are randomly initialized and all embeddings (except the deep contextualized ones) are fine-tuned during training.", "Due to random initializations, we report average accuracy over 5 repetitions for each experiment.", "In addition, during a 500-epoch training, the model with the highest labelled F-score on the development set is chosen.", "Finally, while further beam-size exploration might improve accuracy, we use beam-search decoding with beam size 5 in all experiments.", "Table 3 reports the accuracy obtained by state-of-the-art SDP parsers detailed in Section 2 in comparison to our approach.", "To perform a fair comparison, we group SDP systems in three blocks depending on the embeddings provided to the architecture: (1) just basic pre-trained word and POS tag embeddings, (2) character and pre-trained lemma embeddings augmentation, and (3) pre-trained deep contextualized embeddings augmentation.", "As proved by these results, our approach outperforms all existing transition-based models and the widely-used approach by Dozat and Manning (2018) with or without character and lemma embeddings, and it is on par with the best graph-based SDP parser by (Wang et al., 2019) on average in the fully-supervised scenario.", "(It is common practice in the literature that systems that only use standard pre-trained word or lemma embeddings are classed as fully-supervised models, even though, strictly, they are not trained exclusively on the official training data.)", "In addition, our model achieves the best fully-supervised accuracy to date on the PSD formalism, considered the hardest to parse.", "We hypothesize that this might be explained by the fact that the PSD formalism is the more tree-oriented (as pointed out by Oepen et al.
(2015)) and presents a lower ratio of arcs per sentence, being more suitable for our transition-based approach.", "In the semi-supervised scenario, BERT-based embeddings proved to be more beneficial for the out-of-domain data.", "In fact, while not being a fair comparison, since we neither include contextual string embeddings (Flair) (Akbik et al., 2018) nor explore different BERT layer combinations, our new transition-based parser manages to outperform the state-of-the-art system by He and Choi (2019) on average on the out-of-domain test set, obtaining a remarkable accuracy on the PSD formalism.", "(He and Choi (2019) do not specify in their paper the BERT layer configuration used for generating the word embeddings.)", "Given a sentence with length n whose SDP graph has m arcs, the proposed transition system requires n Shift plus m Attach-p transitions to parse it.", "Therefore, since a DAG can have at most (n choose 2) edges (as is also the case for general directed graphs), it could potentially need O(n^2) transitions in the worst case.", "However, we prove that this does not happen in practice and real sentences can be parsed with a linear number of transitions.", "Parsing complexity of a transition-based dependency parsing algorithm can be determined by the number of transitions performed with respect to the number of words in a sentence (Kübler et al., 2009).", "Therefore, we measure the transition sequence length predicted by the system to analyze every sentence from the development sets of the three available formalisms and depict the relation between them and sentence lengths.", "As shown in Figure 3, a linear behavior is observed in all cases, proving that the number of Attach-p transitions evaluated by the model at each step is considerably low (behaving practically like a constant).", "This can be explained by the fact that, on average on the training set, the ratio of predicate-argument dependencies per word in a sentence is 0.79 in DM, 0.99 in PAS and 0.70 in PSD, meaning that the transition sequence necessary for parsing a given sentence will need no more Attach-p transitions than Shift ones (which are one per word in the sentence).", "It is true that one argument can be attached to more than one predicate; however, the amount of words unattached in the resulting DAG (singletons) can be significant in some formalisms (as described graphically in Figure 1): on average on the training set, 23% of words per sentence in DM, 6% in PAS and 35% in PSD.", "In addition, edge density on non-singleton words, computed by Oepen et al.
(2015) on the test sets, also backs the linear behavior shown in our experiments: 0.96 in DM, 1.02 in PAS and 1.01 in PSD for the in-domain set, and 0.95 in DM, 1.02 in PAS and 0.99 in PSD for the out-of-domain data.", "In conclusion, we can state that, on the datasets tested, the proposed transition system executes O(n) transitions.", "To determine the runtime complexity of the implementation of the transition system, we need to consider the following: firstly, at each transition, the attention vector a_t needs to be computed, which means that each of the O(n) transitions takes O(n) time to run.", "Therefore, the overall time complexity of the parser, ignoring cycle detection, is O(n^2).", "Note that this is in contrast to algorithms like that of Wang et al. (2019), which takes cubic time even though it does not enforce acyclicity.", "If we add cycle detection, needed to forbid transitions that would create cycles and therefore to enforce that the output is a DAG, then the complexity becomes O(n^2 log n).", "This is because an efficient implementation of cycle detection contributes an additive factor of O(n^2 log n) to worst-case time complexity, which becomes the dominant factor.", "To achieve this efficient implementation, we incrementally keep two data structures: on the one hand, we keep track of weakly connected components using path compression and union by rank, which can be done in inverse Ackermann time, as is commonly done for cycle detection in tree and forest parsers (Covington, 2001; Gómez-Rodríguez and Nivre, 2010).", "On the other hand, we keep a weak topological numbering of the graph using the algorithm by Bender et al. (2015), which takes overall O(n^2 log n) time over all edge insertions.", "When these two data structures are kept, cycles can be checked in constant time: an arc a → b creates a cycle if the involved nodes are in the same weakly connected component and a has a greater topological number than b.",
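A simplified sketch of this constant-time check (union-find plus a maintained topological numbering; union by rank and the incremental renumbering of Bender et al. (2015) are omitted, and all names are our own):

# Sketch: acyclicity check for a candidate arc a -> b.
class DagChecker:
    def __init__(self, n):
        self.parent = list(range(n + 1))   # union-find over weak components
        self.topo = list(range(n + 1))     # a weak topological numbering

    def find(self, x):
        while self.parent[x] != x:         # path compression
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def creates_cycle(self, a, b):
        # a -> b can close a cycle only if a and b are already weakly
        # connected and a comes after b in the topological order
        return self.find(a) == self.find(b) and self.topo[a] > self.topo[b]

    def add_arc(self, a, b):
        self.parent[self.find(a)] = self.find(b)   # merge weak components
        # a full implementation must also restore topo[a] < topo[b] here,
        # e.g. with the incremental algorithm of Bender et al. (2015)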
(2017).", "This work has received funding from the European Research Council (ERC), under the European Union's Horizon 2020 research and innovation programme (FASTPARSE, grant agreement No 714150), from the ANSWER-ASAP project (TIN2017-85160-C2-1-R) from MINECO, and from Xunta de Galicia (ED431B 2017/01, ED431G 2019/01)." ]
[ "abstain", "objective", "method", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "other", "other", "objective", "abstain", "other", "other", "other", "other", "other", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "abstain", "method", "method", "method", "method", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "result", "result", "other", "other" ]
[ "We consider the problem of collectively detecting multiple events, particularly in cross-sentence settings.", "The key to dealing with the problem is to encode semantic information and model event inter-dependency at a document-level.", "In this paper, we reformulate it as a Seq2Seq task and propose a M ultiL ayer Bi directional Net work (MLBiNet) to capture the document-level association of events and semantic information simultaneously.", "Specifi-cally, a bidirectional decoder is firstly devised to model event inter-dependency within a sentence when decoding the event tag vector sequence.", "Secondly, an information aggregation module is employed to aggregate sentence-level semantic and event tag information.", "Finally, we stack multiple bidirectional decoders and feed cross-sentence information, forming a multi-layer bidirectional tagging architecture to iteratively propagate information across sentences.", "We show that our approach provides significant improvement in performance compared to the current state-of-the-art results 1 .", "Event detection (ED) is a crucial sub-task of event extraction, which aims to identify and classify event triggers.", "For instance, the document shown in Table 1, which contains six sentences { s 1 , . . . , s 6 } , the ED system is required to identify four events: an Injure event triggered by injuries, two Attack events triggered by firing and fight, and a Die event triggered by death.", "Detecting event triggers from natural language text is a challenge task because of the following problems:", "a).", "Sentence-level contextual representation and document-level information aggregation (Chen et al., 2018; Zhao et al., 2018; Equal contribution and shared co-first authorship. Corresponding author. 1 The code is available in https://github.com/ zjunlp/DocED . s 1 : what a brave young woman s 2 : did you hear about the injuries [ Injure ] she sustained s 3 : did you hear about the firing [ Attack ] she did s 4 : she was going to fight [ Attack ] to the death [ Die ] s 5 : she was captured but she was one tough cookie s 6 : god bless here Table 1: An example document in ACE 2005 corpus with cross-sentence semantic enhancement and event inter-dependency. Specifically, semantic information of s 2 provides latent information to enhance s 3 , and Attack event in s 4 also contributes to s 3 . 
Shen et al., 2020).", "In the ACE 2005 corpus, the arguments of a single event instance may be scattered in multiple sentences (Zheng et al., 2019; Ebner et al., 2019), which indicates that document-level information aggregation is critical for the ED task.", "What's more, a word in different contexts may express different meanings and trigger different events.", "For example, in Table 1, firing in $s_3$ could mean the action of firing guns (Attack event) or forcing somebody to leave their job (End-Position event).", "To specify its event type, cross-sentence information should be considered.", "b).", "Intra-sentence and inter-sentence event inter-dependency modeling (Liao and Grishman, 2010; Chen et al., 2018; Liu et al., 2018).", "For $s_4$ in Table 1, an Attack event is triggered by fight, and a Die event is triggered by death.", "This kind of event co-occurrence is common in the ACE 2005 corpus: we investigated the dataset and found that about 44.4% of the triggers appear in this way.", "The cross-sentence event co-occurrence shown in $s_4$ and $s_3$ is also very common.", "Therefore, modeling the sentence-level and document-level event inter-dependency is crucial for jointly detecting multiple events.", "To address those issues, previous approaches (Chen et al., 2015; Nguyen et al., 2016; Liu et al., 2018; Yan et al., 2019; Liu et al., 2019; Zhang et al., 2019) mainly focused on sentence-level event detection, neglecting the document-level event inter-dependency and semantic information.", "Some studies (Chen et al., 2018; Zhao et al., 2018) tried to integrate semantic information across sentences via the attention mechanism.", "For document-level event inter-dependency modeling, Liao and Grishman (2010) extended the features with event types to capture dependencies between different events in a document.", "Although great progress has been made in the ED task due to recent advances in deep learning, there is still no unified framework to model the document-level semantic information and event inter-dependency.", "We analyze the ACE 2005 data to re-understand the challenges encountered in the ED task.", "Firstly, we find that event detection is essentially a special Seq2Seq task, in which the source sequence is a given document or sentence, and the event tag sequence is the target.", "Seq2Seq tasks can be effectively modeled via the RNN-based encoder-decoder framework, in which the encoder captures rich semantic information, while the decoder generates a sequence of target symbols with their inter-dependency captured.", "This separate encoder and decoder framework can correspondingly deal with the semantic aggregation and event inter-dependency modeling challenges in the ED task.", "Secondly, for the propagation of cross-sentence information, we find that the relevant information is mainly stored in several neighboring sentences, while little is stored in distant sentences.", "For example, as shown in Table 1, it seems that $s_2$ and $s_4$ contribute more to $s_3$ than $s_1$ and $s_5$.", "In this paper, we propose a novel Multi-Layer Bidirectional Network (MLBiNet) for the ED task.", "A bidirectional decoder layer is firstly devised to decode the event tag vector corresponding to each token with forward and backward event inter-dependency captured.", "Then, the event-related information in the sentence is summarized through a sentence information aggregation module.", "Finally, the multiple bidirectional tagging layers stacking mechanism is proposed to propagate cross-sentence information between adjacent 
sentences, and to capture long-range information as the number of layers increases.", "We conducted experimental studies on the ACE 2005 corpus to demonstrate its benefits in cross-sentence joint event detection.", "Our contributions are summarized as follows: We propose a novel bidirectional decoder model to explicitly capture bidirectional event inter-dependency within a sentence, alleviating the long-range forgetting problem of traditional tagging structures; We propose a model called MLBiNet to propagate semantic and event inter-dependency information across sentences and detect multiple events collectively; We achieve the best performance ($F_1$ value) on the ACE 2005 corpus, surpassing the state-of-the-art by 1.9 points.", "Generally, event detection on the ACE 2005 corpus is treated as a classification problem, which is to determine, for each token, whether it forms part of an event trigger.", "Specifically, a document is given as $d = \{s_1, \dots, s_n\}$, where $s_i = \{w_{i,1}, \dots, w_{i,n_i}\}$ denotes the $i$-th sentence containing $n_i$ tokens.", "We are required to predict the triggered event type sequence $y_i = \{y_{i,1}, \dots, y_{i,n_i}\}$ based on the contextual information of $d$.", "Without ambiguity, we omit the subscript $i$.", "For a given sentence, the event tags corresponding to tokens are associated, which is important for collectively detecting multiple events (Chen et al., 2018; Liu et al., 2018).", "Classifying tokens independently will miss this association.", "In order to capture the event inter-dependency, the sequential information of event tags should be retained.", "Intuitively, the ED task can be regarded as an event tag sequence generation problem, which is essentially a Seq2Seq task.", "Specifically, the source sequence is a given document or sentence, and the event tag sequence to be generated is the target sequence.", "For instance, for the sentence did you hear about the injuries she sustained, the decoder model is required to generate a tag sequence $[O, O, O, O, O, B\text{-}Injure, O, O]$, where $O$ denotes that the corresponding token is not part of an event trigger and $B\text{-}Injure$ indicates an Injure event is triggered.", "We introduce the RNN-based encoder-decoder framework for the ED task, considering that it is an efficient solution for Seq2Seq tasks.", "And we propose a multi-layer bidirectional network called MLBiNet, shown in Figure 1, to deal with the challenges in detecting multiple events collectively.", "The model framework consists of four components: the semantic encoder, the bidirectional decoder, the information aggregation module, and the stacking of multiple bidirectional tagging layers.", "[Figure 1: The architecture of our multi-layer bidirectional network (MLBiNet). The red arrow represents the input of the semantic representation $x_t$, the green arrow represents the input of the adjacent sentences' information $[I^{k-1}_{i-1}; I^{k-1}_{i+1}]$ integrated in the previous layer, and the blue arrow represents the input of the forward event tag vector.]", "The RNN-based encoder-decoder framework (Cho et al., 2014; Sutskever et al., 2014; Bahdanau et al., 2015; Luong et al., 2015; Gu et al., 2016) consists of two components:", "a) an encoder which converts the source sentence into a fixed-length vector $c$, and", "b) a decoder which unfolds the context vector $c$ into the target sentence.", "As is formalized in (Gu et al., 2016), the source sentence $s_i$ is converted into a fixed-length vector $c$ by the encoder RNN: $h_t = f(h_{t-1}, w_t)$, $c = \phi(\{h_1, \dots
, h_{n_i}\})$, where $f$ is the RNN function, $\{h_t\}$ are the RNN states, $w_t$ is the $t$-th token of the source sentence, $c$ is the so-called context vector, and $\phi$ summarizes the hidden states, e.g. by choosing the last state $h_{n_i}$.", "And the decoder RNN translates $c$ into the target sentence according to: $s_t = f(y_{t-1}, s_{t-1}, c)$, $p(y_t \mid y_{<t}, s_i) = g(y_{t-1}, s_t, c)$ (1), where $s_t$ is the state at time $t$, $y_t$ is the predicted symbol at time $t$, $g$ is a classifier over the vocabulary, and $y_{<t}$ denotes the history $\{y_1, \dots, y_{t-1}\}$.", "Studies (Bahdanau et al., 2015; Luong et al., 2015) have shown that summarizing the entire source sentence into a fixed-length vector will limit the performance of the decoder.", "They introduced the attention mechanism to dynamically change the context vector $c_t$ in the decoding process, where $c_t$ can be uniformly expressed as $c_t = \sum_{\tau=1}^{n_i} \alpha_{t\tau} h_\tau$ (2), where $\alpha_{t\tau}$ is the contribution weight of the $\tau$-th source token's state to the context vector at time $t$, and $h_\tau$ denotes the representation of the $\tau$-th token.", "We introduce the encoder-decoder framework to model the ED task, mainly considering the following advantages:", "a) the separate encoder module is flexible in fusing sentence-level and document-level semantic information, and", "b) the RNN decoder model (1) can capture sequential event tag dependency, as the tag vectors predicted before $t$ will be used as input for predicting the $t$-th symbol.", "The encoder-decoder framework for the ED task is slightly different from the general Seq2Seq task as follows:", "a) For the ED task, the length of the event tag sequence (target sequence) is known because its elements correspond one-to-one with tokens in the source sequence.", "However, the length of the target sequence in the general Seq2Seq task is unknown.", "b) The vocabulary of the decoder for the ED task is a collection of event types, instead of words.", "In this module, we encode the sentence-level contextual information for each token with a Bidirectional LSTM (BiLSTM) and a self-attention mechanism.", "Firstly, each token is transformed into a comprehensive representation by concatenating its word embedding and NER type embedding.", "The word embedding matrix is pretrained by the Skip-gram model (Mikolov et al., 2013), and the NER type embedding matrix is randomly initialized and updated in the training process.", "For a given token $w_t$, its embedded vector is denoted as $e_t$.", "We apply the BiLSTM (Zaremba and Sutskever, 2014) model for sentence-level semantic encoding, which can effectively capture sequential and contextual information for each token.", "The BiLSTM architecture is composed of a forward LSTM and a backward LSTM, i.e., $\overrightarrow{h}_t = \overrightarrow{\mathrm{LSTM}}(\overrightarrow{h}_{t-1}, e_t)$, $\overleftarrow{h}_t = \overleftarrow{\mathrm{LSTM}}(\overleftarrow{h}_{t+1}, e_t)$.", "After encoding, the contextual representation of each token is $h_t = [\overrightarrow{h}_t; \overleftarrow{h}_t]$.", "The attention mechanism between tokens within a sentence has been proven to further integrate long-range contextual semantic information.", "For each token $w_t$, its contextual representation is the weighted average of the semantic information of all tokens in the sentence.", "We apply the attention mechanism proposed by Luong et al. (2015) with the weights derived by $\alpha_{t,j} = \exp(z_{t,j}) / \sum_{m=1}^{n_i} \exp(z_{t,m})$, $z_{t,m} = \tanh(h_t^\top W_{sa} h_m + b_{sa})$ (3).", "And the contextual representation of $w_t$ is $h^a_t = \sum_{j=1}^{n_i} \alpha_{t,j} h_j$.", "By concatenating its lexical embedding and contextual representation, we get the final comprehensive semantic representation of $w_t$ as $x_t = [h^a_t; e_t]$.", "The decoder layer for the ED 
task is to generate a sequence of event tags corresponding to tokens.", "As noted, the tag sequence (target sequence) elements and the tokens (source sequence) are in one-to-one correspondence.", "Therefore, the context vector $c$ shown in (1) and (2) can be personalized directly by $c_t = x_t$, which is equivalent to attention with degenerate weights.", "That is, $\alpha_{tt} = 1$ and $\alpha_{t\tau} = 0$ for $\tau \neq t$.", "In traditional Seq2Seq tasks, the target sequence length is unknown during the inference process, so only the forward decoder is feasible.", "However, for the ED task, the length of the target sequence is known given the source sequence.", "Thus, we devise a bidirectional decoder to model event inter-dependency within a sentence.", "Forward Decoder: In addition to the semantic context vector $c_t = x_t$, the event information previously involved can help determine the event type triggered by the $t$-th token.", "This kind of association can be captured by the forward decoder model: $\overrightarrow{s}_t = f_{fw}(\overrightarrow{y}_{t-1}, \overrightarrow{s}_{t-1}, x_t)$, $\overrightarrow{y}_t = f(W_y \overrightarrow{s}_t + b_y)$ (4), where $f_{fw}$ is the forward RNN, $\{\overrightarrow{s}_t\}$ are the states of the forward RNN, and $\{\overrightarrow{y}_t\}$ are the forward event tag vectors.", "Compared with the general decoder (1), the classifier $g(\cdot)$ over the vocabulary is replaced with a transformation $f(\cdot)$ (identity function, tanh, sigmoid, etc.) to obtain the event tag vector.", "Backward Decoder: Considering that the associated events may also be mentioned later, we devise a backward decoder to capture this kind of dependency as follows: $\overleftarrow{s}_t = f_{bw}(\overleftarrow{y}_{t+1}, \overleftarrow{s}_{t+1}, x_t)$, $\overleftarrow{y}_t = f(W_y \overleftarrow{s}_t + b_y)$ (5), where $f_{bw}$ is the backward RNN, $\{\overleftarrow{s}_t\}$ are the states of the backward RNN, and $\{\overleftarrow{y}_t\}$ are the backward event tag vectors.", "Bidirectional Decoder: By concatenating $\overrightarrow{y}_t$ and $\overleftarrow{y}_t$, we get the event tag vector $y_t = [\overrightarrow{y}_t; \overleftarrow{y}_t]$ with bidirectional event inter-dependency captured.", "The semantic and event-related entity information is also carried by $y_t$, as $x_t$ is an indirect input.", "An alternative method modeling the sentence-level event inter-dependency, called the hierarchical tagging layer, was proposed by Chen et al. (2018).", "The bidirectional decoder is quite different from the hierarchical tagging layer as follows: The bidirectional decoder models event inter-dependency directly by combining a forward and a backward decoder.", "The hierarchical tagging layer utilizes two forward decoders and a tag attention mechanism to capture bidirectional event inter-dependency.", "In the bidirectional decoder, the ED task is formalized as a special Seq2Seq task, which can simplify the event inter-dependency modeling problem and the cross-sentence information propagation problem discussed below.", "The bidirectional RNN decoder unfolds the event tag vector corresponding to each token, and captures the bidirectional event inter-dependency within the sentence.", "To propagate information across sentences, we need to first aggregate the useful information of each sentence.", "For the current sentence $s_i$, the information we are concerned about can be summarized as recording which entities and tokens trigger which events.", "Thus, to summarize the information, we devise another LSTM layer (the information aggregation module shown in Figure 1) with the event tag vector $y_t$ as input.", "The information at the $t$-th token is computed by $I_t = \mathrm{LSTM}(I_{t-1}, y_t)$ (6).", "We choose the last state $I_{n_i}$ as the summary information, i.e., $I_i = I_{n_i}$.", "The sentence-level information aggregation module bridges the information across sentences, as the well-formalized information 
can be easily integrated into the decoding process of other sentences, enhancing the event-related signal.", "In this module, we introduce a multiple bidirectional tagging layers stacking mechanism to aggregate the information of adjacent sentences into the bidirectional decoder, and to propagate information across sentences.", "The information ($\{y_t\}$, $I_i$) obtained by the bidirectional decoder layer and the information aggregation module has captured the event-relevant information within a sentence.", "However, the cross-sentence information has not yet interacted.", "For a given sentence, as we can see in Table 1, its relevant information is mainly stored in several neighboring sentences, while distant sentences are rarely relevant.", "Thus, we propose to transmit the summarized sentence information $I_i$ among adjacent sentences.", "For the decoder framework shown in (4) and (5), the cross-sentence information can be integrated by extending the input with $I_{i-1}$ and $I_{i+1}$.", "Further, we introduce the multiple bidirectional tagging layers stacking mechanism shown in Figure 1 to iteratively aggregate the information of adjacent sentences.", "The overall framework is named Multi-Layer Bidirectional Network (MLBiNet).", "As shown in Figure 1, a bidirectional tagging layer is composed of a bidirectional decoder and an information aggregation module.", "For sentence $s_i$, the outputs of the $k$-th layer can be computed by $\overrightarrow{s}_t = f_{fw}(\overrightarrow{y}^k_{t-1}, \overrightarrow{s}_{t-1}, x_t, I^{k-1}_{i-1}, I^{k-1}_{i+1})$, $\overleftarrow{s}_t = f_{bw}(\overleftarrow{y}^k_{t+1}, \overleftarrow{s}_{t+1}, x_t, I^{k-1}_{i-1}, I^{k-1}_{i+1})$, $\overrightarrow{y}^k_t = f(W_y \overrightarrow{s}_t + b_y)$, $\overleftarrow{y}^k_t = f(W_y \overleftarrow{s}_t + b_y)$, $y^k_t = [\overrightarrow{y}^k_t; \overleftarrow{y}^k_t]$ (7), where $I^{k-1}_{i-1}$ is the sentence information of $s_{i-1}$ aggregated in the $(k-1)$-th layer, and $\{y^k_t\}$ are the event tag vectors obtained in the $k$-th layer.", "The equation suggests that for each token of the source sentence $s_i$, the input of cross-sentence information is the identical pair $[I^{k-1}_{i-1}, I^{k-1}_{i+1}]$.", "This is reasonable, as the available cross-sentence information is the same for every token of the current sentence.", "The iteration process shown in equation (7) is actually an evolutionary diffusion of the cross-sentence semantic and event information in the document.", "Specifically, in the first tagging layer, the information of the current sentence is effectively modeled by the bidirectional decoder and the information aggregation module.", "In the second layer, the information of adjacent sentences is propagated to the current sentence by plugging $I^1_{i-1}$ and $I^1_{i+1}$ into the decoder.", "In general, in the $k$-th ($k \geq 3$) layer, since $s_{i-1}$ has captured the information of sentence $s_{i-k+1}$ in the $(k-1)$-th layer, $s_i$ can obtain the information in $s_{i-k+1}$ by acquiring the information in $s_{i-1}$.", "Thus, as the number of decoder layers increases, the model will capture information from more distant sentences.", "For a $K$-layer bidirectional tagging model, sentence information from a distance of at most $K-1$ can be captured.", "We define the final event tag vector of $w_t$ as the weighted sum of the $\{y^k_t\}_k$ in different layers, i.e., $y^d_t = \sum_{k=1}^{K} \lambda^{k-1} y^k_t$, where $\lambda \in (0, 1]$ is a weight decay parameter.", "It means that cross-sentence information can supplement the current sentence, and the contribution gradually decreases as the distance increases when $\lambda < 1$.", "We note that the parameters of the bidirectional decoder and the information aggregation module at different layers can be shared, because they encode and propagate the same structured information.", "In this paper, we set the parameters of different layers 
to be the same.", "In order to train the networks, we minimize the negative log-likelihood loss function $J(\theta)$ over the training documents set $D$.", "The tag probability for token $w_t$ is computed by $O_t = W_o y^d_t + b_o$, $p(O^j_t \mid d; \theta) = \exp(O^j_t) / \sum_{m=1}^{M} \exp(O^m_t)$ (9), where $M$ is the number of event classes and $p(O^j_t \mid d; \theta)$ is the probability of assigning event type $j$ to token $w_t$ in document $d$ under parameters $\theta$.", "We performed extensive experimental studies on the ACE 2005 corpus to demonstrate the effectiveness of our method on the ED task.", "The corpus defines 33 types of events and an extra NONE type for the non-trigger tokens.", "We formalize it as a task to generate a sequence of 67-class event tags (with the BIO tagging schema).", "The data splitting for training, validation and testing follows (Ji and Grishman, 2008; Chen et al., 2015; Liu et al., 2018; Chen et al., 2018; Huang and Ji, 2020), where the training set contains 529 documents, the validation set contains 30 documents and the remaining 40 documents are used as the testing set.", "We evaluated the performance of three multilayer settings with 1-, 2-, and 3-layer MLBiNet, respectively.", "We use Adam (Kingma and Ba, 2017) for optimization.", "In all three settings, we cut every 8 consecutive sentences into a new document and pad when needed.", "Each sentence is truncated or padded to make it 50 in length.", "We set the dimension of word embeddings as 100, and the dimension of golden NER type and subtype embeddings as 20.", "We set the dropout rate as 0.5 and the penalty coefficient as $2 \times 10^{-5}$ to avoid overfitting.", "The hidden sizes of the semantic encoder layer and the decoder layer are set to 100 and 200, respectively.", "The size of the forward and backward event tag vectors is set to 100.", "And we set the batch size as 64, the learning rate as $5 \times 10^{-4}$ with decay rate 0.99, and the weight decay parameter $\lambda$ as 1.0.", "The results we report are the average of 10 trials.", "For comparison, we investigated the performance of the following state-of-the-art methods: 1) DMCNN (Chen et al., 2015), which extracts multiple events from one sentence with dynamic multi-pooling CNN; 2) HBTNGMA (Chen et al., 2018), which models sentence event inter-dependency via a hierarchical tagging model; 3) JMEE (Liu et al., 2018), which models the sentence-level event inter-dependency via a graph model of the sentence syntactic parsing graph; 4) DMBERT-Boot (Wang et al., 2019), which augments the training data with external unlabeled data by an adversarial mechanism; 5) MOGANED (Yan et al., 2019), which uses a graph convolution network with aggregative attention to explicitly model and aggregate multi-order syntactic representations; 6) SS-VQ-VAE (Huang and Ji, 2020), which learns to induce new event types by a semi-supervised vector quantized variational autoencoder framework, and fine-tunes with the pre-trained BERT-large model.", "Table 2 presents the overall performance comparison between different methods with gold-standard entities.", "As shown, under the 2-layer and 3-layer settings, our proposed model MLBiNet achieves better performance, surpassing the current state-of-the-art by 1.9 points.", "More specifically, our models achieve higher recalls by at least 0.7, 5.9 and 5.2 points, respectively.", "The powerful encoder of the BERT pre-trained model (Devlin et al., 2018) has been proven to improve the performance of downstream NLP tasks.", "The 2-layer MLBiNet outperforms BERT-Boot (BERT-base) and SS-VQ-VAE (BERT-large) by 3.5 and 
1 .", "9 points, respectively.", "It proves the im-Methods 1/1 1/n all DMCNN 74.3 50.9 69.1 HBTNGMA 78.4 59.5 73.3 JMEE 75.2 72.7 73.7 MLBiNet (1-layer) 77.9 75.1 76.2 MLBiNet (2-layer) 80.6 77.1 78.6 MLBiNet (3-layer) 80.3 77.4 78.6 Table 3: System Performance on Single Event Sentences (1/1) and Multiple Event Sentences (1/n).", "portance of event inter-dependency modeling and cross-sentence information integration for ED task.", "When only information of current sentence is available, the 1-layer MLBiNet outperforms HBTNGMA by 2 .", "9 points.", "It proves that the hierarchical tagging mechanism adopted by HBTNGMA is not as effective as the bidirectional decoding mechanism we proposed.", "Intuitively, the bidirectional decoder models event inter-dependency explicitly by a forward decoder and a backward decoder, which is more efficient than hierarchies.", "The existing event inter-dependency modeling methods (Chen et al., 2015, 2018; Liu et al., 2018) aim to extract multiple events jointly within a sentence.", "To demonstrate that sentence-level event inter-dependency modeling benefits from cross-sentence information propagation, we evaluated the performance of our model in single event extraction (1/1) and multiple events joint extraction (1/n).", "1/1 means one sentence that has one event; otherwise, 1/n is used.", "The experimental results are presented in Table 3.", "As shown, we can verify the importance of cross-sentence information propagation mechanism and bidirectional decoder in sentence-level multiple events joint extraction based on the following results:", "a) When only the current sentence information is available, the 1-layer MLBiNet outperforms existing methods at least by 2 .", "4 points in 1/n case, which proves the effectiveness of bidirectional decoder we proposed;", "b) For ours 2-layer and 3-layer models, their performance in both 1/1 and 1/n cases surpasses the current methods by a large margin, which proves the importance of propagating information across sentences for single event and multiple events extraction.", "We conclude that it Methods 1-layer 2-layer 3-layer backward 72.2 75.0 75.5 forward 72.8 76.0 76.5 bidirectional 76.2 78.6 78.6 Table 4: The performance of our proposed method with different multi-layer settings or decoder methods.", "is the propagating information across sentences and bidirectional decoder which make cross-sentence joint event detection successful.", "Table 4 presents the performance of the model in three decoder mechanisms: forward, backward and bidirectional decoder, as well as three multilayer settings.", "We can reach the following conclusions:", "a) Under three decoder mechanisms, the performance of the proposed model will be signifi-cantly improved as the number of decoder layers increases;", "b) The bidirectional decoder dominates both forward decoder and backward decoder, and forward decoder dominates backward decoder;", "c) The information propagation across sentences will enhance event relevant signal regardless of the decoder mechanism applied.", "Among the three decoder models, the bidirectional decoder performs best because of its ability in capturing bidirectional event inter-dependency, which proves both the forward and backward decoders are critical for event inter-dependency modeling.", "In information aggregation module, we introduce a LSTM shown in (6) to aggregate sentence information, and then propagate to other sentences via the bidirectional decoder.", "We compare other aggregation methods:", "a) concat means the 
sentence information is aggregated by simply concatenating the first and last event tag vectors of the sentence, and", "b) average means the sentence information is aggregated by averaging the event tag vectors of the tokens in the sentence.", "The experimental results are presented in Table 5.", "Compared with the baseline 1-layer model, the other three 2-layer settings equipped with information aggregation and cross-sentence propagation perform better.", "It proves that the sentence information aggregation module can integrate useful information and propagate it to other sentences through the decoder.", "On the other hand, the performances of LSTM and concat are comparable and stronger than average.", "Note that the input of the information aggregation module is the event tag vector obtained by the bidirectional decoder, which has already captured the sequential event information.", "Therefore, it is not surprising that LSTM does not have that great an advantage over concat and average.", "Event detection is a well-studied task with research effort in the last decade.", "The existing methods (Chen et al., 2015; Nguyen and Grishman, 2015; Liu et al., 2017; Nguyen and Grishman, 2018; Deng et al., 2020; Tong et al., 2020; Lai et al., 2020; Liu et al., 2020; Li et al., 2020; Cui et al., 2020; Deng et al., 2021; Shen et al., 2021) mainly focus on sentence-level event trigger extraction, neglecting the document information.", "Alternatively, the document-level semantic and event inter-dependency information are modeled separately.", "For the problem of event inter-dependency modeling, some methods were proposed to jointly extract triggers within a sentence.", "Among them, Chen et al. (2015) used a dynamic multi-pooling CNN to preserve information of multiple events; Nguyen et al. (2016) utilized bidirectional recurrent neural networks to extract events; Liu et al. (2018) introduced syntactic shortcut arcs to enhance information flow and used graph neural networks to model graph information; Chen et al. (2018) proposed a hierarchical tagging LSTM layer and a tagging attention mechanism to model the event inter-dependency within a sentence.", "However, adjacent sentences also store relevant event information, which can enhance the event signals of other sentences.", "These methods would miss the event inter-dependency information across sentences.", "For document-level event inter-dependency modeling, Lin et al. (2020) proposed to incorporate global features to capture the cross-subtask and cross-instance interactions.", "Chen et al. (2018) integrated document information by introducing a multi-level attention.", "Zhao et al. (2018) used trigger and sentence supervised attention to aggregate information and enhance sentence-level event detection.", "Zheng et al. (2019) utilized a memory network to store document-level contextual information and entities.", "Some feature-based document-level information aggregation methods were proposed by (Ji and Grishman, 2008; Liao and Grishman, 2010; Hong et al., 2011; Huang and Riloff, 2012; Reichart and Barzilay, 2012; Lu and Roth, 2012).", "And Zhang et al. 
(2020) proposed to aggregate the document-level information by latent topic modeling.", "The attention-based document-level information aggregation mechanisms treat all sentences in the document equally, which may introduce some noise from distant sentences.", "And the feature-based methods require extensive human engineering, which also greatly affects the portability of the model.", "This paper presents a novel Multi-Layer Bidirectional Network (MLBiNet) to propagate document-level semantic and event inter-dependency information for the event detection task.", "To the best of our knowledge, this is the first work to unify them in one model.", "Firstly, a bidirectional decoder is proposed to explicitly model the sentence-level event inter-dependency, and the event-relevant information within a sentence is aggregated by an information aggregation module.", "Then the multiple bidirectional tagging layers stacking mechanism is devised to iteratively propagate semantic and event-related information across sentences.", "We conducted extensive experiments on the widely-used ACE 2005 corpus; the results demonstrate the effectiveness of our model, as well as of all the modules we proposed.", "In the future, we will extend the model to the event argument extraction task and other information extraction tasks, where document-level semantic aggregation and object inter-dependency are critical.", "For example, the recently studied document-level relation extraction task (Quirk and Poon, 2017; Yao et al., 2019) requires reading multiple sentences in a document to extract entities and infer their relations by synthesizing all information of the document.", "For other sequence labeling tasks, such as named entity recognition, we can also utilize the proposed architecture to model the entity label dependency.", "We want to express gratitude to the anonymous reviewers for their hard work and kind comments.", "This work is funded by NSFC U19B2027/91846204 and the National Key R&D Program of China (Funding No. SQ2018YFC000004)." ]
[ "method", "abstain", "objective", "abstain", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "abstain", "abstain", "result", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "other", "objective", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "objective", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "other", "other" ]
[ "Trending topics in social media content evolve over time, and it is therefore crucial to understand social media users and their interpersonal communications in a dynamic manner.", "In this research we study dynamic online conversation recommendation , to help users engage in conversations that satisfy their evolving interests.", "Different from works in conversation recommendation which assume static user interests, our model captures the temporal aspects of user interests.", "Moreover, our model can cater for cold start problem where conversations are new and unseen in training.", "We propose a neural architecture to analyze changes of user interactions and interests over time, whose result is used to predict which discussions the users are likely to enter.", "We conduct experiments on large-scale collections of Reddit conversations.", "Results on three subreddits show that our model significantly outperforms state-of-the-art models based on static assumption of user interests.", "We further evaluate performance in cold start, and observe consistently better performance by our model when considering various degrees of sparsity of user's chatting history and conversation contexts.", "Lastly, our analysis also confirms the change of user interests.", "This further justify the advantage and efficacy of our model.", "Online social media platforms are popular outlets for individuals to exchange viewpoints and discuss topics they are interested in.", "However, the huge volume of online conversations produced daily hinders people's capability of finding the information they are interested in.", "As a result, there is pressing demand for developing a conversation recommendation engine that tracks ongoing conversations and recommends suitable ones to users.", "Viewing the deluge of information streaming through social media, it is not hard to envision that [T1] In the UK they can request your encryption keys [T2] I doubt we are seeing the banning of encryption in the ease of the authorities to go rummaging about your privacy .", "users' tastes, stances, and behaviors evolve over time (Wu et al., 2017).", "Nonetheless, existing work on recommending conversations (Chen et al., 2011; Zeng et al., 2018, 2019b) assume users' discussion preferences do not change over time.", "Moreover, the common practice of recommendation is via collaborative filtering (CF), which relies on rich user interaction history for model training (Zeng et al., 2018, 2019b).", "When a conversation is entirely absent from training data, the model performance is inevitably compromised.", "This phenomenon is referred to as conversation cold start .", "As a result, existing methods which ignore the time-evolving user interests is insurmountable to tackle a common problem in practice, i.e., to predict future conversations created after the model is trained.", "To overcome this predicament, we explore dynamic conversation recommendation, which can model the change of user interests over time (hence-forth user interest dynamics ).", "To illustrate such change, Figure 1 shows multiple conversation turns posted by user U in four Reddit discussion snippets: C 1 to C 4 in the chronological order.", "As can be seen, U used to like discussing Internet security , indicated by encryption, privacy, and surveil-lance in C 1 and C 2 .", "After a period of time, U 's interests changed to a different topic, operating system , as ksplice, oracle, and Ubuntu were later mentioned in C 3 and C 4 .", "We design the model to capture user interests 
from both what they said in the past, and how they interacted with each other in the conversation structure.", "We first capture time-variant representations from user chatting history, where we assume user interests may change over time and therefore apply a gated recurrent unit (GRU) (Cho et al., 2014) to model the time dependency.", "User interactions in the conversation context are then explored with both a bidirectional gated recurrent unit (Bi-GRU) (Cho et al., 2014) for conversation turns' chronological order and graph convolutional networks (GCN) (Marcheggiani and Titov, 2017) for in-reply-to relations.", "Both representations are learned to encode how participants formed the conversation structure, including what they said and whom they replied to.", "Next, we propose a user-aware attention mechanism to convey the user interest dynamics, which is further put over an interaction-encoded conversation to measure whether its ongoing contexts fit a user's current interests.", "Finally, we predict how likely a user will engage in a conversation, as a result of recommendation.", "To the best of our knowledge, we are the first to study dynamic online conversation recommendation and to explore the effects of user interests changing over time, learned from both chatting content and interaction behavior.", "For this reason, we are capable of recommending future conversations based on users' interests at the time.", "For experiments, we collect Reddit conversations from three subreddits, technology, todayilearned, and funny, each exhibiting different data statistics, discussion topics, and language styles.", "(Footnote 1: The datasets and codes are available at: https://github.com/zxshamson/dy-conv-rec)", "An absolute date is used to separate training data (before the date) from test and validation data (after the date).", "In this way, most conversations in the test and validation parts are new conversations that have not been seen before.", "This presents a more realistic setup than previous studies (Zeng et al., 2018, 2019b), which let the training data contain partial context for any conversation to allow the possibility of predicting users' future engagement for recommendation.", "Experimental results in the main comparisons show that our model significantly outperforms all previous methods that ignore the change of user interests or the interactions within contexts.", "For example, we achieve 0.375 MAP in discussions of technology, compared with 0.222 yielded by our previous state-of-the-art model (Zeng et al., 2019b).", "Further study shows that we consistently perform better both in conversation cold start and with varying degrees of sparsity of user history and conversation contexts.", "Lastly, to provide more insights into user interest dynamics, we inspect our model outputs and find that users indeed tend to engage in different types of conversations at different times, confirming the usefulness of tracking user preferences in real time for conversation recommendation.", "User Response Prediction.", "This work is in line with user response prediction, such as message popularity forecasting with handcrafted response features (Artzi et al., 2012; Backstrom et al., 2013) and conversation trajectory prediction with user interaction structures (Cheng et al., 2017b; Jiao et al., 2018; Zeng et al., 2019a).", "These works predict responses from the general public, while we work on personalized recommendation and focus on user interest modeling.", "For recommendation, there are extensive efforts on post-level recommendation (Chen 
et al., 2012; Yan et al., 2012) and conversation-level recommendation (Chen et al., 2011; Zeng et al., 2018, 2019b).", "In contrast with these works, which assume static user interests, we capture how user interests change over time and take advantage of the recent advances in dynamic product recommendation (Wu et al., 2017; Beutel et al., 2018).", "To recommend conversations, we aim to learn user interest dynamics from chatting content and interaction behavior, which have never been explored in previous research.", "Conversation Structure Modeling.", "Our work is also related to previous work on understanding how participants interact with each other in the conversation structure.", "Earlier efforts focus on discovering statistical word patterns via probabilistic graphical models (Ritter et al., 2010; Louis and Cohen, 2015), which are unable to capture the deep semantics embedded in complex interactions.", "Recent research points out the effectiveness of understanding conversation structure from temporal dynamics (Cheng et al., 2017a; Jiao et al., 2018) and replying structure (Miura et al., 2018; Zayats and Ostendorf, 2018; Zeng et al., 2019b).", "[Figure 2: Overall structure of our model.]", "The two factors are coupled in our interaction modeling, and their joint effects for dynamic conversation recommendation, ignored by prior work, will be extensively studied here.", "This section describes our dynamic conversation recommendation model, whose overall structure is shown in Figure 2.", "In the following, we will first introduce how we model the user interest dynamics with users' chatting history in Section 3.1, followed by the description of conversation modeling in Section 3.2.", "Afterwards, Section 3.3 will present how we produce the final recommendation outputs.", "The objective function and learning procedures will finally be presented in Section 3.4.", "Given a sequence of chronologically ordered historical messages $\langle m_1, m_2, \dots, m_{|u|} \rangle$ of a user $u$ ($|u|$ is the number of $u$'s messages), each message $m$ therein corresponds to a word sequence $w_m$.", "Our goal is to capture the temporal patterns from the sequence of user chatting messages and then produce the user interest representation.", "We employ two-level modeling: message level and user level.", "Message-level Modeling.", "We model the message-level representation from its word sequence.", "Specifically, given $u$'s historical message $m$, we first use a pre-trained word embedding layer to map each word into a vector space, and then employ a Convolutional Neural Network (CNN) (Kim, 2014) encoder to model word occurrences together with their neighbors.", "Afterwards, we output a representation $z_m$ to reflect $m$'s content.", "User-level Modeling.", "As shown in Wu et al. 
(2017), some user interests may change rapidly and some may last for a long time.", "For the latter, we adopt a user embedding layer $I^{UF}(\cdot)$ to capture the time-invariant interest factor and define $u$'s factor as $r^{UF}_u$.", "For the time-variant interests, we are inspired by previous work (Beutel et al., 2018) and employ a GRU (Cho et al., 2014) encoder to capture how user interests change based on sequential chatting messages.", "For each time state $t$, we update the user's current interests $h^U_{u,t}$ conditioned on the previous interests $h^U_{u,t-1}$ and the current behavior $z_{m_t}$ (derived from the aforementioned message-level modeling, reflecting $m_t$'s content): $h^U_{u,t} = \mathrm{GRU}(h^U_{u,t-1}, z_{m_t})$ (1).", "Further, to leverage time-invariant features in the modeling of user interest dynamics, we initialize the GRU's hidden states based on the learned user factor $r^{UF}_u$ following a linear transformation: $h^U_{u,0} = W^U r^{UF}_u + b^U$.", "And the last GRU state, i.e., $r^U_u = h^U_{u,t_{|u|}}$, conveying the latest view of the user interest dynamics, will later be used in conversation modeling and recommendation prediction.", "Here we introduce how we encode a conversation with awareness of user interests.", "Each conversation $c$ is formed with a sequence of chronologically ordered turns $\langle t_1, t_2, \dots, t_{|c|} \rangle$ ($|c|$ is the number of turns in $c$).", "A turn $t$ therein is in the form of a word sequence $w_t$, its author's ID $u_t$, and the turn it replies to (for later exploiting the in-reply-to structure).", "To learn $c$'s representation, we encode both the word occurrences in each turn (via turn-level modeling) and the interactions between conversation turns (via conversation-level modeling).", "Afterwards, to identify turns that match the target user's interests, we propose a user-aware attention over turns.", "Turn-level Modeling.", "For each turn $t \in c$, similar to the message-level modeling in Section 3.1, we use a CNN encoder over pre-trained word embeddings to capture the content representation, $z_t$.", "Further, $z_t$ is concatenated with author $u_t$'s user embedding $r^{UF}_{u_t}$ (see Section 3.1) to yield the turn-level representation $r^T_t$, conveying both what is said and who says it.", "Based on the turn-level representations, we then learn turn interactions.", "Conversation-level Modeling.", "To explore turn interactions, we exploit turns' chronological order and replying structure, both useful in conversation modeling (Zeng et al., 2019b).", "Chronological Order.", "We employ a Bi-GRU (Cho et al., 2014) to capture how a turn interacts with the turns posted right before and after it, whose hidden states are updated as follows: $\overrightarrow{h}^{GRU}_{c,t} = \overrightarrow{\mathrm{GRU}}(\overrightarrow{h}^{GRU}_{c,t-1}, r^T_t)$ (2), $\overleftarrow{h}^{GRU}_{c,t} = \overleftarrow{\mathrm{GRU}}(\overleftarrow{h}^{GRU}_{c,t+1}, r^T_t)$ (3).", "We then concatenate the forward and backward hidden states to produce the chronology-encoded turn representations: $h^{GRU}_{c,t} = [\overrightarrow{h}^{GRU}_{c,t}; \overleftarrow{h}^{GRU}_{c,t}]$.", "Replying Structure.", "To further encode who-replies-to-whom in the conversation structure, we put a Graph Convolutional Network (GCN) (Marcheggiani and Titov, 2017) over the chronology-encoded turn representations (learned by the Bi-GRU described above).", "A graph encoder is empirically better than sequential ones because replying relations usually exhibit a tree structure (a post may lead to multiple replies).", "Concretely, we first build a directed graph for a conversation by adding edges from a turn to its replies.", "We then define turn interactions therein in three directions: predecessors to successors (Pre), successors to predecessors (Suc), and self-interactions (Self).", "Next, we update 
a turn's hidden state with the formula below: $h^{GCN}_{c,t} = \sum_{i \in Pre(t)} g_{i,t} (W_{Pre} h^{GRU}_{c,i} + b_{Pre}) + \sum_{j \in Suc(t)} g_{j,t} (W_{Suc} h^{GRU}_{c,j} + b_{Suc}) + g_{t,t} (W_{Self} h^{GRU}_{c,t} + b_{Self})$ (4), where $Pre(t)$ and $Suc(t)$ represent turn $t$'s predecessors and successors in the replying graph, and $g_{i,j}$ is a scalar gate controlling the weights of turn interactions: $g_{i,j} = \sigma(W_{Dir(i,j)} h^{GRU}_{c,i} + b_{Dir(i,j)})$ (5), where $Dir(i,j)$ indicates the type of the $i \rightarrow j$ direction ($Pre$, $Suc$, or $Self$).", "The process described above can be viewed as one GCN layer.", "Multiple layers can be stacked, with a ReLU (Rectified Linear Unit) activation function to connect two successive layers.", "It enables the networks to explore deeper interaction effects.", "User-aware Attention.", "To identify conversation turns that better match the target user's interests, we design a user-aware attention mechanism over the interaction-encoded turns.", "The attention weights are defined to reflect the similarity between a conversation turn's representation $h^{GCN}_{c,i}$ and the target user's latest interests $r^U_u$ (see Section 3.1): $a_i = \mathrm{softmax}(r^U_u \cdot h^{GCN}_{c,i})$ (6).", "Finally, we compute the attentive sum of all turns and obtain the conversation representation conveying both interactions and user interests: $r^C_c = \sum_i a_i h^{GCN}_{c,i}$ (7).", "Recommendation Prediction.", "To predict whether a user $u$ will engage in conversation $c$, we compute how $u$'s interest dynamics (carried by $r^U_u$ in Section 3.1) are similar to $c$'s content and interaction styles (reflected by $r^C_c$ in Section 3.2).", "We adopt two-way interactions via an MLP mechanism (He et al., 2017) to measure the similarity: $r_{u,c} = \phi(W^\top_2 (\phi(W^\top_1 [r^U_u; r^C_c] + b_1)) + b_2)$ (8), where $\phi(\cdot)$ is the ReLU activation function.", "The equation for the final output layer will be: $\hat{y}_{u,c} = \sigma(v^\top r_{u,c} + b)$ (9), where $\sigma$ represents the sigmoid activation function.", "Following Zeng et al. (2019b), we adopt a weighted binary cross-entropy loss as our objective function, which assigns more weight to positive feedback (i.e. 
$u$ engages in $c$).", "Concretely, $\mathcal{L} = -\sum_{(u,c) \in T} \big( \lambda \, y_{u,c} \log \hat{y}_{u,c} + (1 - y_{u,c}) \log(1 - \hat{y}_{u,c}) \big)$, where $T$ is the training set, $y_{u,c}$ denotes the binary ground-truth label, and $\lambda$ ($\lambda > 1$) is a hyper-parameter to trade off the weights of positive and negative instances.", "We weight positive feedback more because it is more reliable, while the negative feedback sometimes cannot reflect users' interests, owing to many unpredictable issues (e.g., users' busy time).", "For the same reason, we adopt the negative sampling strategy (He et al., 2017) in training, which also speeds up the training process.", "Datasets.", "For experiments, we collect online conversations from Reddit, a popular online platform.", "To build our datasets, we first downloaded a large corpus publicly available on Reddit, which consists of posts and comments created since early 2006.", "(Footnote 2: https://www.reddit.com/r/datasets/comments/3bxlg7/i_have_every_publicly_available_reddit_comment/)", "Then, we gathered data posted from January to May 2015 on three subreddits reflecting discussion topics on technology (Tech), todayilearned (Learn), and funny (Fun).", "We chose these three subreddits as they were popular subreddits with different discussion topics and language styles.", "For each subreddit, posts and comments were connected with in-reply-to relations (indicated by comments' parent id field) to form conversations.", "Finally, we removed conversations with only one turn and produced three conversation datasets of different topics.", "In model training and evaluation, we use conversation turns created from January to April for training.", "[Figure 3(a): number of users (log scale, $2^0$ to $2^{16}$) against number of user history messages (0 to >50), for Tech, Learn and Fun.]", "For those posted in May, we randomly select half of them for validation and the other half for 
all turns were posted in May), highlighting the challenge of conversation cold start.", "We further plot the distributions of message (turn) number in Figure 3 (", "3(a) for users and", "3(b) for conversations).", "It is seen from Figure", "3(a) that a large proportion of users were involved in less than 10 conversation turns, where about 8 % (shown in Table 1) of users are absent in the training data.", "For conversations (Figure", "3(b)), their turn numbers follow a power-law distribution.", "Therefore, for both users and conversations, the sparse interaction history presents additional challenges for recommendation.", "In addition, Figure 4 shows distributions of conversation replying structure with 1 , 2 , and more root-to-leaf paths to characterize users' interaction structure.", "We find that more than 60 % of con-Tech Learn Fun Dataset 0.0 0.1 0.2 0.3 0.4 0.5 0.6 P e r ce n t a g e One-path Two-path More-path Figure 4: Distributions of conversation structure.", "versations contain two or more paths, illustrating complex who-replies-to-whom interactions in the tree structure (with the original post as the root node and in-reply-to relations as edges).", "Therefore, graph-structured encoder may be a suitable alternative for capturing rich turn interactions in Reddit conversations.", "Preprocessing.", "For all datasets, we applied open source natural language toolkit (NLTK) (Loper and Bird, 2002) for tokenization.", "Further, links were replaced by a generic tag (cid:104) URL (cid:105) and all number tokens were removed.", "In the experiments, we maintained a vocabulary with all the remaining tokens (including punctuation and emoticons).", "Model Settings.", "In training, we adopt negative sampling with sampling ratio of 5 (see Section 3.4).", "We also randomly sample 100 negative instances for each positive one during validation and test, to avoid unbalanced labels.", "For parameters, we initialize the word embedding layer with 300 -dim Common Crawl version of Glove embedding (Pennington et al., 2014), and the dimension of user factor embedding is set to 20 .", "For the CNN turn encoders, we use filter windows of 2 , 3 , and 4 , each with 100 feature maps.", "As for the GRU models for both user and conversation modeling, the hidden state size is set to 200 ( 100 for each direction in Bi-GRU).", "The same hidden state size is applied to the GCN interaction model.", "We also set the layer number of GCN (see in Section 3.2) to 1 , based on validation results.", "In training, the batch size is set to 256 and Adam optimizer (Kingma and Ba, 2014) is adopted with an initial learning rate of 0 .", "001 .", "As for the trade off weight in loss function, we set = 100 .", "Evaluation.", "Our evaluation metrics follow the common practice in conversation recommendation (Zeng et al., 2018, 2019b).", "Mean average precision (MAP), precision at 1 (P@1), and normalized Discounted Cumulative Gain at 5 (nDCG@5) are adopted to measure the ranking list of conversations to be recommended to a user.", "3 These metrics all have a value range of 0 .", "0 to 1 .", "0 , and greater value indicates better performance.", "Comparisons.", "We first consider two simple baselines: 1) ranking conversations based on POPULARITY , measured by the number of participants.", "2) TOPICRANK (Chen et al., 2011): ranking conversations by topic relevance to the target user's historical messages, where topics are learned from both LDA (Blei et al., 2003) and TF-IDF statistics.", "We also include previous conversation 
"Comparisons.", "We first consider two simple baselines: 1) ranking conversations based on POPULARITY, measured by the number of participants.", "2) TOPICRANK (Chen et al., 2011): ranking conversations by topic relevance to the target user's historical messages, where topics are learned from both LDA (Blei et al., 2003) and TF-IDF statistics.", "We also include previous conversation recommendation models without learning user interest dynamics: 3) CRJTD (Zeng et al., 2018): a CF-based method that jointly models topics and discourse with LDA-style Bayesian models.", "4) CRIM (Zeng et al., 2019b): a neural CF framework with GCN-based interaction modeling, which presents state-of-the-art conversation recommendation results in previous work.", "In addition, we compare with the following recent models for product recommendation.", "5) RRN (Wu et al., 2017): exploiting an RNN model to capture user interest dynamics only with user interaction history (without modeling turn content).", "6) LC-RNN (latent cross-RNN) (Beutel et al., 2018): RNN-based user interest dynamic modeling with turn-level representations, with participant interactions in the conversation structure ignored.", "We first report the main comparison results in Section 5.1, and then discuss the effects of sparsity and cold start in Section 5.2.", "Lastly, in Section 5.3, we probe into our model outputs to provide more insights into user interest dynamics.", "Table 2 shows the comparison results on all three datasets.", "Our model achieves the highest scores, outperforming all comparison models by a large margin.", "It suggests that dynamic user interests learned from both content and interactions provide clearly useful signals on which conversations a user is likely to engage in.", "More detailed observations are described below.", "The two baselines yield much worse results than the others.", "This shows the challenging nature of conversation recommendation, and the limitation of simply using popularity or topic similarity.", "TOPICRANK performs slightly better than POPULARITY, indicating that individuals are more inclined to engage in conversations they like (reflected by topic relevance), rather than popular discussions with many participants.", "Our model outperforms CRJTD and CRIM (the state-of-the-art model), which both assume fixed user interests, showing the usefulness of exploring users' evolving interests over time.", "We also find that CRIM produces better results than CRJTD, likely because the former additionally captures user interactions among each other.", "For recommendation models that consider user interest dynamics, all models perform better than CRIM and CRJTD, which are both based on the CF architecture.", "This reveals CF's limitation in dealing with cold start, which is a common phenomenon when recommending a large number of future conversations (see Table 1).", "Nevertheless, we see that our model performs much better than RRN and LC-RNN, indicating that both content and interaction features contribute to capturing user interests and how they change over time.", "Similar to previous work in product recommendation (Sarwar et al., 2000), conversation recommendation models are also susceptible to the problems of history sparsity and cold start.", "We compare with LC-RNN (the best comparison model in Table 2) and CRIM (the state-of-the-art model in conversation recommendation), and show in Figure 5 the MAP scores on the Tech dataset with varying degrees of sparsity (similar trends are observed on all datasets, hence only the results on Tech are displayed).", "Our model is shown to be consistently better in the face of sparsity, including varying numbers of messages in user history, as well as varying numbers of available turns in conversation contexts.", "More detailed discussions are presented below.", "Varying Messages in User History.", "Referring to Figure 5(a), all models produce non-monotonic performance curves, peaking at certain points (e.g., 25 historical messages for our model).",
"This reveals the issue of user history sparsity, and the difficulty of coping with excessive historical information.", "More importantly, it is observed that our model already outperformed LC-RNN and CRIM when the number of history messages is 0.", "This may be attributed to our better modeling of the conversation interaction structure.", "Varying Turns in Conversation Context.", "For conversations, Figure 5(b) shows the MAP scores with varying turn numbers available in contexts.", "All three models produce upward-trending curves, which is expected since more features can be learned from richer contexts, thus leading to better prediction.", "Our model and CRIM perform worse than LC-RNN when the available turn number is small (fewer than 4).", "This is because graph-structured networks need a minimum amount of interaction information.", "Conversation Cold Start.", "To understand how models perform exactly in conversation cold start, we separate the test set into future conversations (newly created in testing and unseen in training data) and existing ones (with context partially in the training data).", "We then compute the results averaging over conversations.", "The resultant MAP scores are reported in Table 3.", "Table 3: MAP scores to predict future and existing conversations (averaged over conversations).
Models | Future Convs (Tech / Learn / Fun) | Existing Convs (Tech / Learn / Fun)
CRIM | 0.208 / 0.165 / 0.142 | 0.684 / 0.731 / 0.455
LC-RNN | 0.214 / 0.220 / 0.197 | 0.129 / 0.587 / 0.318
OURS | 0.384 / 0.356 / 0.305 | 0.590 / 0.749 / 0.458", "Our model outperforms the other two models by a large margin in recommending future conversations, thanks to the more accurate user interests that are learned from dynamic patterns of content and interactions.", "CRIM performs much better for existing conversations, by making use of rich user interaction history based on the CF architecture.", "Our model abandons the CF framework but still produces competitive performance, as we compute more accurate user-aware representations.", "The aforementioned results have shown the efficacy and advantage of our model.", "In this section, we provide more insights into the different factors behind our model's performance.", "Training with More History.", "We have shown the usefulness of capturing user interest dynamics with historical messages.", "A natural question is whether the model needs more history to perform better.", "Figure 6 shows our MAP scores trained on history data from the last x months (x = 1, 2, 3, 4), and the three datasets exhibit diverse characteristics in user interest dynamics.", "Only Tech exhibits an increasing trend.", "This is probably because earlier history enables learning of long-term dynamics, and technology change usually happens in a time span longer than 1-2 months.", "On the contrary, topics on Fun and Learn may change more rapidly, making the earlier history noisier and less helpful for modeling users' current interests.", "Ablation Study.", "We then examine the contributions of different components in our model, and display the MAP scores of various ablations in Table 4.",
"We observe that user factor embedding and user-aware attention contribute most to model outputs because they are critical in modeling user interests.", "Removing Bi-GRU or GCN also has a significant impact on performance, indicating the usefulness of learning user interactions from turn chronology and replying relations.", "To further examine the roles of Bi-GRU and GCN in user interaction modeling, we compare the MAP scores of our full model and its variants without Bi-GRU or GCN in recommending conversations with 1, 2, or more root-to-leaf paths (as shown in Figure 7).", "GCN and Bi-GRU clearly demonstrate different capabilities.", "The former is good at encoding more complex structures (i.e., those with more paths), and the latter excels at sequential conversations.", "By leveraging the advantages of both, our full model performs the best for conversations of varying structures.", "Case Study.", "Lastly, we use the example in Figure 1 to analyze what the model has learned for recommendation.", "Recall that user U's interests shifted from Internet security, signaled earlier in C1 and C2, to operating systems, when later chatting in C3 and C4.", "We examine the predicted likelihoods of U engaging in two future conversations: Conversation A and B.", "Figure 8 shows their contexts: A focuses on Internet security and B on file systems, and U later engaged in B but not A due to the interest shift.", "In Table 5, we list our model's outputs when fed with the earlier history only (C1 and C2), the later history only (C3 and C4), and the full history, respectively.", "Not surprisingly, much higher scores are given to A when only the earlier history is given, as it fits well with U's previous preference.", "Similarly, we correctly predict U to engage in B with much higher confidence in the other two situations, as file systems (B's focus) and operating systems (U's later interests) are highly related.", "Given the full history, our model produces closer scores for the two conversations, showing its efficacy in learning user interest dynamics.", "[Figure 8: Contexts of the two future conversations. Conversation A, T1: 'Ahhh! This reminds me of when you could hack fax machines and routers by just whistling in the phone!' T2: 'Hm, that's pretty unrelated, though...' Conversation B, T1: '...just downloaded FileZilla (from SourceForge) last night, and it automatically installed MacKeeper and...' T2: 'Dude, why? Filezilla has a website, you can download it straight from them...']", "This paper presents a dynamic conversation recommendation model learned from the change of content and user interactions over time.", "Experimental results on three new datasets from Reddit show that our model significantly outperforms all comparisons, including previous state-of-the-art models.", "Further discussion demonstrates the robustness of our model against history sparsity and cold start.", "We also analyze our model's outputs to get more insights into user interest dynamics.", "The research described in this paper is partially supported by HK RGC-GRF grant #14204118.", "Jing Li is partly funded by the Hong Kong Polytechnic University internal fund (1-BE2W).", "Lu Wang is supported by the National Science Foundation through Grant IIS-1813341.", "We thank the three anonymous reviewers for the insightful suggestions on various aspects of this work." ]
[ "abstain", "method", "method", "objective", "objective", "method", "result", "result", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "objective", "result", "objective", "method", "method", "abstain", "abstain", "other", "result", "result", "abstain", "abstain", "result", "result", "other", "abstain", "method", "other", "abstain", "objective", "other", "abstain", "other", "abstain", "other", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "other", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "objective", "method", "other", "other", "other", "other" ]
[ "Abstract Visual storytelling (VIST) is a typical vision and language task that has seen extensive development in the natural language generation research domain.", "However, it remains unclear whether conventional automatic evaluation metrics for text generation are applicable on VIST.", "In this paper, we present the VHED (VIST Human Evaluation Data) dataset, which first re-purposes human evaluation results for automatic evaluation; hence we develop Vrank (VIST ranker), a novel reference-free VIST metric for story evaluation.", "1 We first show that the results from commonly adopted automatic metrics for text generation have little correlation with those obtained from human evaluation, which motivates us to directly utilize human evaluation results to learn the automatic evaluation model.", "In the experiments, we evaluate the generated texts to predict story ranks using our model as well as other reference-based and reference-free metrics.", "Results show that Vrank prediction is significantly more aligned to human evaluation than other metrics with almost 30% higher accuracy when ranking story pairs.", "Moreover, we demonstrate that only Vrank shows human-like behavior in its strong ability to find better stories when the quality gap between two stories is high.", "Finally, we show the superiority of Vrank by its generalizability to pure textual stories, and conclude that this reuse of human evaluation results puts Vrank in a strong position for continued future advances.", "In visual storytelling (VIST) (Huang et al., 2016), a generation model tells a short story to describe the given five images.", "Automatic generation of visual stories is challenging because it has the complexity of cross-modal understanding with the diversity * denotes equal contribution 1 Dataset VHED and metric Vrank can be found on GitHub: https://github.com/AcademiaSinicaNLPLab/VHED.git the city was very busy.", "there were many different kinds of bikes.", "some were very unique.", "they were all very fast.", "i had a great time.", "i went to the park station.", "it was a train trip to the museum.", "the train was very long.", "we had to go on our way out of the trains.", "this dog is so happy to see us.", "Model 1 (BLEU-1: 0.605, Human Rankers: ) Model 2 (BLEU-1: 0.354, Human Rankers: ) Figure 1: Ranking of two stories generated by Model 1 and 2, by human rankers versus BLEU-1 score.", "Reference : i decided my dog would like a train ride.", "off to the train station we go.", "this is the train we will be taking our short trip on.", "my friend is the conductor.", "he is getting ready to attach the cars.", "here is the train all together.", "as you can see, my dog had a fantastic time.", "and sophistication of creative writing (Zhu et al., 2020).", "Extensive efforts in model developments have decreased the distance between machine-generated and human-written stories, but research on VIST evaluation remains stagnant.", "Automatic metrics and human evaluation are widely used to examine natural language generation.", "Traditional n-gram-based or reference-based autometrics such as BLEU (Papineni et al., 2002), CIDEr (Vedantam et al., 2015), and METEOR (Banerjee and Lavie, 2005) are common for VIST evaluation.", "However, preliminary findings have shown that these metrics have many drawbacks and hence are incompatible with VIST (Wang et al., 2018b).", "In particular, they assume that human-written stories are always better than machine-generated stories, limiting the advance of models yet not conforming to our 
"Rethinking this postulation in evaluation, we believe the dependence on references should be minimized and human evaluation results should be fully utilized instead, because human judgments contain more meaningful signals.", "Recent hybrid and reference-free metrics such as BLEURT (Sellam et al., 2020) and UNION (Guan and Huang, 2020) have not yet been implemented or studied in VIST.", "Nevertheless, BLEURT utilizes few human results in fine-tuning, and UNION still regards human references as gold labels, which results in poor correlation with human judgment.", "On the other hand, human evaluations are relatively reliable for performance reports, and recent studies often include them to provide more convincing experimental results (Hsu et al., 2020, 2021a,b).", "However, human evaluations are expensive, time-consuming, and difficult to reproduce.", "Therefore, results should be recycled to benefit future evaluations.", "Accordingly, we re-collected the human evaluation results from multiple published papers and organized the data into story pairs (Wei and Jia, 2021) as the VHED (VIST Human Evaluation Data) dataset.", "We then re-purposed VHED to create a better metric, Vrank, for VIST to rank visual stories.", "Vrank is a reference-free SimCSE-based (Gao et al., 2021) metric trained on VHED to learn to rank visual stories.", "We believe a storytelling metric should be independent of the references because stories are highly diverse by nature (Zhu et al., 2020), and it is reasonable for them to be dissimilar to the references (Guan and Huang, 2020).", "As shown in Fig. 1, the story generated by Model 1 is assigned a higher BLEU score because larger portions of text overlap with the reference.", "However, human rankers recognize the description in isolation and the object detection error in Model 1, and instead rank Model 2 better.", "We conduct experiments to show that Vrank is superior to existing metrics, many of which lack properties essential to evaluating stories in a human-like fashion.", "Therefore, we utilize VHED to understand and analyze human judgment in evaluating visual stories, and to provide additional metric assessments to reveal the shortcomings of existing metrics.", "The metric assessment experiments are conducted as the story-pair ranking task, in which two stories are ranked based on their story quality.", "We observe three characteristics and design corresponding assessments to demonstrate Vrank's merits.", "First, larger rank differences in story quality are easier for people to differentiate.", "We measure the performance of metrics on story pairs with large gaps versus small gaps to determine whether all metrics have this property.", "Our assessment indicates this property is exclusively held by Vrank.", "Second, human-written stories are not always better than machine-written stories.", "Indeed, 38% of machine-generated stories are better than the references, which suggests that the aforementioned assumption may need to be revisited (Clark et al., 2021).", "We examine the ability of metrics to rank such human-machine pairs, on which Vrank performs relatively well.", "Finally, most generated stories still contain many errors, which serve as signals for human rankers (Modi and Parde, 2019).", "Hence we evaluate the ability of metrics to detect errors and show that Vrank is a better indicator of errors.", "Also, we show that Vrank is able to generalize to other datasets without bias toward VHED.", "In conclusion, Vrank excels in the above assessments and is able to follow human behaviors in ranking.",
"Moreover, Vrank can rank machine and human stories decently and is better at detecting story errors.", "Specifically, we make three major contributions: We re-collect and organize human evaluation results from recent VIST papers to form a new dataset: VHED.", "We propose a novel valid metric, Vrank, for visual storytelling, which appropriately evaluates VIST model performance.", "We propose three assessments for metrics according to human properties and a generalization test to better illustrate the shortcomings of existing VIST metrics.", "Visual Storytelling (VIST) Visual storytelling was introduced by Huang et al. (2016) as the task of generating a coherent story given five images.", "They provided a dataset, the Sequential Images Narrative Dataset (SIND), containing images and references, where references are human-written short stories describing the images.", "For every image prompt (one sequence of photos), there are 2 to 5 references.", "VIST requires a deeper understanding of the photo events to prevent descriptions in isolation (i.e., image captions).", "Researchers have proposed various methods for this task.", "Knowledge graphs are often integrated into models to encourage diversity of terms and plots in the stories (Hsu et al., 2020, 2021a; Chen et al., 2021).", "Some studies use reinforcement learning to reward models that generate stories that contain fewer errors and are more topically focused (Huang et al., 2019; Hu et al., 2020a).", "However, existing evaluation methods are unable to capture the true quality of the generated stories.", "Thus we examine automatic metrics to devise a better way for machines to evaluate stories.", "VIST-Human Evaluation Several VIST generation studies use human evaluation to assess model performance.", "Recent studies apply aspect-based rating evaluation.", "Hu et al. (2020b) and Wang et al. (2020b) ask workers to rate stories based on pre-defined aspects.", "(Hu et al. define relevance, coherence, and expressiveness; Wang et al. define focus, coherence, detail, share, grounded, and human.)", "However, it is difficult to normalize these aspects, as the definition of an aspect varies from paper to paper.", "Also, these aspects are not mutually independent, making it difficult to analyze results based on these ratings.", "Therefore, we consider the ranking method, as it is commonly used among authors (Hsu et al., 2020; Wang et al., 2020b; Hsu et al., 2021a).", "Hsu et al. (2020) ask human annotators to rank five stories from different models based on overall quality.", "Hu et al. (2020b) and Wang et al. (2020b) conduct pairwise human evaluations to rank stories according to different story aspects, where the latter is judged to be closer to human level.",
"These human evaluation results are valuable resources for observing human judgments in visual storytelling.", "Hence, in our work we collect this information for analysis and model training.", "Automatic Metrics Automatic evaluation metrics are widely used in language generation tasks.", "Most reference-based metrics (e.g., BLEU (Papineni et al., 2002), METEOR (Banerjee and Lavie, 2005), and ROUGE (Lin, 2004)) evaluate the n-gram similarity between a generated text and the reference.", "However, referenced metrics correlate poorly with human judgment in dialog generation and story generation tasks (Wang et al., 2018b; Hsu et al., 2019; Modi and Parde, 2019): the generated text is given unreasonable scores due to incongruity with the reference.", "To account for this, several reference-free metrics (Sinha et al., 2020; Guan and Huang, 2020) have been designed to measure generated texts without any reference.", "BERT-Score (Zhang et al., 2020), for instance, uses contextual embeddings to calculate the similarity between candidates and references, and BLEURT (Sellam et al., 2020) uses referenced automatic metrics as supervision signals for pretraining and is fine-tuned on a human judgment evaluation dataset.", "UNION (Guan and Huang, 2020) uses pre-defined negative samples to train a model, in an attempt to provide a metric that specializes in story generation.", "In our analysis, current metrics remain unable to mimic human judgment in discerning quality differences in story pairs.", "The VHED dataset is a collection of human evaluation results from three VIST studies: KGStory (Hsu et al., 2020), PR-VIST (Hsu et al., 2021a), and Stretch-VST (Hsu et al., 2021b).", "All papers followed Hsu et al. (2020)'s human evaluation method using Amazon Mechanical Turk.", "For each task, the workers were to rank the stories by overall quality, from the best story to the worst story.", "Specifically, each task displayed N stories, and each worker ranked each story from 1 to N.", "Details about each paper are listed in Table 1.", "The construction of VHED is shown in Figure 2.",
"Collected from the aforementioned papers, we obtained 4,500 task results.", "Further, we grouped the N stories into story pairs, where the number of story pairs per task is C(N,2).", "The resulting story pairs (x1, x2) are either two machine-generated stories from two different models or one reference and one machine-generated story.", "For each story pair, there are five attributes: Stories: A story pair consists of a better-ranked story and a worse-ranked story.", "The story pair is either a reference with a machine-generated story, or two machine-generated stories.", "Image Sequence IDs: A list of IDs for each of the five images from the SIND dataset (Huang et al., 2016).", "Average Rank: The average of the five workers' story rankings is divided by N for normalization.", "N varies from paper to paper (Table 1).", "Table 1: Statistics of the human evaluation results of KGStory (Hsu et al., 2020), PR-VIST (Hsu et al., 2021a), and Stretch-VST (Hsu et al., 2021b).
Paper | Human Evaluation | Sampling | Tasks | N
KGStory | 2 | 500 | 1,000 | 5
PR-VIST | 6 | 250-500 | 1,000 | 3-4
Stretch-VST | 7 | 250-500 | 2,500 | 2-4", "Ranking Gap: The ranking gap is calculated as the average ranking of x1 minus the average ranking of x2.", "The ranking gap distribution is shown in the appendix (Table 6).", "Human Agreement: Human agreement is when k workers agree that the better-ranked story is better than the worse-ranked story.", "Note that human agreement = 2 is equivalent to human agreement = 3, because 1 person agreeing that story A is better than B is equivalent to 4 people agreeing that story B is better than A.", "Therefore, we kept human agreements = 3, 4, 5 for simple notation.", "For quality control, we remove story pairs with zero ranking gap.", "This yields 13,875 story pairs in total.", "The train-test-validation sets were split at a ratio of 8:1:1 into 11,208, 1,351, and 1,316 story pairs.", "The descriptions of the VIST models' generated stories are included in the appendix.", "As we acquired data about human preferences in story pairs, we conducted analyses to understand the potential patterns for workers when assigning story ranks, the quality gap between machine-generated and human-written stories, and the errors in the stories.", "The results of this observation are crucial for assessing the performance of a metric.", "Worker Ranking Analysis Story pairs are grouped by the same human agreement.", "k denotes a sub-dataset containing story pairs with human agreement = k.", "In Table 2, we calculate the number of story pairs as well as the averaged ranking gap of each sub-dataset.", "For story pairs, we note that story pairs with k = 3 account for 53% of the dataset, meaning that half of the tasks have inconsistent annotations.", "Regardless, this paper evaluates the story pairs with k ≥ 4 to filter out inconsistent human annotations.", "We also note that the ranking gap increases as human agreement increases.", "The ranking gap indicates the quality difference between a better-ranked and a worse-ranked story.", "That is, the difference between a ranked-1 story and a ranked-5 story should be larger than that between a ranked-2 story and a ranked-3 story.", "Table 2: The number and percentage of story pairs, average ranking gap, and machine-better counts of each sub-dataset.
Subset | Story pairs | Ranking gap | Machine better
3 | 6,494 (53%) | 0.123 | 918 (45%)
4 | 3,677 (30%) | 0.247 | 523 (35%)
5 | 2,110 (17%) | 0.416 | 110 (22%)", "From Table 2, we find that story pairs with lower agreement are closer in ranking.", "In other words, a story pair with a marginal quality difference easily leads to inconsistent worker annotations, because it is harder to rank two similar stories.",
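The pair-construction procedure described earlier in this passage (N ranked stories per task, C(N,2) pairs, normalized average ranks, agreement folding, and removal of zero-gap pairs) can be sketched as follows; the input schema is hypothetical, not the released VHED format:

```python
from itertools import combinations

def build_story_pairs(task):
    """Convert one ranking task into story pairs with VHED-style
    attributes. `task["worker_ranks"]` is assumed to hold five lists
    of ranks (1..N), one list per worker."""
    n = len(task["stories"])
    pairs = []
    for i, j in combinations(range(n), 2):
        # average of the five workers' ranks, normalized by N
        avg_i = sum(r[i] for r in task["worker_ranks"]) / 5 / n
        avg_j = sum(r[j] for r in task["worker_ranks"]) / 5 / n
        gap = avg_i - avg_j
        if gap == 0:            # quality control: drop tied pairs
            continue
        # workers agreeing that story i is ranked above story j,
        # folded so that agreements of 1/2 map to 4/3
        agree = sum(r[i] < r[j] for r in task["worker_ranks"])
        agree = max(agree, 5 - agree)
        better, worse = (i, j) if gap < 0 else (j, i)
        pairs.append({"better": task["stories"][better],
                      "worse": task["stories"][worse],
                      "ranking_gap": abs(gap),
                      "human_agreement": agree})
    return pairs
```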
easily leads to inconsistent worker annotations, because it is harder to rank two similar stories.", "Essentially, we expect the metrics to exhibit similar behavior: the larger the ranking gap, the easier it is to rank .", "Who Wins?", "Machine vs. Human Stories Next we revisit the assertion that references are always superior.", "We select story pairs with a reference and a machine-generated story.", "We analyze the number and percentage of references that are ranked better than the generated stories on three human agreements.", "From Table 2, we observe that when more humans agree on the ranking results, the percentage of the reference being better also increases.", "In addition, further analysis shows that, on average, 38% of the machine-generated stories are in fact better than the references, showing that references are not always better than machine-generated stories.", "Error Analysis To understand the difference between betterand worse-ranked stories, deeper analysis into the story content is necessary.", "We randomly sampled 200 stories from VHED (67 human and 134 machine generated) and manually labeled the stories according to the following error aspects: Grammatical error (Gram): Erroneous usage of past/current tense and mistakes in misplaced modifiers.", "Repetition (Rep): Repetitive sentences or phrases at sentenceand story-level.", "Description in isolation (Desc): Sentences that lack consistency, resulting in isolated captions instead of a fluent story.", "Absurdity (Abs): Ambiguous sentences or nonsensical phrases that are incomprehensible to humans.", "Event mismatch (Event): Stories that are off-topic, which present events that are not relevant to the image stream.", "Object mismatch (Obj): Irrelevant nouns that do not appear in the images and are not semantically related.", "We first labeled stories based on all 11 error aspects defined in (Modi and Parde, 2019) and we select the most occurring errors, which are grammar, repetition, description in isolation, and absurdity.", "These four error aspects focus primarily on story coherence and within-story consistency.", "However, visual storytelling requires generated stories to fit the given story images.", "Rohrbach et al. (2019) show that humans are aware of the correctness of image descriptions.", "Also, Wang et al. 
"Therefore, we added event and object mismatch into our analysis.", "The error examples and the correlations between the errors are illustrated in the appendix (Table 9 and Figure 5).", "From our observation, 79.8% of the sampled machine-generated stories contained at least one of the errors in the categories, meaning most VIST models are unable to generate perfect stories.", "In Table 3, the high percentage of object and event mismatch errors also shows that current VIST models do not capture visual groundings accurately.", "This can lead to humans assigning higher scores to human-written stories, since they are most likely to be relevant to the given images.", "Grammatical errors and absurdities are also common in generated text, which can lead to ambiguous stories that humans are unable to comprehend.", "The prevalence of errors makes it essential for evaluation metrics to automatically detect these errors.", "We propose Vrank, a reference-free automatic metric that takes story pairs as input to predict human preferences between the two stories.", "We utilize SimCSE (Gao et al., 2021) to leverage better sentence representations.", "SimCSE uses contrastive learning with dropout as augmentation, and is then trained on natural language inference datasets to obtain better sentence embeddings from BERT (Devlin et al., 2019).", "First, we pre-trained the SimCSE model using SIND reference stories with the Masked Language Model objective.", "Then, we input two stories with a [SEP] token in between through the pre-trained model.", "We use the acquired sentence embeddings and feed them through a regression layer to predict a ranking gap.", "We used mean squared error to calculate the loss between the predicted ranking gap and the true ranking gap.", "After obtaining the ranking gap, we predict which story is better according to the sign of the predicted ranking gap.", "Although Vrank is a simple model fine-tuned solely on human judgment, it still outperforms current existing metrics in our assessments.", "This suggests further potential for use with VHED; more studies can be conducted to replace Vrank with stronger neural network models.", "During model training, since the number of positives and negatives was not balanced in the original dataset, we augmented the data to create a symmetric dataset of VHED to minimize dataset bias.", "(Other configurations, including utilizing visual features and changing the task objective to classifying better- and worse-ranked stories, did not perform better.)", "The ranking gap in the resulting dataset was close to normally distributed.", "We hypothesize that utilizing this feature makes it possible to extract more information, making it easier for the model to learn human judgment.", "However, due to the small amount of data available, high variance is likely to occur during inference (Mosbach et al., 2020).", "Hence, we used all data from VHED, including human agreement = 3, to increase the stability of our model, following Mosbach et al. (2020).",
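A minimal sketch of the architecture just described, assuming a Hugging Face transformers setup (the checkpoint name is illustrative; the additional MLM pre-training on SIND reference stories mentioned above is omitted here):

```python
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class VrankSketch(nn.Module):
    """Encode 'story1 [SEP] story2' with a SimCSE-style BERT encoder
    and regress the ranking gap; the sign of the prediction then
    selects the better story."""
    def __init__(self, name="princeton-nlp/sup-simcse-bert-base-uncased"):
        super().__init__()
        self.tokenizer = AutoTokenizer.from_pretrained(name)
        self.encoder = AutoModel.from_pretrained(name)
        self.regressor = nn.Linear(self.encoder.config.hidden_size, 1)

    def forward(self, story1, story2):
        enc = self.tokenizer(story1, story2, return_tensors="pt",
                             truncation=True, padding=True)
        h = self.encoder(**enc).last_hidden_state[:, 0]  # [CLS] embedding
        return self.regressor(h).squeeze(-1)             # predicted gap

# Training would minimize nn.MSELoss() between the predicted and the
# true ranking gap, as described in the passage above.
```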
"In this section, we describe a series of assessments conducted on existing metrics on VHED, in which the assessment methods are based on the analyses of VHED.", "The objective is to examine whether Vrank is superior to other metrics based on our analysis of VHED.", "Story-Pair Ranking A recent study (Wei and Jia, 2021) illustrates that pairwise accuracy reflects metric performance better than correlation with human evaluation.", "Hence, we propose simple story-pair ranking to evaluate automatic evaluation metrics for visual storytelling.", "The task is to determine the correct ranking order of the stories in a story pair based on the story quality scores predicted by the automatic evaluation metric being assessed.", "Given the story pair (x1, x2), the automatic metric being assessed predicts the corresponding story quality scores (s1, s2), which we compare to the averaged ranks y1 and y2 of x1 and x2 from human evaluation.", "The performance of the evaluation metric on the i-th story pair is formulated as: ranking_acc_i = 1, if s1 > s2 and y1 < y2; 1, if s1 < s2 and y1 > y2; 0, otherwise, (1) where ranking_acc_i = 1 (0) indicates a correct (incorrect) prediction.", "Note that low rank values indicate high ranks.", "The overall metric performance over M story pairs is defined as: avg_ranking_acc = (1/M) * Σ_{i=1}^{M} ranking_acc_i. (2)",
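Eqs. (1) and (2) translate directly into the following sketch (ties are counted as incorrect, which matters for the BLEU-4 analysis later in this passage):

```python
def ranking_acc(s1, s2, y1, y2):
    """Eq. (1): 1 for a correctly ordered pair, 0 otherwise.
    Scores s are higher-is-better; ranks y are lower-is-better."""
    if (s1 > s2 and y1 < y2) or (s1 < s2 and y1 > y2):
        return 1
    return 0

def avg_ranking_acc(scores, ranks):
    """Eq. (2): mean pairwise ranking accuracy over M story pairs.
    `scores` and `ranks` are parallel lists of per-pair tuples."""
    accs = [ranking_acc(s1, s2, y1, y2)
            for (s1, s2), (y1, y2) in zip(scores, ranks)]
    return sum(accs) / len(accs)
```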
"Datasets In addition to VHED, we also collected VIST-Edit (Hsu et al., 2019) for story-pair ranking (VIST-Edit: https://github.com/tingyaohsu/VIST-Edit).", "VIST-Edit includes 2,981 visual stories generated by AREL (Wang et al., 2018a) and GLAC (Kim et al., 2018), and 14,905 human-edited visual stories, that is, AREL- and GLAC-generated stories edited by workers.", "Their paper shows that the crowd workers' edits systematically increased the lexical diversity of the stories.", "Since the purpose of the editing was to improve the machine-generated stories, we paired up human-edited stories and machine-generated stories as better-ranked and worse-ranked samples (labeled as 1 and 2), resulting in 14,905 story pairs.", "Comparing VHED to VIST-Edit, VHED contains references and multiple models' generated stories, but VIST-Edit has only human-machine story pairs.", "Additionally, VIST-Edit is not in Vrank's training data.", "VIST-Edit is utilized only for metric performance reports, serving as an unseen dataset for Vrank.", "Baseline Automatic Metrics We first consider the traditional n-gram-based reference-based metrics BLEU, ROUGE-L, METEOR, and SacreBLEU (Keenan, 2017).", "We also implement the more recent BERT-Score, BLEURT, and UNION as baseline metrics.", "In addition to the above automatic metrics, we also include a random baseline, denoted as Random in Table 4, which provides a random score for each story, as the lower bound.", "A common practice for reference-based metrics is that a candidate story is scored against each reference r_j in a gold reference set R = {r_i}_{i=1}^{n}, and the highest score is used.", "However, applying this method to a reference-machine story pair would always result in the reference having a full score, because of the exact match between the reference and the gold reference set.", "To ensure a fair evaluation and avoid meaningless matching, we first check that the gold references do not include the reference.", "To this end, we propose the Reference Absent Algorithm for evaluating story pairs containing the reference story (or stories), as in Eq. 3, which removes r_j from R when any of the candidate stories in a story pair (x = {x1, x2}) is identical to r_j: s_j = max_{r in R \ x} metric(x_j, r), (3) where metric(·) can be any reference-based metric and s_j is the story quality score for the j-th story in a story pair.", "This algorithm applies only when evaluating story pairs containing references, i.e., reference-machine pairs in this paper.", "We believe such behavior [...]", "[Figure 3: Average ranking accuracy for each metric on four sub-datasets with different ranking gaps r (r >= 0.3; 0.3 > r >= 0.2; 0.2 > r >= 0.1; 0.1 > r).]", "Pairwise Story Evaluation Accuracy: Metric's ability to determine the correct ranking order in story pairs.", "The average ranking accuracy of each automatic metric on VHED and VIST-Edit is presented in Table 4 (left).", "Around 50% corresponds to random guessing, as shown as Random in the table.", "Vrank shows superior performance on VHED and VIST-Edit, where VIST-Edit is the dataset unseen by Vrank.", "High performance on VIST-Edit and VHED indicates that Vrank has the ability to distinguish diverse story pairs.", "In contrast, we observe unexpectedly low performance for most baseline metrics, as they perform no better than the Random baseline.", "BLEU-4 especially struggles to rank the stories in both datasets.", "Further analysis suggests that BLEU-4 marked 80% of the stories as 0, and Equation 1 coincidentally treated them as incorrect predictions because it discourages ties.", "BLEURT, in turn, also performed poorly because it relies on reference-based metrics as signals for training.", "Reference-free metrics, especially UNION, perform well on VIST-Edit.", "However, its design is not generalizable to VHED.", "Worker Ranking Behavior on Metrics: The larger the ranking gap, the easier it is to rank.", "The ranking gap is the difference between a better-ranked and a worse-ranked sample's average ranks.", "VHED is categorized into four sub-datasets with different ranking gaps.", "This assessment tests each metric's ability to mimic the worker ranking behavior observed in the analysis.", "Story pairs with larger gaps suggest stronger linguistic differences and are likely easier to rank, whereas those with smaller gaps are likely more difficult.", "In Fig. 3, all baseline automatic metrics, including metrics not reported in the figure, show randomly distributed scores, most of which remain around 50%, thus failing to exhibit such behavior.", "On the contrary, Vrank yields an ideal decrease.", "Starting with ranking gaps over 0.3, the accuracy reaches 0.85, with a gradual decrease afterward.", "Machine and Human on Metrics: Machines are sometimes better than humans.", "Two aspects are studied in this section.", "First, we evaluate the ability of Vrank and reference-based metrics to rank reference-machine (R&M) pairs.", "Although some machine texts have progressed to human level, to our knowledge, there has been little investigation of metrics' ability to evaluate references and machines.", "We apply reference-based metrics with Eq. 3.",
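A sketch of the Reference Absent Algorithm (Eq. 3, as reconstructed above); `metric` stands for any reference-based scoring function:

```python
def reference_absent_scores(story_pair, references, metric):
    """Eq. (3): drop any gold reference identical to a candidate story,
    then score each story as its best match against what remains."""
    gold = [r for r in references if r not in story_pair]
    if not gold:               # degenerate case: no references left
        return [0.0 for _ in story_pair]
    return [max(metric(story, ref) for ref in gold)
            for story in story_pair]
```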
"This results in poor performance for the reference-based metrics, as shown in the R&M columns of Table 4.", "(A complete table without Eq. 3 can be found in the appendix, Table 7.)", "An explanation is that, since the reference is removed from the reference set by Eq. 3, the reference needs to match the remaining references in the reference set.", "Although most references are on topic, the stories are highly diverse (Zhu et al., 2020).", "These metrics are unable to calculate similarity at the semantic level; thus, they result in poor performance.", "On the contrary, Vrank is a deep learning model, trained on VHED, and has thus learned to rate based on story quality rather than similarity.", "We also find that Vrank ranks correctly when the machine story is better than the reference, showing that Vrank yields 26.5% recall where the other metrics have 0 recall without Eq. 3, and 18% with Eq. 3.", "Second, we observe the performance of metrics on M&M (machine-machine) pairs.", "M&M ranking gaps are smaller than those of R&M pairs (0.18 vs. 0.21), making them harder to rank because their story qualities are closer.", "However, Vrank still shows promising performance when ranking M&M pairs.", "Errors in Metrics: Metric's ability to detect errors.", "Current generated stories often contain errors, which prompt human evaluators to assign lower scores.", "It is crucial for automatic metrics to also recognize such errors when judging generated text.", "To do this, we adapted the point-biserial correlation coefficient to analyze the correlation between binary annotated errors and metric scores.", "The correlation between metrics and errors is presented in Table 5: existing metrics are not able to detect errors, as the correlation coefficients are low.", "Table 5: Correlation of human rankings and automatic metric scores with the corresponding error categories.
Error | Human | Vrank | UNION-ROC | UNION-WP | BLEURT | BERT-Score | ROUGE-L | METEOR
Gram | -0.107 | -0.021 | -0.099 | -0.087 | -0.228 | -0.124 | 0.024 | -0.167
Desc | -0.212 | -0.154 | -0.149 | 0.154 | -0.081 | 0.080 | 0.114 | -0.018
Rep | -0.130 | -0.042 | -0.120 | -0.411 | 0.168 | 0.134 | 0.079 | -0.034
Abs | -0.309 | -0.308 | 0.003 | 0.120 | -0.113 | 0.105 | 0.092 | -0.025
Obj | -0.067 | -0.157 | -0.089 | 0.158 | -0.302 | -0.111 | -0.048 | -0.098
Event | -0.191 | -0.093 | 0.008 | -0.001 | -0.131 | 0.043 | 0.138 | -0.099", "From the correlation coefficients between the human ranking score and each error aspect, we observe that human evaluation of stories may be influenced by error aspects, especially absurdity and description in isolation.", "In general, Vrank performs best in detecting absurdity and description in isolation.", "UNION-WP performs best in correlation with repetition, which is reasonable since UNION is trained to discriminate erroneous stories that are repetitive in structure.", "In summary, current metrics remain unable to detect errors and thus cannot evaluate coherence efficiently.", "Metrics' ability to detect errors may give clearer indications of the quality of generated texts.",
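The error-correlation analysis reported in Table 5 can be reproduced with SciPy's point-biserial coefficient; a minimal sketch (inputs are a binary error-annotation vector and the corresponding metric scores):

```python
from scipy.stats import pointbiserialr

def error_metric_correlation(has_error, scores):
    """Point-biserial correlation between binary error labels
    (1 = the story exhibits the error) and continuous metric scores."""
    r, p_value = pointbiserialr(has_error, scores)
    return r, p_value
```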
"In addition to VIST, we expect Vrank to reasonably evaluate the quality of text as well.", "To determine whether Vrank generalizes to textual stories, we selected the MANS dataset (Guan et al., 2021), an image-free storytelling dataset in which the stories are derived from the ROCStories corpus (Mostafazadeh et al., 2016).", "MANS includes 200 story prompts, where each prompt includes five model-generated stories and a reference.", "However, it does not contain human story rankings.", "Thus, for each story prompt, we asked five workers from Amazon Mechanical Turk to rank the five stories to obtain ranking scores.", "Following the VHED construction procedure, the ranked stories were converted into story pairs, making for 1,112 story pairs for which 3 workers agreed on the ranking, 605 story pairs for which 4 workers agreed, and 132 story pairs for which 5 workers agreed.", "Likewise, we evaluate story pairs with k ≥ 4.", "The results of Vrank and the baseline automatic metrics when ranking MANS are shown in Table 6.", "We find that Vrank outperforms the baseline metrics on story pairs with k ≥ 4, whereas the latter still show limited abilities to rank the MANS dataset.", "In general, the accuracy of automatic evaluation on MANS is lower than that on VHED.", "This may be due to the comparably unconstrained writing styles of pure textual stories.", "An example of the evaluation on stories is given in the appendix (Table 8).", "We present VHED and Vrank, the first dataset of human evaluation results and the first evaluation metric of this kind for VIST.", "We show that Vrank performs significantly better in three assessment tasks and generalizes to other datasets.", "Also, recent automatic metrics are ill-suited to evaluating visual stories, especially human-level written stories.", "We welcome researchers to share their human evaluation results with the community to broaden the data domain, to obtain more knowledge about human judgment, and to improve the performance of Vrank.", "As the gap between machines and humans continues to decrease, stronger metrics will be needed to evaluate machine and human stories.", "Improving Vrank's performance so that it can replace reference-based metrics is our future goal.", "This research is partially supported by the Ministry of Science and Technology, Taiwan, under project contracts 108-2221-E-001-012-MY3 and 110-2634-F-002-051-.", "We also thank the crowd workers for participating in this project." ]
[ "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "method", "abstain", "result", "abstain", "result", "result", "abstain", "objective", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "result", "result", "abstain", "abstain", "objective", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "method", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "abstain", "result", "abstain", "objective", "other", "other" ]
[ "We present a new dataset for machine comprehension in the medical domain.", "Our dataset uses clinical case reports with around 100,000 gap-filling queries about these cases.", "We apply several baselines and state-of-the-art neural readers to the dataset, and observe a considerable gap in performance (20% F1) between the best human and machine readers.", "We analyze the skills required for successful answering and show how reader performance varies depending on the applicable skills.", "We find that inferences using domain knowledge and object tracking are the most frequently required skills, and that recognizing omitted information and spatio-temporal reasoning are the most difficult for the machines.", "Machine comprehension is a task in which a system reads a text passage and then answers questions about it.", "The progress in machine comprehension heavily depends on the introduction of new datasets (Burges, 2013), which encourages the development of new algorithms and deepens our understanding of the (linguistic) challenges that can or can not be tackled well by these algorithms.", "Recently, a number of reading comprehension datasets have been proposed ( 2), differing in various aspects such as mode of construction, answer-query formulation and required understanding skills.", "Most are open-domain datasets built from news, fiction and Wikipedia texts.", "For specialized domains, however, large machine comprehension datasets are extremely scarce (Welbl et al., 2017a), and We provide the information about accessing the dataset, as well as the code for the experiments, at http://github.", "passage: [...] A gradual improvement in clinical and laboratory status was achieved within 20 days of antituberculous treatment .", "The patient was then subjected to a thoracic CT scan that also showed significant radiological improvement .", "Thereafter , tapering of corticosteroids was initiated with no clinical relapse .", "The patient was discharged after being treated for a total of 30 days and continued receiving antituberculous therapy with no reported problems for a total of 6 months under the supervision of his hometown physicians .", "[...] query: If steroids are used , great caution should be exercised on their gradual tapering to avoid .", "the required comprehension skills poorly understood.", "With our work we hope to narrow this gap by proposing a new resource for reading comprehension in the clinical domain, and by analyzing the different types of comprehension skills that are triggered while answering (Sugawara et al., 2017; Lai et al., 2017).", "Machine comprehension for healthcare and medicine has received little attention so far, although it offers great potential for practical use.", "A typical application would be clinical decision support, where given a massive amount of text, a clinician asks questions about either external, medical knowledge (reading literature) or about particular patients (reading electronic health records).", "Currently, patient-specific questions are tackled by manually browsing or searching those records.", "This task can be facilitated by summarization and QA systems (Demner-Fushman and Lin, 2007; Demner-Fushman et al., 2009), and we believe, by fine-grained machine reading.", "Reading comprehension systems that perform on a finer level could play an important role especially when combined with 1551 document retrieval to perform machine reading at scale, such as in the models of Chen et al. (2017) and Watanabe et al. 
"For our dataset, we construct queries, answers and supporting passages from BMJ Case Reports, the largest online repository of such documents.", "A case report is a detailed description of a clinical case that focuses on rare diseases, unusual presentation of common conditions and novel treatment methods.", "Each report contains a Learning points section, summarizing the key pieces of information from that report.", "The learning points are typically paraphrased portions of the passage text and do not match passage sentences exactly.", "We use these learning points to create queries by blanking out a medical entity.", "To counteract potential errors and inconsistencies due to automated dataset creation, we perform several checks to improve the quality of the dataset (§3).", "Our dataset contains around 100,000 queries on 12,000 case reports, has long support passages (around 1,500 tokens on average) and includes answers which are single- or multi-word medical entities.", "We show an example from the dataset in Figure 1.", "We examine the performance on the dataset in two ways.", "First, we report machine performance for several baselines and neural readers.", "To enable a more flexible answer evaluation, we expand the answers with their respective synonyms from a medical knowledge base, and additionally supplement the standard evaluation metrics with BLEU and embedding-based methods.", "We investigate different ways of representing medical entities in the text and how this affects the neural readers.", "We obtain the best results with a recurrent neural network (RNN) with gated attention (Dhingra et al., 2017a), but a simple approach based on embedding similarity proves to be a strong baseline as well.", "Second, we look at how well humans perform on this task, by asking both a medical expert and a novice to answer a portion of the validation set.", "When categorizing the skills necessary to find the right answer, we observe that a large number of comprehension skills get activated and that prior knowledge, in the form of the ability to perform lexico-grammatical inferences, matters the most.", "This suggests that for our dataset, and possibly for domain-specific datasets more generally, more background knowledge should be incorporated in machine comprehension models.", "The current gap between the best machine and the best human performance is nearly 20% F1, which leaves ample space for further study of machine readers on our dataset.", "Table 1: Survey of closed-domain reading comprehension datasets.
Dataset | Question origin | Domain | Size
CliCR (this work) | Learning points | Medical | 105K
Quasar-S (Dhingra et al., 2017b) | Definitions | Software | 37K
SciQ (Welbl et al., 2017a) | Crowdsourced | Science | 14K
MedHop (Welbl et al., 2017b) | KB | Drugs | 2.5K
Biology (Berant et al., 2014) | Domain expert | Biology | 585
Algebra (Kushman et al., 2014) | Crowdsourced | Algebra | 514
QA4MRE (Sutcliffe et al., 2013) | Annotator | Various | 240", "In brief, the contributions of our paper are: We propose a large dataset for reading comprehension in the medical domain, using clinical case descriptions.", "We carry out an empirical analysis of a) system and human performance on reading comprehension, and b) comprehension skills that are required for answering the queries correctly and that allow us to position the dataset according to its difficulty on each of the skills.", "Numerous general-domain datasets have been recently created to allow machine comprehension using data-intensive methods.", "These datasets were collected from Wikipedia (Hewlett et al., 2016; Joshi et al., 2017; Rajpurkar et al., 2016), web search queries (Nguyen et al., 2016), news articles (Hermann et al., 2015; Onishi et al., 2016; Trischler et al., 2017), books (Bajgar et al., 2016; Hill et al., 2016; Paperno et al., 2016) and English exams (Lai et al., 2017).",
"In Table 1, we compare our dataset to several domain-specific datasets for machine comprehension.", "In Quasar-S, the queries are constructed from definitions of software entity tags in a community QA website, while in our case the queries are more varied and explicitly relate to the supporting passages.", "SciQ is a dataset of science exam questions, in which question-answer pairs are used to retrieve the text passages.", "For each question, four candidate answers are available.", "In our dataset, the number of candidate answers is much higher, as the candidate answers come from the relatively long passages.", "Other datasets mentioned in the table are smaller, so they could not be used as training sets for statistical NLP models.", "Cloze datasets require the reader to fill in gaps by relying on accompanying text.", "Representative datasets are the Children's Book Test (Hill et al., 2016) and Book Test (Bajgar et al., 2016), in which queries are created by removing a word or a named entity from the running text in a book; and Hermann et al. (2015), who similarly to us blank out entities in abstractive CNN and Daily Mail summaries, but who are only concerned with short proper nouns and short passages.", "Who-did-what (Onishi et al., 2016) requires the reader to select the person name from a short candidate list that best answers the query about a news event.", "They do not use summaries for query formation but remove a named entity from the initial sentence in a news article, and then perform information retrieval to find independent passages relevant to the query.", "Another cloze dataset for language understanding is ROCStories (Mostafazadeh et al., 2016), but it is targeted more towards script knowledge evaluation, and only contains five-sentence stories.", "Another related task is predicting rare entities only, with a focus on improving a reading comprehension system with external knowledge sources (Long et al., 2017).", "Another popular way of creating datasets for reading comprehension is crowdsourcing (Rajpurkar et al., 2016; Richardson et al., 2013; Nguyen et al., 2016; Trischler et al., 2017).", "These datasets exist primarily for the general domain; for specialized domains where background knowledge is crucial, crowdsourcing is intuitively less suitable (Welbl et al., 2017b), although some positive precedent exists, for example, in crowdsourcing annotations of radiology reports (Cocos et al., 2015).", "Compared to automated dataset construction, crowdsourcing is more likely to provide high-quality queries and answers.", "On the other hand, human question generation may also lead to less varied datasets, as questions would tend to be of wh-type; for cloze datasets, the questions may be more varied and might require readers to possess a different set of skills.", "(Support for this is given in Sugawara et al. (2017), who show that the Who-did-what dataset, for example, requires on average a larger number of reading skills than SQuAD (Rajpurkar et al., 2016) and MCTest (Richardson et al., 2013).)",
(2017), who show that the Who-did-what dataset, for example, requires on average a larger number of reading skills than SQuAD (Rajpurkar et al., 2016) and MCTest (Richardson et al., 2013).", "We collected the articles from BMJ Case Reports (http://casereports.bmj.com/).", "The data span the years 2005-2016 and amount to almost 12 thousand reports.", "We removed the HTML boilerplate from the crawled reports using jusText (https://pypi.python.org/pypi/jusText), segmented and tokenized the texts with cTakes (Savova et al., 2010), and annotated the medical entities using Clamp (Soysal et al., 2017).", "We apply two simple heuristics to refine the recognized entities and to decrease their sparsity.", "Namely, we move the function words (determiners and pronouns) from the beginning of the entity outside of it, and we adjust the entity boundary so that it does not include a parenthetical at the end of the entity.", "Clamp assigns entities following the i2b2-2010 shared task specifications (Uzuner et al., 2011).", "For each entity, a concept unique identifier (CUI) is also available, which links it to the UMLS Metathesaurus (Lindberg et al., 1993).", "To check the quality of the recognized entities, we carried out a small manual analysis on 250 entities.", "We found that in 89% of cases, the boundaries were correct and defined a true entity.", "Wrongly recognized cases occurred mostly when two entities were coordinated and recognized as one; when a verb was wrongly included in the entity; or when a pre-modifier was left out.", "We create a query by replacing a medical entity in one learning point with a blank.", "For example, in a report describing comorbid disorders of ADHD, we could obtain the following query: (1) Patients with ADHD have higher incidence of ___.", "The missing entity, enuresis, is taken as the correct answer.", "Even though one query corresponds to at most one learning point, there can be more than one query built from a learning point.", "Occasionally, a learning point contains an exact repetition from the passage.", "These instances would be trivial to answer, so we remove them.", "We count as an exact match every instance whose longer side to the left/right of the query blank coincides with a part of the passage text.", "This curation step reduces the dataset size by 5%.", "More commonly, the learning points are paraphrases of crucial parts of the passage.", "Sometimes, the entity answering the query is expressed differently in the passage.", "For example, in place of enuresis, the passage might include its synonym bedwetting.", "We manage these cases in two ways, by extending the set of answers for a certain query (§3.2), and by adding a semantic relatedness metric to the standard evaluation (§6).", "We account for lexical variation of the ground-truth answers (compared to mentions in the passages) by extending each original ground-truth answer $a$ to a set of ground-truth answers $A$ using a knowledge base.", "Since our entity recognizer already provides the CUI labels, we can use them to obtain the list of alternative word and phrase forms (synonyms, abbreviations and acronyms) from UMLS.", "Similarly to previous work (Choi et al., 2016; Hewlett et al., 2016), for certain queries none of the answers in $A$ occurs verbatim in the passage.", "We have found upon manual inspection that this is mostly due to lexical variation that is not captured by answer extension, and to a lesser degree, due to the introduction of entirely new information in the
learning point and the entity recognition errors.", "In the empirical part, we use for training only the instances for which at least one answer occurs in the passage, but we evaluate on all instances in the validation and test sets, including those for which $A \cap E = \emptyset$, where $E$ is the set of all entities in the passage.", "This mimics a likely real-life scenario where the set of ground-truth answers is a priori unknown.", "The reading comprehension problem in our case can be represented as a tuple $(q, p, A)$, where $q$ is the query, built from a learning point; the passage $p$ is the entire report excluding the Learning points section; and $A$ is the set of ground-truth entities answering $q$.", "In defining the task, it is important to consider how to take into account entity annotation and how to define the answer output space.", "We look at these more closely in the rest of this section.", "Whenever the entities are marked in the passage, the system can learn to exploit this cue to find the answers more easily (Wang et al., 2017).", "Although this simplifies the task, it also makes it less realistic, as the entities may not be recognized at test time.", "Realizing that the presence of entities makes the task easier for the machines, Hermann et al. (2015) anonymize the entities, also with a goal of discouraging language model solutions to the [Table 2: Data statistics based on the lowercased dataset -- N of cases: 11,846; N of queries in train/dev/test: 91,344/6,391/7,184; N of tokens in passages: 16,544,217; N of word types in passages: 112,673; N of entity types in passages: 591,960; N of distinct answers: 56,093; N of distinct answers (incl. extended): 288,211; % of answers verbatim in passage: 59]", "queries.", "In our case, it is not clear how relevant the anonymization is, since we deal with medical entities, which have different properties than proper name entities (Kim et al., 2003; Niu et al., 2003).", "We explore different entity-annotation choices in the empirical part, where we refer to them as Ent (entities marked) and Anonym (entities marked but anonymized).", "We further examine a more challenging setup in which the reader cannot rely on entity markers, as they are not present in the passage (NoEnt).", "In all cases, the reader chooses an answer among the candidates $E$ collected from all entities in the passage.", "Multi-word entities, which are common in our dataset, are treated as a single token by Ent and Anonym.", "We now describe the dataset in more detail, starting with the general statistics summarized in Table 2. It is worth pointing out that the support passages are rather long, which stems from the data origin (journal articles).", "We show the passage length distribution in Figure 2a; the average length is 1,466 tokens.", "Furthermore, passages are rich with medical entities.", "There is little repetition of answers: the total of around 100,000 queries are answered by 50,000 distinct entities.", "Upon extending the answer set with UMLS, we introduce on average four alternative answers for each original one.", "In 59% of instances, the answer entity is found verbatim in the relevant passage.", "The answers can belong to any of the problem, treatment or test categories (Table 3), and usually consist of multiple words (Figure 2b).", "The diversity of medical specialties represented in the articles is shown in Figure 3. 4.1 Analysis of comprehension skills We estimate the types of skills required in answering by following the categorization of Sugawara et al.
(2017).", "We include the skill definitions with examples from our dataset in Appendix B. We annotated 100 instances in the validation set (with ground-truth answers provided), which yielded on average 2.85 skills per query.", "The distribution of the required skills is shown in Figure 4. In coml l l l l l l l l l l l l l math analogy causality elaboration punctuation ellipsis none logical spatiotemporal meta coreference complex tracking bridging 0 10 20 30 40 50 60 70 Percentage S k ill l CliCRQA4MRESQuADWhodidwhat Figure 4: Percentage of times a skill is required in a given dataset.", "parison to the general-domain datasets (SQuAD, Who-did-what), our dataset and QA4MRE (which is also a domain-specific dataset, but with human-generated questions) require more bridging inferences (inferences using background knowledge about the domain), spatio-temporal reasoning and coreference resolution.", "In our dataset, meta knowledge and object tracking are required more often than in any other dataset.", "This can be explained by the data origin and the nature of queries.", "In the case reports, a prominent topic can be discussed which the author refers to in the query, but the query itself is never answered in the passage (meta knowledge).", "Furthermore, the authors often enumerate medical entities in the query, which leads to the frequent use of object tracking.", "The queries which were unanswerable are marked as none.", "The fraction of these cases was around 16%.", "In our experience, the annotation of skills proved quite challenging due to certain confusables.", "For example, object tracking and coreference both need to maintain the link between objects; object tracking, which includes establishing set relations and membership, may be overlaid with the schematic clause relation skill (subordination); and bridging inference can overlap with coreference resolution.", "Nevertheless, we adhered to this classification of skills to increase comparability to other datasets included in Figure 4. 5 Methods 5.1 Baselines Our simplest baselines that we apply on the test set include choosing a random entity ( rand-entity ) 1555 and selecting the most frequent passage entity ( maxfreq-entity ) as the answer.", "We also include a distance-based method that uses word embeddings ( sim-entity) .", "Here, we vectorize the passage and the query, and then choose that entity from the passage whose representation has the highest cosine similarity to the query representation: sim-entity = argmax i E cos (cid:0) X j C i c j , X k Q q k (cid:1) , (1) where c, q R d .", "The multiset C i contains the words { x i n , . . . , x i 1 , x i +1 , . . . , x i + n } surrounding the passage entity i E .", "We define Q , the context words of the query, likewise.", "To find out how well the queries can be answered without reading the passage, we also predict the most likely continuation with a language model ( lang-model ).", "We trained a 4-gram Kneser-Ney model on CliCR training data (with multi-word entities represented as a single token) using SRILM (Stolcke, 2002).", "We apply two types of bidirectional RNNs to our data.", "Following Wang et al. (2017), we distinguish between aggregation readers and explicit reference readers, which differ in their formulation of the attention mechanism and how it is being used for answer prediction.", "Stanford Attentive (SA) Reader The model proposed by Chen et al. 
(2016) is an aggregation reader based on the Attentive Reader (Hermann et al., 2015).", "It predicts the answer using: $a = \arg\max_{i \in E} e_o(i)^{\top} o$ (2), where $e_o(i)$ is the answer's output embedding and $o$ is the passage representation obtained by weighting every token representation in the passage with attention: $o = \sum_t \alpha_t h_t$.", "The attention mechanism is used here to measure the compatibility between token ($h_t$) and query ($q$) representations with a bilinear form, $\alpha_t = \mathrm{softmax}_t \, h_t^{\top} W q$.", "At prediction time, attention should highlight the position $t$ in the passage where the answer occurs.", "Note that the prediction relies on the aggregate representation $o$, hence the name of the reader category.", "As we see in (2), the prediction score does not allow accounting for multi-word entities, unless they are treated as a single token.", "Returning to our different set-ups based on entity annotation (§3.3), this means that we can apply the SA reader with the Ent and Anonym setups, but not with NoEnt, where multi-word answers should be allowed.", "Gated-Attention (GA) Reader: Dhingra et al. (2017a) investigate neural readers with a fine-grained attention mechanism that learns token representations for the passage that are also conditional on the query, but are in addition refined through multiple hops of the network.", "The model predicts the answer using attention weights with explicit reference to answer positions in the passage: $a = \arg\max_{i \in E} \sum_{t \in R(i,p)} \alpha_t$ (3), where $R$ is the set of indices in passage $p$ at which a token from the candidate $i$ occurs.", "This operation is also called pointer sum attention (Kadlec et al., 2016).", "Since the model marks the references for each token in the answer separately, it allows us to investigate also the NoEnt set-up.", "We train each reader with the best hyper-parameters found on the validation set using random search (Bergstra and Bengio, 2012), and evaluate it on the test part of the dataset.", "We provide more details about parameter optimization in Appendix A.
The models use word embeddings pre-trained on biomedical texts.", "We induce the word embeddings on a combination of the CliCR training corpus and PubMed abstracts with open-access PMC articles available until 2015 (segmented and tokenized), amounting to over 9 billion tokens (Hakala et al., 2016).", "Considering the large effect of hyper-parameter selection on the quality of word embeddings (Levy et al., 2015), we optimize the embedding hyper-parameters also using random search.", "A model $f$ takes as input a passage-query pair and outputs an answer $a$.", "We carry out the evaluation (we assume the candidate entities are known in advance).", "In our case, the answer is a word or a word phrase representing a medical entity.", "Alternatively, one could also take the UMLS CUI identifier as the answering unit.", "However, in that case, it would mean that sometimes the original word phrase is lost.", "This is because entity linking with CUIs can be noisy, and only a part of a word phrase may be linked to the ontology.", "In the current setup, we are able to keep both the original word phrase as well as the extended answers.", "The CUI information is still an integral part of the answer field in our dataset, so it can be used by other researchers if preferred.", "with different metrics described below.", "The final score $m_v$ for a metric $v$ is obtained by averaging over the test set: $m_v(f) = \frac{1}{|D_{test}|} \sum_{(p,q,A) \in D_{test}} \max_{a \in A} v(f(p,q), a)$.", "Since there are multiple correct answers $A$, we take the highest scoring answer at each instance, as done in Rajpurkar et al. (2016).", "Note that in the dataset we do not supply the candidate answers; in the experiments, we constrain the candidates to the set of entities in the passage.", "The two standardly used metrics for machine comprehension evaluation are the exact match (EM) and the F1 score.", "For EM, the predicted and the ground truth answers must match precisely, save for articles, punctuation and case distinction (the same holds for the other metrics).", "The F1 metric is applied per instance and measures the overlap between the prediction and the ground truth, which are treated as bags of words.", "While these two metrics are arguably sufficient in news-style machine comprehension, where the entities are proper nouns which allow for little variation and synonymy, in our case the medical entities are often mostly common nouns modified by specifiers and qualifiers.", "To take into account potentially large lexical and word-order variation, we use two additional metrics.", "First, we measure BLEU (Papineni et al., 2002) for n-grams of length 2 (B2 for short) and 4 (B4) using the package by Chen et al. (2015), with which we aim to capture contiguity of tokens in longer answers.", "Second, it may occur that answers contain no word overlap yet are still good candidates because of their semantic relatedness, as in 'renal failure' and 'kidney breakdown'.", "We take this into account by using an embedding metric (Emb), in which we construct mean vectors for both the ground-truth and system answer sequences, and then compare them with the cosine similarity.", "This and other embedding metrics for evaluation were previously studied in dialog-system research (Liu et al., 2016).", "We show the results in Table 4.
We see that answer prediction based on a contextual representation of queries and passages (sim-entity) achieves a strong base performance that is only outperformed by the GA", "(In precision, the number of correct words is divided by the number of all predicted words.", "In recall, the former is divided by the number of words in the ground-truth answer.)", "reader.", "The language model performs poorly on EM and F1, but the embedding-metric score is higher, likely reflecting the fact that the predicted answers, though mostly incorrect, are related to the ground-truth answers.", "The poor performance means that based on queries alone (without reading the passage), it is difficult to provide accurate answers.", "The GA reader performs well across all entity set-ups, even when the entities are not marked in the passage.", "Interestingly, the exact match and BLEU scores in this case are much lower compared to the other entity set-ups.", "Upon inspecting the predicted answers more closely, we have observed that GA-NoEnt tends to predict longer answers than GA-Ent/Anonym.", "For example, the average predicted answer length for GA-NoEnt was as high as 3.7 tokens, whereas for the other two set-ups and the ground-truth answers the numbers range between 2.3 and 2.5.", "A plausible explanation for this lies in how GA reaches its prediction (3), which is by accumulating the attention weights without normalizing.", "This would then drive the model to prefer longer answers.", "For example, for the ground-truth entity chest CT, GA-NoEnt predicts interval CT scans of the chest.", "Although all neural models use pre-trained word embeddings, for Ent and Anonym the multi-word entities do not have pre-trained embeddings, since our embeddings are induced on the word level.", "This may partly explain the competitive performance of NoEnt compared to Ent.", "We leave the integration of entity embeddings for future work.", "The results for the SA reader are far below the performance of the GA reader.", "We also see that it performs much better on anonymized entities than on non-anonymized ones.", "This is in line with Wang et al. (2017), who find that the SA reader suffers a drop of 19 points in exact match on the Who-did-what dataset when anonymization is not done.", "A possible explanation is that anonymization reduces the output space to only several hundred entity candidates for which the output embedding needs to be trained.", "When we do not use anonymization, the set of output entities increases to the set of all entity types found in all passages, which is several orders of magnitude more.", "While this effect also occurs for the GA reader, it is less pronounced, because the GA reader scores words in the passage and does not need to learn separate answer word embeddings.", "To measure the accuracy of human answering, we have used the same sample of data instances as used for the analysis of skills.", "The queries were answered separately by a novice reader (linguistics background, little-to-none medical knowledge) and by an expert reader (both linguistics and medical background).", "The annotators needed around 15 minutes on average to read the passage and answer the query.", "The results are shown at the bottom of Table 4.
The expert scores higher across all evaluation metrics, with as much as a 7-point advantage in F1.", "This advantage comes largely from the better performance on those instances where bridging inferences are required (the average F1 score was 10 points higher on these queries), which suggests that domain knowledge is beneficial in the comprehension task.", "For a novice in a specialized domain, it is harder to build a good situation model that would lead to successful comprehension, since it requires more effort: active, strategic processing and establishing ontological relationships in that specific domain.", "For an expert reader this process is more automatized (Kintsch and Rawson, 2008).", "We can see from the table that the best human performance is well below its theoretical upper bound of 100% F1.", "An important part of the explanation for this lies in the automated dataset construction, which leaves certain queries unanswerable, especially when the authors do not refer to a part of the article but introduce completely new information.", "Another reason is the problem of answer openness: typically more than one correct answer is possible, and the answers can be correct to various degrees, which we aimed to capture with the use of the embedding metric in the evaluation.", "Nevertheless, the gap between the best human and machine F1 score is large (around 20 points), leaving considerable space for future applications of machine readers on our dataset.", "7.2 Breakdown of results by skill To see how the answering performance relates to the skill requirements, we have analyzed the part of the validation set annotated with the skills by averaging F1 values for all instances with a particular skill.", "In this way, we are able to break down both human and machine performance skill-wise, as shown in Figure 5. Because of the small sample size, the results should only be taken as a general indication.", "The most difficult cases for the GA reader are those annotated with none (unanswerable) and ellipsis (recognizing implicit and omitted information), ignoring analogy, for which we only have a single annotated case.", "Furthermore, spatio-temporal reasoning, elaboration (inferences using general knowledge) and bridging, which is also the most commonly required skill, are the next most difficult ones.", "The human scores are mostly much higher, which is especially apparent for spatio-temporal reasoning, logical skills and the skill involving punctuation.", "Our findings align with those of Chu et al. (2017) on the Lambada dataset (Paperno et al., 2016): although they used a different categorization of comprehension skills, they also find that the GA reader has most difficulties with elaboration (which they refer to as external knowledge), followed by coreference resolution. (For comparison, the human-machine gap for SQuAD was 12.2 and for NewsQA 19.8 (Trischler et al., 2017).)", "Towards the machine comprehension of text: An essay.", "Technical report, Microsoft Research Technical Report MSR-TR-2013-125, 2013.", "Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollar, and C. Lawrence Zitnick.", "2015.", "Microsoft COCO captions: Data collection and evaluation server.", "arXiv preprint arXiv:1504.00325.", "E. Choi, D. Hewlett, A. Lacoste, I. Polosukhin, J. Uszkoreit, and J. Berant.", "2016.", "Hierarchical question answering for long documents.", "arXiv preprint arXiv:1611.01839.", "Dina Demner-Fushman, Wendy W.
Chapman, and Clement J. McDonald.", "2009.", "What can natural language processing do for clinical decision support?", "Journal of Biomedical Informatics 42(5):760-772.", "Dina Demner-Fushman and Jimmy Lin.", "2007.", "Answering clinical questions with knowledge-based and statistical techniques.", "Computational Linguistics 33(1):63-103.", "Bhuwan Dhingra, Kathryn Mazaitis, and William W. Cohen.", "2017b.", "Quasar: Datasets for Question Answering by Search and Reading.", "arXiv preprint arXiv:1707.03904.", "We have introduced a new dataset for domain-specific reading comprehension in which we have constructed around 100,000 cloze queries from clinical case reports.", "We analyzed the dataset in terms of the skills required for successful comprehension, and applied various baseline methods and state-of-the-art neural readers.", "We showed that a large gap still exists between the best machine reader and the expert human reader.", "One direction for future research is improving the reading models on the queries that are currently the most challenging, i.e. those requiring world and background domain knowledge.", "Better representing background knowledge by inducing embeddings for entities or otherwise integrating ontological knowledge is in our opinion a promising avenue for future research.", "We would like to thank Madhumita Sushil and the anonymous reviewers for useful comments.", "We are also grateful to BMJ Case Reports for allowing the collection of case reports.", "This work was carried out in the framework of the Accumulate IWT SBO project (nr. 150056), funded by the government agency for Innovation by Science and Technology.", "We also acknowledge the support of the Nvidia GPU Grant Program." ]
[ "objective", "method", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "result", "abstain", "result", "objective", "method", "objective", "result", "method", "objective", "abstain", "other", "abstain", "objective", "abstain", "abstain", "other", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "objective", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "objective", "method", "result", "abstain", "abstain", "other", "other", "other", "other" ]
[ "Named entity recognition (NER) is a common task in Natural Language Processing (NLP), but it remains more challenging in Chinese because of its lack of natural delimiters.", "Therefore, Chinese Word Segmentation (CWS) is usually necessary as the first step for Chinese NER.", "However, models based on word-level embeddings and lexicon features often suffer from segmentation errors and out-of-vocabulary (OOV) problems.", "In this paper, we investigate a C onvolutional A ttention N etwork ( CAN ) for Chinese NER, which consists of a character-based convolutional neural network (CNN) with local-attention layer and a gated recurrent unit (GRU) with global self-attention layer to capture the information from adjacent characters and sentence contexts.", "Moreover, differently from other approaches, CAN-NER does not depend on any external resources like lexicons and employing small-size char embeddings makes CAN-NER more practical for real systems scenarios.", "Extensive experimental results show that our approach outperforms state-of-the-art methods without word embedding and external lexicon resources on different domains datasets.", "Named Entity Recognition (NER) aims at identifying text spans which are associated with a specific semantic entity type such as person (PER), organization (ORG), location (LOC), and geopolitical entity (GPE).", "NER has received constant research attention as it is the first step in a wide range of downstream Natural Language Processing (NLP) tasks, e.g., entity linking (Gupta et al., 2017), relation extraction (Miwa and Bansal, 2016), event extraction (Chen et al., 2015), and co-reference resolution (Fragkou, 2017).", "The standard approach in existing state-of-the-art models This work was performed when the first author was an intern at Microsoft Research Asia.", "for English NER treats the problem as a word-by-word sequence labeling task and makes full use of the Recurrent Neural Network (RNN) and Conditional Random Field (CRF) to capture context information at the word level (Lample et al., 2016; Ma and Hovy, 2016; Chiu and Nichols, 2016; Liu et al., 2018).", "These models for English NER preSentence: Segmentation 1: | Nanjing City, Yangtze River Bridge Location, Location Segmentation 2: | | Nanjing, Mayor, Jiang Daqiao Location, Title, Person Figure 1: Entity Ambiguity with Word Segmentation.", "dict a tag for each word assuming that words can be separated clearly by explicit word separators, e.g., blank spaces.", "As the Chinese language has no natural delimiters, it would be intuitive to apply Chinese Word Segmentation (CWS) first to get word boundaries and then use a word-level sequence labeling model similar to the English NER models.", "However, word boundaries can be ambiguous in Chinese, which leads to the possibility that entity boundaries do not match word boundaries.", "For example, the term (Ti-bet Autonomous Region) is a GPE-type entity in NER, but it could be segmented as a single word or as two words (Tibet) and (autonomous region) separately, depending on different granularity of segmentation tools.", "Most of the time, however, it is hard to determine the correct granularity for word segmentation.", "Also, as shown in Figure 1, different segmentation can lead to different sentence meanings in Chinese, which could even result in different named entities.", "Obviously, if entity boundaries are mistakenly detected in segmentation, it will negatively affect entity tagging in word-based NER models.", "Furthermore, most recent neural network-based 
Chinese NER models rely heavily on word-level embeddings and external lexicon sets (Huang et al., 2017; Zhang and Yang, 2018).", "The quality of such models strongly relies on the different word embedding representations and lexicon features.", "Moreover, word-based models tend to suffer from OOV issues, as Chinese words can be very diverse and named entities are an important source of OOV words.", "Other potential limitations are as follows: (1) Dependency on word embeddings increases model size and makes the fine-tuning process more costly during training (while negatively affecting latency in testing/decoding); (2) It is hard to learn word representations correctly without enough labeled utterances, since named entities are usually rare proper nouns.", "(3) Large lexicons are very costly for real NER systems, as they greatly increase memory usage and latency in feature extraction (matching), which makes models inefficient; (4) It is very costly to remove noise from large lexicons, and any update to pre-trained word embeddings or lexicons requires model retraining.", "Meanwhile, character-level embedding by itself can only carry limited information due to losing word and word-sequence information.", "For instance, the character in the words (bat) and (auction) has very different meanings.", "How to better integrate segmentation-related information and exploit local context information is the key feature of a character-based model.", "Zhang and Yang (2018) leverage lexicons to add all the embeddings of candidate word segmentations to their last character embeddings as soft features, and construct a convolutional neural network (CNN) to encode characters as word-level information.", "Cao et al. (2018) propose a multitask architecture to learn NER tagging and Chinese word segmentation together, with each part using a character-based Bi-LSTM.", "In this paper, we propose a convolutional attention layer to capture the implicit relations within adjacent characters, in which the position features from word segmentation are soft hints for character combinations.", "With the segmentation vector softly concatenated into the character embedding, the convolutional attention layer is able to group implicitly meaning-related characters and help bypass the impact of segmentation errors.", "A BiGRU structure with a global self-attention layer on the whole sentence is utilized to capture sentence-level dependencies.", "Extensive experimental results show that our approach outperforms state-of-the-art methods without relying on external resources (e.g.
word embedding, external lexicon) across different corpora.", "The main contributions of this paper can be summarized as follows: We first combine CNNs with the local-attention mechanism to enhance the ability of the model to capture implicit local context relations among character sequences.", "Compared with experimental results against a baseline with a regular CNN layer, our Convolutional Attention layer leads to substantial performance improvements.", "We introduce a character-based Chinese NER model that consists of a combined CNN with local attention and a BiGRU with global self-attention layers.", "Our model achieves state-of-the-art F1-scores without using any external resources like word embeddings and lexicon resources, which makes it very practical for real-world NER systems.", "We utilize BiGRU-CRF as our basic model structure.", "Our model considers multi-level context features in three layers:", "i) convolutional attention layer,", "ii) GRU layer, and", "iii) global attention layer.", "The whole architecture of our proposed model is illustrated in Figure 2.", "In the Chinese NER task, we denote an input sentence as $X_i = \{x_{i,1}, x_{i,2}, x_{i,3}, \ldots, x_{i,\tau}\}$, where $x_{i,\tau} \in \mathbb{R}^{d_e}$ represents the $\tau$-th character in sentence $X_i$ and $d_e$ is the dimension of the input embeddings.", "Correspondingly, we denote the sentence label sequence as $Y_i = \{y_{i,1}, y_{i,2}, y_{i,3}, \ldots, y_{i,\tau}\}$, where $y_{i,\tau} \in \mathcal{Y}$ belongs to the set of all possible labels.", "The objective is learning a function $f: \mathcal{X} \mapsto \mathcal{Y}$ to obtain the entity types, including the 'O' type, for all the characters in the input text.", "In the following text, we take one instance as the example and therefore omit the subindex $i$ in the formulas.", "The convolutional attention layer aims to encode the sequence of input characters and implicitly group meaning-related characters in the local context.", "The input representation for each character is constructed as $x = [x^{ch}; x^{seg}]$, where $x^{ch} \in \mathbb{R}^{d_{ch}}$ and $x^{seg} \in \mathbb{R}^{d_{seg}}$ are the character embedding and segmentation mask, respectively.", "The segmentation information is encoded by the BMES scheme (Wang and Xu, 2017).", "For every window in the CNN, whose window size is $k$, we first concatenate a position embedding to each character embedding, helping to keep sequential relations in the local window context.", "The dimension of the position embedding equals the window size $k$, with initial values of 1 at the position where the character lies in the window and 0 at other positions.", "So, the dimension of the concatenated embedding is $d_e = d_{ch} + d_{pos} + d_{seg}$.", "We then apply local attention inside the window to capture the relations between the center character and each context token, followed by a CNN with a sum-pooling layer.", "We set the hidden dimension as $d_h$.", "For the $j$-th character, the local attention takes all the concatenated embeddings $x_{j-\frac{k-1}{2}}, \ldots, x_j, \ldots, x_{j+\frac{k-1}{2}}$ in the window as its input and outputs $k$ hidden vectors $h_{j-\frac{k-1}{2}}, \ldots, h_j, \ldots, h_{j+\frac{k-1}{2}}$.", "The hidden vectors are calculated as follows: $h_m = \alpha_m x_m$ (1), where $m \in \{j-\frac{k-1}{2}, \ldots, j+\frac{k-1}{2}\}$ and $\alpha_m$ is the attention weight, which is calculated as: $\alpha_m = \frac{\exp s(x_j, x_m)}{\sum_{n \in \{j-\frac{k-1}{2}, \ldots, j+\frac{k-1}{2}\}} \exp s(x_j, x_n)}$.", "(2) The score function $s$ is defined as follows: $s(x_j, x_k) = v^{\top} \tanh(W_1 x_j + W_2 x_k)$ (3), where $v \in \mathbb{R}^{d_h}$ and $W_1, W_2 \in \mathbb{R}^{d_h \times d_e}$.", "The CNN layer contains $d_h$ kernels on a context window of $k$ tokens as: $h^c_j = \sum_k \big[ W^c \odot h_{j-\frac{k-1}{2}:j+\frac{k-1}{2}} + b^c \big]$
(4), where $W^c \in \mathbb{R}^{k \times d_h \times d_e}$ and $b^c \in \mathbb{R}^{k \times d_h}$.", "The operation $\odot$ denotes the element-wise product, and $h_{j-\frac{k-1}{2}:j+\frac{k-1}{2}}$ means a concatenation of the hidden states $h_{j-\frac{k-1}{2}}, \ldots, h_{j+\frac{k-1}{2}}$, both of which are calculated along the first dimension.", "Finally, sum-pooling is also conducted on the first dimension.", "After extracting the local context features by the convolutional attention layer, we feed them into a BiGRU-CRF based model to predict the final label for each character.", "This layer models the sequential sentence information and it is calculated as follows: $h^r_j = \mathrm{BiGRU}(h^r_{j-1}, h^c_j; W^r, U^r)$ (5), where $h^c_j$ is the output of the convolutional attention layer, $h^r_{j-1}$ is the previous hidden state of the BiGRU layer, and $W^r, U^r \in \mathbb{R}^{d_h \times d_h}$ are its parameters.", "The global self-attention output is $h^g_j = \sum_{s=1}^{\tau} \alpha^g_{j,s} h^r_s$ (6), where $j = 1, \ldots, \tau$ denotes all characters in a sentence instance and $\alpha^g_{j,s}$ is calculated as a softmax of the score function over all sentence positions (7).", "The score function $s$ is similar to Equation 3, with different parameters $v^g \in \mathbb{R}^{d_h}$ and $W^g_1, W^g_2 \in \mathbb{R}^{d_h \times d_h}$ instead.", "Finally, a standard CRF layer is used on top of the concatenation of the outputs of the BiGRU and global attention layers, which is denoted as $H = [h^r; h^g]$.", "Given the predicted tag sequence $Y = \{y_1, y_2, y_3, \ldots, y_{\tau}\}$, the probability of the ground-truth label sequence is computed by: $P(Y|X) = \frac{\exp\big(\sum_i (W^{y_i}_{CRF} H_i + b^{(y_{i-1}, y_i)}_{CRF})\big)}{\sum_{y'} \exp\big(\sum_i (W^{y'_i}_{CRF} H_i + b^{(y'_{i-1}, y'_i)}_{CRF})\big)}$ (8), where $y'$ denotes an arbitrary label sequence, and $W^{y_i}_{CRF}$ and $b^{(y_{i-1}, y_i)}_{CRF}$ are trainable parameters.", "In decoding, we use the Viterbi algorithm to get the predicted tag sequence.", "For training, we use the log-likelihood objective as the loss function.", "Given a set of training examples $\{(X_i, Y_i)\}_{i=1}^{K}$, the loss function $L$ can be defined as follows: $L = -\sum_{i=1}^{K} \log P(Y_i|X_i)$ (9). In the training phase, at each iteration, we first shuffle all the training instances, and then feed them to the model with batch updates.", "We use the AdaDelta (Zeiler, 2012) algorithm to optimize the final objective with all the parameters as described in Section 3.1.", "To demonstrate the effectiveness of our proposed model, we have run multiple experiments on Chinese NER datasets covering different domains.", "This section describes the details of each dataset, the settings, and the results of our experiments.", "Standard precision (P), recall (R) and F1-score (F1) are used as evaluation metrics.", "Data: We use four datasets in our experiments.", "For the news domain, we experiment on OntoNotes 4 (Weischedel et al., 2011) and the MSRA NER dataset from SIGHAN Bakeoff 2006 (Levow, 2006).", "For the social media domain, we adopt the same annotated Weibo corpus as Peng and Dredze (2015), which is extracted from Sina Weibo (http://www.weibo.com/).", "For more variety in test domains, we also use a Chinese Resume dataset (Zhang and Yang, 2018) collected from Sina Finance (http://finance.sina.com.cn/stock/index.html).", "The Weibo dataset is annotated with four entity types: PER (Person), ORG (Organization), LOC (Location), and GPE (Geo-Political Entity); and it includes both named and nominal mentions.", "This corpus is already divided into training, development, and test sets.", "The Chinese Resume dataset is annotated with eight types of named entities: CONT (Country), EDU (Educational Institution), LOC, PER, ORG, PRO (Profession), RACE (Ethnicity/Background), and TITLE (Job Title).", "OntoNotes 4 is annotated with four named entity categories: PER, ORG, LOC, and GPE.", "We follow the same
data split method of Che et al. (2013) over OntoNotes 4.", "Lastly, the MSRA 2006 dataset contains three annotated named entities: ORG, PER and LOC.", "A development subset is [Table 2: Weibo NER results (F1 on NE / NM / Overall) -- Peng and Dredze (2015): 51.96/61.05/56.05; Peng and Dredze (2016): 55.28/62.97/58.99; He and Sun (2017a): 50.60/59.32/54.82; He and Sun (2017b): 54.50/62.17/58.23; Cao et al. (2018): 54.34/57.35/58.70; Zhang and Yang (2018): 53.04/62.25/58.79; Baseline: 49.02/58.80/53.80; Baseline + CNN: 53.86/58.05/55.91; CAN-NER Model: 55.38/62.98/59.31] not available for the MSRA dataset.", "The detailed statistics of each dataset are shown in Table 1.", "Gold segmentation is unavailable for the Weibo, Chinese Resume, and MSRA test sections.", "We follow Zhang and Yang (2018) and automatically segment these using the model described in Yang et al. (2017).", "We treat NER as a sequential labeling problem and adopt the BIOES tagging style, since it has been shown to produce better results than straight BIO (Yang et al., 2018b).", "Hyper-parameter settings: For the hyper-parameter configuration, we adjust values according to performance on the described development sets for Chinese NER.", "We set the character embedding size and the hidden sizes of the CNN and BiGRU to 300 dimensions.", "After comparing experimental results with different CNN window sizes, we set the window size to 5.", "AdaDelta is used for optimization, with an initial learning rate of 0.005.", "The character embeddings used in our experiments are from Li et al. (2018) and are trained by Skip-Gram with Negative Sampling (SGNS) on Baidu Encyclopedia.", "In this section, we describe the experimental results of our proposed model and previous state-of-the-art methods on four datasets: Weibo, Chinese Resume, OntoNotes 4, and MSRA.", "We propose two baselines for comparison, and show the CAN-NER model results.", "In the experiment results tables, we use Baseline to represent a pure BiGRU + CRF model, and Baseline + CNN to indicate the base model with a CNN layer.", "Here we compare our proposed model with the latest models on the Weibo dataset.", "Table 2 shows the F1-scores for named entities (NE), nominal entities (NM, excluding named entities), and both (Overall).", "We observe that our proposed model achieves state-of-the-art performance.", "Existing state-of-the-art systems include Peng and Dredze (2016), He and Sun (2017b), Cao et al. (2018) and Zhang and Yang (2018), which leverage rich external data like cross-domain data, semi-supervised data, and lexicons, or jointly train (in Tables 2, 3, 4 and 5, one marker denotes a model with external labeled data for semi-supervised learning;", "another marker denotes that the model uses external lexicon data.", "Zhang and Yang (2018) with a third marker is the char-based model in that paper.)", "NER and Chinese Word Segmentation (CWS).", "In the first block of Table 2, we report the performance of the latest models.", "Peng and Dredze (2015) propose a model that jointly trains embeddings with NER, and it achieves an F1-score of 56.05% on overall performance.", "The model (Peng and Dredze, 2016) that jointly trains NER and CWS reaches an F1-score of 58.99%.", "He and Sun (2017b) propose a unified model to exploit cross-domain and semi-supervised data, which improves the F1-score from 54.82% to 58.23% compared with the model proposed by He and Sun (2017a).", "Cao et al.
(2018) use an adversarial transfer learning framework to incorporate task-shared word boundary information from CWS and achieve an F1-score of 58.70%.", "Zhang and Yang (2018) leverage a lattice structure to integrate lexicon information into their model and achieve an F1-score of 58.79%.", "In the second block of Table 2, we give the results of our baselines and proposed models.", "While the BiGRU + CRF baseline only achieves an F1-score of 53.80%, adding a normal CNN layer as a featurizer improves the score to 55.91%.", "Replacing the CNN with our convolutional attention layer greatly improves the F1-score to 59.31%, which outperforms the other models.", "The improvement demonstrates the effectiveness of our proposed model.", "The Chinese Resume test results are shown in Table 3.", "Zhang and Yang (2018) released the Chinese Resume dataset, and they achieve an F1-score of 94.46%.", "It can be seen that our proposed baseline (CNN + BiGRU + CRF) outperforms Zhang and Yang (2018) with an F1-score of 94.60%.", "(The results of Peng and Dredze (2015, 2016) are taken from Peng and Dredze (2017).) [Table 4: Results on OntoNotes (P/R/F1) -- Yang et al. (2016): 65.59/71.84/68.57; Yang et al. (2016): 72.98/80.15/76.40; Che et al. (2013): 77.71/72.51/75.02; Wang et al. (2013): 76.43/72.32/74.32; Zhang and Yang (2018): 76.35/71.56/73.88; Zhang and Yang (2018): 74.36/69.43/71.81; Baseline: 70.67/71.64/71.15; Baseline + CNN: 72.69/71.51/72.10; CAN-NER Model: 75.05/72.29/73.64] Adding our convolutional attention leads to a further improvement and achieves a state-of-the-art F1-score of 94.94%, which further demonstrates the effectiveness of our proposed model.", "Table 4 shows comparisons on the OntoNotes 4 dataset.", "The first block in the table lists the performance of previous methods for Chinese NER.", "Yang et al. (2016) propose a model combining neural and discrete features, e.g., POS tagging features, CWS features and orthographic features, improving the F1-score from 68.57% to 76.40%.", "Leveraging bilingual data, Che et al. (2013) and Wang et al. (2013) achieve F1-scores of 74.32% and 73.88%, respectively.", "Zhang and Yang (2018) is a recent model that uses a character-based model with bichar and softword.", "The second block of Table 4 shows the results of our baselines and proposed model.", "Consistent with the observations on the Weibo and Resume datasets, our Convolutional Attention layer leads to a substantial increase in F1-score.", "Our proposed model achieves a competitive F1-score of 73.64% among character-based models without using external data (e.g., Zhang and Yang (2018)).", "Table 5 shows the experiment results on the MSRA 2006 dataset.", "Chen et al. (2006), Zhang et al. (2006), and Zhou et al. (2013) leverage rich handcrafted features, and Lu et al. (2016) exploit multi-prototype embedding features.", "Dong et al. (2016) introduce radical features into an LSTM-CRF.", "Cao et al. (2018) make use of adversarial transfer learning and global self-attention to improve model performance.", "Yang et al. (2018a) propose a character-based CNN-BiLSTM-CRF model to incorporate stroke embeddings and generate n- [Table 5: Results on the MSRA dataset (P/R/F1) -- Chen et al. (2006): 91.22/81.71/86.20; Zhang et al. (2006): 92.20/90.18/91.18; Zhou et al. (2013): 91.86/88.75/90.28; Lu et al. (2016): -/-/87.94; Dong et al. (2016): 91.28/90.62/90.95; Cao et al. (2018): 91.30/89.58/90.64; Yang et al.
(2018a): 92.04/91.31/91.67; Zhang and Yang (2018): 93.57/92.79/93.18; Baseline: 92.54/88.20/90.32; Baseline + CNN: 92.57/92.11/92.34; CAN-NER Model: 93.53/92.42/92.97] gram features.", "Zhang and Yang (2018) introduce a lattice structure to incorporate lexicon information into the neural network, which in effect includes word embedding information.", "Although this model achieves a state-of-the-art F1-score of 93.18%, it leverages external lexicon data, and thus the result is dependent on the quality of the lexicon.", "In the bottom section of the table, we can see that Baseline + CNN already outperforms most previous methods.", "Compared with Zhang and Yang (2018), our char-based method achieves a competitive F1-score of 92.97% without any additional lexicon data or word embedding information.", "Moreover, the CAN-NER model achieves state-of-the-art results among the character-based models.", "As shown in Tables 2, 3, and 5, our proposed model's performance demonstrates the effectiveness of the Convolutional Attention Network.", "To better evaluate the effect of the attention mechanism, we visualize the normalized attention weights $\alpha_m$ for each window from Eq.", "2, as in Figure 3a.", "Each row of the matrix represents the local attention weights in one window.", "For example, the third row indicates the relationship between the center character and its context characters.", "We can see from Figure 3a that word-level features can be extracted through the local attention.", "In the example context (American president Clinton will leave for Europe on the 1st), the center character tends to have a stronger connection with", "its related character, which means they have a higher probability of forming the Chinese word meaning American.", "Also, the characters of the transliterated name tend to have a strong connection, because together they mean Clinton.", "The characters of the word for Europe also have strong connections, as seen in Figure 3a, because together they represent Europe in Chinese.", "Therefore, both the experiment results and the visualization verify that the convolutional attention is effective in obtaining phrase-level information between adjacent characters.", "In Figure 3b, we visualize the global self-attention matrix.", "From the picture, we can find that global self-attention can capture sentence context information from long-distance relationships between words, overcoming a limitation of recurrent neural networks.", "For the word meaning Clinton, the global self-attention learns dependencies with the expressions meaning leave for and on the 1st.", "Distinguished by the red color, Clinton has a stronger connection with leave for than with on the 1st, which matches the expectation that the predicate in a sentence provides more information to the subject than adverbs of time.", "Our proposed model outperforms previous work on the Weibo and Chinese Resume datasets and reaches competitive results on both the MSRA and OntoNotes 4 datasets without using any external resources.", "The experimental results demonstrate the effectiveness of our proposed model, especially among char-based models.", "The performance improvement after adding the Convolutional Attention Layer and the Global Attention Layer verifies that our model can capture the relationship between a character and its local context, as well as the relationship between a word and the global context.", "However, although we can obtain results comparable to or better than other models that utilize no external resources, we find that our model's performance on the OntoNotes 4 dataset still has room for improvement (a 2.76% F1-score
gap to the best model that leverages additional data).", "This may be explained by specific discrete features and external resources (e.g., other labeled data or lexicons) having a more positive influence on this specific dataset, while CAN-NER cannot learn enough information from only the training set.", "However, we were not able to identify the precise contributors to the gap based on the available corresponding resources.", "Neural networks, such as LSTMs and CNNs, have been shown to outperform conventional machine learning methods without requiring handcrafted features.", "Collobert et al. (2011) describe a CNN-CRF model that reaches competitive results compared to the best statistical models at the time.", "More recently, the LSTM-CRF architecture has become a quasi-standard on NER tasks.", "Huang et al. (2015) employed a BiLSTM to extract word-level context information, and Lample et al. (2016) further introduced a hierarchical structure by incorporating BiLSTM-based character embeddings.", "Multiple recent works integrating word-level information and character-level information have been found to achieve improved performance (dos Santos et al., 2015; Chiu and Nichols, 2016; Ma and Hovy, 2016; Lample et al., 2016; Chen et al., 2019).", "Moreover, external knowledge has also been exploited for NER, as has character-level knowledge, both pre-trained (Peters et al., 2017) and co-trained (Liu et al., 2018).", "More recently, large-scale pre-trained language representations with deep language models have been proposed to help improve the performance of downstream NLP tasks.", "Examples include ELMo (Peters et al., 2018) and BERT (Devlin et al., 2018).", "Also, attention mechanisms have shown very good performance on a variety of tasks including machine translation, machine comprehension, and related NLP tasks (Vaswani et al., 2017; Seo et al., 2016; Tan et al., 2018a).", "In language understanding, Shen et al. (2018) exploit self-attention to learn long-range dependencies.", "Rei et al. (2016) proposed a model employing an attention mechanism to combine the character-based representation with the word embedding instead of simply concatenating them.", "This method allows the model to dynamically decide which source of information to use for each word, thereby outperforming the concatenation method used in previous work.", "More recently, Tan et al. (2018b) and Cao et al.
(2018) employ self-attention to directly capture the global dependencies of the inputs for NER tasks and demonstrate the effectiveness of self-attention in Chinese NER.", "Multiple previous efforts have tried to address the Chinese language challenge of not having explicit word boundaries.", "Traditional models depended on hand-crafted features and CRF-based models (He and Wang, 2008; Mao et al., 2008), and character-based LSTM-CRF models have been applied to Chinese NER to utilize both character- and radical-level representations (Dong et al., 2016).", "Peng and Dredze (2015) applied character positional embeddings and proposed a jointly trained model for embeddings and NER.", "To better integrate word boundary information into Chinese NER models, Peng and Dredze (2016) co-trained NER and word segmentation to improve performance on both tasks.", "He and Sun (2017b) unified cross-domain learning and semi-supervised learning to obtain information from out-of-domain corpora and in-domain unannotated text.", "Instead of performing word segmentation first, Zhang and Yang (2018) recently proposed constructing a word-character lattice by matching words in texts with a lexicon to avoid segmentation errors.", "Cao et al. (2018) use an adversarial network to jointly train the Chinese NER and Chinese Word Segmentation tasks to extract task-shared word boundary information.", "Also, Yang et al. (2018c) leverage a character-level BiLSTM to extract higher-level features from crowd-annotations.", "In this paper, we propose CAN-NER, a Convolutional Attention Network model to improve Chinese NER performance and preclude word embedding and additional lexicon dependencies, thus making the model more efficient and robust.", "In our model, we implement a local-attention CNN and a BiGRU with a global self-attention structure to capture word-level features and context information from char-level features.", "Extensive experiments show that our model outperforms state-of-the-art systems on datasets from different domains.", "We'd like to thank our colleague Borje Karlsson for his contribution and support in this work, as well as thank our colleagues Haoyan Liu, Zijia Lin, and the anonymous reviewers for their valuable feedback." ]
[ "abstain", "abstain", "abstain", "objective", "abstain", "result", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "result", "objective", "result", "abstain", "result", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "method", "result", "other" ]
[ "Meta-learning, or learning to learn, is a technique that can help to overcome resource scarcity in cross-lingual NLP problems, by enabling fast adaptation to new tasks.", "We apply model-agnostic meta-learning ( MAML ) to the task of cross-lingual dependency parsing.", "We train our model on a diverse set of languages to learn a parameter initialization that can adapt quickly to new languages.", "We find that meta-learning with pre-training can significantly improve upon the performance of language transfer and standard supervised learning baselines for a variety of unseen, typologically diverse, and low-resource languages, in a few-shot learning setup.", "The field of natural language processing (NLP) has seen substantial performance improvements due to large-scale language model pre-training (Devlin et al., 2019).", "Whilst providing an informed starting point for subsequent task-specific fine-tuning, such models still require large annotated training sets for the task at hand (Yogatama et al., 2019).", "This limits their applicability to a handful of languages for which such resources are available and leads to an imbalance in NLP technology's quality and availability across linguistic communities.", "Aiming to address this problem, recent research has focused on the development of multilingual sentence encoders, such as multilingual BERT (mBERT) (Devlin et al., 2019) and XLM-R (Conneau et al., 2020), trained on as many as 93 languages.", "Such pre-trained multilingual encoders enable zero-shot transfer of task-specific models across languages (Wu and Dredze, 2019), offering a possible solution to resource scarcity.", "Zero-shot transfer, how-(cid:70) Corresponding author: annalangedijk@gmail.com.", "ever, is most successful among typologically similar, high-resource languages, and less so for languages distant from the training languages and in resource-lean scenarios (Lauscher et al., 2020).", "This stresses the need to develop techniques for fast cross-lingual model adaptation, that can transfer knowledge across a wide range of typologically diverse languages with limited supervision.", "In this paper, we focus on the task of universal dependency (UD) parsing and present a novel approach for effective and resource-lean cross-lingual parser adaptation via meta-learning, requiring only a small number of training examples per language (which are easy to obtain even for low-resource languages).", "Meta-learning is a learning paradigm that leverages previous experience from a set of tasks to solve a new task efficiently.", "As our goal is fast cross-lingual model adaptation, we focus on optimization-based meta-learning, where the main objective is to find a set of initial parameters from which rapid adaption to a variety of different tasks becomes possible (Hospedales et al., 2020).", "Optimization-based meta-learning has been successfully applied to a variety of NLP tasks.", "Notable examples include neural machine translation (Gu et al., 2018), semantic parsing (Huang et al., 2018), pre-training text representations (Lv et al., 2020), word sense disambiguation (Holla et al., 2020) and cross-lingual natural language inference and question answering (Nooralahzadeh et al., 2020).", "To the best of our knowledge, meta-learning has not yet been explored in the context of dependency parsing.", "We take inspiration from recent research on universal dependency parsing (Tran and Bisazza, 2019; Kondratyuk and Straka, 2019).", "We employ an existing UD parsing framework UDify, a multitask 
"We employ an existing UD parsing framework, UDify, a multi-task learning model (Kondratyuk and Straka, 2019), and extend it to perform few-shot model adaptation to previously unseen languages via meta-learning.", "We pre-train the dependency parser on a high-resource language prior to applying the model-agnostic meta-learning (MAML) algorithm (Finn et al., 2017) to a collection of few-shot tasks in a diverse set of languages.", "We evaluate our model on its ability to perform few-shot adaptation to unseen languages, from as few as 20 examples.", "Our results demonstrate that our methods outperform language transfer and multilingual joint learning baselines, as well as existing (zero-shot) UD parsing approaches, on a range of language families, with the most notable improvements among the low-resource languages.", "We also investigate the role of the pre-training language as a starting point for cross-lingual adaptation and the effect of typological properties on the learning process.", "In meta-learning, the datasets are separated into episodes that correspond to training tasks.", "Each episode contains a support and a query set, which include samples for adaptation and evaluation, respectively.", "Meta-learning serves as an umbrella term for algorithms from three categories.", "Metric-based methods classify new samples based on their similarity to the support set (e.g. Snell et al., 2017).", "Model-based methods explicitly store meta-knowledge within their architectures, e.g. through an external memory (Santoro et al., 2016).", "Optimization-based methods, on which we focus, estimate parameter initializations that can be fine-tuned with a few steps of gradient descent (e.g. Finn et al., 2017; Nichol and Schulman, 2018).", "Finn et al. (2017) proposed MAML to learn parameter initializations that generalize well to similar tasks.", "During the meta-training phase, MAML iteratively selects a batch of episodes, fine-tunes the original parameters on each episode's support set in an inner learning loop, and tests the result on the query set.", "The gradients of the query-set loss with respect to the original parameters are used to update those parameters in the outer learning loop, such that the weights become a better parameter initialization over iterations.", "Afterwards, during meta-testing, one selects a support set for the test task, adapts the model using that set and evaluates it on new samples from the test task.", "MAML has provided performance benefits for cross-lingual transfer on tasks such as machine translation (Gu et al., 2018), named entity recognition (Wu et al., 2020), hypernymy detection (Yu et al., 2020) and mapping lemmas to inflected forms (Kann et al., 2020).", "The closest approach to ours is by Nooralahzadeh et al. (2020), who focus on natural language inference and question answering.", "Their method, X-MAML, involves pre-training a model on a high-resource language prior to applying MAML.", "This yielded performance benefits over standard supervised learning for cross-lingual transfer in a zero-shot and fine-tuning setup (albeit using 2,500 training samples to fine-tune on test languages).", "The performance gains were the largest for languages sharing morphosyntactic features.",
"Besides the focus on dependency parsing, our approach can be distinguished from Nooralahzadeh et al. (2020) in several ways.", "We focus on fast adaptation from a small number of examples (using only 20 to 80 sentences).", "Whilst they use one language for meta-training, we use seven languages, with the aim of explicitly learning to adapt to a variety of languages.", "The Universal Dependencies project is an ongoing community effort to construct a cross-linguistically consistent morphosyntactic annotation scheme (Nivre et al., 2018).", "The project makes results comparable across languages and eases the evaluation of cross-lingual (structure) learning.", "The task of dependency parsing involves predicting a dependency tree for an input sentence, which is a directed graph of binary, asymmetrical arcs between words.", "These arcs are labeled and denote dependency relation types, which hold between a head word and its dependent.", "A parser is tasked to assign rankings to the space of all possible dependency graphs and to select the optimal candidate.", "Dependency parsing of under-resourced languages has long been of substantial interest in NLP.", "Well-performing UD parsers, such as the winning model in the CoNLL 2018 Shared Task by Che et al. (2018), do not necessarily perform well on low-resource languages (Zeman et al., 2018).", "Cross-lingual UD parsing is typically accomplished by projecting annotations between languages with parallel corpora (Agic et al., 2014), through model transfer (e.g. Guo et al., 2015; Ammar et al., 2016; Ahmad et al., 2019), through hybrid methods combining annotation projections and model transfer (Tiedemann et al., 2014), or by aligning word embeddings across languages (Schuster et al., 2019).", "State-of-the-art methods for cross-lingual dependency parsing exploit pre-trained mBERT with a dependency parsing classification layer that is fine-tuned on treebanks of high-resource languages and transferred to new languages: Wu and Dredze (2019) only fine-tune on English, whereas Tran and Bisazza (2019) experiment with multiple sets of fine-tuning languages.", "Including diverse language families and scripts benefits transfer to low-resource languages in particular.", "UDify, the model of Kondratyuk and Straka (2019), is jointly fine-tuned on data from 75 languages, with a multi-task learning objective that combines dependency parsing with predicting part-of-speech tags, morphological features, and lemmas.", "Üstün et al. (2020), instead, freeze the mBERT parameters and train adapter modules that are interleaved with mBERT's layers and take a language embedding as input.", "This embedding is predicted from typological features.", "Model performance strongly relies on the availability of those features, since using proxy embeddings from different languages strongly degrades low-resource languages' performance.", "We use data from the Universal Dependencies v2.3 corpus (Nivre et al., 2018).", "We use treebanks from 26 languages that are selected for their typological diversity.", "We adopt the categorization of high-resource and low-resource languages from Tran and Bisazza (2019) and employ their set of training and test languages for comparability.", "The set covers languages from six language families (Indo-European, Korean, Afro-Asiatic, Uralic, Dravidian, Austro-Asiatic).", "Their training set (expMix) includes eight languages: English, Arabic, Czech, Hindi, Italian, Korean, Norwegian, and Russian.", "These languages fall into the language families of Indo-European, Korean and Afro-Asiatic and have diverse word orders (i.e. VSO, SVO and SOV).",
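As a concrete illustration of the episodic setup used in this work (one support set and one query set per language episode), a minimal sketch follows; the per-language data here is faked, and in the actual setup support and query examples are drawn from two disjoint partitions of each treebank rather than one pool.

```python
import random

def make_episode(treebank, support_size=20, query_size=20):
    """Sample one episode for a language: disjoint support and query sets
    drawn from its UD training examples (here, any list of items)."""
    sampled = random.sample(treebank, support_size + query_size)
    return sampled[:support_size], sampled[support_size:]

# Toy stand-in for the expMix treebanks (hypothetical structure).
expmix = {lang: [f"{lang}-sent-{i}" for i in range(20000)]
          for lang in ["ar", "cs", "hi", "it", "ko", "no", "ru"]}
support, query = make_episode(expmix["hi"])
print(len(support), len(query))  # 20 20
```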
"Joint learning on data from this diverse set yielded state-of-the-art zero-shot transfer performance on low-resource languages in the experiments of Tran and Bisazza (2019).", "Per training language, we use up to 20,000 example trees, predicting dependency arc labels from 132 classes in total.", "We select Bulgarian (Indo-European) and Telugu (Dravidian) as validation languages to improve generalization to multiple language families.", "The 16 test languages cover three language families that were unseen during training, i.e. Austro-Asiatic, Dravidian, and Uralic.", "Furthermore, three of our test languages (Buryat, Faroese, and Upper Sorbian) are not included in the pre-training of mBERT.", "We refer the reader to Appendix B for details about the treebank sizes and language families.", "The UDify model concurrently predicts part-of-speech tags, morphological features, lemmas and dependency trees (Kondratyuk and Straka, 2019).", "UDify exploits the pre-trained mBERT model (Devlin et al., 2019), which is a self-attention network with 12 transformer encoder layers.", "The model takes single sentences as input.", "Each sentence is tokenized into subword units using mBERT's word piece tokenizer, after which contextual embedding lookup provides input for the self-attention layers.", "A weighted sum of the outputs of all layers is computed and fed to a task-specific classifier: e_{t,j} = c_t · Σ_{i=1}^{12} softmax(α_t)_i · B_{i,j} (1).", "Here, e_t denotes the contextual output embeddings for task t.", "In our case, t indicates UD parsing.", "In contrast to the multi-task objective of the original UDify model, our experiments only involve UD parsing.", "The term B_{i,j} represents the mBERT representation for layer i = 1, ..., 12 at token position j.", "The terms α_t and c_t denote trainable scalars, where the former weights the mBERT layers and the latter scales the normalized averages.", "For words that were tokenized into multiple word pieces, only the first word piece is fed to the UD-parsing classifier.", "The UD-parsing classifier is a graph-based biaffine attention classifier (Dozat and Manning, 2017) that projects the embeddings e_{t,j} through arc-head and arc-dep feedforward layers.", "The resulting outputs are combined using biaffine attention to produce a probability distribution of arc heads for each word.", "Finally, the dependency tree is decoded using the Chu-Liu/Edmonds algorithm (Chu, 1965; Edmonds, 1967).", "We refer the reader to the work of Kondratyuk and Straka (2019) for further details on the architecture and its training procedure.", "We apply first-order MAML to the UDify model.", "For more details on first-order versus second-order MAML, see Finn et al. (2017) and Holla et al. (2020).", "The model's self-attention layers are initialized with parameters from mBERT and the classifier's feedforward layers are randomly initialized.", "The model is pre-trained on a high-resource language using standard supervised learning and further meta-trained on a set of seven languages with MAML.", "It is then evaluated using meta-testing.", "We refer to MAML with pre-training as simply MAML.", "The meta-learning procedure is visualized in Figure 1 and can be described as follows.", "Step 1 Pre-train on a high-resource language to yield the initial parameters θ.", "Step 2 Meta-train on all other training languages.", "For each language i, we partition the UD training data into two disjoint sets, D_i^train and D_i^test, and perform the following inner loop:",
"1. Temporarily update the model parameters to θ′_i with stochastic gradient descent on the support set S, sampled from D_i^train, with learning rate α for k gradient descent adaptation steps.", "When using a single gradient step, the update becomes: θ′_i = θ − α ∇_θ L_i(θ) (2).", "2. Compute the loss of the adapted parameters θ′_i on the query set Q, sampled from D_i^test, denoted by L_i(θ′_i).", "Step 3 Sum up the query losses and perform a meta-update in the outer learning loop on the model with parameters θ, using the learning rate β: θ ← θ − β ∇_θ Σ_i L_i(θ′_i) (3).", "In our experiments, the update is a first-order approximation, replacing ∇_θ L_i(θ′_i) with ∇_{θ′_i} L_i(θ′_i).", "Step 4 After meta-training, we apply meta-testing to unseen languages.", "For each language, we sample a support set S from the UD training data.", "We then fine-tune our model on S and evaluate the model on the entire test set.", "Thereby, meta-testing mimics the adaptation from the inner loop.", "We repeat this process multiple times to get a reliable estimate of how well the model adapts to unseen languages.", "We extend the existing UDify code (github.com/Hyperparticle/udify) to be used in a meta-learning setup.", "All of our code is publicly available at github.com/annaproxy/udify-metalearning.", "Training and evaluation Pre-training In the main body of the paper, we consider the pre-training languages English and Hindi to measure the impact of pre-training prior to cross-lingual adaptation, and to draw more general conclusions about how well MAML generalizes with typologically different pre-training languages.", "English and Hindi differ in word order (SVO versus SOV), and Hindi treebanks have a larger percentage of non-projective dependency trees (Mannem et al., 2009), where dependency arcs are allowed to cross one another.", "Non-projective trees are more challenging to parse (Nivre, 2009).", "Pre-training on Hindi allows us to test the effects of projectivity on cross-lingual adaptation.", "To ensure that our findings are not specific to the pre-training languages of English and Hindi, Appendix D reproduces a subset of experiments for the pre-training languages Italian and Czech, reporting results for monolingual baselines, a non-episodic baseline, and MAML.", "Italian and Czech are high-resource languages as well, but are from two different subfamilies of the Indo-European language family and also differ in their percentage of non-projective dependency trees.", "Meta-training We apply meta-training using the seven languages listed in Section 3, excluding the pre-training language from meta-training.", "We train for 500 episodes per language, using a cosine-based learning rate scheduler with 10% warm-up.", "We use the Adam optimizer (Kingma and Ba, 2015) in the outer loop and SGD in the inner loop (Finn et al., 2017).", "Support and query sets are of size 20.", "Due to the sequence labelling paradigm, the number of shots per class varies per batch.", "When |S| = 20, the average class will appear 16 times.", "To select hyperparameters, we independently vary the number of updates k and the inner- and outer-loop learning rates for mBERT and the parser, while performing meta-validation with the languages Bulgarian and Telugu.", "To meta-validate, we follow the procedure described in Section 4.2 for both languages, mimicking the meta-testing setup with a support set size of 20.",
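A minimal sketch of the meta-training loop described in Steps 1-4 above, using the first-order approximation and single-step adaptation by default; the model, loss function, and episode format are placeholders, not the UDify-specific code.

```python
import copy
import torch

def fomaml_step(model, episodes, loss_fn, outer_opt, inner_lr=1e-3, k=1):
    """One outer-loop update of first-order MAML over a batch of episodes.
    Each episode is a (support_batch, query_batch) pair."""
    meta_grads = [torch.zeros_like(p) for p in model.parameters()]
    for support, query in episodes:
        fast = copy.deepcopy(model)                   # theta'_i starts from theta (Eq 2)
        inner_opt = torch.optim.SGD(fast.parameters(), lr=inner_lr)
        for _ in range(k):                            # k adaptation steps on the support set
            inner_opt.zero_grad()
            loss_fn(fast, support).backward()
            inner_opt.step()
        loss_fn(fast, query).backward()               # L_i(theta'_i) on the query set
        for g, p in zip(meta_grads, fast.parameters()):
            g += p.grad                               # first-order: grad wrt theta'_i stands in
    for p, g in zip(model.parameters(), meta_grads):  # for grad wrt theta (Eq 3)
        p.grad = g / len(episodes)
    outer_opt.step()
    outer_opt.zero_grad()

# Hypothetical usage with a toy model standing in for the parser:
model = torch.nn.Linear(4, 1)
outer = torch.optim.Adam(model.parameters(), lr=1e-4)
mse = lambda m, b: torch.nn.functional.mse_loss(m(b[0]), b[1])
batch = (torch.randn(20, 4), torch.randn(20, 1))
fomaml_step(model, [(batch, batch)], mse, outer)
```

As in the setup described above, the outer loop uses Adam while the inner loop uses plain SGD; the first-order variant avoids backpropagating through the inner-loop optimization itself.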
"The hyperparameters are estimated independently for Hindi and English pre-training (see Appendix A).", "Meta-testing At meta-testing time, we use SGD with the same learning rates and the same k used in the inner loop during meta-training.", "We vary the support set size |S| ∈ {20, 40, 80}.", "We define several baselines that are evaluated using meta-testing, i.e. by fine-tuning the models on a support set of a test language prior to evaluation on that language.", "This allows us to directly compare their ability to adapt quickly to new languages with that of the meta-learner.", "Monolingual baselines (EN, HIN) These baselines measure the impact of meta-training on data from seven additional languages.", "The model is initialized using mBERT and trained using data from English (EN) or Hindi (HIN), without meta-training.", "Multilingual non-episodic baseline (NE) Instead of episodic training, this baseline treats support and query sets as regular mini-batches and updates the model parameters directly using a joint learning objective, similar to Kondratyuk and Straka (2019) and Tran and Bisazza (2019).", "The model is pre-trained on English or Hindi and thus indicates the advantages of MAML over standard supervised learning.", "The training learning rate and meta-testing learning rate are estimated separately, since there is no inner-loop update in this setup.", "MAML without pre-training We evaluate the effects of pre-training by running a MAML setup without any pre-training.", "Instead, the pre-training language is included during meta-training as one of now eight languages.", "MAML without pre-training is trained on 2,000 episodes per language.", "Meta-testing only The simplest baseline is a decoder randomly initialized on top of mBERT, without pre-training or meta-training.", "Dependency parsing is only introduced at meta-testing time.", "Hyperparameter selection and evaluation are performed using the Labeled Attachment Score (LAS) as computed by the CoNLL 2018 Shared Task evaluation script (universaldependencies.org/conll18/evaluation.html).", "LAS evaluates the correctness of both the dependency class and the dependency head.", "We use the standard splits of Universal Dependencies for training and evaluation when available.", "Otherwise, we remove the meta-testing support set from the test set prior to evaluation.", "We train each model with seven different seeds and compare MAML to a monolingual baseline and NE using paired t-tests, adjusting for multiple comparisons using Bonferroni correction.",
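The seed-level significance testing just described might be implemented as follows; the LAS numbers are invented, and the Bonferroni adjustment shown is the simple multiply-by-number-of-comparisons form.

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical LAS scores across the seven seeds for one test language.
maml = np.array([64.1, 63.8, 64.5, 63.9, 64.2, 64.0, 64.4])
ne   = np.array([63.2, 63.0, 63.6, 62.9, 63.1, 63.3, 63.5])

n_comparisons = 2                    # MAML vs. monolingual, MAML vs. NE
t, p = ttest_rel(maml, ne)           # paired t-test over matched seeds
p_adj = min(p * n_comparisons, 1.0)  # Bonferroni correction
print(f"t={t:.2f}, adjusted p={p_adj:.4f}")
```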
"MAML with English pre-training We report the mean LAS for models pre-trained on English in Table 1.", "We compare these results to related approaches that use mBERT and have multiple training languages.", "With support set size 20, MAML already outperforms the zero-shot transfer setup of Tran and Bisazza (2019) for all test languages except Persian and Urdu.", "MAML is competitive with UDify (Kondratyuk and Straka, 2019) and UDapter (Üstün et al., 2020) for low-resource languages, despite the stark difference in the number of training languages compared to UDify (75), and without relying on fine-grained typological features of languages, as is the case for UDapter.", "Note that UDify is trained on the low-resource languages, while we only test on them; for a fair comparison, we only list UDify results on languages with a small number of sentences (< 80) in the training set, to mimic a few-shot generalisation setup.", "MAML consistently outperforms the EN and NE baselines.", "Large improvements over the EN baseline are seen on low-resource and non-Germanic languages.", "The difference between MAML and the baselines increases with |S|.", "The largest improvements over NE are on Tamil and Japanese; however, NE outperforms MAML on Hungarian and Urdu.", "MAML consistently outperforms NE on low-resource languages, with an average 1.1% improvement per low-resource language for |S| = 20, up to a 2.2% average improvement for |S| = 80.", "MAML with Hindi pre-training The results for models pre-trained on Hindi can be seen in Table 3.", "Although there are large differences between the monolingual EN and HIN baselines, both MAML (HIN) and NE (HIN) achieve, on average, similar LAS scores to their English counterparts.", "MAML still outperforms NE for the majority of languages: the mean improvement on low-resource languages is 0.8% per language for |S| = 20, which increases to 1.6% per language for |S| = 80.", "Other pre-training languages The full results for the two other pre-training languages, Italian and Czech, are listed in Appendix D. Here, too, MAML outperforms its NE counterpart.", "The NE baseline is stronger for more languages than in our main experiments.", "For |S| = 20, the mean improvements per unseen language are 0.91% and 0.47% when pre-training on Italian and Czech, respectively.", "For |S| = 80, the improvements are 2.18% and 1.75%.", "MAML without (pre-)training We investigate the effectiveness of pre-training by omitting the pre-training phase.", "A comparison between MAML and MAML without pre-training is shown in Table 2.", "MAML without pre-training underperforms for most languages and its performance does not increase as much with a larger support set size.", "This suggests that pre-training provides a better starting point for meta-learning than plain mBERT.",
"When meta-testing only, i.e. omitting both pre-training and meta-training, the fine-tuned model reaches a mean LAS of 6.9% over all test languages for |S| = 20, increasing to 15% for |S| = 80, indicating that meta-testing alone is not sufficient.", "Table 3: Mean LAS (aligned accuracy) per unseen language, for models pre-trained on Hindi.
                      |S| = 20                |S| = 40                |S| = 80
Language              HIN    NE     MAML     HIN    NE     MAML     HIN    NE     MAML
Low-resource
  Armenian            48.41  63.30  63.76    48.87  63.41  64.17    49.70  63.59  64.76
  Breton              34.06  62.09  61.56    36.09  62.40  62.47    38.95  63.05  63.75
  Buryat              24.24  25.05  26.27    24.71  25.18  26.79    25.54  25.40  27.37
  Faroese             50.72  65.31  66.82    52.30  65.57  67.31    54.64  66.17  68.25
  Kazakh              49.80  53.77  54.23    49.90  53.94  54.45    50.49  54.08  55.00
  Upper Sorbian       36.22  53.36  54.97    37.08  53.58  55.64    38.22  53.94  56.56
  Mean                40.57  53.81  54.60    41.49  54.01  55.14    42.92  54.37  55.95
High-resource
  Finnish             50.49  64.05  64.64    50.93  64.20  65.05    51.79  64.40  65.61
  French              31.16  64.44  65.73    31.59  64.44  65.68    33.39  64.42  65.69
  German              44.83  74.40  75.15    45.46  74.41  75.23    46.65  74.46  75.31
  Hungarian           46.72  60.98  62.51    46.97  61.33  62.89    47.91  61.68  62.91
  Japanese            40.25  39.97  41.96    43.03  40.56  43.61    46.87  41.58  45.90
  Persian             28.60  53.73  53.63    29.51  53.85  54.00    31.11  54.06  54.53
  Swedish             46.96  79.24  79.89    47.73  79.32  80.14    49.15  79.31  80.21
  Tamil               46.51  39.44  39.57    47.35  39.84  40.84    48.55  40.73  42.81
  Urdu                67.72  50.64  49.16    67.96  50.93  50.16    68.17  51.50  51.57
  Vietnamese          26.96  42.13  42.12    27.92  42.23  42.37    29.61  42.46  42.87
  Mean                43.02  56.90  57.44    43.85  57.11  58.00    45.32  57.46  58.74
Overall mean          42.10  55.74  56.37    42.96  55.95  56.92    44.42  56.30  57.69", "Further Analysis Performance increases over the monolingual baselines vary strongly per language; consider, e.g., the difference between Japanese and French in Table 1.", "The performance increase is largest for languages that differ from the pre-training language with respect to their syntactic properties.", "We conduct two types of analysis, based on typological features and projectivity, to quantify this effect and correlate these properties with the performance increase over the monolingual baselines.", "No clear correlation was found by Tran and Bisazza (2019).", "Firstly, we use 103 binary syntactic features from URIEL (Littell et al., 2017) to compute syntactic cosine similarities between languages.", "With this metric, a language such as Italian is syntactically closer to English (cosine similarity 0.86) than Urdu (0.62), even though both are Indo-European.", "For each unseen language, we collect the cosine similarities to each (pre-)training language.", "Then, we collect the difference in performance between the monolingual baselines and the NE or MAML setups for |S| = 20.", "For each training language, we compute the correlation between the performance increases for the test languages and their similarity to this training language, visualised in Figure 2.", "Full results can be found in Appendix C.",
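A sketch of the similarity computation behind this analysis: cosine similarity over binary syntax feature vectors. The vectors below are random stand-ins; in practice they would be the 103 URIEL syntax features (obtainable, we assume, through a typological database query such as the lang2vec package).

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy stand-ins for 103-dimensional binary URIEL syntax vectors.
rng = np.random.default_rng(0)
syntax = {lang: rng.integers(0, 2, 103) for lang in ["en", "it", "ur"]}

print(cosine(syntax["en"], syntax["it"]))  # e.g. English vs. Italian
print(cosine(syntax["en"], syntax["ur"]))  # e.g. English vs. Urdu
```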
"When pre-training on Hindi, there is a significant positive correlation with syntactic similarity to English and related languages.", "When pre-training on English, a positive correlation is seen with similarity to Hindi and Korean.", "Positive correlations imply that, on unseen languages, the improvement increases as similarity to the training language increases.", "Negative correlations mean there is less improvement when similarity to the training languages increases, suggesting that those languages do not contribute as much to adaptation.", "On average, the selection of meta-training languages contributes significantly to the increase in performance for the Hindi pre-training models.", "This effect is stronger for MAML (HIN) (p = 0.006) than NE (HIN) (p = 0.026), which may indicate that the meta-training procedure is better at incorporating knowledge from those unrelated languages.", "Secondly, we analyze which syntactic features impact performance most.", "We correlate individual URIEL features with MAML's performance increases over the monolingual baselines (see Figure 3).", "Features related to word order and negation show a significant correlation.", "Considering the presence of these features in both pre-training languages of MAML, a pattern emerges: when a feature is absent in the pre-training language, there is a positive correlation with the increase in performance.", "Similarly, when a feature is present in the pre-training language, the correlation tends to be negative.", "Figure 2: Spearman's ρ between the performance increase over the monolingual baseline and the cosine similarity to the syntax of training languages (y-axis), for models using pre-training (x-axis).", "This indicates that MAML is successfully adapting to these specific features during meta-training.", "We analyzed MAML's performance improvements over NE on each of the 132 dependency relations, and found that they are consistent across relations.", "The same holds for the 37 coarse-grained UD relations.", "Lastly, we detect non-projective dependency trees in all datasets.", "The Hindi treebank used has 14% non-projective trees, whereas English has only 5%.", "Full results can be found in Appendix B.", "We correlate the increase in performance with the percentage of non-projective trees in a language's treebank.", "The correlation is significant for NE (EN) (ρ = 0.46, p = 0.01) and MAML (EN) (ρ = 0.42, p = 0.03).", "Figure 4 visualizes the correlation for MAML (EN).", "We do not find significant correlations for models pre-trained on Hindi.", "This suggests that a model trained on a mostly projective language can benefit more from further training on non-projective languages than the other way around.", "The same trend is observed when comparing models pre-trained on Italian and Czech, which also differ in their percentage of non-projective trees (Appendix D).",
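The projectivity analysis above reduces to a Spearman rank correlation between per-language performance gains and non-projectivity rates; a sketch with invented values:

```python
from scipy.stats import spearmanr

# Hypothetical per-language values: LAS gain over the monolingual baseline
# and the percentage of non-projective trees in that language's treebank.
gain          = [1.2, 0.4, 2.9, 3.5, 0.8, 2.1, 1.7]
nonprojective = [3.0, 1.5, 9.0, 12.0, 2.0, 7.5, 5.0]

rho, p = spearmanr(gain, nonprojective)
print(f"Spearman rho={rho:.2f}, p={p:.3f}")
```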
"[Figure 3: correlations between individual URIEL features and MAML's performance increase, for MAML (EN) and MAML (HIN); the features include SVO, SOV, adposition before/after noun, possessor after noun, negation placement, oblique after verb, object head-marking, and reduplication.]", "Our experiments confirm that meta-learning, specifically MAML, is able to adapt to unseen languages on the task of cross-lingual dependency parsing more effectively than a non-episodic model.", "The difference between both methods is most apparent for languages that differ strongly from those in the training set (e.g. Japanese in Table 1), where effective few-shot adaptation is crucial.", "This shows that MAML is successful at learning to learn from a few examples, and can efficiently incorporate new information.", "Furthermore, we see a clear increase in performance for MAML when increasing the test support set size, while NE only slightly improves.", "This suggests that MAML may be a promising method for cross-lingual adaptation more generally, also outside of the few-shot learning scenario.", "Our ablation experiments on pre-training show that it is beneficial for MAML to start from a strong set of parameters, pre-trained on a high-resource language.", "Thereby, the approach is not dependent on a specific pre-training language.", "MAML performs well with a variety of pre-training languages, although improvements for unseen languages vary.", "When a model is pre-trained on English, there is a large positive correlation with improvements on languages that are syntactically dissimilar to English, such as Japanese and Tamil.", "During meta-training, dissimilar training languages such as Hindi contribute most to the model's ability to generalize.", "Syntactic features, especially those related to word order, which have already been learned during pre-training, require less adaptation.", "The same is true, vice versa, for Hindi pre-training.", "This effect is also observed, though only in one direction, when correlating the performance increase with non-projectivity.", "It is beneficial to meta-train on a set of languages that vary in projectivity after pre-training on one which is mostly projective.", "However, not all variance is explained by the difference in typological features.", "The fact that MAML outperforms MAML without pre-training suggests that pre-training also contributes language-agnostic syntactic features, which is indeed the overall goal of multilingual UD models.", "In this paper, we present a meta-learning approach for the task of cross-lingual dependency parsing.", "Our experiments show that meta-learning can improve few-shot universal dependency parsing performance on unseen, unrelated test languages, including low-resource languages and those not covered by mBERT.", "In addition, we see that it is beneficial to pre-train before meta-training, as in the X-MAML approach (Nooralahzadeh et al., 2020).", "In particular, the pre-training language can affect how much adaptation is necessary on languages that are typologically different from it.", "Therefore, an important direction for future research is to investigate a wider range of pre-training/meta-training language combinations, based on specific hypotheses about language relationships and relevant syntactic features.", "Task performance may be further improved by including a larger set of syntax-related tasks, such as POS-tagging, to sample from during meta-training (Kondratyuk and Straka, 2019)." ]
[ "abstain", "method", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "method", "method", "objective", "objective", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "method", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "result", "abstain", "abstain", "abstain" ]
[ "Dependency trees have been intensively used with graph neural networks for aspect-based sentiment classification.", "Though being effective, such methods rely on external dependency parsers, which can be unavailable for low-resource languages or perform worse in low-resource domains.", "In addition, dependency trees are also not optimized for aspect-based sentiment classification.", "In this paper, we propose an aspect-specific and language-agnostic discrete latent opinion tree model as an alternative structure to explicit dependency trees.", "To ease the learning of complicated structured latent variables, we build a connection between aspect-to-context attention scores and syntactic distances, inducing trees from the attention scores.", "Results on six English benchmarks, one Chinese dataset and one Korean dataset show that our model can achieve competitive performance and interpretability.", "Aspect-based sentiment classification (ABSA) is the task of recognizing the sentiment polarities of specific aspect categories or aspect terms in a given sentence (Jiang et al., 2011; Dong et al., 2014; Wang et al., 2016; Tang et al., 2016; Li et al., 2018; Du et al., 2019; Sun et al., 2019a; Seoh et al., 2021; Xiao et al., 2021).", "Different from document-level sentiment analysis, different aspect terms in the same document can bear different sentiment polarities.", "For example, given a restaurant review decor is nice though service can be spotty\", the corresponding sentiment labels of decor and service are positive and negative, respectively.", "How to locate the corresponding opinion contexts for each aspect term is a key challenge for ABSA.", "To this end, recent efforts leverage dependency trees (Zhang et al., 2019; Sun et al., 2019a; Wang et al., 2020).", "Syntactic dependencies have been shown to better capture the interaction between the aspect and the opinion contexts (Huang decor is be nice though service can spotty", "(b) Induced tree for decor.", "(c) Induced tree for service.", "et al., 2020; Tang et al., 2020).", "For example, in", "Figure1(a), using syntactic relations, we can find that the corresponding opinion words for decor and service are nice and spotty, respectively.", "Despite its effectiveness, dependency syntax has the following limitations.", "First, dependency parsers can be unavailable for low-resource languages or perform worse in low-resource domains (Duong et al., 2015; Rotman and Reichart, 2019; Vania et al., 2019; Kurniawan et al., 2021).", "Second, dependency trees are also not optimized for aspect-based sentiment classification.", "Previous studies transform dependency trees to aspect-specific forms by hand-crafted rules (Dong et al., 2014; Nguyen and Shirai, 2015; Wang et al., 2020) to improve the aspect sentiment classification performance.", "However, the tree structure is adjusted mainly by the node hierarchy, without optimizing dependency relations for ABSA.", "In this paper, we explore a simple method to induce a discrete opinion tree structure automatically for each aspect.", "Two examples are shown in Figure", "1. 
"In particular, given a target and a sentence, our algorithm induces a tree structure recursively according to a set of attention scores, calculated using a neural layer on top of the BERT representation of the sentence (Devlin et al., 2019).", "Starting with the root node, the algorithm builds a tree by selecting one child node on each side of the current node and recursively continues the partition process to obtain a binarized and lexicalized tree structure.", "The resulting tree serves as the input structure and is fed into graph convolutional networks (Kipf and Welling, 2017) for learning the sentiment classifier.", "We study policy-based reinforcement learning (Williams, 1992) to train the tree inducer.", "One challenge is that the generated policy can be easily remembered by the BERT encoder, which leads to insufficient exploration (Shi et al., 2019).", "To alleviate this issue, we propose a set of regularizers to aid BERT-based policy generation.", "Although our method is conceptually simple and straightforward at the inference stage, we show that it has a deep theoretical grounding.", "In particular, the attention-based tree induction parser trained using the policy network can be viewed as a simplified version of a standard latent tree structured VAE model (Kingma and Welling, 2014; Yin et al., 2018), where the KL divergence between the prior and the posterior tree probabilities is approximated by attention-based syntactic distance measures (Shen et al., 2018a).", "Experiments on six English benchmarks, a Chinese hotel review dataset and a Korean automotive review dataset show the effectiveness of our proposed models.", "The discrete structure also makes it easy to interpret the classification results.", "In addition, our algorithm is faster, smaller and more accurate than a full variational latent tree variable model.", "To our knowledge, we are the first to learn aspect-specific discrete opinion tree structures with BERT.", "We make our code publicly available at https://github.com/CCSoleil/dotGCN .", "Figure 2 shows the architecture of our proposed model.", "Given an input sentence x and a specific aspect term a, we induce an opinion tree t according to a recognition network Q_θ(t | x, a), where θ is the set of network parameters.", "We apply multi-layered graph convolutional networks (GCNs) over the BERT output vectors to model the structural relations in the opinion tree and extract aspect-specific features.", "Finally, we use an attention-based classifier P_φ(y | x, a, t) to predict the sentiment label, where φ is the set of classifier parameters.", "To train the model, RL is used for Q_θ(t | x, a) (Section 2.3) and standard backpropagation is used for training P_φ(y | x, a, t) (Section 2.2).", "Opinion Tree Denote the input sentence as x = w_1 w_2 ... w_n and the aspect as a = w_b w_{b+1} ... w_e.",
w e .", "[ b, e ] is a continuous span of [1 , n ] .", "w i is the i th word.", "As shown in Figure 1, the opinion tree for a is a binarized tree.", "Each node contains a word span and at most two children.", "a is placed at the root node.", "Except for the root node, 1 each node contains only one word.", "An in-order traversal over t can recover the original sentence.", "Ideally, the nodes near the root node should contain the corresponding opinion words, such as nice for decor and spotty for service.", "Algorithm 1 shows the process of building an opinion tree t for a that conforms to the above conditions using a node score function v , where v i indicates the informative score of the i -th word contributing to the sentiment polarity y of a .", "v ji is the corresponding scores of words in the span [ i, j ] .", "We first make the aspect span [ b, e ] as the root node and then build its left and right children from the spans [1 , b 1] and [ e +1 , n ] , respectively.", "To build the left or right subtree, we first select the element with the largest score in the span as the root node of the subtrees and then recursively use the build_tree call for the corresponding span partitions.", "Calculating v Following Song et al. (2019), we feed the inputs [CLS] w 1 w 2 . . . w n [SEP] w b w b +1 . . . w e to BERT 2 to obtain the aspect-specific sentence representation H , and then calculate a set 1 A case study in Appendix shows an example of a root node containing multiple words grilled alaskan king salmon.", "Input : The scores v n 1 , the aspect span [ b, e ] ; //build the root node ; root new TreeNode; root.words = w b w b +1 . . . w e ; // w i is the i -th word.", "root.left = build_tree ( v b 1 1 , 1, b-1); root.right = build_tree( v ne +1 , e+1, n); build_tree( v ji , i , j ): if i > j : return None; node new TreeNode; k arg max k [ i,j ] v k ; node.words = w k ; node.left = build_tree( v k 1 i , i , k 1 ); node.right = build_tree( v jk +1 , k + 1 , j ); return node; Output : root ; Algorithm 1: Aspect-specific construction algorithm given a scoring function v .", "where u p , W p and W a,p are model parameters, is the ReLU activation function, h a is the aspect representation by sum pooling from H b H b +1 . . . H e .", "in Q ( t | x, a ) contains the model parameters of BERT, u p , W p and W a,p .", "Graph Representation Given t and H , we use GCNs to learn the representation vectors for each word.", "We convert t to an undirected graph G .", "Specifically, we take each word as a node in G and design the adjacency matrix A R n n of G by considering four types of edges.", "First, we include self loops for each word.", "Second, we fully connect each word within the aspect term.", "Third, for the child node w j of the root node, we link w j to each word in a .", "Last, we consider edges in t between single word nodes except the root node.", "Formally, A is given by A i,j = 1 if i = j, (self-loops) 1 if i ( b, e ) and j ( b, e ) , (aspect words) 1 if i [ b, e ] and a is the parent node of w j 1 if w i is the parent or a child node of w j 0 otherwise.", "(2) A is ensured to be symmetric by Eq", "2. 
"We then use GCNs to capture the structured relations between word pairs.", "Given the adjacency matrix A between nodes and the representation matrix H^{l−1} ∈ R^{n×d} of the (l−1)-th layer, the l-th layer representation given by a GCN is H^l = f(A H^{l−1} W^l + b^l) (3),", "where f is an activation function (i.e. ReLU), and W^l ∈ R^{d×d} and b^l ∈ R^d are the model parameters for the l-th layer.", "The input to the first GCN layer, H^0, is the representation H given by the sentence encoder.", "Target Aspect Representation We consider both the representation vector of the [CLS] token (H^0_cls) and the aspect vectors given by the last GCN layer (H^N_b, H^N_{b+1}, ..., H^N_e) as the aspect-specific representation used to query the input sentence representation H^0.", "The final aspect-specific feature representation c over the input sentence representation is given by an attention layer: γ_t = (H^0_t)^T (H^0_cls + Σ_{i=b}^{e} H^N_i), α = softmax(γ), c = Σ_t α_t H^0_t (4),", "where γ_t is the attention score of a to w_t, α is the vector of normalized scores, and c is the final feature.", "The output layer uses c to compute the sentiment polarity scores.", "The final sentiment distribution is given by a softmax classifier: p = softmax(W_c c + b_c) (5),", "where W_c and b_c are model parameters and p is the predicted distribution.", "Cross-Entropy Loss The classifier is trained by maximizing the log-likelihood of the training samples.", "Formally, the objective is to minimize L_sup = −Σ_{i=1}^{|D|} Σ_{a ∈ x_i} log p_{i,y_a} (6),", "where |D| is the size of the training data, y_a is the sentiment label of a in the i-th example x_i, and p_{i,y_a} is the classification probability for a, which is given by Eq 5.", "The set of model parameters φ in P_φ(y | x, a, t) includes the GCN blocks and the classifier parameters in Eq 5.",
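A compact sketch of Eqs 3-5 under illustrative dimensions: GCN layers over the induced graph, aspect-conditioned attention pooling, and the softmax classifier. In practice A would come from Eq 2 (typically normalized) and H from the BERT encoder; the class name and sizes here are our own.

```python
import torch
import torch.nn as nn

class DotGCNHead(nn.Module):
    """Sketch of the classifier: N GCN layers (Eq 3), attention pooling (Eq 4),
    and a softmax output layer (Eq 5)."""
    def __init__(self, d=64, n_layers=2, n_classes=3):
        super().__init__()
        self.gcn = nn.ModuleList([nn.Linear(d, d) for _ in range(n_layers)])
        self.cls = nn.Linear(d, n_classes)

    def forward(self, H, A, cls_vec, b, e):
        h = H                                             # H^0: encoder output, (n, d)
        for layer in self.gcn:                            # H^l = ReLU(A H^{l-1} W^l + b^l)
            h = torch.relu(layer(A @ h))
        query = cls_vec + h[b:e + 1].sum(0)               # [CLS] vector + aspect vectors
        alpha = torch.softmax(H @ query, dim=0)           # Eq 4: aspect-to-context attention
        c = (alpha.unsqueeze(1) * H).sum(0)               # attended feature
        return torch.softmax(self.cls(c), dim=-1), alpha  # Eq 5

n, d = 6, 64
H, A = torch.randn(n, d), torch.eye(n)
p, alpha = DotGCNHead(d)(H, A, torch.randn(d), b=1, e=2)
print(p.shape, alpha.shape)  # torch.Size([3]) torch.Size([6])
```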
"Tree Distance Regularized Loss Following Pouran Ben Veyseh et al. (2020), we introduce a syntax constraint to regularize the attention weights.", "Ideally, the words near the root node should receive high attention weights.", "Given an opinion tree t, we compute the tree distance d_i for each word i as the length of the shortest path to the root.", "Given the distances d and the attention scores α, we use a KL divergence loss L_td (Eq 7) to encourage the aspect term to attend to the contexts with shorter distances.", "We use sampling to explore more discrete structures.", "Since the tree sampling process is a discrete decision-making procedure, it is non-differentiable.", "The gradient can be propagated from L_sup in Eq 6 to t and φ, but cannot be further propagated from t to θ.", "Therefore, we use the policy gradient given by REINFORCE (Williams, 1992) to optimize θ in the policy network (Section 2.3).", "Suppose that the reward function for a latent tree t is R_t; the goal of reinforcement learning is then to minimize the negative expected reward L_rl = −E_{Q_θ(t|x,a)}[R_t] (8).", "For each t, we use the sentiment log-likelihood log P(y | x, t, a) as R_t.", "Using REINFORCE, the gradient of L_rl with respect to θ is ∇_θ L_rl = −E_{Q_θ(t|x,a)}[R_t ∇_θ log Q_θ(t | x, a)] (9).", "log Q_θ(t | x, a) is the log-likelihood of the generated sample t, which can be decomposed into a sum of log-likelihoods at each tree-building step.", "According to Algorithm 1, each call of build_tree(v_i^j, i, j) involves selecting an action k from the span [i, j] given the scores v_i^j.", "The action space contains j − i + 1 actions.", "The log-likelihood of this action is given by log π_k = log [ exp(v_k) / Σ_{l=i}^{j} exp(v_l) ], for i ≤ k ≤ j (10).", "In particular, we use v_p in Eq 1 as the score function v.", "Enumerating all possible trees to calculate the expectation term in Eq 9 is intractable, so we use a Monte Carlo method (Rubinstein and Kroese, 2016), approximating the gradient by taking M samples: E_{Q_θ(t|x,a)}[R_t ∇_θ log Q_θ(t | x, a)] ≈ (1/M) Σ_{i=1}^{M} R_{t_i} ∇_θ log Q_θ(t_i | x, a) (11).", "Attention Consistency Loss Instead of relying solely on the reinforced gradient to train the policy network, we also apply an attention consistency loss to directly supervise the policy network.", "Note that there are two attention scores in our model.", "The first is the attention score s_p defined in Eq 1, which is trained by the reinforcement learning algorithm.", "The second is the attention score α defined in Eq 4 for extracting useful context features for the aspect-specific classifier.", "α is trained via end-to-end backpropagation.", "Intuitively, the words that receive the largest attention scores should be effective opinion words for the target aspect.", "Therefore, they should be placed closer to the root node by the policy network.", "To this end, we enforce a consistency regularization between the two attention scores, so that the polarity-oriented attention can directly supervise the scoring policy s_p.", "Formally, L_att is given by L_att = KL(α, s_p) (12).", "The overall training objective is L = L_sup + λ_rl L_rl + λ_att L_att + λ_td L_td (13), where L_sup is the supervised loss, L_rl is the reinforcement learning loss, L_att is the novel attention consistency loss, and L_td is the loss that guides the attention score distributions by tree constraints.", "λ_rl, λ_att and λ_td are hyper-parameters.", "Interestingly, L_sup, L_rl and L_att can be unified in a theoretical framework using variational inference (Kingma and Welling, 2014).",
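Before turning to the variational view, a sketch of the sampled policy-gradient objective (Eqs 9-11): trees are sampled through categorical choices over spans while accumulating log Q_θ(t | x, a), and the reward weights that log-likelihood. The constant reward below is a placeholder for the classifier's log P(y | x, a, t), which would be computed on the sampled tree.

```python
import torch

def sample_tree(v, i, j, logp):
    """Sample split points recursively (Eq 10); accumulate log Q(t|x,a)."""
    if i > j:
        return logp
    dist = torch.distributions.Categorical(logits=v[i:j + 1])
    k = i + dist.sample().item()
    logp = logp + dist.log_prob(torch.tensor(k - i))
    logp = sample_tree(v, i, k - 1, logp)
    return sample_tree(v, k + 1, j, logp)

def reinforce_loss(v, b, e, reward_fn, M=2):
    """Monte Carlo surrogate for Eq 11: -(1/M) sum_m R_m * log Q(t_m|x,a)."""
    loss = torch.tensor(0.0)
    for _ in range(M):
        logp = sample_tree(v, 0, b - 1, torch.tensor(0.0))
        logp = sample_tree(v, e + 1, v.numel() - 1, logp)
        loss = loss - reward_fn() * logp
    return loss / M

v = torch.randn(6, requires_grad=True)   # toy policy scores; aspect span [1, 2]
loss = reinforce_loss(v, 1, 2, reward_fn=lambda: 0.5)
loss.backward()
print(v.grad)
```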
"Formally, the training objective is to minimize the negative log-likelihood, LMLE = log P ( y | x, a, ) = log (cid:88) t P ( y, t | x, a ) , (14) Eq 14 calculates log-of-sum over all possible trees t , which is exponential.", "Eq 14 can be approximated by the evidence lower bound (ELBO) using variational parameters (Kingma and Welling, 2014; Yin et al., 2018), LELBO = E q ( t | x,y,a ) [log P ( y | x, a, t )] + KL (cid:16) q ( t | x, y, a ) , p ( t | x, a ) (cid:17) , (15) where p ( t | x, a ) is the prior distribution for generating latent trees, q ( t | x, y, a ) is the corresponding posterior distribution, log P ( y | x, a, t ) is the log-likelihood function by assuming 2054 that the latent tree t is already known, and E q ( t | x,y,a ) [log P ( y | x, a, t )] is the expected log-likelihood function over q ( t | x, y, a ) by considering all the potential trees.", "The KL term acts as a regularizer to force the matching of the prior and the posterior distributions.", "During training, q ( t | x, y, a ) is used to induce the tree.", "For inference, p ( t | x, a ) is used since y is still unknown.", "In practice, a scale hyper-parameter can be used to control the behaviour of the KL term (Bow-man et al., 2016b), LELBO = E q ( t | x,y,a ) [log P ( y | x, a, t )] + KL (cid:16) q ( t | x, y, a ) || p ( t | x, a ) (cid:17) .", "The first term is an expectation term and the second term is a KL term .", "Eq 16 is a standard VAE model for the ABSA task, which, however, has not been discussed in the research literature.", "It can be trained using the tree entropy (Kim et al., 2019b) and neural mutual information estimation (Fang et al., 2019).", "However, both are slow because they both need to consider a large batch of tree samples.", "To model q ( t | x, y, a ) , we instead calculate a score function s q for the posterior by a MLP layer similar to Eq 1, s q = softmax (cid:16) u q ( W q H + W a,q h a ) (cid:17) , (17) where u q , W q and W a,q are parameters, H and h a are the posterior sentence and aspect representations respectively given y .", "To ensure that y can guide the encoder, we feed the input sequence together with y to BERT by using [CLS] w 1 w 2 . . . w n [SEP] w f w f +1 . . . w e y to obtain H .", "Our method can be regarded as a novel simplification to the above model, which can be shown by correlating the expectation term and the KL term defined in Eq 16 with the attention scores in Eq 1 and Eq 4, respectively.", "In particular, we consider converting t into a special type of tree distance, namely the aspect-to-context attention scores.", "Then we delegate the probability distribution over structured tree samples to a set of attention scores.", "Intuitively, if the attention scores are similar, the generated trees should be highly similar.", "E q ( t | x,y,a ) [log P ( y | x, a, t )] = E q ( t | x,y,a ) [log P ( y | x, a, t ) log q ( t | x, y, a ) ]", "Assuming that the posterior q ( t | x, y, a ) is approximate to Q ( t | x, a ) given by the recognition network, Eq 18 is equivalent to L rl in Eq", "11. 
"Approximate KL Term The KL term resembles L_att in Eq 12 for β = λ_att: KL(q(t | x, y, a) || p(t | x, a)) ≈ KL(α, s_p) (19).", "First, we delegate the probability distribution over tree samples to a set of attention scores.", "In particular, we use s_p and s_q as the proxies for p(t | x, a) and q(t | x, y, a), respectively.", "This is equivalent to saying that the posterior scores s_q and the prior scores s_p are fed to Algorithm 1 to derive the corresponding trees during training.", "Second, since both s_q and the attention score α in Eq 4 are directly supervised by the output label y, we can safely assume that s_q ≈ α.", "Then the KL term KL(s_q, s_p) in Eq 16 becomes KL(α, s_p), which is the attention-based regularization loss defined in Eq 12.", "Experiments We perform experiments on eight aspect-based sentiment analysis benchmarks, including six English datasets, one Chinese dataset, and one Korean dataset.", "The data statistics are shown in Appendix A.3.", "We use Stanza (Qi et al., 2020) as the external parser to produce dependency parses for comparison with dependency tree based models, reporting accuracy (Acc.) and macro-F1 (F1) scores for each model.", "More details are presented in Appendix A.1.", "MAMS Jiang et al. (2019) provide a recent challenge dataset with 4,297 sentences and 11,186 aspects.", "We take it as the main dataset because it is a large-scale multi-aspect dataset with more aspects per sentence compared to the other datasets.", "MAMS-small is a small version of MAMS.", "Chinese hotel reviews dataset Liu et al. (2020) provide 6,339 manually annotated targets over 2,071 review items for multi-target sentiment analysis.", "SemEval datasets We also use the laptop reviews of SemEval 2014 task 4 provided by Pontiki et al. (2014), the restaurant reviews of SemEval 2014 task 4 (Rest14; Pontiki et al. 2014), SemEval 2015 task 12 (Rest15; Pontiki et al. 2015) and SemEval 2016 task 5 (Rest16; Pontiki et al. 2016).", "These datasets are pre-processed following Tang et al. (2016) and Zhang et al. (2019).", "We denote our model as dotGCN (discrete opinion tree GCN), and compare it with BERT-based models, including models that use no trees as well as dependency tree based models.", "In addition, the variational inference baseline (Section 3.1) is denoted as viGCN.", "The baselines are: (1) BERT-SPC, a simple baseline that fine-tunes the [CLS] vector of BERT, from Jiang et al. (2019); (2) AEN: Song et al. (2019) use an attentional encoder with BERT; (3) CapsNet: Jiang et al. (2019) combine a capsule network with BERT; (4) Hard-Span: Hu et al. (2019) use RL to determine aspect-specific opinion spans; (5) depGCN: Zhang et al. (2019) apply aspect-specific GCNs over dependency trees; (6) RGAT: Wang et al. (2020) use relational graph attention networks over aspect-centered dependency trees to incorporate dependency edge type information; (7) SAGAT: Huang et al. (2020) use a graph attention network with BERT, exploring both syntactic and semantic information in the sequence; (8) DGEDT: Tang et al. (2020) jointly consider BERT outputs and dependency tree based representations via a bidirectional GCN;",
"(9) kumaGCN: Chen et al. (2020) combine dependency trees with latent graphs induced by self-attention networks.", "Development Results We perform development experiments using MAMS, since this is the largest dataset and its examples are more challenging compared to the other datasets.", "We implement three baselines, including BERT-SPC, depGCN and kumaGCN.", "Table 2: Results on the two MAMS datasets and the multilingual review datasets.
Method            MAMS           MAMS-small     Multilingual
                  Acc    F1      Acc    F1      Ch-F1  Ko-F1
BERT-SPC          82.22  -       79.44  -       -      -
CapsNet           83.39  -       80.91  -       -      -
CapsNet-DR        82.97  -       80.09  -       -      -
BERT-SPC*         83.01  82.76   80.91  80.39   80.92  61.17
depGCN + L_td     84.36  83.88   81.59  80.81   NA     NA
kumaGCN + L_td    84.37  83.83   81.59  81.10   NA     NA
dotGCN            84.95  84.44   82.34  81.73   81.53  62.78", "For fair comparison, we also combine depGCN and kumaGCN with the syntax regularization loss in Eq 7, by calculating syntactic distances on the input dependency trees with respect to the aspect terms.", "Table 1 shows the results on the MAMS validation set.", "BERT-SPC achieves 84.08 accuracy and 83.52 F1.", "Surprisingly, the dependency tree based models cannot outperform BERT-SPC, which verifies the limitation of using cross-domain dependency parsers for this task.", "kumaGCN outperforms depGCN due to its ability to include an implicit latent graph.", "Adding the syntax regularization loss generally improves the performance of the syntax-based models.", "In particular, kumaGCN + L_td is on par with BERT-SPC.", "viGCN outperforms kumaGCN + L_td and depGCN + L_td, which shows the potential of structured latent tree models.", "Our dotGCN model achieves 84.53 accuracy and 83.97 F1, outperforming all the baselines by a large margin, which empirically shows that the induced discrete opinion trees are promising for this task.", "Compared to viGCN, our model gives better scores.", "In addition, our model converges nearly 1.8 times faster (0.66 h/epoch vs. 1.25 h/epoch) than viGCN.", "dotGCN does not have to calculate the true posterior distribution over structured tree samples and thus largely reduces the computation overhead.", "Ablation Study Table 1 also shows ablation studies on the MAMS validation set, removing each of the three proposed loss terms, namely L_td, L_rl and L_att, during training.", "We observe that the model performance degrades after removing any one of them.", "Removing the syntax regularization loss L_td slightly hurts the performance.", "Without the attention consistency loss L_att, the model falls behind BERT-SPC, which suggests the importance of our proposed attention consistency regularization.", "Excluding the reinforcement learning loss leads to the biggest performance drop (Acc: 84.53 → 83.48) among the three settings.", "Table 3: Results on the five SemEval datasets.
Model           Twitter        Laptop         Rest14         Rest15         Rest16         Average
                Acc    F1      Acc    F1      Acc    F1      Acc    F1      Acc    F1      Acc    F1
AEN             75.14  74.15   76.96  73.67   84.29  77.22   -      -       -      -       -      -
RGAT            76.15  74.88   78.21  74.07   86.60  81.35   -      -       -      -       -      -
BERT-SPC        73.41  72.38   80.56  77.20   84.55  75.74   83.03  63.92   90.75  74.00   82.46  72.65
depGCN          75.58  74.58   81.19  77.67   85.00  78.79   84.13  67.28   91.39  74.25   83.46  74.51
SAGAT           75.40  74.17   80.37  76.94   85.08  77.94   -      -       -      -       -      -
DGEDT           77.90  75.40   79.80  75.60   86.30  80.00   84.00  71.00   91.90  79.00   83.98  76.20
depGCN + L_td   75.49  76.73   79.31  75.84   86.43  80.72   84.69  70.89   92.37  79.40   83.66  76.72
dotGCN          78.11  77.00   81.03  78.10   86.16  80.49   85.24  72.74   93.18  82.32   84.74  78.13", "This shows that the reinforcement learning component plays a central role in the full model.", "MAMS Table 2 shows the results of dotGCN and the baselines from Jiang et al. (2019) on the MAMS test set.",
We implement BERT-SPC ourselves, denoted as BERT-SPC*, which outperforms the BERT-SPC model of Jiang et al. (2019).", "Compared to the baselines without dependency trees (BERT-SPC, CapsNet, CapsNet-DR and BERT-SPC*), dotGCN gives significantly better results ($p < 0.01$).", "For fair comparison with dependency tree based models, we also implement depGCN + $\mathcal{L}_{td}$ and kumaGCN + $\mathcal{L}_{td}$.", "depGCN + $\mathcal{L}_{td}$ achieves 84.36 accuracy and 83.88 F1 on the MAMS test set.", "kumaGCN + $\mathcal{L}_{td}$ gives similar results with 84.37 accuracy and 83.83 F1.", "Our dotGCN outperforms all the baselines, giving 84.95 accuracy and 84.44 F1.", "In terms of the averaged accuracy and F1 scores on MAMS and MAMS-small, dotGCN is significantly better than depGCN and kumaGCN ($p < 0.05$).", "The results demonstrate that the induced aspect-specific discrete opinion trees are promising for handling multi-aspect sentiment tasks.", "Multilingual The results on the Chinese hotel review dataset are shown in Table 2 (since the Hotel dataset is based on Chinese characters, there are no annotated words).", "dotGCN outperforms the baseline BERT-SPC by 0.72 accuracy points and 0.61 F1, respectively.", "The result shows that our model can be generalized across languages without relying on language-specific parsers.", "On the Korean dataset, we obtain 5.20 accuracy and 11.61 F1 improvements compared to LCF-BERT (Zeng et al., 2019), which is the best BERT-based model.", "These results show that our model generalizes well to multiple languages and may potentially benefit low-resource languages for this task.", "SemEval Table 3 shows the results of our model on the SemEval datasets.", "First, tree based graph neural network models are generally better than BERT-SPC.", "On the five datasets, which are relatively small, our model still achieves competitive performance in terms of the averaged F1 and accuracy scores, as shown in Table 3.", "In particular, our model in general outperforms depGCN and depGCN + $\mathcal{L}_{td}$ on four out of five datasets, which verifies that the reinforced discrete opinion trees can be promising structured representations compared to auto-parsed dependency trees.", "We also compare our models with span-based reinforcement learning models (Hard-Span; Hu et al. (2019)) on the laptop and restaurant datasets preprocessed by Tay et al. (2018).
As shown in Table 4, our model outperforms Hard-Span by 2.55 accuracy points on laptops (since the code of the span-based RL methods is not publicly available, we do not include a significance test here).", "On restaurants, our model achieves a result comparable to Hard-Span.", "This shows that the opinion tree is a better representation than an opinion span.", "Figure 3a and Figure 3b show the induced tree and the dependency parse for the aspect term scallops, respectively.", "[Figure 3: (a) the induced tree by dotGCN, containing the opinion words unique and tasty; (b) the corresponding dependency tree by Stanza.]", "In the tree induced by dotGCN, the opinion words tasty and unique are at depths 2 and 3 from the aspect scallops, respectively, which shows that dotGCN can potentially handle complex interactions among aspects and opinion contexts.", "In addition, the tree induced by dotGCN is binarized, and the root node can contain multiple words, as shown in Figure 4a.", "Figure 4a and Figure 4b show the induced trees for two aspect terms with different sentiment polarities.", "[Figure 4: (a) an induced tree for creme brulee; (b) an induced tree for appetizer.]", "For creme brulee, the policy network assigns high weights to both delicious and savory.", "Interestingly, it assigns a higher weight to delicious than to savory, though savory is closer to its aspect term than delicious.", "For appetizer, the word interesting receives higher attention scores than the other two sentiment words.", "These results show that dotGCN is able to distinguish different sentiment contexts for different aspect terms in the same sentence.", "Distances between Aspect Terms and Opinion Words Figure 5 shows the distances between aspect terms and opinion words.", "We use the annotated opinion words of Rest16 provided by Fan et al. (2019) to compare our induced trees and dependency trees.", "The distances calculated over the original sequences are also included.", "We can observe that the distance distribution over the sequences is relatively flat compared to that over tree structures.", "For the two tree structures, nearly 90% of opinion words are within 3 depths of the aspect terms.", "The distance distribution of our induced trees is similar to that of the dependency trees, which empirically demonstrates that the induced discrete trees are able to capture the interactions between aspect terms and opinions.", "Treating dependency trees as the gold standard, our tree inducer obtains a 35.4% unlabeled attachment score (UAS), which shows that the induced trees are significantly different from the dependency trees, although both can connect opinion words with aspect terms.", "We also examine the classification accuracy on the MAMS test set with respect to the aspect frequency.", "For aspect terms which appear in the training corpus, both methods give similar results.", "However, for unseen aspects, dotGCN gives better results than depGCN.", "This is potentially due to severe parsing errors for the low-frequency aspects.", "dotGCN does not depend on external parsers and thus can circumvent this problem.", "This empirically suggests that the induced tree structures are robust in capturing aspect-opinion interactions compared to depGCN.
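To make the tree-distance analysis above concrete, a small sketch of how aspect-to-opinion distances can be computed on a tree given as a parent array; the representation and names are illustrative assumptions, not the authors' evaluation code:

```python
from collections import deque

def tree_distance(parents: list, src: int, dst: int) -> int:
    """Number of edges on the path between nodes src and dst; parents[root] == -1."""
    # Build an undirected adjacency list from the parent array.
    adj = {i: [] for i in range(len(parents))}
    for child, par in enumerate(parents):
        if par >= 0:
            adj[child].append(par)
            adj[par].append(child)
    # Plain BFS from the aspect node.
    dist = {src: 0}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return dist[node]
        for nxt in adj[node]:
            if nxt not in dist:
                dist[nxt] = dist[node] + 1
                queue.append(nxt)
    return -1  # disconnected, which should not happen for a tree

# e.g. aspect "scallops" at index 2, opinion "tasty" at index 7:
# tree_distance(parents, 2, 7)
```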
Tree Induction for ABSA There has been much work on unsupervised discrete induction (Bowman et al., 2016a; Shen et al., 2018b; Kim et al., 2019b,a; Jin et al., 2019; Cao et al., 2020; Yang et al., 2021; Dai et al., 2021), which aims to obtain general constituent trees without explicit syntax annotations and task-dependent supervised signals.", "We focus on learning task-specific tree structures for ABSA, where the tree is fully binarized and lexicalized.", "Choi et al. (2018) propose Gumbel Tree-LSTM for learning task-specific trees for semantic composition.", "Similarly, Maillard et al. (2019) propose an unsupervised chart parser for jointly learning sentence embeddings and syntax.", "However, they focus on sentence-level tasks and do not consider aspect information.", "Aspect-level Sentiment Classification Much recent work has applied neural attention mechanisms to this task (Tang et al., 2016; Ma et al., 2017; Li et al., 2018; Liang et al., 2019).", "Among tree-based methods, Zhang et al. (2019) and Sun et al. (2019b) encode dependency trees using GCNs for aspect-level sentiment analysis; Zhao et al. (2019) use a GCN to model fully connected graphs between aspect terms; Wang et al. (2020) use relational graph attention networks to incorporate dependency edge type information and construct aspect-specific graph structures; Barnes et al. (2021) attempt to directly predict dependency-based sentiment graphs.", "Tang et al. (2020) use a dual-transformer structure to enhance the dependency graph for this task.", "Our work is similar in that we also consider structural dependencies, but different in that we rely on automatically induced tree structures instead of external parses.", "Chen et al. (2020) propose to induce aspect-specific latent graphs by sampling from self-attention-based Hard Kumaraswamy distributions (Bastings et al.).", "However, to achieve competitive performance, their method still requires a combination of external dependency parse trees and the induced latent graphs.", "Sun et al. (2019a) and Xu et al. (2019) construct aspect-related auxiliary sentences as inputs to BERT (Devlin et al., 2019) for strong contextual encoders.", "Xu et al. (2019) propose BERT-based post-training to enhance domain-specific contextual representations for aspect sentiment analysis.", "Our work shares a similar feature extraction approach, but differs in that we focus on inducing latent trees for ABSA.", "We proposed a method to induce aspect-specific discrete opinion trees for aspect-based sentiment analysis, obtaining trees by viewing aspect-to-context attention scores as syntactic distances.", "The attention scores are trained using both RL and a novel attention-based regularization.", "Our model empirically achieves competitive performance compared with dependency tree based models, while being independent of parsers.", "We also provide a theoretical view of our method using variational inference.", "Zhiyang Teng and Yue Zhang are the corresponding authors.", "Our thanks to the anonymous reviewers for their insightful comments and suggestions.", "We appreciate Prof. Pengyuan Liu sharing the Chinese Hotel dataset, Prof. Jingjing Wang sharing the reinforcement learning code of Wang et al. (2019), Mr. Chuang Fan helping obtain the MAMS-Small dataset, Prof. Hwanjo Yu and Mr. Dongmin Hyun sharing the Korean automotive datasets, Prof. Dejiang Dou and Mr. Amir Veyseh responding to our questions when reproducing their results on MAMS, and Mr. Zhen Wu for releasing the code of Wu et al. (2020) upon our request.", "We thank Dr. Xuebin Wang for providing us with 2 V100 GPU cards for use.", "This publication is conducted with the financial support of the Pioneer and Leading Goose R&D Program of Zhejiang under Grant Number 2022SDXHDX0003." ]
[ "abstain", "abstain", "abstain", "objective", "method", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "abstain", "abstain", "method", "abstain", "objective", "result", "abstain", "objective", "abstain", "abstain", "objective", "other", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "other", "abstain", "method", "abstain", "method", "other", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "other", "abstain", "abstain", "abstain", "method", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "result", "abstain", "result", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "method", "objective", "abstain", "result", "method", "other", "other", "other", "other", "other" ]
[ "One of the most crucial challenges in question answering (QA) is the scarcity of labeled data, since it is costly to obtain question-answer (QA) pairs for a target text domain with human annotation.", "An alternative approach to tackle the problem is to use automatically generated QA pairs from either the problem context or from large amount of unstructured texts (e.g. Wikipedia).", "In this work, we propose a hierarchical conditional variational autoencoder (HCVAE) for generating QA pairs given unstructured texts as contexts, while maximizing the mutual information between generated QA pairs to ensure their consistency.", "We validate our Info rmation Maximizing H ierarchical C onditional V ariational A uto E ncoder ( Info-HCVAE ) on several benchmark datasets by evaluating the performance of the QA model (BERT-base) using only the generated QA pairs (QA-based evaluation) or by using both the generated and human-labeled pairs (semi-supervised learning) for training, against state-of-the-art baseline models.", "The results show that our model obtains impressive performance gains over all baselines on both tasks, using only a fraction of data for training.", "1 1 Introduction Extractive Question Answering (QA) is one of the most fundamental and important tasks for natural language understanding.", "Thanks to the increased complexity of deep neural networks and use of knowledge transfer from the language models pretrained on large-scale corpora (Peters et al., 2018; Devlin et al., 2019; Dong et al., 2019), the state-of-the-art QA models have achieved human-level performance on several benchmark datasets (Ra-jpurkar et al., 2016, 2018).", "However, what is also * Equal contribution 1 The generated QA pairs and the code can be found at https://github.com/seanie12/Info-HCVAE Paragraph (Input) Philadelphia has more murals than any other u.s. city, thanks in part to the 1984 creation of the department of recreation's mural arts program, ...The program has funded more than 2,800 murals Q1 which city has more murals than any other city?", "A1 philadelphia Q2 why philadelphia has more murals?", "A2 the 1984 creation of the department of recreation's mural arts program Q3 when did the department of recreation' s mural arts program start ?", "A3 1984 Q4 how many murals funded the graffiti arts program by the department of recreation?", "A4 more than 2,800 Table 1 : An example of QA pairs generated with our framework.", "The paragraph is an extract from Wikipedia provided by Du and Cardie (2018).", "For more examples, please see Appendix D. crucial to the success of the recent data-driven models, is the availability of large-scale QA datasets.", "To deploy the state-of-the-art QA models to real-world applications, we need to construct high-quality datasets with large volumes of QA pairs to train them; however, this will be costly, requiring a massive amount of human efforts and time.", "Question generation (QG) , or Question-Answer pair generation (QAG) , is a popular approach to overcome this data scarcity challenge.", "Some of the recent works resort to semi-supervised learning, by leveraging large amount of unlabeled text (e.g. 
However, existing QG systems have overlooked an important point: generating QA pairs from a context of unstructured text is essentially a one-to-many problem.", "Sequence-to-sequence models are known to generate generic sequences without much variety (Zhao et al., 2017a), as they are trained with maximum likelihood estimation.", "This is highly suboptimal for QAG, since the contexts given to the model often contain richer information that could be exploited to generate multiple QA pairs.", "To tackle the above issue, we propose a novel probabilistic deep generative model for QA pair generation.", "Specifically, our model is a hierarchical conditional variational autoencoder (HCVAE) with two separate latent spaces for question and answer conditioned on the context, where the answer latent space is additionally conditioned on the question latent space.", "During generation, this hierarchical conditional VAE first generates an answer given a context, and then generates a question given both the answer and the context, by sampling from both latent spaces.", "This probabilistic approach allows the model to generate diverse QA pairs focusing on different parts of a context each time.", "Another crucial challenge of the QG task is to ensure the consistency between a question and its corresponding answer, since they should be semantically dependent on each other such that the question is answerable from the given answer and the context.", "In this paper, we tackle this consistency issue by maximizing the mutual information between the generated QA pairs (Belghazi et al., 2018; Hjelm et al., 2019; Yeh and Chen, 2019).", "We empirically validate that the proposed mutual information maximization significantly improves the QA-pair consistency.", "Combining both the hierarchical CVAE and the InfoMax regularizer, we propose a novel probabilistic generative QAG model which we refer to as the Information Maximizing Hierarchical Conditional Variational AutoEncoder (Info-HCVAE).", "Our Info-HCVAE generates diverse and consistent QA pairs even from a very short context (see Table 1).", "But how should we quantitatively measure the quality of the generated QA pairs?", "Popular evaluation metrics for text generation (e.g. BLEU (Papineni et al., 2002), ROUGE (Lin and Hovy, 2002), METEOR (Banerjee and Lavie, 2005)) only tell how similar the generated QA pairs are to the ground-truth (GT) QA pairs, and are not directly correlated with their actual quality (Nema and Khapra, 2018; Zhang and Bansal, 2019).
Therefore, we use the QA-based Evaluation (QAE) metric proposed by Zhang and Bansal (2019), which measures how well the generated QA pairs match the distribution of GT QA pairs.", "Yet, in a semi-supervised learning setting where we already have GT labels, we need novel QA pairs that are different from the GT QA pairs for the additional QA pairs to be truly effective.", "Thus, we propose a novel metric, Reverse QAE (R-QAE), which is low if the generated QA pairs are novel and diverse.", "We experimentally validate our QAG model on the SQuAD v1.1 (Rajpurkar et al., 2016), Natural Questions (Kwiatkowski et al., 2019), and TriviaQA (Joshi et al., 2017) datasets, with both QAE and R-QAE, using BERT-base (Devlin et al., 2019) as the QA model.", "Our QAG model obtains high QAE and low R-QAE, largely outperforming state-of-the-art baselines using a significantly smaller number of contexts.", "Further experimental results for semi-supervised QA on the three datasets, using SQuAD as the labeled dataset, show that our model achieves significant improvements over the state-of-the-art baseline (+2.12 on SQuAD, +5.67 on NQ, and +1.18 on TriviaQA in EM).", "Our contribution is threefold: We propose a novel hierarchical variational framework for generating diverse QA pairs from a single context, which is, to our knowledge, the first probabilistic generative model for question-answer pair generation (QAG).", "We propose an InfoMax regularizer which effectively enforces the consistency between the generated QA pairs, by maximizing their mutual information.", "This is a novel approach to resolving consistency between QA pairs for QAG.", "We evaluate our framework on several benchmark datasets by either training a new model entirely on generated QA pairs (QA-based evaluation), or using both ground-truth and generated QA pairs (semi-supervised QA).", "Our model achieves impressive performance on both tasks, largely outperforming existing QAG baselines.", "Question and Question-Answer Pair Generation Early works on Question Generation (QG) mostly resort to rule-based approaches (Heilman and Smith, 2010; Lindberg et al., 2013; Labutov et al., 2015).", "Recently, however, encoder-decoder based neural architectures (Du et al., 2017; Zhou et al., 2017) have gained popularity, as they outperform rule-based methods.", "Some of them use paragraph-level information (Du and Cardie, 2018; Song et al., 2018; Liu et al., 2019; Zhao et al., 2018; Kim et al., 2019; Sun et al., 2018) as additional information.", "Reinforcement learning is a popular approach to train neural QG models, where the reward is defined via the evaluation metrics (Song et al., 2017; Kumar et al., 2018) or the QA accuracy/likelihood (Yuan et al., 2017; Hosking and Riedel, 2019; Zhang and Bansal, 2019).", "State-of-the-art QG models (Alberti et al., 2019; Dong et al., 2019; Chan and Fan, 2019) use pre-trained language models.", "Question-Answer Pair Generation (QAG) from contexts, which is our main target, is a relatively less explored topic tackled by only a few recent works (Du and Cardie, 2018; Alberti et al., 2019; Dong et al., 2019).", "To the best of our knowledge, we are the first to propose a probabilistic generative model for end-to-end QAG; Yao et al. (2018) use a VAE for QG, but they do not tackle QAG.
Moreover, we effectively resolve the QA-pair consistency issue by maximizing their mutual information with an InfoMax regularizer (Belghazi et al., 2018; Hjelm et al., 2019; Yeh and Chen, 2019), which is another contribution of our work.", "Semi-supervised QA with QG With the help of QG models, it is possible to train QA models in a semi-supervised manner to obtain improved performance.", "Tang et al. (2017) apply dual learning to jointly train QA and QG on an unlabeled dataset.", "Yang et al. (2017) and Tang et al. (2018) train QG and QA in a GAN framework (Goodfellow et al., 2014).", "Sachan and Xing (2018) propose a curriculum learning approach that supervises the QG model to gradually generate more difficult questions for the QA model.", "Dhingra et al. (2018) introduce a cloze-style QAG method to pretrain a QA model.", "Zhang and Bansal (2019) propose to filter out low-quality synthetic questions by the answer likelihood.", "While we focus on the answerable setting in this paper, a few recent works tackle unanswerable settings.", "Zhu et al. (2019) use neural networks to edit answerable questions into unanswerable ones, and perform semi-supervised QA.", "Alberti et al. (2019) and Dong et al. (2019) convert generated questions into unanswerable ones using heuristics, and filter or replace the corresponding answers based on EM or F1.", "Variational Autoencoders Variational autoencoders (VAEs) (Kingma and Welling, 2014) are probabilistic generative models used in a variety of natural language understanding tasks, including language modeling (Bowman et al., 2016), dialogue generation (Serban et al., 2017; Zhao et al., 2017b; Park et al., 2018; Du et al., 2018; Qiu et al., 2019), and machine translation (Zhang et al., 2016; Su et al., 2018; Deng et al., 2018).", "In this work, we propose a novel hierarchical conditional VAE framework with an InfoMax regularization for generating pairs of samples with high consistency.", "Our goal is to generate diverse and consistent QA pairs to tackle the data scarcity challenge in the extractive QA task.", "Formally, given a context $c$ containing $M$ tokens, $c = (c_1, \ldots, c_M)$, we want to generate QA pairs $(x, y)$, where $x = (x_1, \ldots, x_N)$ is a question containing $N$ tokens and $y = (y_1, \ldots, y_L)$ is its corresponding answer containing $L$ tokens.
We aim to tackle the QAG task by learning the conditional joint distribution of the question and answer given the context, $p(x, y \mid c)$, from which we can sample QA pairs: $(x, y) \sim p(x, y \mid c)$.", "We estimate $p(x, y \mid c)$ with a probabilistic deep generative model, which we describe next.", "We propose to approximate the unknown conditional joint distribution $p(x, y \mid c)$ with a variational autoencoder (VAE) framework (Kingma and Welling, 2014).", "However, instead of directly learning a common latent space for both question and answer, we model $p(x, y \mid c)$ in a hierarchical conditional VAE framework with separate latent spaces for question and answer as follows: $$p(x, y \mid c) = \int_{z_x} \sum_{z_y} p(x \mid z_x, y, c)\, p(y \mid z_y, c)\, p(z_y \mid z_x, c)\, p(z_x \mid c)\, dz_x$$ where $z_x$ and $z_y$ are latent variables for question and answer respectively, and $p(z_x \mid c)$ and $p(z_y \mid z_x, c)$ are their conditional priors, following an isotropic Gaussian distribution and a categorical distribution (Figure 1-(a)).", "[Figure 1: The conceptual illustration of the proposed HCVAE model, encoding and decoding a question and its corresponding answer jointly. The dashed lines refer to the generative process of HCVAE.]", "We decompose the latent spaces of question and answer, since the answer is always a finite span of the context $c$, which can be modeled well by a categorical distribution, while a continuous latent space is a more appropriate choice for the question, since there could be an unlimited number of valid questions for a single context.", "Moreover, we design a bi-directional dependency flow for the joint distribution of QA.", "By leveraging the hierarchical structure, we enforce the answer latent variables to be dependent on the question latent variables in $p(z_y \mid z_x, c)$, and achieve the reverse dependency by sampling the question $x \sim p(x \mid z_x, y, c)$.", "We then use a variational posterior $q_\phi(\cdot)$ to maximize the Evidence Lower Bound (ELBO) as follows (the complete derivation is provided in Appendix A): $$\log p(x, y \mid c) \geq \mathbb{E}_{z_x \sim q_\phi(z_x \mid x, c)}[\log p_\theta(x \mid z_x, y, c)] + \mathbb{E}_{z_y \sim q_\phi(z_y \mid z_x, y, c)}[\log p_\theta(y \mid z_y, c)] - D_{KL}[q_\phi(z_y \mid z_x, y, c) \,\|\, p_\psi(z_y \mid z_x, c)] - D_{KL}[q_\phi(z_x \mid x, c) \,\|\, p_\psi(z_x \mid c)] =: \mathcal{L}_{HCVAE}$$ where $\theta$, $\phi$, and $\psi$ are the parameters of the generation, posterior, and prior networks, respectively.", "We refer to this model as a Hierarchical Conditional Variational Autoencoder (HCVAE) framework.", "Figure 2 shows the directed graphical model of our HCVAE.", "[Figure 2: The directed graphical model for HCVAE. The gray and white nodes denote observed and latent variables.]", "The generative process is as follows: 1. sample a question latent variable: $z_x \sim p_\psi(z_x \mid c)$; 2. sample an answer latent variable: $z_y \sim p_\psi(z_y \mid z_x, c)$; 3. generate an answer: $y \sim p_\theta(y \mid z_y, c)$; 4. generate a question: $x \sim p_\theta(x \mid z_x, y, c)$.
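A minimal sketch of the resulting training objective (assuming PyTorch distributions; names, shapes, and the helper interface are illustrative, and beta reflects the KL down-weighting mentioned in the implementation details later):

```python
import torch
import torch.distributions as D

def hcvae_elbo(log_p_question: torch.Tensor,   # [batch] E[log p(x | z_x, y, c)]
               log_p_answer: torch.Tensor,     # [batch] E[log p(y | z_y, c)]
               post_zx: D.Normal, prior_zx: D.Normal,            # batch shape [batch, dim_zx]
               post_zy: D.Categorical, prior_zy: D.Categorical,  # batch shape [batch, n_vars]
               beta: float = 0.1) -> torch.Tensor:
    """Negative ELBO; beta < 1 down-weights the KL terms against posterior collapse."""
    recon = -(log_p_question + log_p_answer)
    kl_zx = D.kl_divergence(post_zx, prior_zx).sum(dim=-1)  # KL[q(z_x|x,c) || p(z_x|c)]
    kl_zy = D.kl_divergence(post_zy, prior_zy).sum(dim=-1)  # KL[q(z_y|z_x,y,c) || p(z_y|z_x,c)]
    return (recon + beta * (kl_zx + kl_zy)).mean()
```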
Embedding We use the pre-trained word embedding network from BERT (Devlin et al., 2019) for the posterior and prior networks, whereas the whole BERT model is used as a contextualized word embedding model for the generative networks.", "For the answer encoding, we use the binary token type ids of BERT.", "Specifically, we encode all context tokens as 0s, except for the tokens which are part of the answer span (the highlighted words of the context in Figure 1-(a) or -(c)), which we encode as 1s.", "We then feed the sequence of word token ids, token type ids, and position ids into the embedding layer to encode the answer-aware context.", "We fix all the embedding layers of HCVAE during training.", "Prior Networks We use two different conditional prior networks, $p_\psi(z_x \mid c)$ and $p_\psi(z_y \mid z_x, c)$, to model context-dependent priors (the dashed lines in Figure 1-(a)).", "To obtain the parameters of the isotropic Gaussian $\mathcal{N}(\mu, \sigma^2 I)$ for $p_\psi(z_x \mid c)$, we use a bidirectional LSTM (Bi-LSTM) to encode the word embeddings of the context into hidden representations, and then feed them into a Multi-Layer Perceptron (MLP).", "We model $p_\psi(z_y \mid z_x, c)$ as a categorical distribution $\mathrm{Cat}(\pi)$, computing the parameter $\pi$ from $z_x$ and the hidden representation of the context using another MLP.", "Posterior Networks We use two conditional posterior networks, $q_\phi(z_x \mid x, c)$ and $q_\phi(z_y \mid z_x, y, c)$, to approximate the true posterior distributions of the latent variables for both the question $x$ and the answer $y$.", "We use two Bi-LSTM encoders to output the hidden representations of the question and the context given their word embeddings.", "Then, we feed the two hidden representations into an MLP to obtain the parameters of the Gaussian distribution, $\mu'$ and $\sigma'$ (upper right corner in Figure 1-(a)).", "We use the reparameterization trick (Kingma and Welling, 2014) to train the model with backpropagation, since the stochastic sampling process $z_x \sim q_\phi(z_x \mid x, c)$ is non-differentiable.", "We use another Bi-LSTM to encode the word embeddings of the answer-aware context into a hidden representation.", "Then, we feed the hidden representation and $z_x$ into an MLP to compute the parameter $\pi'$ of the categorical distribution (lower right corner in Figure 1-(a)).", "We use the categorical reparameterization trick with Gumbel-softmax (Maddison et al., 2017; Jang et al., 2017) to enable backpropagation through the sampled discrete latent variables.
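The two sampling tricks just described can be sketched as follows (assuming PyTorch; variable names are illustrative):

```python
import torch
import torch.nn.functional as F

def sample_zx(mu: torch.Tensor, sigma: torch.Tensor) -> torch.Tensor:
    # Gaussian reparameterization: z = mu + sigma * eps, eps ~ N(0, I),
    # so the sample is differentiable with respect to mu and sigma.
    eps = torch.randn_like(sigma)
    return mu + sigma * eps

def sample_zy(logits: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
    # Gumbel-softmax: a relaxed one-hot sample from a categorical distribution;
    # hard=True uses a straight-through estimator to discretize the forward pass.
    return F.gumbel_softmax(logits, tau=tau, hard=True)
```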
Answer Generation Networks Since we consider extractive QA, we can factorize $p_\theta(y \mid z_y, c)$ into $p_\theta(y_s \mid z_y, c)$ and $p_\theta(y_e \mid z_y, c)$, where $y_s$ and $y_e$ are the start and end positions of the answer span (the highlighted words in Figure 1-(b)), respectively.", "To obtain MLE estimators for both, we first encode the context $c$ into contextualized word embeddings $E^c = \{e^c_1, \ldots, e^c_M\}$ with the pre-trained BERT.", "We compute the final hidden representation of the context and the latent variable $z_y$ with a heuristic matching layer (Mou et al., 2016) and a Bi-LSTM: $$f_i = [e^c_i; z_y; |e^c_i - z_y|; e^c_i \odot z_y], \quad \overrightarrow{h}_i = \overrightarrow{\mathrm{LSTM}}([f_i, \overrightarrow{h}_{i-1}]), \quad \overleftarrow{h}_i = \overleftarrow{\mathrm{LSTM}}([f_i, \overleftarrow{h}_{i+1}]), \quad H = [\overrightarrow{h}_i; \overleftarrow{h}_i]_{i=1}^{M}$$ where $z_y$ is linearly transformed, and $H \in \mathbb{R}^{d_y \times M}$ is the final hidden representation.", "Then, we feed $H$ into two separate linear layers to predict $y_s$ and $y_e$.", "Question Generation Networks We design the encoder-decoder architecture of our QG network mainly by adopting it from our baselines (Zhao et al., 2018; Zhang and Bansal, 2019).", "For encoding, we use the pre-trained BERT to encode the answer-specific context into contextualized word embeddings, and then use a two-layer Bi-LSTM to encode them into a hidden representation (Figure 1-(c)).", "We apply a gated self-attention mechanism (Wang et al., 2017) to the hidden representation to better capture long-term dependencies within the context, obtaining a new hidden representation $H \in \mathbb{R}^{d_x \times M}$.", "The decoder is a two-layer LSTM which receives the latent variable $z_x$ as its initial state.", "It uses an attention mechanism (Luong et al., 2015) to dynamically aggregate $H$ at each decoding step into a context vector $s_j$, using the $j$-th decoder hidden representation $d_j \in \mathbb{R}^{d_x}$ (Figure 1-(c)).", "Then, we feed $d_j$ and $s_j$ into an MLP with maxout activation (Goodfellow et al., 2013) to compute the final hidden representation $\hat{d}_j$ as follows: $$d_0 = z_x, \quad d_j = \mathrm{LSTM}([e^x_{j-1}, d_{j-1}]), \quad r_j = H^T W_a d_j, \quad a_j = \mathrm{softmax}(r_j), \quad s_j = H a_j, \quad \hat{d}_j = \mathrm{MLP}([d_j; s_j])$$ where $z_x$ is linearly transformed, and $e^x_j$ is the $j$-th question word embedding.", "The output distribution is $p(x_j \mid x_{<j}, z_x, y, c) = \mathrm{softmax}(W_e \hat{d}_j)$.", "We initialize the weight matrix $W_e$ with the pre-trained word embedding matrix and fix it during training.", "Further, we use the copy mechanism (Zhao et al., 2018), so that the model can directly copy tokens from the context.", "We also decode questions greedily, to ensure that all stochasticity comes from the sampling of the latent variables.", "One of the most important challenges of the QAG task is enforcing consistency between a generated question and its corresponding answer.", "They should be semantically consistent, such that it is possible to predict the answer given the question and the context.", "However, neural QG or QAG models often generate questions irrelevant to the context and the answer (Zhang and Bansal, 2019), due to the lack of a mechanism enforcing this consistency.", "We tackle this issue by maximizing the mutual information (MI) of a generated QA pair, assuming that an answerable QA pair will have high MI.", "Since exact computation of MI is intractable, we use a neural approximation.", "While there exist many different approximations (Belghazi et al., 2018; Hjelm et al., 2019), we use the estimator proposed by Yeh and Chen (2019), based on the Jensen-Shannon divergence: $$\mathrm{MI}(X; Y) \geq \mathbb{E}_{x, y \sim P}[\log g(x, y)] + \tfrac{1}{2}\mathbb{E}_{x, \tilde{y} \sim N}[\log(1 - g(x, \tilde{y}))] + \tfrac{1}{2}\mathbb{E}_{\tilde{x}, y \sim N}[\log(1 - g(\tilde{x}, y))] =: \mathcal{L}_{Info}$$ where $\mathbb{E}_P$ and $\mathbb{E}_N$ denote expectations over positive and negative examples.", "We generate negative examples by shuffling the QA pairs in the minibatch, so that a question is randomly associated with an answer.", "Intuitively, the function $g(\cdot)$ acts as a binary classifier that discriminates whether a QA pair comes from the joint distribution or not.", "We empirically find that the following $g(\cdot)$ effectively achieves our goal of consistent QAG: $g(x, y) = \mathrm{sigmoid}(\bar{x}^T W \bar{y})$, where $\bar{x} = \frac{1}{N}\sum_i \hat{d}_i$ and $\bar{y} = \frac{1}{L}\sum_j h_j$ are summarized representations of the question and answer, respectively.
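A minimal sketch of this JSD-based InfoMax loss (assuming PyTorch; names are illustrative), with in-batch negatives built by shuffling, matching the two negative expectations above:

```python
import torch
import torch.nn.functional as F

def infomax_loss(q_vec: torch.Tensor, a_vec: torch.Tensor, W: torch.Tensor) -> torch.Tensor:
    """q_vec, a_vec: [batch, d] mean-pooled question/answer representations; W: [d, d]."""
    pos = torch.einsum("bd,de,be->b", q_vec, W, a_vec)           # g logits, matched pairs
    perm = torch.randperm(a_vec.size(0), device=a_vec.device)
    neg_a = torch.einsum("bd,de,be->b", q_vec, W, a_vec[perm])   # questions with shuffled answers
    neg_q = torch.einsum("bd,de,be->b", q_vec[perm], W, a_vec)   # answers with shuffled questions
    ones, zeros = torch.ones_like(pos), torch.zeros_like(pos)
    # Binary cross-entropy realizes log g on positives and log(1 - g) on negatives.
    loss = F.binary_cross_entropy_with_logits(pos, ones)
    loss = loss + 0.5 * F.binary_cross_entropy_with_logits(neg_a, zeros)
    loss = loss + 0.5 * F.binary_cross_entropy_with_logits(neg_q, zeros)
    return loss  # minimizing this maximizes the MI lower bound
```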
Combined with the ELBO, the final objective of our Info-HCVAE is as follows: $$\max_\Theta\; \mathcal{L}_{HCVAE} + \lambda \mathcal{L}_{Info}$$ where $\Theta$ includes all the parameters of $\theta$, $\phi$, $\psi$ and $W$, and $\lambda$ controls the effect of MI maximization.", "Stanford Question Answering Dataset v1.1 (SQuAD) (Rajpurkar et al., 2016): This is a reading comprehension dataset consisting of questions obtained from crowdsourcing on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage.", "We use the same split as Zhang and Bansal (2019) for fair comparison.", "Natural Questions (NQ) (Kwiatkowski et al., 2019): This dataset contains realistic questions from actual user queries to a search engine, using Wikipedia articles as context.", "We adapt the dataset provided by the MRQA shared task (Fisch et al., 2019) and convert it into the extractive QA format.", "We split the original validation set in half, to use as validation and test sets for our experiments.", "TriviaQA (Joshi et al., 2017): This is a reading comprehension dataset containing question-answer-evidence triples.", "The QA pairs and the evidence (context) documents are authored and uploaded by trivia enthusiasts.", "Again, we only choose QA pairs whose answers are spans of the contexts.", "HarvestingQA (https://github.com/xinyadu/harvestingQA): This dataset contains the top-ranking 10K Wikipedia articles and 1M synthetic QA pairs generated from them, by the answer span extraction and QG system proposed by Du and Cardie (2018).", "We use this dataset for semi-supervised learning.", "Implementation Details In all experiments, we use BERT-base ($d = 768$) (Devlin et al., 2019) as the QA model, setting most of the hyperparameters as described in the original paper.", "For both HCVAE and Info-HCVAE, we set the hidden dimensionality of the Bi-LSTM to 300 for the posterior, prior, and answer generation networks, and use dimensionalities of 450 and 900 for the encoder and the decoder of the question generation network.", "We set the dimensionality of $z_x$ to 50, and define $z_y$ to be a set of 10-way categorical variables $z_y = \{z_1, \ldots, z_{20}\}$.", "For training the QA model, we fine-tune it for 2 epochs.", "We train both the QA model and Info-HCVAE with the Adam optimizer (Kingma and Ba, 2015), with a batch size of 32 and initial learning rates of $5 \times 10^{-5}$ and $10^{-3}$, respectively.", "For semi-supervised learning, we first pre-train BERT on the synthetic data for 2 epochs and then fine-tune it on the GT dataset for 2 epochs.", "To prevent posterior collapse, we multiply the KL divergence terms of the question and answer by 0.1 (Higgins et al., 2017).", "For more details of the datasets and experimental setup, please see Appendix C.", "Baselines We experiment with two variants of our model against several baselines: 1. Harvest-QG: an attention-based neural QG model with a neural answer extraction system (Du and Cardie, 2018); 2. Maxout-QG: a neural QG model based on a maxout copy mechanism with gated self-attention (Zhao et al., 2018), which uses BERT as the word embedding, as suggested by Zhang and Bansal (2019); 3. Semantic-QG: a neural QG model based on Maxout-QG with semantic-enhanced reinforcement learning (Zhang and Bansal, 2019); 4. HCVAE: our HCVAE model without the InfoMax regularizer; 5. Info-HCVAE: our full model with the InfoMax regularizer.
For the baselines, we use the same answer spans extracted by the answer extraction system of Du and Cardie (2018).", "QAE and R-QAE One of the crucial challenges with generative models is the lack of a good quantitative evaluation metric.", "We adopt the QA-based Evaluation (QAE) metric proposed by Zhang and Bansal (2019) to measure the quality of QA pairs.", "QAE is obtained by first training the QA model on the synthetic data, and then evaluating the QA model on human-annotated test data.", "However, QAE only measures how well the distribution of synthetic QA pairs matches the distribution of GT QA pairs, and does not consider the diversity of the QA pairs.", "Thus, we propose Reverse QA-based Evaluation (R-QAE), which is the accuracy of a QA model trained on the human-annotated QA pairs, evaluated on the generated QA pairs.", "If the synthetic data covers a larger distribution than the human-annotated training data, R-QAE will be lower.", "However, note that having a low R-QAE is only meaningful when the QAE is high enough, since trivially invalid questions may also yield low R-QAE.
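The two protocols can be summarized in a short sketch; `train_qa`, `evaluate`, and the dataset arguments are illustrative stand-ins for BERT-base fine-tuning and EM/F1 scoring:

```python
def qae(synthetic_pairs, human_test_set, train_qa, evaluate):
    qa_model = train_qa(synthetic_pairs)        # train on generated QA pairs
    # High QAE: synthetic pairs match the distribution of gold QA pairs.
    return evaluate(qa_model, human_test_set)

def r_qae(human_train_set, synthetic_pairs, train_qa, evaluate):
    qa_model = train_qa(human_train_set)        # train on gold QA pairs
    # Low R-QAE: generated pairs fall outside the gold distribution (novel/diverse).
    return evaluate(qa_model, synthetic_pairs)
```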
Table 2: QAE (higher is better) and R-QAE (lower is better) results on three datasets, in EM/F1. All results are the performances on our test set.

SQuAD:
| Method | QAE | R-QAE |
|---|---|---|
| Harvesting-QG | 55.11/66.40 | 64.77/78.85 |
| Maxout-QG | 56.08/67.50 | 62.49/78.24 |
| Semantic-QG | 60.49/71.81 | 74.23/88.54 |
| HCVAE | 69.46/80.79 | 37.57/61.24 |
| Info-HCVAE | 71.18/81.51 | 38.80/60.73 |

Natural Questions:
| Harvesting-QG | 27.91/41.23 | 49.89/70.01 |
| Maxout-QG | 30.98/44.96 | 49.96/70.03 |
| Semantic-QG | 30.59/45.29 | 58.42/79.23 |
| HCVAE | 31.45/46.77 | 32.78/55.12 |
| Info-HCVAE | 37.18/51.46 | 29.39/53.04 |

TriviaQA:
| Harvesting-QG | 21.32/30.21 | 29.75/47.73 |
| Maxout-QG | 24.58/34.32 | 31.56/49.92 |
| Semantic-QG | 27.54/38.25 | 37.45/58.15 |
| HCVAE | 30.20/40.88 | 34.41/48.16 |
| Info-HCVAE | 35.45/44.11 | 21.65/37.65 |

"[Table 3: The results of mutual information estimation, based on QA pairs generated from H×10%; table body not recovered.]", "Results We compare HCVAE and Info-HCVAE with the baseline models on SQuAD, NQ, and TriviaQA.", "We use 10% of the Wikipedia paragraphs from HarvestingQA (Du and Cardie, 2018) for evaluation.", "Table 2 shows that both HCVAE and Info-HCVAE significantly outperform all baselines by a large margin in QAE on all three datasets, while obtaining significantly lower R-QAE, which shows that our model generates both high-quality and diverse QA pairs from the given contexts.", "Moreover, Info-HCVAE largely outperforms HCVAE, which demonstrates the effectiveness of our InfoMax regularizer in enforcing QA-pair consistency.", "Figure 3 shows the accuracy as a function of the number of QA pairs.", "[Figure 3: QAE vs. the number of QA pairs (log-scaled) on SQuAD, for Harvest-QG, Maxout-QG, Semantic-QG, and Info-HCVAE.]", "Our Info-HCVAE outperforms all baselines by large margins using orders of magnitude fewer QA pairs.", "For example, Info-HCVAE achieves 61.38 points using 12K QA pairs, outperforming Semantic-QG, which uses 10 times as many QA pairs.", "We also report the score of $\bar{x}^T W \bar{y}$ as an approximate estimate of the mutual information (MI) between the QA pairs generated by each method in Table 3; our Info-HCVAE yields the largest MI estimate.", "Ablation Study We further perform an ablation study to see the effect of each model component.", "[Table 4: QAE and R-QAE results of the ablation study on the SQuAD dataset; all results are the performances on our test set; table body not recovered.]", "We start with a model without any latent variables, which is essentially a deterministic Seq2Seq model (denoted as Baseline in Table 4).", "Then, we add the question latent variable (+Q-latent) and then the answer latent variable (+A-latent), to see the effects of probabilistic latent variable modeling and hierarchical modeling, respectively.", "The results in Table 4 show that both are essential for improving both the quality (QAE) and diversity (R-QAE) of the generated QA pairs.", "Finally, adding the InfoMax regularization (+InfoMax) further improves the performance by enhancing the consistency of the generated QA pairs.", "Human Evaluation As a qualitative analysis, we first conduct a pairwise human evaluation of the QA pairs generated by our Info-HCVAE and by Maxout-QG on 100 randomly selected paragraphs.", "Specifically, 20 human judges performed a blind quality assessment of two sets of QA pairs presented in random order, each of which contained two to five QA pairs.", "Each set of QA pairs is evaluated in terms of the overall quality, diversity, and consistency between the generated QA pairs and the context.

Table 5: The results of human judgement in terms of diversity, consistency, and overall quality of the generated QA pairs.

| Method | Diversity | Consistency | Overall |
|---|---|---|---|
| Baseline | 26% | 34% | 30% |
| Ours | 47% | 50% | 52% |
| Tie | 27% | 16% | 18% |

"The results in Table 5 show that the QA pairs generated by our Info-HCVAE are judged to be more diverse and consistent than those generated by the baseline model.", "One-to-Many QG To show that our Info-HCVAE can effectively tackle the one-to-many mapping problem in question generation, we qualitatively analyze the questions generated for a given context and answer from the SQuAD validation set.", "Specifically, we sample the question latent variables multiple times using the question prior network $p_\psi(z_x \mid c)$, and then feed them to the question generation network $p_\theta(x \mid z_x, y, c)$ together with the answer.", "The example in Table 6 shows that our Info-HCVAE generates diverse and semantically consistent questions for a given answer.", "[Table 6: Examples of one-to-many mapping of our Info-HCVAE; the answer is highlighted, GT denotes the ground-truth question, and O denotes questions generated by Info-HCVAE; table body not recovered.]", "We provide more qualitative examples in Appendix D.
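A minimal sketch of the one-to-many generation procedure just described: sample $z_x$ several times from the prior $p_\psi(z_x \mid c)$ and decode one question per sample (`prior_net` and `decode_question` are illustrative stand-ins for the model's prior and question-decoder networks):

```python
import torch

@torch.no_grad()
def generate_questions(context, answer, prior_net, decode_question, n: int = 5):
    questions = []
    for _ in range(n):
        mu, sigma = prior_net(context)                 # parameters of p(z_x | c)
        z_x = mu + sigma * torch.randn_like(sigma)     # one sample from the prior
        # Greedy decoding, so all diversity comes from the latent sample.
        questions.append(decode_question(z_x, answer, context))
    return questions
```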
Latent Space Interpolation To examine whether Info-HCVAE learns a meaningful latent space of QA pairs, we qualitatively analyze the QA pairs generated by interpolating between two latent codes on the SQuAD training set.", "We first encode $z_x$ from two QA pairs using the posterior network $q_\phi(z_x \mid x, c)$, and then sample $z_y$ from the interpolated values of $z_x$ using the prior network $p_\psi(z_y \mid z_x, c)$ to generate the corresponding QA pairs.", "Table 7 shows that the semantics of the generated QA pairs smoothly transition from one latent code to the other, with high diversity and consistency.", "[Table 7: QA pairs generated by interpolating between two latent codes encoded by our posterior networks; Ori1 and Ori2 are from the training set of SQuAD; table body not recovered.]", "We provide more qualitative examples in Appendix D.", "We now validate our model in a semi-supervised setting, where the model uses both the ground-truth labels and the generated labels for the QA task, to see whether the generated QA pairs help improve the performance of a QA model in a conventional setting.", "Since such synthetic datasets of generated QA pairs may inevitably contain some noise (Zhang and Bansal, 2019; Dong et al., 2019; Alberti et al., 2019), we further refine the QA pairs using the heuristic suggested by Dong et al. (2019): we replace a generated answer whenever its F1 score against the prediction of a QA model trained on the human-annotated data is lower than a set threshold (a small sketch of this heuristic is given at the end of this section).", "We select a threshold of 40.0 for the QA-pair refinement via cross-validation on the SQuAD dataset, and use it for all experiments.", "Please see Appendix C for more details.", "SQuAD We first perform semi-supervised QA experiments on SQuAD using the synthetic QA pairs generated by our model.", "For the contexts, we use both the paragraphs in the original SQuAD (S) dataset and the new paragraphs in the HarvestingQA dataset (H).", "Using Info-HCVAE, we generate 10 different QA pairs per context by sampling from the latent spaces (denoted as S×10).", "For the baseline, we use Semantic-QG (Zhang and Bansal, 2019) with a beam search size of 10 to obtain the same number of QA pairs.", "We also generate new QA pairs using different portions of the paragraphs provided in HarvestingQA (denoted as H×10% to H×100%), sampling one latent variable per context.

Table 8: The results of semi-supervised QA experiments on SQuAD. All results are the performances on our test set.

| Data | EM | F1 |
|---|---|---|
| SQuAD | 80.25 | 88.23 |
| Semantic-QG (baseline) | | |
| +S×10 | 81.20 (+0.95) | 88.36 (+0.13) |
| +H×100% | 81.03 (+0.78) | 88.79 (+0.56) |
| +S×10 + H×100% | 81.44 (+1.19) | 88.72 (+0.49) |
| Info-HCVAE (ours) | | |
| +S×10 | 82.09 (+1.84) | 89.11 (+0.88) |
| +H×10% | 81.37 (+1.12) | 88.85 (+0.62) |
| +H×20% | 81.68 (+1.43) | 89.06 (+0.93) |
| +H×30% | 81.76 (+1.51) | 89.12 (+0.89) |
| +H×50% | 82.17 (+1.92) | 89.38 (+1.15) |
| +H×100% | 82.37 (+2.12) | 89.63 (+1.40) |
| +S×10 + H×100% | 82.19 (+1.94) | 89.84 (+1.59) |

"Table 8 shows that our framework improves the accuracy of the BERT-base model by 2.12 (EM) and 1.59 (F1) points, significantly outperforming Semantic-QG.", "NQ and TriviaQA Our model is most useful when we do not have any labeled data for a target dataset.", "To show how well our QAG model performs in such a setting, we train the QA model using only the QA pairs generated by our model trained on SQuAD, and test it on the target datasets (NQ and TriviaQA).", "We generate multiple QA pairs from each context of the target dataset, sampling from the latent space one to ten times (denoted by N×1-10 or T×1-10 in Table 9).", "Then, we fine-tune the QA model pretrained on the SQuAD dataset with the generated QA pairs from the two datasets.
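A minimal sketch of the answer-refinement heuristic described earlier in this section (following Dong et al., 2019): if a generated answer overlaps too little with the prediction of a QA model trained on gold data, it is replaced by that prediction (`qa_predict` and `token_f1` are illustrative stand-ins for the gold-trained QA model and the usual SQuAD-style token-level F1):

```python
def refine_pairs(pairs, qa_predict, token_f1, threshold: float = 40.0):
    refined = []
    for context, question, answer in pairs:
        predicted = qa_predict(context, question)      # answer from the gold-trained QA model
        if token_f1(answer, predicted) < threshold:    # low overlap: replace the generated answer
            answer = predicted
        refined.append((context, question, answer))
    return refined
```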
"Table 9 shows that as we augment the training data with larger numbers of synthetic QA pairs, the performance of the QA model increases, significantly outperforming the QA model trained on SQuAD only.", "Yet, models trained with our QAG still largely underperform models trained with human labels, due to the distributional discrepancy between the source and the target dataset.

Table 9: The results of semi-supervised QA experiments on the Natural Questions and TriviaQA datasets. All results are the performances on our test set.

Natural Questions:
| Data | EM | F1 |
|---|---|---|
| SQuAD | 42.77 | 57.29 |
| +N×1 | 46.70 (+3.94) | 61.08 (+3.79) |
| +N×2 | 46.95 (+4.19) | 61.34 (+4.05) |
| +N×3 | 47.73 (+4.96) | 61.98 (+4.69) |
| +N×5 | 48.19 (+5.42) | 62.21 (+4.92) |
| +N×10 | 48.44 (+5.67) | 62.69 (+5.40) |
| NQ | 61.65 | 73.91 |

TriviaQA:
| SQuAD | 48.96 | 57.98 |
| +T×1 | 49.65 (+0.69) | 59.13 (+1.21) |
| +T×2 | 50.01 (+1.05) | 59.08 (+1.10) |
| +T×3 | 49.71 (+0.75) | 59.49 (+1.51) |
| +T×5 | 50.14 (+1.18) | 59.21 (+1.23) |
| +T×10 | 49.65 (+0.69) | 59.20 (+1.22) |
| Trivia | 64.55 | 70.42 |

"We proposed a novel probabilistic generative framework for generating diverse and consistent question-answer (QA) pairs from given texts.", "Specifically, our model learns the joint distribution of question and answer given the context with a hierarchical conditional variational autoencoder, while enforcing consistency between the generated QA pairs by maximizing their mutual information with a novel InfoMax regularizer.", "To our knowledge, ours is the first successful probabilistic QAG model.", "We evaluated the QAG performance of our model by the accuracy of a BERT-base QA model trained on the generated questions on multiple datasets, on which it largely outperformed the state-of-the-art QAG baseline (+6.59-10.69 in EM), even with a smaller number of QA pairs.", "We further validated our model for semi-supervised QA, where it improved the performance of the BERT-base QA model on SQuAD by 2.12 in EM, significantly outperforming the state-of-the-art model.", "As future work, we plan to extend our QAG model to a meta-learning framework, for generalization over diverse datasets.", "This work was supported by the Engineering Research Center Program through the National Research Foundation of Korea (NRF) funded by the Korean Government MSIT (NRF-2018R1A5A1059921), and by Institute of Information & communications Technology Planning & Evaluation (IITP) grants funded by the Korea government (MSIT) (No. 2019-0-01410, Research Development of Question Generation for Deep Learning based Semantic Search Domain Extension; No. 2016-0-00563, Research on Adaptive Machine Learning Technology Development for Intelligent Autonomous Digital Companion; No. 2019-0-00075, Artificial Intelligence Graduate School Program (KAIST))." ]
[ "abstain", "abstain", "objective", "method", "result", "abstain", "other", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "method", "objective", "objective", "result", "method", "other", "abstain", "objective", "objective", "method", "result", "result", "objective", "objective", "objective", "objective", "result", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "objective", "method", "method", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "objective", "other", "result", "abstain", "objective", "result", "result", "objective", "other" ]
[ "We present a benchmark suite of four datasets for evaluating the fairness of pre-trained language models and the techniques used to fine-tune them for downstream tasks.", "Our benchmarks cover four jurisdictions (Euro-pean Council, USA, Switzerland, and China), five languages (English, German, French, Italian and Chinese) and fairness across five attributes (gender, age, region, language, and legal area).", "In our experiments, we evaluate pre-trained language models using several group-robust fine-tuning techniques and show that performance group disparities are vibrant in many cases, while none of these techniques guarantee fairness, nor consistently mitigate group disparities.", "Furthermore, we provide a quantitative and qualitative analysis of our results, highlighting open challenges in the development of robustness methods in legal NLP.", "Natural Language Processing (NLP) for law (Chalkidis and Kampas, 2019; Aletras et al., 2019; Zhong et al., 2020; Chalkidis et al., 2022) receives increasing attention.", "Assistive technologies can speed up legal research or discovery significantly assisting lawyers, judges and clerks.", "They can also help legal scholars to study case law (Katz, 2012; Coupette et al., 2021), improve access of law to laypersons, help sociologists and research ethicists to expose biases in the justice system (Angwin et al., 2016; Dressel and Farid, 2018), and even scrutinize decision-making itself (Bell et al., 2021).", "In the context of law, the principle of equality and non-discrimination is of paramount importance, although its definition varies at international, regional and domestic level.", "For example, EU nondiscrimination law prohibits both direct and indirect discrimination.", "Direct discrimination occurs when one person is treated less favourably than Corresponding author: ilias.chalkidis@di.ku.dk Figure 1: Group disparity for defendant state (C.E. Europe vs. The Rest) in ECtHR and legal area (Civil law vs. Penal law) in FSCS.", "others would be treated in comparable situations on grounds of sex, racial or ethnic origin, disability, sexual orientation, religion or belief and age.", "1 Given the gravity that legal outcomes have for individuals, assistive technologies cannot be adopted to speed up legal research at the expense of fairness (Wachter et al., 2021), potentially also decreasing the trust in our legal systems (Barfield, 2020).", "Societal transformations perpetually shape our legal systems.", "The topic deserves great attention because AI systems learning from historical data pose the risk of lack of generalisability beyond the training data, and more importantly transporting biases previously encumbered in the data in future decision-making, thereby exponentially increasing their e ect (Delacroix, 2022).", "Historical legal data do not represent all groups in our societies equally and tend to reflect social biases in our societies and legal institutions.", "When models are deployed in production, they may reinforce these biases.", "For example, criminal justice is already often strongly influenced by racial bias, with people of colour being more likely to be arrested and receive higher punishments than others, both in the USA 2 and in the UK.", "3 1 An in-depth analysis of the notion of discrimination and fairness in law is presented in Appendix A. 
In recent years, the NLP and machine learning literature has introduced fairness objectives, typically derived from the Rawlsian notion of equal opportunities (Rawls, 1971), to evaluate the extent to which models discriminate across protected attributes.", "Some of these rely on notions of resource allocation, i.e., reflecting the idea that groups are treated fairly if they are equally represented in the training data used to induce our models, or if the same number of training iterations is performed per group.", "This is sometimes referred to as the resource allocation perspective on fairness (Lundgard, 2020).", "In contrast, there is also a capability-centered approach to fairness (Anderson, 1999; Robeyns, 2009), in which the goal is to reserve enough resources per group to achieve similar performance levels, which is ultimately what is important for how individuals are treated in legal processes.", "We adopt a capability-centered approach to fairness and define fairness in terms of performance parity (Hashimoto et al., 2018) or equal risk (Donini et al., 2018); the dominant alternative to equal risk is to define fairness in terms of equal odds.", "Performance disparity (Hashimoto et al., 2018) refers to the phenomenon of high overall performance, but low performance on minority groups, as a result of minimizing risk across samples (not groups).", "Since some groups benefit more than others from models and technologies that exhibit performance disparity, this likely widens gaps between those groups.", "Performance disparity works against the ideal of fair and equal opportunities for all groups in our societies.", "We therefore define a fair classifier as one that has similar performance (equal risk) across all groups (Donini et al., 2018).", "In sum, we adopt the view that (approximate) equality under the law in a modern world requires that our NLP technologies exhibit (approximately) equal risk across sensitive attributes.", "For everyone to be treated equally under the law, regardless of race, gender, nationality, or other characteristics, NLP assistive technologies need to be (approximately) insensitive to these attributes.", "We consider three types of attributes in this work: Demographics: The first category includes demographic information of the involved parties, e.g., the gender, sexual orientation, nationality, age, or race of the plaintiff/defendant in a case.", "Here, we aim to mitigate biases against specific groups, e.g., a model performing worse for female defendants or being biased against black defendants.", "We can further consider information involving the legal status of the involved parties, e.g., person vs. public.", "Regional: The second category includes regional information, for example the court in charge of a case.", "Here, we aim to mitigate disparities between different regions in a given jurisdiction, e.g., a model performing better on cases originating from, or ruled in, courts of specific regions.", "Legal Topic: The third category includes legal topic information on the subject matter of the controversy.", "Here, we aim to mitigate disparities between different topics (areas) of law, e.g., a model performing better in a specific field of law, for example penal cases.
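Under the equal-risk view adopted above, fairness evaluation reduces to comparing per-group performance across such attributes; a minimal sketch (assuming scikit-learn; the function name and the choice of macro-F1 are illustrative, not the benchmark's official evaluation code):

```python
from collections import defaultdict
from sklearn.metrics import f1_score

def group_disparity(y_true, y_pred, groups):
    """Per-group macro-F1, worst-group score, and the largest pairwise gap."""
    by_group = defaultdict(lambda: ([], []))
    for t, p, g in zip(y_true, y_pred, groups):
        by_group[g][0].append(t)
        by_group[g][1].append(p)
    scores = {g: f1_score(t, p, average="macro") for g, (t, p) in by_group.items()}
    worst = min(scores.values())
    gap = max(scores.values()) - worst   # 0 would mean perfect performance parity
    return scores, worst, gap
```

A fair classifier in the sense above would show a small `gap` and a high `worst`, e.g., similar macro-F1 for female and male defendants.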
public.", "Regional : The second category includes regional information, for example the courts in charge of a case.", "In this case, we aim to mitigate disparity in-between di erent regions in a given jurisdiction, e.g., a model performs better in specific cases originated or ruled in courts of specific regions.", "Legal Topic : The third category includes legal topic information on the subject matter of the controversy.", "In this case, we aim to mitigate disparity in-between di erent topics (areas) of law, e.g., a model performs better in a specific field of law, for example penal cases.", "Contributions We introduce FairLex, a multilingual fairness benchmark of four legal datasets covering four jurisdictions (Council of Europe, United States of America, Swiss Confederation and People's Republic of China), five languages (English, German, French, Italian and Chinese) and various sensitive attributes (gender, age, region, etc.).", "We release four pre-trained transformer-based language models, each tailored for a specific dataset (task) within our benchmark, which can be used as baseline models (text encoders).", "We conduct experiments with several group-robust algorithms and provide a quantitative and qualitative analysis of our results, highlighting open challenges in the development of robustness methods in legal NLP.", "Fair machine learning The literature on inducing approximately fair models from biased data is rapidly growing.", "See Mehrabi et al. (2021); Makhlouf et al. (2021); Ding et al. (2021) for recent surveys.", "We rely on this literature in how we define fairness, and for the algorithms that we compare in our experiments below.", "As already discussed, we adopt a capability-centered approach to fairness and define fairness in terms of performance parity (Hashimoto et al., 2018) or equal risk (Donini et al., 2018).", "The fairness-promoting learning algorithms we evaluate are discussed in detail in Section 4.", "Some of these Group Distributionally Robust Optimization (Sagawa et al., 2020) and Invariant Risk Minimization (Arjovsky et al., 2020) have previously been evaluated for fairness in the context of hate speech (Koh et al., 2021).", "Fairness in law Studying fair machine learning in the context of legal (computational) applications has a limited history.", "In a classic study, Angwin et al. (2016) analyzed the performance of the Correctional O ender Management Profiling for Alternative Sanctions (COMPAS) system, which was used for parole risk assessment (recidivism prediction) in the US.", "The system relied on 137 features from questionnaires and criminal records.", "Angwin et al. found that blacks were almost twice as likely as whites to be mislabeled as high risk (of re-o ending), revealing a severe racial bias in the system.", "The system was later compared to crowd-workers in Dressel and Farid (2018).", "These studies relied on tabular data and did not involve text processing (e.g., encoding case facts or decisions).", "More recently, Wang et al. 
(2021b) studied legal judgment consistency using a dataset of Chinese criminal cases.", "They evaluated the consistency of LSTM-based models across region and gender and reported severe fairness gaps across gender.", "They also found that the fairness gap was particular severe for more serious crimes.", "Another line of work (Rice et al., 2019; Baker Gillis, 2021; Gu-musel et al., 2022) explores representational bias with respect to race and gender analyzing word latent representations trained in legal text corpora.", "While we agree that representational bias can potentially reinforce unfortunate biases, these may not impact the treatment of individuals (or groups).", "We therefore focus on directly measuring equal risk on downstream applications instead.", "Previous work has focused on the analysis of specific cases, languages or algorithms, but FairLex aims at easing the development and testing of bias-mitigation models or algorithms within the legal domain.", "FairLex allows researchers to explore fairness across four datasets covering four jurisdictions (Council of Europe, United States of America, Swiss Confederation and People's Republic of China), five languages (English, German, French, Italian and Chinese) and various sensitive attributes (gender, age, region, etc.).", "Furthermore, we provide competitive baselines including pre-trained transformer-based language models, adapted to the examined datasets, and an in-dept examination of performance of four group robust algorithms described in detail in Section 4.", "Benchmarking NLP has been stormed by the rapid development of benchmark datasets that aim to evaluate the performance of pre-trained language models with respect to di erent objectives: general Natural Language Understanding (NLU) (Wang et al., 2019b,a), Cross-Lingual Transfer (CLT) (Hu et al., 2020), and even domain-specific ones on biomedical (Peng et al., 2019), or legal (Chalkidis et al., 2022) NLP tasks.", "Despite their value, recent work has raised criticism on several limitations of the so called NLU benchmarks (Paullada et al., 2020; Bowman and Dahl, 2021; Raji et al., 2021).", "The main points are: poor ( laissez-faire ) dataset development (e.g., lack of diversity, spurious correlations), legal issues (e.g., data licensing and leakage of personal information), construct validity (e.g., poor experimental setup, unclear research questions), question of general capabilities, and promotion of superficial competitiveness (hype, or even falsify, state-of-the-art results).", "We believe that the release of FairLex, a domain-specific (legal-oriented) benchmark suite for evaluating fairness, overcomes (or at least mitigates) some of the aforementioned limitations.", "We introduce the core motivation in Section 1, while specific (case-by-case) details are described in Section 3.", "Our benchmark is open-ended and inevitably has several limitations; we report known limitations and ethical considerations in Sections 7 and 8.", "Nonetheless we believe that it will help critical research in the area of fairness.", "ECtHR The European Court of Human Rights (ECtHR) hears allegations that a state has breached human rights provisions of the European Convention of Human Rights (ECHR).", "We use the dataset of Chalkidis et al. 
"Each case is mapped to articles of the ECHR that were violated (if any).", "This is a multi-label text classification task.", "Given the facts of a case, the goal is to predict the ECHR articles that were violated, if any, as decided (ruled) by the court.", "The cases are chronologically split into training (9k, 2001-2016), development (1k, 2016-2017), and test (1k, 2017-2019) sets.", "To facilitate the study of fairness of text classifiers, we record for each case the following attributes:", "(a) The defendant states, which are the European states that allegedly violated the ECHR.", "The defendant states for each case are a subset of the 47 Member States of the Council of Europe (https://www.coe.int/).", "To have statistical support, we group defendant states in two: Central-Eastern European states, on one hand, and all other states, as classified by the EuroVoc thesaurus.", "Table 1: Main characteristics of FairLex datasets (ECtHR, SCOTUS, FSCS, CAIL).
Dataset | Original Publication | Classification Task | No. of Classes | Attributes (#N)
ECtHR | Chalkidis et al. (2021) | Legal Judgment Prediction: ECHR Violation Prediction | 10+1 | Defendant State (2), Applicant Gender (2), Applicant Age (3)
SCOTUS | Spaeth et al. (2020) | Legal Topic Classification: Issue Area Classification | 14 | Respondent Type (4), Decision Direction (2)
FSCS | Niklaus et al. (2021) | Legal Judgment Prediction: Case Approval Prediction | 2 | Language (3), Region of Origin (6), Legal Area (6)
CAIL | Wang et al. (2021b) | Legal Judgment Prediction: Crime Severity Prediction | 6 | Defendant Gender (2), Region of Origin (7)", "(b) The applicant's age at the time of the decision.", "We extract the birth year of the applicant from the case facts, if possible, and classify the case in an age group (≤35, ≤64, or older); and", "(c) the applicant's gender, extracted from the facts, if possible, based on pronouns or other gendered words, classified in two categories (male, female).", "SCOTUS: The US Supreme Court (SCOTUS) is the highest federal court in the United States of America and generally hears only the most controversial or otherwise complex cases which have not been sufficiently well resolved by lower courts.", "We combine information from SCOTUS opinions with the Supreme Court DataBase (SCDB) (Spaeth et al., 2020).", "SCDB provides metadata (e.g., date of publication, decisions, issues, decision directions and many more) for all cases.", "We consider the available 14 thematic issue areas (e.g., Criminal Procedure, Civil Rights, Economic Activity, etc.) as labels.", "This is a single-label multi-class document classification task.", "Given the court opinion, the goal is to predict the issue area whose focus is on the subject matter of the controversy (dispute).", "SCOTUS contains a total of 9,262 cases that we split chronologically into 80% for training (7.4k, 1946-1982), 10% for development (914, 1982-1991) and 10% for testing (931, 1991-2016).", "From SCDB, we also use the following attributes to study fairness:", "(a) the type of respondent, which is a manual categorization of respondents (defendants) in five categories (person, public entity, organization, facility and other); and", "(b) the direction of the decision, i.e., whether the decision is considered liberal or conservative, as provided by SCDB.", "FSCS: The Federal Supreme Court of Switzerland (FSCS) is the last level of appeal in Switzerland and, similarly to SCOTUS, the court generally hears only the most controversial or otherwise complex cases which have not been sufficiently well resolved by lower courts.", "The court often focuses only on small parts of the previous decision, where it discusses possible wrong reasoning by the lower court.", "The Swiss-Judgment-Predict dataset (Niklaus et al., 2021) contains more than 85K decisions from the FSCS written in one of three languages (50K German, 31K French, 4K Italian) from the years 2000 to 2020.", "The dataset provides labels for a simplified binary (approval, dismissal) classification task.", "Given the facts of the case, the goal is to predict if the plaintiff's request is valid or partially valid.", "The cases are also chronologically split into training (59.7k, 2000-2014), development (8.2k, 2015-2016), and test (17.4k, 2017-2020) sets.", "The original dataset provides three additional attributes:", "(a) the language of the FSCS written decision, in either German, French, or Italian;", "(b) the legal area of the case (e.g., public, penal law) derived from the chambers where the decisions were heard; and", "(c) the region, denoting in which federal region the case originated.", "CAIL: The Supreme People's Court of China is the last level of appeal in China and considers cases that originated from the high people's courts concerning matters of national importance.", "The Chinese AI and Law challenge (CAIL) dataset (Xiao et al., 2018) is a Chinese legal NLP dataset for judgment prediction and contains over 1M criminal cases.", "The dataset provides labels for relevant article of criminal code prediction, charge (type of crime) prediction, imprisonment term (period) prediction, and monetary penalty prediction.", "(The publication of the original dataset has been the topic of an active debate in the NLP community (Leins et al., 2020; Tsarapatsanis and Aletras, 2021; Bender, 2021).)", 
"Recently, Wang et al. (2021b) re-annotated a subset of approx. 100k cases with demographic attributes.", "Specifically, the new dataset has been annotated with:", "(a) the defendant's gender, classified in two categories (male, female); and", "(b) the region of the court, denoting in which of the 7 provincial-level administrative regions the case was judged.", "We re-split the dataset chronologically into training (80k, 2013-2017), development (12k, 2017-2018), and test (12k, 2018) sets.", "In our study, we re-frame the imprisonment term prediction task and examine a soft version, dubbed the crime severity prediction task: a multi-class classification task where, given the facts of a case, the goal is to predict how severe the committed crime was with respect to the imprisonment term.", "We approximate crime severity by the length of the imprisonment term, split into 6 clusters (0, ≤12, ≤36, ≤60, ≤120, and >120 months).", 
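For illustration, a minimal sketch of the crime-severity bucketing just described; the function name and the exact boundary semantics (left-open intervals) are assumptions for illustration, not taken from the paper:

```python
def crime_severity_class(months: int) -> int:
    """Map an imprisonment term (in months) to one of the six severity
    clusters named above: 0, (0, 12], (12, 36], (36, 60], (60, 120],
    and > 120 months."""
    for cls, upper in enumerate([0, 12, 36, 60, 120]):
        if months <= upper:
            return cls
    return 5  # > 120 months

# Quick checks of the bucketing:
assert crime_severity_class(0) == 0
assert crime_severity_class(24) == 2   # falls into the (12, 36] cluster
assert crime_severity_class(150) == 5  # beyond 120 months
```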
"Across experiments, our main goal is to find the hypothesis for which the risk R(h) is minimal: h* = argmin_{h ∈ H} R(h) (1), where R(h) = E[L(h(x), y)] (2).", "Similar to previous studies, R(h) is an expectation of the selected loss function (L).", "In this work, we study multi-label text classification (Section 3), thus we aim to minimize the binary cross-entropy loss across L classes: L = -y log ŷ - (1-y) log(1-ŷ) (3).", "ERM (Vapnik, 1992), which stands for Empirical Risk Minimization, is the most standard and widely used optimization technique to train neural methods.", "The loss is calculated as follows: L_ERM = (1/N) Σ_{i=1..N} L_i (4), where N is the number of instances (training examples) in a batch, and L_i is the loss per instance.", "Besides ERM, we also consider a representative selection of group-robust fine-tuning algorithms which aim at mitigating performance disparities with respect to a given attribute (A), e.g., the gender of the applicant or the region of the court.", "Each attribute is split into G groups, i.e., male/female for gender.", "All algorithms rely on a balanced group sampler, i.e., an equal number of instances (samples) per group (N_G) is included in each batch.", "Most of the algorithms are built upon group-wise losses (L_g), computed as follows: L(g_i) = (1/N_{g_i}) Σ_{j=1..N_{g_i}} L(x_j) (5).", "Group DRO (Sagawa et al., 2020) stands for Group Distributionally Robust Optimization (DRO).", "Group DRO is an extension of the Group Uniform algorithm, where the group-wise losses are weighted inversely proportional to the group's training performance.", "The total loss is: L_DRO = Σ_{i=1..G} w_{g_i} L(g_i) (6), where w_{g_i} = (1/W)(ŵ_{g_i} e^{L(g_i)}) and W = Σ_{i=1..G} w_{g_i} (7), where G is the number of groups (labels), L(g_i) are the averaged group-wise (label-wise) losses, w_{g_i} are the group (label) weights, and ŵ_{g_i} are the group (label) weights as computed in the previous update step.", "Initially, the weight mass is equally distributed across groups.", "V-REx (Krueger et al., 2020), which stands for Risk Extrapolation, is yet another group-robust optimization algorithm.", "Krueger et al. (2020) hypothesize that variation across training groups is representative of the variation later encountered at test time, so they also consider the variance across the group-wise losses.", "In V-REx the total loss is calculated as follows: L_REx = L_ERM + λ Var([L_{g_1}, ..., L_{g_G}]) (8), where Var is the variance among the group-wise losses and λ is a weighting hyper-parameter scalar.", "IRM (Arjovsky et al., 2020), which stands for Invariant Risk Minimization, mainly aims to penalize variance across multiple training dummy estimators across groups, i.e., performance cannot vary in samples that correspond to the same group.", "The total loss is computed as follows: L_IRM = (1/G) Σ_{i=1..G} (L(g_i) + P(g_i)) (9).", "Please refer to Arjovsky et al. (2020) for the definition of the group penalty terms (P_g).", "The Adversarial Removal (Elazar and Goldberg, 2018) algorithm mitigates group disparities by means of an additional adversarial classifier (Goodfellow et al., 2014).", "The adversarial classifier shares the encoder with the main network and is trained to predict the protected attribute (A) of an instance.", "The total loss factors in the adversarial one, thus penalizing the model when it is able to discriminate groups.", "Formally, the total loss is calculated as: L_AR = L_ERM - L_ADV (10), with L_ADV = L(ĝ_i, g_i) (11), where ĝ_i is the adversarial classifier's prediction for the examined attribute A (i.e., which group g_i of A the example belongs to) given the input (x).", 
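As a concrete reading of Eqs. (5)-(7), here is a minimal PyTorch sketch of one Group DRO loss computation. It is an illustration under stated assumptions (no step-size hyper-parameter in the exponent, the batch already drawn by the balanced group sampler); it is not the authors' implementation, which builds on the WILDS library:

```python
import torch

def group_dro_loss(per_sample_losses, group_ids, prev_weights, num_groups):
    # Group-wise average losses L(g_i) over the samples of each group, Eq. (5).
    group_losses = torch.zeros(num_groups, device=per_sample_losses.device)
    for g in range(num_groups):
        mask = group_ids == g
        if mask.any():
            group_losses[g] = per_sample_losses[mask].mean()
    # Eq. (7): exponentially up-weight groups with high loss, starting from
    # the weights of the previous update step, then re-normalize.
    weights = prev_weights * torch.exp(group_losses.detach())
    weights = weights / weights.sum()
    # Eq. (6): total loss as the weighted sum of group-wise losses.
    return (weights * group_losses).sum(), weights

# The weight mass is initially distributed equally across groups:
weights = torch.full((2,), 0.5)  # e.g., two groups such as male/female
per_sample = torch.tensor([0.9, 0.2, 0.4, 0.7])
groups = torch.tensor([0, 0, 1, 1])
loss, weights = group_dro_loss(per_sample, groups, weights, num_groups=2)
```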
"13 We report macro-F1 to avoid bias toward majority classes because of class imbalance and skewed label distributions across train, development, and test subsets (Sgaard et al., 2021).", "Main Results In Table 2, we report the group performance (mF1), where models trained with the ERM algorithm, across all datasets and attributes.", "We observe that the intensity of group disparities vary a lot between di erent attributes, but in many cases the group disparities are very vibrant.", "For example, in ECtHR, we observe substantial group disparity between the two defendant state groups (21.5% absolute di erence), similarly for applicant's gender groups (16.2% absolute di er-ence).", "In FSCS, we observe language disparity, where performance is on average 3-5% lower for cases written in Italian compared to those written in French and German.", "Performance disparity is even higher with respect to legal areas , where the model has the best performance for criminal (penal law) cases (83.4%) compared to others (approx. 10-20% lower).", "We also observe substantial group disparities with respect to the court region , e.g., cases ruled in E. Switzerland courts (66.8%) compared to Federation courts (56.4%).", "The same applies for CAIL, e.g., cases ruled in Beijing courts (66.8%) compared to Sichuan courts (56.4%).", "Group Disparity Analysis Moving forward we try to identify general (attribute agnostic) factors based on data distributions that could potentially lead to performance disparity across groups.", "We identify three general (attribute agnostic) factors: Representation Inequality : Not all groups are equally represented in the training set.", "To examine this aspect, we report the number of training cases per group.", "Temporal Concept Drift : The label distribution for a given group changes over time, i.e., in-between training and test subsets.", "To examine this aspect, we report per group, the KL divergence in-between the training and test label distribution.", "Worst Class Influence : The performance is not equal across labels (classes), which may disproportionally a ect the macro-averaged performance across groups.", "To examine this aspect, we report the Worst Class Influence (WCI) score per group, which is computed as follows: WCI( i ) = #test-cases (worst-class) #test-cases (12) In Table 2, we present the results across all attributes.", "We observe that only in 4 out of 10 cases (attributes), the less represented groups are those with the worst performance compared to the rest.", "It is generally not the case that high KL divergence (drift) correlates with low performance.", "In other words, group disparities does not seem to be driven by temporal concept drift.", "Finally, the influence of the worst class is relatively uniform across groups in most cases, but in the cases where groups di er in this regard, worst class influence correlates with error in 2 out of 3 cases.", "14 In ECtHR, considering performance across defendant state, we see that all the three factors correlate internally, i.e., the worst performing group is less represented, has higher temporal drift and has more cases in the worst performing class.", "This is not the case considering performance across other attributes.", "It is also not the case for SCOTUS.", "In FSCS, considering the attributes of language and region, representation inequality seems to be an important factor that leads to group disparity.", "This is not the case for legal area, where the best 14 For ECtHR performance across defendant states and 
SCOTUS across directions, but not for ECtHR performance across applicant age.", "represented group is the worst performing group.", "In other words, there are other reasons that lead to performance disparity in this case; according to Niklaus et al. (2021), a potential factor is that the jurisprudence in penal law is more united and aligned in Switzerland and outlier judgments are rarer making the task more predictable.", "Cross-Attribute Influence Analysis We have evaluated fairness across attributes that are not necessarily independent of each other.", "We therefore evaluate the extent to which performance disparities along di erent attributes correlate, i.e., how attributes interact, and whether performance di erences for attribute A 1 can potentially explain performance di erences for another attribute A 2 .", "We examine this for the two attributes with the highest group disparity: the defendant state in ECtHR, and the legal area in FSCS.", "For the bins induced by these two attributes ( A 1 ), we compute mF1 scores across other attributes ( A 2 ).", "In ECtHR, approx.", "83% and 81% of male and women applicants are involved in cases against E.C. European states (best-performing group).", "Similarly, in case of age groups, we observe that ratio of cases against E.C. European states is: 87% and 86% for 65 and 35, the bestand worst-performing groups respectively.", "In FSCS, the ratio of cases relevant to penal law is: approx.", "29%, and 41% written in written in French (best-performing group) and Italian (worst-performing group).", "Similarly, approx.", "27% originated in E. Switzerland (best-performing group) and 42% in Federation (worst performing group) are relevant to public law.", "In both attributes, there is a 15% increase of cases relevant to public law for the worst performing groups.", "In other words, the group disparity in one attribute A 2 (language, region) could be also explained by the influence of another attribute A 1 (legal area).", "In Table 3, we report the performance in the aforementioned cross-attribute ( A 1 , A 2 ) pairings.", "With the exception of the (age, defendant state) cross-examination in ECtHR, we observe that group disparities in attribute A 2 (Table 2) are consistent across groups of the plausible influencer (i.e. attribute A 1 ).", "Hence, cross-attribute influence does not explain the observed group disparities.", "We believe that such an in-depth analysis of the results is fundamental to understand the influence of di erent factors in the outcomes.", "This analysis wouldn't be possible, if we had counterfeited an ideal scenario, where all groups and labels where equally represented.", "While a controlled experimental environment is frequently used to examine specific factors, it could hide, or partially alleviate such phenomena, hence producing misleading results on fairness of the examined models.", "Group Robust Algorithms Results Finally, we evaluate the performance for several group robust algorithms ( Section 4) that could potentially mitigate group disparities.", "To estimate their performance, we report the average macro-F1 across groups (mF1) and the group disparity (GD) among groups measured as the group-wise std", "dev.: GD = (cid:118)(cid:117)(cid:116) 1 GG (cid:88) i = 1 (mF1 i mF1) 2 (13) We also report the worst-group performance (mF1 W = min([mF1 1 , mF1 2 , . . . 
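The evaluation measures just defined are easy to restate in code; a small sketch of Eq. (13) and the worst-group score, with hypothetical per-group scores for illustration:

```python
import numpy as np

def disparity_metrics(group_scores):
    """Average mF1 across groups, group disparity GD (Eq. 13: the std.
    dev. across group-wise macro-F1 scores), and worst-group mF1_W."""
    m = np.asarray(group_scores, dtype=float)
    mf1 = m.mean()
    gd = np.sqrt(np.mean((m - mf1) ** 2))
    mf1_w = m.min()
    return mf1, gd, mf1_w

# Hypothetical per-group macro-F1 scores:
print(disparity_metrics([0.678, 0.650, 0.705]))
```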
"In Table 4, we report the results of all our baselines on the four datasets introduced in this paper.", "Table 4: Test results for all examined group-robust algorithms per dataset attribute (each cell: mF1 / GD / mF1_W).
ECtHR (ECHR Violation Prediction) and SCOTUS (Issue Area Classification):
Algorithm | Defendant State | Applicant Gender | Applicant Age | Respondent Type | Direction
ERM (BoW linear) | 46.8 / 3.0 / 43.8 | 44.1 / 4.9 / 40.6 | 46.9 / 6.3 / 40.9 | 73.8 / 6.6 / 61.8 | 77.5 / 2.6 / 74.9
ERM (transformer) | 53.2 / 8.3 / 44.9 | 57.5 / 3.1 / 54.4 | 54.1 / 5.9 / 46.2 | 75.1 / 4.0 / 70.8 | 78.1 / 1.6 / 76.6
ERM+GS | 54.4 / 5.5 / 48.9 | 57.8 / 3.3 / 54.5 | 56.0 / 5.6 / 48.7 | 75.2 / 3.9 / 70.9 | 77.1 / 1.3 / 76.0
ADV-R | 53.8 / 5.8 / 47.9 | 54.6 / 3.2 / 51.5 | 48.9 / 6.1 / 40.6 | 56.9 / 4.7 / 53.1 | 41.0 / 0.8 / 40.3
G-DRO | 55.0 / 5.2 / 49.8 | 56.3 / 1.9 / 55.0 | 52.6 / 6.2 / 44.3 | 74.5 / 3.3 / 71.6 | 77.1 / 1.7 / 75.4
IRM | 53.8 / 5.7 / 48.1 | 53.8 / 2.3 / 52.5 | 54.8 / 4.4 / 49.5 | 73.4 / 4.8 / 68.2 | 78.1 / 2.7 / 75.4
V-REx | 54.6 / 6.3 / 48.3 | 54.6 / 2.0 / 53.2 | 55.0 / 4.5 / 49.8 | 73.8 / 3.8 / 68.2 | 78.2 / 1.1 / 77.1
FSCS (Case Approval Prediction) and CAIL (Crime Severity Prediction):
Algorithm | Language | Legal Area | Region | Defendant Gender | Region
ERM (BoW linear) | 55.5 / 6.2 / 46.8 | 54.4 / 9.7 / 40.9 | 56.8 / 5.0 / 46.6 | 33.5 / 0.7 / 32.8 | 31.7 / 5.0 / 25.5
ERM (transformer) | 67.8 / 2.1 / 65.0 | 69.4 / 9.6 / 56.9 | 69.7 / 2.9 / 63.9 | 60.2 / 0.6 / 60.1 | 59.3 / 3.5 / 56.4
ERM+GS | 66.4 / 3.5 / 61.7 | 67.1 / 9.3 / 55.5 | 67.9 / 3.0 / 62.3 | 59.4 / 0.7 / 59.1 | 58.2 / 3.1 / 55.9
ADV-R | 62.6 / 5.1 / 59.0 | 65.6 / 12.4 / 50.0 | 67.4 / 3.2 / 61.5 | 53.3 / 1.3 / 52.1 | 53.5 / 2.5 / 50.8
G-DRO | 70.5 / 0.6 / 69.9 | 57.5 / 5.6 / 52.6 | 67.7 / 4.2 / 60.2 | 59.2 / 1.3 / 57.9 | 58.9 / 3.7 / 55.7
IRM | 68.3 / 1.9 / 66.7 | 67.8 / 9.5 / 55.8 | 68.7 / 3.0 / 63.2 | 56.4 / 1.5 / 55.7 | 58.0 / 3.1 / 54.9
V-REx | 67.2 / 3.5 / 62.4 | 66.6 / 8.9 / 56.0 | 68.4 / 3.1 / 62.4 | 58.5 / 0.7 / 58.3 | 58.6 / 3.3 / 54.4", "We first observe that the results of linear classifiers trained with the ERM algorithm (top row per dataset) are consistently worse (lower average and worst-case performance, higher group disparity) compared to transformer-based models in the same setting.", "In other words, linear classifiers have lower overall performance, while being less fair with respect to the applied definition of fairness (i.e., equal performance across groups).", "As one can see, transformer-based models trained with the ERM algorithm, i.e., without taking into account information about groups and their distribution, perform either better than or in the same ballpark as models trained with methods specialized to mitigate biases (Section 4), with an average loss of only 0.17% in terms of mF1 and 0.78% in terms of mF1_W.", "While these algorithms improve worst-case performance in the literature when applied in controlled experimental environments, they fail in a more realistic setting, where both groups across attributes and labels are imbalanced, and where both group and label distributions change over time.", "Furthermore, we cannot identify one algorithm that performs better across datasets and groups with respect to the others; indeed, results are quite mixed, without any recognizable pattern.", "The current version of FairLex covers a very small fraction of legal applications, jurisdictions, and protected attributes.", "Our benchmark is open-ended and inevitably cannot cover everything in the whole wide (legal) world (Raji et al., 2021), but nonetheless we believe that the published resources will help critical research in the area of fairness.", "Some protected attributes within our datasets are extracted automatically, i.e., the gender and the age in the ECtHR dataset, if possible, by means of regular expressions, or manually clustered by the authors, such as the defendant state in the ECtHR dataset and the respondent attribute in the SCOTUS dataset.", "Various simplifications that we made, e.g., the binarization of gender, would be inappropriate in real-world applications.", "Another important limitation is that what is considered the ground truth in these datasets (with the exception of SCOTUS) is only ground truth relative to judges' interpretation of a specific (EC, US, Swiss, Chinese) jurisdiction and legal framework.", "The labeling is therefore somewhat subjective for non-trivial cases, and its validity is only relative to a given legal framework.", "We of course do not in any way endorse the legal standards or frameworks of the examined datasets.", "We introduced FairLex, a multi-lingual benchmark suite for the development and testing of models and bias-mitigation algorithms within the legal domain, based on four datasets covering four jurisdictions, five languages and various sensitive attributes.", "Furthermore, we provided competitive baselines, including transformer-based language models adapted to the examined datasets, and an examination of the performance of four group-robust algorithms (Adversarial Removal, IRM, Group DRO, and V-REx).", "While these algorithms improve worst-case performance in the literature when applied in controlled experimental environments, they fail in a more realistic setting, where both groups across attributes and labels are imbalanced, and where both group and label distributions change over time.", "Furthermore, we cannot identify a single algorithm that performs better across datasets and groups compared to the rest.", "In future work, we aim to further expand the benchmark with more datasets that could possibly cover more sensitive attributes.", "Further analysis of the reasons behind group disparities, e.g., representational bias, systemic bias, is also critical.", "The scope of this work is to provide an evaluation framework along with extensive experiments to further study fairness within the legal domain.", 
"Following the work of Angwin et al. (2016), Dressel and Farid (2018), and Wang et al. (2021b), we provide a diverse benchmark covering multiple tasks, jurisdictions, and protected (examined) attributes.", "We conduct experiments based on pre-trained transformer-based language models and compare model performance across four representative group-robust algorithms, i.e., Adversarial Removal (Elazar and Goldberg, 2018), Group DRO (Sagawa et al., 2020), IRM (Arjovsky et al., 2020) and V-REx (Krueger et al., 2020).", "We believe that this work can inform and help practitioners to build assistive technology for legal professionals with respect to the legal framework (jurisdiction) they operate in; technology that does not only rely on performance on majority groups, but also considers minorities and the robustness of the developed models across them.", "We believe that this is an important application field, where more research should be conducted (Tsarapatsanis and Aletras, 2021) in order to improve legal services and democratize law, but more importantly to highlight (inform the audience on) the various multi-faceted shortcomings, seeking a responsible and ethical (fair) deployment of technology.", "We standardize and put together four datasets: ECtHR (Chalkidis et al., 2021), SCOTUS (Spaeth et al., 2020), FSCS (Niklaus et al., 2021), and CAIL (Xiao et al., 2018; Wang et al., 2021b), which are already publicly available under CC-BY-(NC-)SA-4.0 licenses.", "We release the compiled version of the dataset under a CC-BY-NC-SA-4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/) to favor academic research, and forbid, to the best of our ability, potential commercial dual use.", "All datasets, except SCOTUS, are publicly available and have been previously published.", "If datasets or the papers where they were introduced were not compiled or written by ourselves, we have referenced the original work and encourage FairLex users to do so as well.", "In fact, we believe that this work should only be referenced, in addition to citing the original work, when jointly experimenting with multiple FairLex datasets and using the FairLex evaluation framework and infrastructure, or when using any newly introduced annotations (ECtHR, SCOTUS).", "Otherwise only the original work should be cited.", "The data is in general partially anonymized in accordance with the applicable national law.", "The data is considered to be in the public sphere from a privacy perspective.", "This is a very sensitive matter, as the courts try to keep a balance between transparency (the public's right to know) and privacy (respect for private and family life).", "ECtHR cases are partially anonymized by the court.", "Its data is processed and made public in accordance with the European data protection laws.", "SCOTUS cases may also contain personal information and the data is processed and made available by the US Supreme Court, whose proceedings are public.", "While this ensures compliance with US law, it is very likely that, similarly to the ECtHR, any processing could be justified by either implied consent or legitimate interest under European law.", "In FSCS, the names of the parties have been redacted by the courts according to the official guidelines.", "CAIL cases are also partially anonymized by the courts according to the courts' policy.", "Its data is processed and made public in accordance with Chinese law.", "This work is fully funded by the Innovation Fund Denmark (IFD) under File No. 0175-00011A.", "We would like to thank the authors of the original datasets for providing access to the original documents, metadata, or confidentially sharing pre-released versions of the datasets." ]
[ "method", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "objective", "objective", "objective", "method", "objective", "objective", "objective", "objective", "objective", "method", "result", "other", "other", "method", "method", "method", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "method", "other", "other", "method", "other", "other", "other", "method", "objective", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "other", "method", "method", "method", "method", "method", "method", "result", "result", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "objective", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other" ]
[ "The key to effortless end-user programming is natural language.", "We examine how to teach intelligent systems new functions, expressed in natural language.", "As a first step, we collected 3168 samples of teaching efforts in plain English.", "Then we built fu SE , a novel system that translates English function descriptions into code.", "Our approach is three-tiered and each task is evaluated separately.", "We first classify whether an intent to teach new functionality is present in the utterance (accuracy: 97.7% using BERT).", "Then we analyze the linguistic structure and construct a semantic model (accuracy: 97.6% using a BiLSTM).", "Finally, we synthesize the signature of the method, map the intermediate steps (instructions in the method body) to API calls and inject control structures (F 1 : 67.0% with information retrieval and knowledge-based methods).", "In an end-to-end evaluation on an unseen dataset fu SE synthesized 84.6% of the method signatures and 79.2% of the API calls correctly.", "Intelligent systems became rather smart lately.", "One easily arranges appointments by talking to a virtual assistant or controls a smart home through a conversational interface.", "Instructing a humanoid robot in this way no longer seems to be futuristic.", "For the time being, users can only access built-in functionality.", "However, they will soon expect to add new functionality themselves.", "For humans, the most natural way to communicate is by natural language.", "Thus, future intelligent systems must be programmable in everyday language.", "Today's systems that claim to offer programming in natural language enable laypersons to issue single commands or construct short scripts (e.g. Mihalcea et al. (2006); Rabinovich et al. (2017)); usually no new functionality is learned.", "Only a few addressed learning new functionality from natural language instructions (e.g. Le et al. (2013); Markievicz et al. (2017)).", "However, even recent approaches still either restrict the language or are (over-)fitted to a certain domain or application.", "We propose to apply deep natural language understanding to the task of synthesizing methods from spoken utterances.", "Our approach combines modern machine learning techniques with information retrieval and knowledge-based methods to grasp the user's intent.", "As a first step, we have performed a user study to investigate how laypersons teach new functionality with nothing but natural language.", "In a second step, we develop fu SE (Func-tion Synthesis Executor).", "fu SE translates teaching efforts into code.", "On the basis of the gathered data we constructed a three-tiered approach.", "We first determine, whether an utterance comprises an explicitly stated intent to teach a new skill.", "Then, we decompose these teaching efforts into distinct semantic parts.", "We synthesize methods by transferring these semantic parts into a model that represents the structure of method definitions.", "Finally, we construct signatures, map instructions of the body to API calls, and inject control structures.", "The objective of programming in natural language was approached from different perspectives over the years.", "Quite a few approaches are natural language interfaces to code editors (Price et al., 2000; Begel, 2004; Begel and Graham, 2005; Desilets et al., 2006).", "However, they assume that users literally dictate source code.", "Thus, these approaches are intended for developers rather than laypersons.", "Other approaches such as Voxelurn by Wang et al. 
(2017) aim to naturalize programming languages to lower the hurdle for programming novices.", "Approaches for end-user programming in natu-coffee is a beverage people like in order to make coffee you have to locate the cup place it under the dispenser and press the red button 1 st stage teachingintent coffee is a beverage people like in order to make coffee you have to locate the cup place it under the dispenser and press the red button Teaching 2 nd stage semanticstruct.", "ral language take up the challenge of bridging the semantic gap between informal spoken or written descriptions in everyday language and formal programming languages.", "Early systems were syntax-based (Winograd, 1972; Ballard and Biermann, 1979; Biermann and Ballard, 1980; Biermann et al., 1983; Liu and Lieberman, 2005).", "Some were already capable to synthesize short scripts including control structures and comments, e.g. NLP for NLP by Mihalcea et al. (2006).", "Others take the user in the loop and create scripts with a dialog-driven approach (Le et al., 2013).", "In further developments intelligent assistants offer their service to assist with programming (Azaria et al., 2016).", "Often these assistants support multi-modal input, e.g. voice and gestures (Campagna et al., 2017, 2019).", "Others combine programming in natural language with other forms of end-user programming, such as programming by example (Manshadi et al., 2013) or programming by demonstration (Li et al., 2018).", "Some authors such as Landhauer et al. (2017) and Atzeni and Atzori (2018a,b) take a knowledge-based approach by integrating domain and environmental information in the form of ontologies.", "Suhr and Artzi (2018) employ a neural network to learn a situational context model that integrates the system environment and the human-system-interaction, i.e. the dialog.", "Many recent approaches integrate semantic parsing in the transformation process (Guu et al., 2017; Rabinovich et al., 2017; Chen et al., 2018; Dong and Lapata, 2018).", "Even though the natural language understanding capabilities are often impressive, the synthesized scripts are still (semantically) erroneous in most cases.", "Additionally, learning of new functionality is not covered by approaches of that category so far.", "Programming in natural language is of particular interest in the domain of humanoid robotics (Lau-ria et al., 2001, 2002; She et al., 2014; Mei et al., 2016).", "People expect to teach them as they teach human co-workers.", "Therefore, some authors, e.g. Markievicz et al. (2017), use task descriptions that were intended to instruct humans to benchmark their approach.", "However, often the assumed vocabulary is rather technical (Lincoln and Veres, 2012).", "Thus, the usability for laypersons is limited.", "The goal of our work is to provide a system for programming in (spoken) natural language.", "Laypersons shall be enabled to create new functionality in terms of method definitions by using natural language only.", "We offer a general approach, i.e. we do not restrict the natural language regarding wording and length.", "Since spontaneous language often comprises grammatical flaws, disfluencies, and alike, our work must be resilient to these issues.", "We decompose the task in three consecutive steps.", "The rationale behind this decision is as follows.", "On the one hand, we can implement more focused (and precise) approaches for each task, e.g. 
"On the one hand, we can implement more focused (and precise) approaches for each task, e.g. using machine learning for one and information retrieval for another.", "On the other hand, we are able to evaluate and optimize each approach individually.", "The stages of our three-tiered approach are the following (see Figure 1 for an example):", "1. Classification of teaching efforts: Determine whether an utterance comprises an explicitly stated teaching intent or not.", "2. Classification of the semantic structure: Analyze (and label) the semantic parts of a teaching sequence.", "Teaching sequences are composed of a declarative and a specifying part as well as superfluous information.", "3. Method synthesis: Build a model that represents the structure of methods from syntactic information and classification results.", "Then, map the actions of the specifying part to API calls and inject control structures to form the body; synthesize the method signature.", "The first two stages are classification problems.", "Thus, we apply various machine learning techniques.", "The first stage is a sequence-to-single-label task, while the second is a typical sequence-to-sequence task.", "For the first we compare classical machine learning techniques, such as logistic regression and support vector machines, with neural network approaches including the pre-trained language model BERT (Devlin et al., 2019).", "For the second task we narrow down to neural networks and BERT.", "A more detailed description of the first two stages may be found in (Weigelt et al., 2020).", "The implementation of the third stage is a combination of syntactic analysis, knowledge-based techniques, and information retrieval.", "We use semantic role labeling, coreference analysis, and a context model (Weigelt et al., 2017) to infer the semantic model.", "Afterwards, we synthesize method signatures heuristically and map instructions of the body to API calls using ontology search methods and datatype analysis.", "Additionally, we inject control structures, which we infer from keywords and syntactic structures.", "To cope with spontaneous (spoken) language, our approach relies on shallow NLP techniques only.", "We carried out a study to examine how laypersons teach new functionality to intelligent systems.", "The study consists of four scenarios in which a humanoid robot should be taught a new skill: greeting someone, preparing coffee, serving drinks, and setting a table for two.", "All scenarios take place in a kitchen setting but involve different objects and actions.", "Subjects were supposed to teach the robot using nothing but natural language descriptions.", "We told the subjects that a description ideally comprises a declaration of intent to teach a new skill, a name for the skill, and an explanation of intermediate steps.", "However, we did not force the subjects into predefined wording or sentence structure.", "Instead, we encouraged them to vary the wording and to 'speak' freely.", "We also instructed them to imagine that they were standing next to the robot.", "After the short introduction, we successively presented the scenarios to the subjects.", "Finally, we requested some personal information in a short questionnaire.", "We used the online micro-tasking platform Prolific (https://www.prolific.co/).", "(We decided to gather textual responses, even though speech recordings would be more natural.", "However, from previous studies we learned that subjects more willingly write texts than speak.", "Besides, the audio quality of recordings is often poor when subjects use ordinary microphones.)", "In less than three days, 870 participants completed the study.", "The share of male and female participants is almost equal (50.5% vs. 49.5%); more than 60% are native English speakers.", "Most of them (70%) had no programming experience at all.", "An analysis of the dataset revealed that there is barely any difference in the language used by subjects who are inexperienced in programming compared to more experienced subjects (except for a few subjects that used a rather technical language).", "The age of the participants ranges from 18 to 76, with more than half being 30 or younger.", "The collected data comprises 3,168 descriptions with more than 109,000 words altogether (1,469 unique words); the dataset statistics are depicted in Table 1.", "We provide a set of six descriptions from the dataset in Table 13 (Appendix A).", "A thorough analysis of the dataset revealed that a notable share (37%) lacks an explicitly stated intent to teach a skill, albeit we even consider phrases such as 'to prepare lunch' as teaching intent.", "Regarding the semantic structure, we observed that the distinct parts can be clearly separated in almost all cases.", "However, the respective parts occurred in varying order and are frequently non-continuous.", "The data was jointly labeled by two of the authors.", "We first attached the binary labels teaching and non-teaching.", "These labels correspond to the classification task from the first stage.", "Then we added ternary labels (declaration, specification, and miscellaneous) to all words in descriptions that were classified as teaching efforts in the first step.", "This label set is used for the second stage.", "The distribution of the labels is depicted in Table 2.", "Both label sets are unequally distributed, which may cause the machine learning models to over-fit in favor of the dominating label.", "This mainly affects the ternary classification task; the label specification distinctly dominates (76%) the others.", "The entire dataset is publicly accessible (open access), including raw data, labeled data, meta-data, and scenario descriptions: http://dx.doi.org/10.21227/zecn-6c61.", "The first step of fuSE is discovering teaching intents in utterances.", "An utterance can either be an effort to teach new functionality or merely a description of a sequence of actions.", "This problem is a typical sequence-to-single-label task, where the words of the utterance are the sequential input and the output is either teaching or non-teaching.", "To train, validate, and test classifiers we split up the dataset in two ways.", "The first is the common approach to randomly split the set in an 80-to-20 ratio, where 80% of the data is used for training and 20% for testing.", "As usual, we again split the training set 80-to-20 into training and validation parts.", "However, we felt that this approach does not reflect realistic set-ups, where a model is learned from historical data and then applied to new unseen data that is semantically related but (potentially) different.", "Therefore, we introduced an additional, so-called scenario-based split, in which we separate the data according to the scenarios.", "We use three of the four scenarios for training and the remaining one for testing.", "Note that we again use an 80-20 split to divide training and validation sets.", "We applied classical machine learning and neural network approaches to the task.", 
"The classical techniques are: decision trees, random forests, support vector machines, logistic regression, and Naïve Bayes.", "As baseline for the classification accuracy we use the so-called Zero-Rule classifier (ZeroR); it always predicts the majority class of the training set, i.e. teaching in this case.", "We transform the words to bag-of-words vectors and use tri- and quadrigrams as additional features.", "The measured accuracy of each classifier on the random and scenario-based data is depicted in Table 3; the validation set accuracy is given in parentheses and the test set accuracy without.", "On the random set all classifiers exceed the baseline.", "Thus, the (slightly) imbalanced dataset does not seem to affect the classifiers much.", "Logistic regression performs surprisingly well.", "However, on the scenario-based split the accuracy of all classifiers decreases drastically.", "While the accuracies on the validation set remain stable, these classifier techniques are unable to generalize to unseen input.", "The logistic regression remains the best classifier.", "However, its accuracy decreases to 71.9%.", 
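A minimal scikit-learn sketch of the kind of bag-of-words baseline described above; the toy texts and labels are hypothetical, and folding the tri-/quadrigram features into a single ngram_range is a simplifying assumption:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical toy data; the study uses the 3,168 collected descriptions.
texts = ["making coffee means you locate the cup", "go to the kitchen"]
labels = [1, 0]  # 1 = teaching, 0 = non-teaching

# Bag of words with word n-grams up to quadrigrams as features.
clf = make_pipeline(
    CountVectorizer(ngram_range=(1, 4)),
    LogisticRegression(max_iter=1000),
)
clf.fit(texts, labels)
print(clf.predict(["to greet someone you wave your hand"]))
```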
"These results reinforced our intuition that deep learning is more appropriate for this task.", "We implemented a broad range of neural network architectures: artificial neural networks, convolutional networks, and recurrent networks, including LSTMs and GRUs and their bidirectional variants.", "We experimented with additional layers, which we systematically added to the networks, such as dropout (DO), dense (D), or global max pooling (GMax).", "We altered all hyper-parameters in reasonable ranges of values.", "(Note that we do not discuss the influence of varying epoch numbers, since we used early stopping, i.e. the training stops when the validation loss stops decreasing.)", "We present only the best performing configurations, i.e. architecture and hyper-parameter combinations, in Table 4.", "Detailed information on the tested hyper-parameter values and further results may be found in Appendices B and C.", "The words from the input are represented as fastText word embeddings (Bojanowski et al., 2017; Joulin et al., 2017); we use the 300-dimensional embeddings that were trained on the Common Crawl dataset by Facebook Research (Mikolov et al., 2018).", "Table 4: Accuracy of the best performing network configurations on the random and scenario-based splits (validation accuracy in parentheses, test accuracy without).
network architecture | random | scenario
C(128,3), Max(2), C(64,3), GMax, D(10) | (.952) .971 | (.962) .874
C(128,5), Max(2), C(128,5), GMax, D(10) | (.954) .966 | (.977) .862
BiGRU(32), DO(.2), D(64), DO(.2) | (.952) .959 | (.958) .932
BiLSTM(128), D(64) | (.956) .959 | (.962) .919
BERT, 5 epochs | (.973) .981 | (.991) .969
BERT, 10 epochs | (.976) .982 | (.992) .973
BERT, 300 epochs | (.962) .982 | (.992) .977
baseline (Log. Reg.) | (.927) .947 | (.891) .719", "Moreover, we use Google's pre-trained language model BERT (base-uncased), which we equipped with a flat binary output layer.", "The results attest that deep learning approaches clearly outperform the best classical technique (logistic regression).", "In particular, the accuracies show smaller differences between the random and scenario-based split.", "This suggests that the classification is more robust.", "The best accuracy on the scenario test set is achieved by a bidirectional GRU: 93.2%.", "Using BERT, the accuracy increases by more than 4%, with a peak at 97.7% using 300 training epochs.", "However, the ten-epoch version is a feasible choice, since the accuracy loss is negligible and the training savings are immense.", "The second stage, detecting the semantic parts in teaching efforts, is a typical sequence-to-sequence labeling task with the labels declaration, specification, and miscellaneous.", "Even though these semantic structures correspond to phrases from a grammatical point of view, we decided to use per-word labels.", "For this task we only use neural network approaches and BERT.", "The remaining set-up is similar to the first stage.", "We again use fastText embeddings and vary the network architectures and hyper-parameters.", "Except for a ternary output layer, we use the same configuration for BERT as in the first stage.", "The results for both the random and scenario-based split are reported in Table 5.", "(Again, we only present the best configurations here.)", "The bidirectional architectures, be it GRU or LSTM, are the clear choice for this task; accuracy values are consistently high.", "Most encouragingly, the decline on the scenario data is negligible (less than 1%).", "Apparently, the models generalize well and are thus resilient to a change in vocabulary.", "For the second stage the use of BERT is of no advantage; the results even fall behind the best RNN configurations.", 
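A minimal Keras sketch of a bidirectional recurrent tagger of the kind described above (per-word ternary labels over pre-computed fastText embeddings); the sequence length and layer sizes are illustrative assumptions, not the authors' exact configuration:

```python
import tensorflow as tf
from tensorflow.keras import layers

MAX_LEN, EMB_DIM, NUM_LABELS = 200, 300, 3  # declaration/specification/misc.

model = tf.keras.Sequential([
    tf.keras.Input(shape=(MAX_LEN, EMB_DIM)),  # pre-computed fastText vectors
    layers.Bidirectional(layers.LSTM(128, return_sequences=True)),
    layers.TimeDistributed(layers.Dense(64, activation="relu")),
    layers.TimeDistributed(layers.Dense(NUM_LABELS, activation="softmax")),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```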
"During stage three we first transfer the natural language utterances into a model that represents both method definitions and scripts.", "Afterwards, we synthesize methods (or scripts) from this model.", "We create a method signature and map the instructions in the body to API calls; to synthesize scripts we only map the instructions and inject control structures.", "Before we can transfer natural language utterances to the semantic model we must perform a few NLP pre-processing steps that enrich the input with syntactic and semantic information.", "To obtain parts of speech (PoS), we apply a joint tagging approach; we consolidate the PoS tags produced by the Stanford Log-linear Part-Of-Speech Tagger (Toutanova et al., 2003) and SENNA (Collobert et al., 2011).", "The Stanford Tagger also provides us with word lemmas.", "Then we detect individual events in terms of clauses.", "Since our approach is supposed to cope with spoken language, we are unable to make use of punctuation.", "Instead, we split the input into a continuous sequence of instructions based on heuristics that make use of PoS tags and keywords.", "However, the instructions do not necessarily span complete clauses.", "Thus, we cannot apply common parsers.", "Instead, we use the shallow parser BIOS (http://www.surdeanu.info/mihai/bios/), which provides us with chunks.", "To obtain semantic roles for each instruction, we again employ SENNA.", "(SENNA uses the semantic role label set defined in the CoNLL-2004 and CoNLL-2005 shared tasks (Carreras and Màrquez, 2004, 2005).)", "Word senses are disambiguated using the tool Babelfy (Moro et al., 2014).", "Since Babelfy is linked to WordNet (Fellbaum, 1998), we can also make use of synonyms.", "We use ontologies to model the target systems, i.e., APIs.", "An ontology represents the classes, methods, parameters, data types, and values (resp. value ranges) of an API (similar to the ontologies used by Landhäußer et al. (2017) and Atzeni and Atzori (2018a,b)).", "The basic ontology structure is depicted in Table 6.",
"Table 6: Domain ontology structure for systems:
class       | description
Thing       | Top concept of the ontology
- System    | (Sub-)systems (API classes)
- Method    | System functions (API methods)
- Parameter | Parameter names
- DataType  | Data types used by the system, e.g., int or Graspable
- Object    | External objects [empty here]
- State     | Object states [empty here]",
"If the system is supposed to interact with an environment, we employ additional ontologies that model the environment, including objects and their states (see Table 7).", "Environment ontologies are merged into system ontologies by copying concepts to the respective placeholders.", "To bridge the semantic gap between natural and programming language we introduce a semantic model, as depicted in Figure 2.", "The model resembles the basic structure of method definitions.", "However, the leaves are composed of natural language phrases.", "To determine the phrases that will make up the model elements, we first smooth the classification results provided by the second stage.", "fuSE maps all phrases of an instruction to the same second-level model element, i.e., either the method signature or an instruction of the body.", "Therefore, we unify the second-stage classification labels for each instruction using a majority decision.", "Afterwards, we map phrases to leaf elements.", "Roughly speaking, we use the roles provided by semantic role labeling (SRL) and map predicates to names and arguments to parameters.", "If we detect a coreference, we substitute the referring expression with the referent, e.g., 'it' with 'the cup'.", "We also add a lemmatized variant of the phrase and all synonyms.", "Note that the parameters are a list of phrases.",
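The majority decision used for smoothing above reduces to a few lines; a sketch with hypothetical label names:

```python
# Label smoothing per instruction: every word receives the instruction's
# majority second-stage label, so all phrases of one instruction map to
# the same second-level model element.
from collections import Counter

def unify_labels(word_labels):
    """Replace the per-word labels of one instruction by their majority label."""
    majority, _ = Counter(word_labels).most_common(1)[0]
    return [majority] * len(word_labels)

print(unify_labels(["SPEC", "SPEC", "MISC", "SPEC"]))  # ['SPEC', 'SPEC', 'SPEC', 'SPEC']
```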
"The first step to create method definitions is signature synthesis.", "To construct a meaningful name, we heuristically clean up the phrase, e.g., remove auxiliary verbs and stop words, and concatenate the remaining words.", "The parameters are either mapped to data types to infer formal parameters or, if no mapping can be found, they are attached to the name.", "For instance, assume the declarative part of the instruction is 'serving wine means'; fuSE extracts 'serve' as the first part of the name.", "Then it tries to map 'wine' to an ontology individual (as discussed later).", "Assume it finds the individual RedWineBottle, which is an instance of the concept Graspable in the environment ontology.", "If the system ontology supports the data type Graspable, fuSE synthesizes the signature serve(serve.what : Graspable).", "Otherwise, the method signature serveWine() is created.", "The instructions in the method body are mapped to API calls.", "Therefore, we first query the ontologies for each leaf element individually.", "For the queries we use three sets of words, which we create from the original phrase, the lemmatized version, and the synonyms.", "We then build the power sets and all permutations of each set, before we concatenate the words to construct a query set.", "For instance, for the phrase 'is closed', we produce the query strings isclosed, closedis, beclose, closebe, closed, is, and so on.", "The ontology search returns all individuals with a Jaro-Winkler score (Winkler, 1990) above .4 or a fuzzy score above .15.", "We opted for these comparatively low thresholds, since we see them as lightweight filters that let numerous generally valid candidates pass.", "Since an individual may be returned more than once with different scores, we set the score of the individual to the maximum of its scores.", "Afterwards, we construct API calls from the model structure and rate each candidate.", "We start with the method name candidates.", "For each candidate we query the ontology for formal parameters.", "Then, we try to satisfy the parameters with the candidates returned by the individual ontology search.", "Note that we perform type checking for the parameters (including inheritance, if applicable).", "For instance, for the instruction 'take the cup' we may have found the individual grasp as a candidate for a method name and the parameter candidates Mug (type Graspable) and Cupboard (type Location).", "The ontology indicates that the method grasp has one parameter of type Graspable.", "The type check then ensures that fuSE creates the call candidate grasp(Mug) but not grasp(Cupboard).", "The score is composed of the individual scores of the method names and parameters, the share of mapped words of the query string relative to all words in the query, the ratio of mapped parameters to (expected) formal parameters, and the number of additional (superfluous) parameters.", "In Appendix D we give a more formal introduction to our scoring approach.", "The result of the scoring process is a ranked list of candidates for each instruction.", "For the time being, we simply use the top-ranked candidates to synthesize the method body.", "However, re-ranking the candidates based on other semantic resources is promising future work.",
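The query construction and fuzzy filtering just described can be sketched as follows; difflib's similarity ratio is used here as a stand-in for the Jaro-Winkler and fuzzy scores of the original system (the .4 threshold follows the text; everything else is illustrative):

```python
# Ontology query construction: build the power set of a word set, permute
# each subset, and concatenate the words into query strings; then keep,
# per ontology individual, the maximum similarity over all queries.
from difflib import SequenceMatcher
from itertools import chain, combinations, permutations

def query_strings(words):
    subsets = chain.from_iterable(
        combinations(words, r) for r in range(1, len(words) + 1))
    return {"".join(p) for s in subsets for p in permutations(s)}

def matches(query_set, individuals, threshold=0.4):
    scored = {}
    for ind in individuals:
        for q in query_set:
            score = SequenceMatcher(None, q, ind.lower()).ratio()
            if score > threshold:  # lightweight filter, as in the text
                scored[ind] = max(scored.get(ind, 0.0), score)
    return scored

queries = query_strings(["is", "closed"]) | query_strings(["be", "close"])
print(sorted(queries))                        # beclose, closebe, closedis, isclosed, ...
print(matches(queries, ["isClosed", "open"]))
```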
"In a last step, we inject control structures, i.e., conditional branching, various types of loops, and concurrency (Weigelt et al., 2018b,c).", "The approach is rule-based.", "We use key phrases, such as 'in case', 'until', and 'at the same time'.", "Proceeding from these anchor points we look for structures that fit into the respective control structure.", "Here, we apply heuristics on the syntax (based on PoS tags and chunks) and coreference.", "Utterances that were labeled as non-teaching in the first stage also run through the third stage, except for signature synthesis.", "Thus, we only construct scripts for this type of utterance.",
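The rule-based anchoring might look like this in its simplest form; the key-phrase table is illustrative and omits the PoS, chunk, and coreference heuristics that determine the extent of each structure:

```python
# Anchor detection for control-structure injection: key phrases mark
# candidate conditionals, loops, and concurrent blocks.
import re

KEY_PHRASES = {
    "if": "CONDITIONAL", "in case": "CONDITIONAL",
    "until": "LOOP", "as long as": "LOOP",
    "at the same time": "CONCURRENCY", "meanwhile": "CONCURRENCY",
}

def anchor_control_structures(utterance):
    """Return (structure type, anchor phrase) pairs found in an utterance."""
    lowered = utterance.lower()
    return [(kind, phrase) for phrase, kind in KEY_PHRASES.items()
            if re.search(r"\b" + re.escape(phrase) + r"\b", lowered)]

print(anchor_control_structures(
    "stir until the sugar dissolves and meanwhile heat the milk"))
```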
"We determine the quality of the approach for the third stage based on utterances from scenarios one, two, and three, since we used scenario four during development.", "The assessment is partly manual.", "Hence, we randomly drew 25 utterances from each scenario to reduce the effort.", "For each description we used the manual labels of the first-stage and second-stage classifications and prepared a gold standard for the API calls in the method body.", "Table 9 depicts the dataset.", "We did not prepare solutions for the signatures, since plenty of valid solutions are imaginable.", "Thus, we decided to review the signatures manually afterwards.", "Of the 52 synthesized method names we assessed eight as inappropriate.", "A name is inappropriate if either the name is off-topic or it contains unrelated terms, e.g., askSpeaker or prepareCoffeeFriend for the scenario 'How to prepare coffee'.", "Moreover, fuSE correctly mapped 23 parameters without any false positives.", "The API ontology used in our setting (household robot) comprises 92 methods, 59 parameters, and 20 data types.", "To represent the environment (a kitchen) of the robot, we used another ontology with 70 objects of six types, and six states.", "Table 8 details the results for the method body synthesis.", "Besides precision, recall, and F1, it shows the average rank at which the correct element is to be found.", "Since semantic role labeling introduces a vast amount of errors on spoken utterances and our approach heavily depends on it, we also determine recall and F1 excluding SRL errors.", "The results are encouraging.", "We achieve an F1 value of 76.7% for the individuals and 62.0% for entire calls; in both cases the precision is slightly ahead of the recall.", "If we exclude SRL errors, the overall performance increases (about 7% for individuals and 5% for calls).", "Besides the SRL, missing and inappropriate synonyms are a major source of errors.", "If WordNet lacks a synonym for an important word in the utterance, fuSE's API mapping may be unable to determine the correct ontology individual.", "Conversely, if WordNet provides an inappropriate synonym, fuSE may produce an incorrect (superfluous) mapping.", "In other cases, our language model is unable to capture the semantics of the utterance properly.", "For example, fuSE creates two method calls for the phrase 'make sure you close it': close(...) and make(...).", "It may also produce superfluous mappings for explanatory phrases, such as 'the machine fills cups', if the second stage did not classify them as miscellaneous.", "Regarding the composition of API calls (methods plus arguments), the majority of errors are introduced by the arguments.", "In addition to the aforementioned error sources, arguments are often ambiguous.", "For instance, the phrase 'open the door' leaves it open to interpretation which door was intended to be opened.", "Even though fuSE makes use of an elaborate context model, some ambiguities are impossible to resolve (see Section 5).", "A related issue is the incorrect resolution of coreferences; each mistake leads to a misplaced argument.", "Most of these error sources can be eliminated if the pre-processing improves.", "However, many difficulties simply arise from erroneous or ambiguous descriptions.", "Still, fuSE interprets most of them correctly.", "Most encouragingly, the average rank of the correct element is near 1.", "Thus, our scoring mechanism succeeds in placing the right elements at the top of the list.", "To measure the performance of fuSE on unseen data, we set up an end-to-end evaluation.", "We created two new scenarios.", "They take place in the kitchen setting again, but involve different actions and objects.", "In the first, subjects are supposed to teach the robot how to start the dishwasher; in the second, how to prepare cereals.", "Once more we used Prolific to collect the data and set the number of participants to 110.", "However, we accepted only 101 submissions, i.e., 202 descriptions.", "We randomly drew 50 descriptions from each scenario.", "Since the evaluation of the overall approach entails the same output as the third stage, we prepared the gold standard as in Subsection 3.4 and used the same ontologies.", "Table 11 details the dataset used in the end-to-end evaluation.", "Additionally, we provide five exemplary descriptions from the dataset in Table 14 (Appendix A).", "In the end-to-end evaluation our approach synthesized 73 method signatures; five were missed due to an incorrect first-stage classification.", "Of the 73 synthesized methods we assessed seven to be inappropriate.", "Additionally, 36 parameters were mapped correctly and no false positives were created.", "Except for the missing method signatures, the results are in line with the third-stage evaluation.", "The results for the method body synthesis, as depicted in Table 10, even exceed the previous evaluation.", "The F1-score is 87.7% for individuals and 66.9% for entire API calls.", "Again, recall and F1 increase if we exclude SRL errors.", "However, the effect is smaller here.", "Moreover, the average rank is also closer to the optimum (1.0) in both cases.", "Since the first two stages of fuSE are based on neural networks, it is difficult to say why the results in the end-to-end evaluation improve.", "However, we believe the main cause is the introduction of a new test dataset, which has two consequences.",
"First, the models used in the first two stages are learned on all four scenarios instead of three, i.e., they are trained on a larger dataset, which (presumably) makes them more robust.", "Second, the new task may be simpler to describe.", "Consequently, the descriptions comprise simpler wordings and become easier to handle.", "In summary, the results show that fuSE generalizes to different settings, at least within the same domain, and is only marginally degraded by error propagation.", "To assess how well fuSE generalizes to truly spoken utterances, we evaluated it on another dataset.", "It is a collection of recordings from multiple recent projects.", "The setting (instructing a humanoid robot in a kitchen) is the same.", "However, none of the scenarios involved teaching new functionality.", "Thus, we can only measure fuSE's ability to construct scripts.", "The descriptions in this dataset comprise control structures to a much larger extent.", "Altogether the dataset comprises 234 recordings and manual transcriptions.", "The 108 subjects were mostly undergraduate and graduate students.", "On the transcripts we assess the mapping of methods and parameters individually.", "The results for both, as well as for entire calls, are depicted in Table 12.", "Even though the spoken samples comprise a vast number of disfluencies and grammatical flaws, fuSE maps more calls correctly.", "This counter-intuitive effect may be explained by the lower complexity and brevity of the spoken descriptions.", "Regarding the control structures, 27.4% were injected correctly.", "Note that 'correctly' means an appropriate condition plus a block with the correct extent.", "If we lower the standards for condition correctness, the share of correct structures is 71.23%.", "We have presented fuSE, a system for programming in natural language.", "More precisely, we aim to enable laypersons to teach an intelligent system new functionality with nothing but spoken instructions.", "Our approach is three-tiered.", "First, we classify whether a natural language description entails an explicitly stated intent to teach new functionality.", "If an intent is spotted, we use a second classifier to separate the input into semantically disjoint parts; we identify declarative and specifying parts and filter out superfluous information.", "Finally, we synthesize method signatures from the declarative parts and method bodies from the specifying parts.", "Method bodies contain instructions and control structures.", "Instructions are mapped to API calls.", "We implemented the first two steps using classical machine learning and neural networks.", "Teaching intents are identified with an accuracy of 97.7% (using BERT).", "The classification of the semantics is correct in 97.6% of the cases (using a BiLSTM).", "We evaluated fuSE on 100 descriptions obtained from a user study.", "The results are promising; fuSE correctly synthesized 84.6% of the method signatures.", "The mapping of instructions in the body to API calls achieved an F1-score of 66.9%.", "In a second evaluation on a speech corpus, the F1-score for API calls is 79.2%.", "We plan to evaluate fuSE in other domains.", "It will be interesting to see whether we can reuse (or transfer) the machine learning models as well as the rest of the approach.", "Future extensions of fuSE will include the integration of a dialog component.", "We may query the user in case of ambiguous statements or missing parameters.", "We have implemented an extensible dialog module and shown that it can be used to resolve ambiguous references, word recognition errors, and missing conditions (Weigelt et al., 2018a).", "However, we still have to figure out how to query users properly if an API mapping is ambiguous or parameters are missing.", "Another improvement concerns the analysis of verb references.", "Humans often refer to previous actions, which may cause superfluous instructions.", "We will also implement a sanity check that considers the feasibility and meaningfulness of the sequence of actions in the method body.", "The latter may involve a feedback mechanism via the dialog component.", "Giving feedback on newly learned method definitions, which may be lengthy and therefore unwieldy to repeat as a whole, is an interesting challenge." ]
[ "abstain", "objective", "objective", "objective", "method", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "objective", "objective", "abstain", "method", "objective", "abstain", "abstain", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "other", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "objective", "method", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "method", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain" ]
[ "The goal of cross-lingual summarization (CLS) is to convert a document in one language (e.g., English) into a summary in another one (e.g., Chinese).", "Essentially, the CLS task is a combination of machine translation (MT) and monolingual summarization (MS), and thus there exists a hierarchical relationship between MT&MS and CLS.", "Existing studies on CLS mainly focus on utilizing pipeline methods or on jointly training an end-to-end model through an auxiliary MT or MS objective.", "However, it is very challenging for a model to directly conduct CLS, as it requires both the ability to translate and the ability to summarize.", "To address this issue, we propose a hierarchical model for the CLS task, based on the conditional variational auto-encoder.", "The hierarchical model contains two kinds of latent variables, at the local and global levels, respectively.", "At the local level, there are two latent variables, one for translation and the other for summarization.", "At the global level, there is another latent variable for cross-lingual summarization, conditioned on the two local-level variables.", "Experiments on two language directions (English↔Chinese) verify the effectiveness and superiority of the proposed approach.", "In addition, we show that our model is able to generate better cross-lingual summaries than comparison models in the few-shot setting.", "Cross-lingual summarization (CLS) aims to summarize a document in a source language (e.g., English) into a different language (e.g., Chinese), which can be seen as a combination of machine translation (MT) and monolingual summarization (MS) to some extent (Orasan and Chiorean, 2008; Zhu et al., 2019).", "CLS can help people effectively grasp the core points of an article in a foreign language.", "(Work was done while Liang and Zhou were interning at the Pattern Recognition Center, WeChat AI, Tencent Inc., China.)", "Against the background of globalization, it has become more important and is now coming into widespread use in real life.", "Much research has been devoted to this task.", "To our knowledge, existing approaches mainly fall into two categories, i.e., pipeline and end-to-end learning methods.", "(i) The first category is pipeline-based, adopting either a translate-then-summarize (Leuski et al., 2003; Ouyang et al., 2019) or a summarize-then-translate (Wan et al., 2010; Orasan and Chiorean, 2008) paradigm.", "Although intuitive and straightforward, these methods generally suffer from error propagation.", "(ii) The second category aims to train an end-to-end model for CLS (Zhu et al., 2019, 2020).", "For instance, Zhu et al. (2020) focus on using a pre-constructed probabilistic bilingual lexicon to improve the CLS model.", "Furthermore, some studies resort to multi-task learning (Takase and Okazaki, 2020; Bai et al., 2021a; Zhu et al., 2019; Cao et al., 2020a,b).", "Zhu et al. (2019) separately introduce MT and MS to improve CLS.", "Cao et al. (2020a,b) design several additional training objectives (e.g., MS, back-translation, and reconstruction) to enhance the CLS model.", "Xu et al. (2020) utilize a mixed-lingual pre-training method with several auxiliary tasks for CLS.", "As pointed out by Cao et al. (2020a), it is challenging for a model to directly conduct CLS, as it requires both the ability to translate and the ability to summarize.",
"Although some methods have used related tasks (e.g., MT and MS) to help CLS, the hierarchical relationship between MT&MS and CLS, which could explicitly enhance the CLS task, is not well modeled.", "Apparently, how to effectively model this hierarchical relationship to exploit MT and MS is one of the core issues, especially when CLS data are limited.", "(Generally, it is difficult to acquire CLS datasets (Zhu et al., 2020; Ayana et al., 2018; Duan et al., 2019).)", "In many other related NLP tasks (Park et al., 2018; Serban et al., 2017; Shen et al., 2019, 2021), the Conditional Variational Auto-Encoder (CVAE) (Sohn et al., 2015) has shown its superiority in learning hierarchical structure with hierarchical latent variables, and is often leveraged to capture the semantic connection between an utterance and the corresponding context of a conversation.", "Inspired by this work, we attempt to adapt the CVAE to model the hierarchical relationship between MT&MS and CLS.", "Therefore, we propose a Variational Hierarchical Model, named VHM, that exploits translation and summarization simultaneously for the CLS task in an end-to-end framework.", "VHM employs hierarchical latent variables based on the CVAE to learn the hierarchical relationship between MT&MS and CLS.", "Specifically, VHM contains two kinds of latent variables, at the local and global levels, respectively.", "Firstly, we introduce two local variables, for translation and summarization, respectively.", "The two local variables are constrained to reconstruct the translation and the source-language summary.", "Then, we use the global variable, which is constrained to reconstruct the target-language summary, to explicitly exploit the two local variables for better CLS.", "This ensures that the global variable captures its relationship with the two local variables without any loss, preventing error propagation.", "For inference, we use the local and global variables to assist the cross-lingual summarization process.", "We validate our proposed training framework on datasets of different language pairs (Zhu et al., 2019): Zh2EnSum (Chinese→English) and En2ZhSum (English→Chinese).", "Experiments show that our model achieves consistent improvements in both language directions in terms of both automatic metrics and human evaluation, demonstrating its effectiveness and generalizability.", "Few-shot evaluation further suggests that the local and global variables enable our model to generate satisfactory cross-lingual summaries compared to existing related methods.", "Our main contributions are as follows (the code is publicly available at https://github.com/XL2248/VHM):", "We are the first to build a variational hierarchical model via conditional variational auto-encoders that introduces a global variable to combine the local ones for translation and summarization at the same time for CLS.", "Our model gains consistent and significant improvements and remarkably outperforms most previous state-of-the-art methods after using mBART (Liu et al., 2020).", "Under the few-shot setting, our model still achieves better performance than existing approaches.", "Particularly, the fewer the data, the greater the improvement we gain.", "Machine Translation (MT).", "Given an input sequence in the source language $X_{mt}=\{x_i\}_{i=1}^{|X_{mt}|}$, the goal of the neural MT model is to produce its translation in the target language $Y_{mt}=\{y_i\}_{i=1}^{|Y_{mt}|}$.", "The conditional distribution of the model is $p_{\theta}(Y_{mt}\mid X_{mt})=\prod_{t=1}^{|Y_{mt}|}p_{\theta}(y_t\mid X_{mt},y_{1:t-1})$, where $\theta$ denotes the model parameters and $y_{1:t-1}$ is the partial translation.",
"Monolingual Summarization (MS).", "Given an input article in the source language $X_{ms}^{src}=\{x_i^{src}\}_{i=1}^{|X_{ms}^{src}|}$ and the corresponding summary in the same language $X_{ms}^{tgt}=\{x_i^{tgt}\}_{i=1}^{|X_{ms}^{tgt}|}$, monolingual summarization is formalized as $p_{\theta}(X_{ms}^{tgt}\mid X_{ms}^{src})=\prod_{t=1}^{|X_{ms}^{tgt}|}p_{\theta}(x_t^{tgt}\mid X_{ms}^{src},x_{1:t-1}^{tgt})$.", "Cross-Lingual Summarization (CLS).", "In CLS, we aim to learn a model that can generate a summary in the target language $Y_{cls}=\{y_i\}_{i=1}^{|Y_{cls}|}$ for a given article in the source language $X_{cls}=\{x_i\}_{i=1}^{|X_{cls}|}$.", "Formally, this is $p_{\theta}(Y_{cls}\mid X_{cls})=\prod_{t=1}^{|Y_{cls}|}p_{\theta}(y_t\mid X_{cls},y_{1:t-1})$.", "Conditional Variational Auto-Encoder (CVAE).", "The CVAE (Sohn et al., 2015) consists of one prior network and one recognition (posterior) network, where the latter takes charge of guiding the learning of the prior network via the Kullback-Leibler (KL) divergence (Kingma and Welling, 2013).", "For example, the variational neural MT model (Zhang et al., 2016a; Su et al., 2018a; McCarthy et al., 2020; Su et al., 2018c) introduces a random latent variable $z$ into the neural MT conditional distribution: $p(Y_{mt}\mid X_{mt})=\int_{z}p(Y_{mt}\mid X_{mt},z)\,p(z\mid X_{mt})\,dz$. (1)", "Given a source sentence, the latent variable $z$ is first sampled by the prior network from the encoder representation, and then the target sentence is generated by the decoder: $Y_{mt}\sim p(Y_{mt}\mid X_{mt},z)$, where $z\sim p(z\mid X_{mt})$.", "As it is hard to marginalize Eq. 1, the CVAE training objective is a variational lower bound of the conditional log-likelihood: $\mathcal{L}(\theta,\phi;X_{mt},Y_{mt})=-\mathrm{KL}(q_{\phi}(z\mid X_{mt},Y_{mt})\,\|\,p_{\theta}(z\mid X_{mt}))+\mathbb{E}_{q_{\phi}(z\mid X_{mt},Y_{mt})}[\log p_{\theta}(Y_{mt}\mid z,X_{mt})]\le\log p(Y_{mt}\mid X_{mt})$, where $\phi$ denotes the parameters of the CVAE.",
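The prior and recognition networks in this formulation share one building block; a minimal PyTorch sketch of it, together with the closed-form KL divergence between two diagonal Gaussians (all sizes are illustrative, not the paper's configuration):

```python
# A CVAE building block: an MLP predicts mu, and Softplus(MLP(.)) predicts
# sigma, of an isotropic Gaussian; sampling uses the reparameterization trick.
import torch
import torch.nn as nn

class GaussianNet(nn.Module):
    """Prior or recognition network: condition vector -> N(mu, sigma^2 I)."""
    def __init__(self, cond_dim, latent_dim):
        super().__init__()
        self.mu = nn.Linear(cond_dim, latent_dim)
        self.sigma = nn.Sequential(nn.Linear(cond_dim, latent_dim), nn.Softplus())

    def forward(self, cond):
        mu, sigma = self.mu(cond), self.sigma(cond)
        z = mu + sigma * torch.randn_like(sigma)  # reparameterization trick
        return z, mu, sigma

def gaussian_kl(mu_q, sig_q, mu_p, sig_p):
    """KL( N(mu_q, sig_q^2 I) || N(mu_p, sig_p^2 I) ), summed over dimensions."""
    return (torch.log(sig_p / sig_q) +
            (sig_q ** 2 + (mu_q - mu_p) ** 2) / (2 * sig_p ** 2) - 0.5).sum(-1)
```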
"Fig. 1 gives an overview of our model, which consists of four components: encoder, variational hierarchical modules, decoder, and training and inference.", "Specifically, we aim to explicitly exploit MT and MS for CLS simultaneously.", "Therefore, we first use the encoder (§3.1) to prepare the representations for the variational hierarchical modules (§3.2), which aim to learn the two local variables for the global variable in CLS.", "Then, we introduce the global variable into the decoder (§3.3).", "Finally, we elaborate the process of our training and inference (§3.4).", "Our model is based on the Transformer (Vaswani et al., 2017) framework.", "As shown in Fig. 1, the encoder takes six types of inputs, $\{X_{mt}, X_{ms}^{src}, X_{cls}, Y_{mt}, X_{ms}^{tgt}, Y_{cls}\}$, among which $Y_{mt}$, $X_{ms}^{tgt}$, and $Y_{cls}$ are only used for training the recognition networks.", "Taking $X_{mt}$ as an example, the encoder maps the input $X_{mt}$ into a sequence of continuous representations whose size varies with the source sequence length.", "Specifically, the encoder consists of $N_e$ stacked layers, and each layer includes two sub-layers, a multi-head self-attention (SelfAtt) sub-layer and a position-wise feed-forward network (FFN) sub-layer: $s_e^{\ell}=\mathrm{SelfAtt}(h_e^{\ell-1})+h_e^{\ell-1}$, $h_e^{\ell}=\mathrm{FFN}(s_e^{\ell})+s_e^{\ell}$, where $h_e^{\ell}$ denotes the state of the $\ell$-th encoder layer and $h_e^{0}$ denotes the initialized embedding.", "(Layer normalization is omitted for simplicity; please refer to Vaswani et al. (2017) for more details.)", "Through the encoder, we prepare the representations of $\{X_{mt}, X_{ms}^{src}, X_{cls}\}$ for training the prior networks, encoder, and decoder.", "Taking $X_{mt}$ as an example, we follow Zhang et al. (2016a) and apply mean-pooling over the output of the $N_e$-th encoder layer: $h_{X_{mt}}=\frac{1}{|X_{mt}|}\sum_{i=1}^{|X_{mt}|}h_{e,i}^{N_e}$.", "Similarly, we obtain $h_{X_{ms}^{src}}$ and $h_{X_{cls}}$.", "Firstly, we design two local latent variational modules to learn the translation distribution of MT pairs and the summarization distribution of MS pairs, respectively.", "Then, conditioned on them, we introduce a global latent variational module to explicitly exploit both.", "Translation.", "To capture the translation in the paired sentences, we introduce a local variable $z_{mt}$ that is responsible for generating the target information.", "Inspired by Wang and Wan (2019), we use an isotropic Gaussian distribution as the prior distribution of $z_{mt}$: $p_{\theta}(z_{mt}\mid X_{mt})\sim\mathcal{N}(\mu_{mt},\sigma_{mt}^{2}\mathbf{I})$, where $\mathbf{I}$ denotes the identity matrix, $\mu_{mt}=\mathrm{MLP}_{mt}(h_{X_{mt}})$, and $\sigma_{mt}=\mathrm{Softplus}(\mathrm{MLP}_{mt}(h_{X_{mt}}))$. (2)", "Here $\mathrm{MLP}(\cdot)$ is a multi-layer perceptron and $\mathrm{Softplus}(\cdot)$ is a smooth approximation of the ReLU function.", "At training time, the posterior distribution conditions on both the source input and the target reference, which provides the translation information.", "Therefore, the prior network can learn a tailored translation distribution by approaching the recognition network via the KL divergence (Kingma and Welling, 2013): $q_{\phi}(z_{mt}\mid X_{mt},Y_{mt})\sim\mathcal{N}(\mu'_{mt},\sigma'^{2}_{mt}\mathbf{I})$, where $\mu'_{mt}=\mathrm{MLP}'_{mt}(h_{X_{mt}};h_{Y_{mt}})$ and $\sigma'_{mt}=\mathrm{Softplus}(\mathrm{MLP}'_{mt}(h_{X_{mt}};h_{Y_{mt}}))$, (3) and $(\cdot\,;\cdot)$ indicates the concatenation operation.", "Summarization.", "To capture the summarization in MS pairs, we introduce another local variable $z_{ms}$, which takes charge of generating the source-language summary.", "Similar to $z_{mt}$, we define its prior distribution as $p_{\theta}(z_{ms}\mid X_{ms}^{src})\sim\mathcal{N}(\mu_{ms},\sigma_{ms}^{2}\mathbf{I})$, where $\mu_{ms}=\mathrm{MLP}_{ms}(h_{X_{ms}^{src}})$ and $\sigma_{ms}=\mathrm{Softplus}(\mathrm{MLP}_{ms}(h_{X_{ms}^{src}}))$.", "At training time, the posterior distribution conditions on both the source input and the source-language summary, which contains the summarization clue, and is thus responsible for guiding the learning of the prior distribution.", "Specifically, we define the posterior distribution as $q_{\phi}(z_{ms}\mid X_{ms}^{src},X_{ms}^{tgt})\sim\mathcal{N}(\mu'_{ms},\sigma'^{2}_{ms}\mathbf{I})$, where $\mu'_{ms}=\mathrm{MLP}'_{ms}(h_{X_{ms}^{src}};h_{X_{ms}^{tgt}})$ and $\sigma'_{ms}=\mathrm{Softplus}(\mathrm{MLP}'_{ms}(h_{X_{ms}^{src}};h_{X_{ms}^{tgt}}))$.", "After obtaining $z_{mt}$ and $z_{ms}$, we introduce the global variable $z_{cls}$, which aims to generate the target-language summary and can simultaneously exploit the local variables for CLS.", "Specifically, we first encode the source input $X_{cls}$, condition on the two local variables $z_{mt}$ and $z_{ms}$, and then sample $z_{cls}$.", "We define its prior distribution as $p_{\theta}(z_{cls}\mid X_{cls},z_{mt},z_{ms})\sim\mathcal{N}(\mu_{cls},\sigma_{cls}^{2}\mathbf{I})$, where $\mu_{cls}=\mathrm{MLP}_{cls}(h_{X_{cls}};z_{mt};z_{ms})$ and $\sigma_{cls}=\mathrm{Softplus}(\mathrm{MLP}_{cls}(h_{X_{cls}};z_{mt};z_{ms}))$.", "At training time, the posterior distribution conditions on the local variables, the CLS input, and the cross-lingual summary, which contains the combined information of translation and summarization.", "Therefore, the posterior distribution can teach the prior distribution.", "Specifically, we define the posterior distribution as $q_{\phi}(z_{cls}\mid X_{cls},z_{mt},z_{ms},Y_{cls})\sim\mathcal{N}(\mu'_{cls},\sigma'^{2}_{cls}\mathbf{I})$, where $\mu'_{cls}=\mathrm{MLP}'_{cls}(h_{X_{cls}};z_{mt};z_{ms};h_{Y_{cls}})$ and $\sigma'_{cls}=\mathrm{Softplus}(\mathrm{MLP}'_{cls}(h_{X_{cls}};z_{mt};z_{ms};h_{Y_{cls}}))$.",
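Building on the GaussianNet block from the previous sketch, the inference-time hierarchy of this section can be wired up as follows (all sizes are illustrative):

```python
# Hierarchical prior sampling: the two local priors condition on the
# mean-pooled encoder states, and the global prior conditions on the CLS
# encoding concatenated with both local samples.
import torch

D, Z = 512, 128                        # illustrative hidden/latent sizes
prior_mt = GaussianNet(D, Z)           # p(z_mt  | X_mt)
prior_ms = GaussianNet(D, Z)           # p(z_ms  | X_ms_src)
prior_cls = GaussianNet(D + 2 * Z, Z)  # p(z_cls | X_cls, z_mt, z_ms)

h_mt, h_ms, h_cls = (torch.randn(1, D) for _ in range(3))  # pooled encodings
z_mt, _, _ = prior_mt(h_mt)
z_ms, _, _ = prior_ms(h_ms)
z_cls, _, _ = prior_cls(torch.cat([h_cls, z_mt, z_ms], dim=-1))
print(z_cls.shape)                     # torch.Size([1, 128])
```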
"The decoder adopts a similar structure to the encoder, and each of the $N_d$ decoder layers includes an additional cross-attention (CrossAtt) sub-layer: $s_d^{\ell}=\mathrm{SelfAtt}(h_d^{\ell-1})+h_d^{\ell-1}$, $c_d^{\ell}=\mathrm{CrossAtt}(s_d^{\ell},h_e^{N_e})+s_d^{\ell}$, $h_d^{\ell}=\mathrm{FFN}(c_d^{\ell})+c_d^{\ell}$.", "As shown in Fig. 1, we first obtain the two local variables either from the posterior distributions predicted by the recognition networks (the training process, solid grey lines) or from the prior distributions predicted by the prior networks (the inference process, dashed red lines).", "Then, conditioned on the two local variables, we generate the global variable ($z'_{cls}$/$z_{cls}$) via the posterior (training) or prior (inference) network.", "Finally, we incorporate $z^{(\prime)}_{cls}$ into the state of the top layer of the decoder with a projection layer: $o_t=\mathrm{Tanh}(W_p[h_{d,t}^{N_d};z^{(\prime)}_{cls}]+b_p)$, (8) where $W_p$ and $b_p$ are trainable parameters and $h_{d,t}^{N_d}$ is the hidden state at time step $t$ of the $N_d$-th decoder layer.", "(Here, we use $z'_{cls}$ during training and $z_{cls}$ during inference, as in Eq. 8.)", "Then, $o_t$ is fed into a linear transformation and softmax layer to predict the probability distribution of the next target token.", "The model is trained to maximize the conditional log-likelihood; due to the intractable marginal likelihood, this is converted to the following variational lower bound, which is maximized during training: $\mathcal{J}(\theta,\phi;X_{cls},X_{mt},X_{ms}^{src},Y_{cls},Y_{mt},X_{ms}^{tgt})=-\mathrm{KL}(q_{\phi}(z_{mt}\mid X_{mt},Y_{mt})\,\|\,p_{\theta}(z_{mt}\mid X_{mt}))-\mathrm{KL}(q_{\phi}(z_{ms}\mid X_{ms}^{src},X_{ms}^{tgt})\,\|\,p_{\theta}(z_{ms}\mid X_{ms}^{src}))-\mathrm{KL}(q_{\phi}(z_{cls}\mid X_{cls},z_{mt},z_{ms},Y_{cls})\,\|\,p_{\theta}(z_{cls}\mid X_{cls},z_{mt},z_{ms}))+\mathbb{E}_{q}[\log p(Y_{mt}\mid X_{mt},z_{mt})]+\mathbb{E}_{q}[\log p(X_{ms}^{tgt}\mid X_{ms}^{src},z_{ms})]+\mathbb{E}_{q}[\log p(Y_{cls}\mid X_{cls},z_{cls},z_{mt},z_{ms})]$,", "where the variational lower bound includes reconstruction terms and KL divergence terms based on the three hierarchical variables.", "We use the reparameterization trick (Kingma and Welling, 2013) to estimate the gradients of the prior and recognition networks (Zhao et al., 2017).", "During inference, the prior networks of MT and MS first generate the local variables.", "Then, conditioned on them, the global variable is produced by the prior network of CLS.", "Finally, only the global variable is fed into the decoder, which corresponds to the red dashed arrows in Fig. 1.",
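The fusion of Eq. 8 is a small piece of code; an illustrative sketch (all sizes hypothetical):

```python
# Eq. 8, sketched: the global latent variable is concatenated with the top
# decoder state at each time step and projected through a tanh layer
# before the output softmax.
import torch
import torch.nn as nn

D, Z, VOCAB = 512, 128, 32000
proj = nn.Linear(D + Z, D)              # W_p, b_p
out = nn.Linear(D, VOCAB)

h_top = torch.randn(1, 20, D)           # top-layer decoder states
z_cls = torch.randn(1, Z)               # global latent variable
fused = torch.tanh(proj(torch.cat(
    [h_top, z_cls.unsqueeze(1).expand(-1, h_top.size(1), -1)], dim=-1)))
logits = out(fused)                     # fed to a softmax over the vocabulary
print(logits.shape)                     # torch.Size([1, 20, 32000])
```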
"4 Experiments", "4.1 Datasets and Metrics", "Datasets.", "We evaluate the proposed approach on the Zh2EnSum and En2ZhSum datasets released by Zhu et al. (2019).", "Zh2EnSum and En2ZhSum originally stem from Hu et al. (2015) and from Hermann et al. (2015) and Zhu et al. (2018), respectively.", "Both the Chinese-to-English and English-to-Chinese test sets are manually corrected.", "The training data involved in our experiments are listed in Tab. 1.", "Zh2EnSum.", "It is a Chinese-to-English summarization dataset, which has 1,699,713 Chinese short texts (104 Chinese characters on average) paired with Chinese (18 Chinese characters on average) and English short summaries (14 tokens on average).", "The dataset is split into 1,693,713 training pairs, 3,000 validation pairs, and 3,000 test pairs.", "En2ZhSum.", "It is an English-to-Chinese summarization dataset, which has 370,687 English documents (755 tokens on average) paired with multi-sentence English (55 tokens on average) and Chinese summaries (96 Chinese characters on average).", "The dataset is split into 364,687 training pairs, 3,000 validation pairs, and 3,000 test pairs.", "The training data used in multi-task learning, the model size, and the training time are listed in Tab. 3.", "Metrics.", "Following Zhu et al. (2020), 1) we evaluate all models with the standard ROUGE metric (Lin, 2004), reporting the F1 scores for ROUGE-1, ROUGE-2, and ROUGE-L.", "All ROUGE scores are reported with 95% confidence intervals as measured by the official script (run with the parameters '-c 95 -r 1000 -n 2 -a').", "2) We also evaluate the quality of the English summaries in Zh2EnSum with MoverScore (Zhao et al., 2019).",
"Table 4: ROUGE F1 scores (%) and MoverScore scores (%) on the Zh2EnSum test set, and ROUGE F1 scores (%) on the En2ZhSum test set:
M#  Model                          | Zh2EnSum: RG1 RG2 RGL MVS | En2ZhSum: RG1 RG2 RGL
M1  TETran (Zhu et al., 2019)      | 24.34  9.14 20.13  0.64 | 28.19 11.40 25.77
M2  TLTran (Zhu et al., 2019)      | 35.45 16.86 31.28 16.90 | 32.17 13.85 29.43
M3  TNCLS (Zhu et al., 2019)       | 38.85 21.93 35.05 19.43 | 36.82 18.72 33.20
M4  ATS-A (Zhu et al., 2020)       | 40.68 24.12 36.97 22.15 | 40.47 22.21 36.89
M5  MS-CLS (Zhu et al., 2019)      | 40.34 22.65 36.39 21.09 | 38.25 20.20 34.76
M6  MT-CLS (Zhu et al., 2019)      | 40.25 22.58 36.21 21.06 | 40.23 22.32 36.59
M7  MS-CLS-Rec (Cao et al., 2020a) | 40.97 23.20 36.96 NA    | 38.12 16.76 33.86
M8  MS-CLS*                        | 40.44 22.19 36.32 21.01 | 38.26 20.07 34.49
M9  MT-CLS*                        | 40.05 21.72 35.74 20.96 | 40.14 22.36 36.45
M10 MT-MS-CLS (Ours)               | 40.65 24.02 36.69 22.17 | 40.34 22.35 36.44
M11 VHM (Ours)                     | 41.36 24.64 37.15 22.55 | 40.98 23.07 37.12
M12 mBART (Liu et al., 2020)       | 43.61 25.14 38.79 23.47 | 41.55 23.27 37.22
M13 MLPT (Xu et al., 2020)         | 43.50 25.41 29.66 NA    | 41.62 23.35 37.26
M14 VHM + mBART (Ours)             | 43.97 25.61 39.19 23.88 | 41.95 23.54 37.67",
"In this paper, we train all models using the standard Transformer (Vaswani et al., 2017) in the Base setting.", "For the other hyper-parameters, we mainly follow the settings described in Zhu et al. (2019, 2020) for a fair comparison.", "For more details, please refer to Appendix A.",
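For evaluation, the rouge-score Python package offers a convenient stand-in for the official Perl script used here (a sketch; the confidence-interval bootstrapping of the official script is not reproduced):

```python
# ROUGE-1/2/L F1 on a toy reference/prediction pair.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"],
                                  use_stemmer=True)
scores = scorer.score(
    target="police killed the gunman",           # reference summary
    prediction="the gunman was shot by police")  # system summary
print({k: round(v.fmeasure, 3) for k, v in scores.items()})
```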
"4.3 Comparison Models", "Pipeline Models.", "TETran (Zhu et al., 2019).", "It first translates the original article into the target language by Google Translator (https://translate.google.com/) and then summarizes the translated text via LexRank (Erkan and Radev, 2004).", "TLTran (Zhu et al., 2019).", "It first summarizes the original article via a Transformer-based monolingual summarization model and then translates the summary into the target language by Google Translator.", "End-to-End Models.", "TNCLS (Zhu et al., 2019).", "It directly uses the de-facto Transformer (Vaswani et al., 2017) to train an end-to-end CLS system.", "ATS-A (Zhu et al., 2020) (https://github.com/ZNLP/ATSum).", "It is an efficient model that attends to a pre-constructed probabilistic bilingual lexicon to enhance CLS.", "MS-CLS (Zhu et al., 2019).", "It simultaneously performs summary generation for both the CLS and MS tasks and calculates the total loss.", "MT-CLS (Zhu et al., 2019) (https://github.com/ZNLP/NCLS-Corpora).", "It alternately trains the CLS and MT tasks.", "MS-CLS-Rec (Cao et al., 2020a).", "It jointly trains the MS and CLS systems with a reconstruction loss to mutually map the source and target representations.", "mBART (Liu et al., 2020).", "We use mBART (mbart.cc25) as model initialization and fine-tune it on the CLS task.", "MLPT (Mixed-Lingual Pre-Training) (Xu et al., 2020).", "It applies mixed-lingual pre-training that leverages six related tasks, covering both cross-lingual tasks such as translation and monolingual tasks like masked language models.", "MT-MS-CLS.", "It is our strong baseline, which is implemented by alternately training CLS, MT, and MS.", "Here, we keep the datasets used for MT and MS consistent with Zhu et al. (2019) for a fair comparison.", "Overall, we separate the models into three parts in Tab. 4: the pipeline, end-to-end, and multi-task settings.",
"[Figure 2: ROUGE F1 scores (%) and MoverScore scores (%) on the Zh2EnSum test set in the few-shot setting.]", "In each part, we show the results of existing studies, of our re-implemented baselines, and of our approach, i.e., VHM, on the Zh2EnSum and En2ZhSum test sets.", "Results on Zh2EnSum.", "Compared against the pipeline and end-to-end methods, VHM substantially outperforms all of them, e.g., the previous best model ATS-A, by a large margin of 0.68/0.52/0.18/0.4 points on RG1/RG2/RGL/MVS, respectively.", "Under the multi-task setting, compared to the existing best model MS-CLS-Rec, our VHM also consistently boosts the performance in all three metrics (i.e., by 0.39, 1.44, and 0.19 ROUGE points on RG1/RG2/RGL, respectively), showing its effectiveness.",
"Our VHM also significantly surpasses our strong baseline MT-MS-CLS, by 0.71/0.62/0.46/0.38 points on RG1/RG2/RGL/MVS, respectively, demonstrating the superiority of our model again.", "Results on En2ZhSum.", "Compared against the pipeline, end-to-end, and multi-task methods, our VHM presents remarkable ROUGE improvements over the existing best model ATS-A, with gains of about 0.51/0.86/0.23 ROUGE points on RG1/RG2/RGL, respectively.", "These results suggest that VHM consistently performs well in different language directions.", "Our approach still notably surpasses our strong baseline MT-MS-CLS in terms of all metrics, which shows the generalizability and superiority of our model again.", "Due to the difficulty of acquiring cross-lingual summarization datasets (Zhu et al., 2019), we conduct experiments to investigate the model performance when the CLS training data are limited, i.e., few-shot experiments.", "Specifically, we randomly choose 0.1%, 1%, 10%, and 50% of the CLS training data to conduct experiments.", "The results are shown in Fig. 2 and Fig. 3.", "Results on Zh2EnSum.", "Fig. 2 shows that VHM significantly surpasses all comparison models under each setting.", "Particularly, under the 0.1% setting, our model still achieves the best performance among all baselines, suggesting that our variational hierarchical model works well in the few-shot setting as well.", "Besides, we find that the performance gap between the comparison models and VHM grows as the available CLS training data become fewer.", "This is because a relatively larger proportion of translation and summarization data is used, so the influence of MT and MS becomes greater, effectively strengthening the CLS model.", "Particularly, the performance Gap-H between MT-MS-CLS and VHM also grows, even though both models utilize the same data.", "This shows that the hierarchical relationship between MT&MS and CLS makes substantial contributions to the VHM model in terms of all four metrics.", "Consequently, our VHM achieves comparably stable performance.", "Results on En2ZhSum.", "From Fig. 3, we observe findings similar to those on Zh2EnSum.", "This shows that VHM significantly outperforms all comparison models under each setting, demonstrating the generalizability and superiority of our model in the few-shot setting again.", "We conduct ablation studies to investigate how well the local and global variables of our VHM work.", "When removing the variables listed in Tab. 5, we have the following findings.", "(1) Rows 1-3 vs. row 0 show that the model performs worse, especially when removing both local variables (row 3), due to missing the explicit translation or summarization information (or both) provided by the local variables, which is important for CLS.", "Besides, row 3 indicates that directly attending to $z_{cls}$ leads to poor performance, showing the necessity of the hierarchical structure, i.e., of using the global variable to exploit the local ones.", "(2) Rows 4-5 vs. row 0 show that directly attending to the local translation and summarization variables cannot achieve good results, due to the lack of a global combination of them; this shows that the variational hierarchical design, i.e., using a global variable to exploit and combine the local ones, is indeed necessary.",
"Following Zhu et al. (2019, 2020), we conduct a human evaluation on 25 random samples from each of the Zh2EnSum and En2ZhSum test sets.", "We compare the summaries generated by our methods (MT-MS-CLS and VHM) with the summaries generated by ATS-A, MS-CLS, and MT-CLS, in the full setting and the few-shot setting (0.1%), respectively.", "We invite three graduate students to compare the generated summaries with the human-corrected references and to assess each summary from three independent perspectives:", "1. How informative (IF) is the summary?", "2. How concise (CC) is the summary?", "3. How fluent and grammatical (FL) is the summary?", "Each property is assessed with a score from 1 (worst) to 5 (best).", "The average results are presented in Tab. 6 and Tab. 7.", "Tab. 6 shows the results in the full setting.", "We find that our VHM outperforms all comparison models in all three aspects in both language directions, which further demonstrates the effectiveness and superiority of our model.", "Tab. 7 shows the results in the few-shot setting, where only 0.1% of the CLS training data are used by all models.", "We find that our VHM still performs best among all models from the three perspectives on both datasets, suggesting its generalizability and effectiveness again under different settings.",
"Table 6: Human evaluation results in the full setting:
Models    | Zh2EnSum: IF  CC  FL | En2ZhSum: IF  CC  FL
ATS-A     | 3.44 4.16 3.98 | 3.12 3.31 3.28
MS-CLS    | 3.12 4.08 3.76 | 3.04 3.22 3.12
MT-CLS    | 3.36 4.24 4.14 | 3.18 3.46 3.36
MT-MS-CLS | 3.42 4.46 4.22 | 3.24 3.48 3.42
VHM       | 3.56 4.54 4.38 | 3.36 3.54 3.48",
"Cross-Lingual Summarization.", "Conventional cross-lingual summarization methods mainly focus on incorporating bilingual information into pipeline methods (Leuski et al., 2003; Ouyang et al., 2019; Orasan and Chiorean, 2008; Wan et al., 2010; Wan, 2011; Yao et al., 2015; Zhang et al., 2016b), i.e., translation and then summarization, or summarization and then translation.", "Due to the difficulty of acquiring cross-lingual summarization datasets, some previous research focuses on constructing datasets (Ladhak et al., 2020; Scialom et al., 2020; Yela-Bello et al., 2021; Zhu et al., 2019; Hasan et al., 2021; Perez-Beltrachini and Lapata, 2021; Varab and Schluter, 2021), mixed-lingual pre-training (Xu et al., 2020), knowledge distillation (Nguyen and Tuan, 2021), contrastive learning (Wang et al., 2021), or zero-shot approaches (Ayana et al., 2018; Duan et al., 2019; Dou et al., 2020), i.e., using machine translation (MT) or monolingual summarization (MS) or both to train the CLS system.", "Among them, Zhu et al. (2019) propose a round-trip translation strategy to obtain large-scale CLS datasets and then present two multi-task learning methods for CLS.", "Based on this dataset, Zhu et al. (2020) leverage an end-to-end model that attends to a pre-constructed probabilistic bilingual lexicon to improve CLS.", "To further enhance CLS, some studies resort to a shared decoder (Bai et al., 2021a), more pseudo training data (Takase and Okazaki, 2020), or training on more related tasks (Cao et al., 2020b,a; Bai et al., 2021b).",
"Wang et al. (2022) concentrate on building a benchmark dataset for CLS in the dialogue domain.", "Different from these works, we propose a variational hierarchical model that introduces a global variable to simultaneously exploit and combine the local translation variable of MT pairs and the local summarization variable of MS pairs for CLS, achieving better results.", "Conditional Variational Auto-Encoder.", "The CVAE has verified its superiority in many fields (Sohn et al., 2015; Liang et al., 2021a; Zhang et al., 2016a; Su et al., 2018b).", "For instance, in dialogue, Shen et al. (2019), Park et al. (2018), and Serban et al. (2017) extend the CVAE to capture the semantic connection between an utterance and the corresponding context with hierarchical latent variables.", "Although the CVAE has been widely used in NLP tasks, its adaptation to cross-lingual summarization for modeling the hierarchical relationship is non-trivial and, to the best of our knowledge, has never been investigated before in CLS.", "Multi-Task Learning.", "Conventional multi-task learning (MTL) (Caruana, 1997), which trains a model on multiple related tasks to promote representation learning and generalization performance, has been successfully used in NLP (Collobert and Weston, 2008; Deng et al., 2013; Liang et al., 2021d,c,b).", "In CLS, conventional MTL has been explored to incorporate additional training data (MS, MT) into models (Zhu et al., 2019; Takase and Okazaki, 2020; Cao et al., 2020a).", "In this work, we instead focus on how to connect the auxiliary tasks during training so as to make the most of them for better CLS.", "In this paper, we propose to enhance the CLS model by simultaneously exploiting MT and MS.", "Given the hierarchical relationship between MT&MS and CLS, we propose a variational hierarchical model to explicitly exploit and combine them in the CLS process.", "Experiments on Zh2EnSum and En2ZhSum show that our model significantly improves the quality of cross-lingual summaries in terms of automatic metrics and human evaluation.", "Particularly, our model still works better in the few-shot setting, suggesting its superiority and generalizability.", "The research work described in this paper has been supported by the National Key R&D Program of China (2020AAA0108001) and the National Natural Science Foundation of China (Nos. 61976015, 61976016, 61876198, and 61370130).", "Liang is supported by the 2021 Tencent Rhino-Bird Research Elite Training Program.", "The authors would like to thank the anonymous reviewers for their valuable comments and suggestions to improve this paper." ]
[ "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "objective", "method", "objective", "other", "abstain", "result", "abstain", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "other", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "objective", "other", "other", "other", "method", "objective", "result", "abstain", "other", "other", "other" ]
[ "Ronald Cardenas (Institute of Formal and Applied Linguistics, Charles University in Prague; ronald.cardenas@matfyz.cz), Ying Lin and Heng Ji (Computer Science Department, Rensselaer Polytechnic Institute; liny9@rpi.edu, jih@rpi.edu), Jonathan May (Information Sciences Institute, University of Southern California; jonmay@isi.edu)", "Abstract", "Unsupervised part-of-speech (POS) tagging is often framed as a clustering problem, but practical taggers need to ground their clusters as well.", "Grounding generally requires reference labeled data, a luxury a low-resource language might not have.", "In this work, we describe an approach for low-resource unsupervised POS tagging that yields fully grounded output and requires no labeled training data.", "We find the classic method of Brown et al. (1992) clusters well in our use case and employ a decipherment-based approach to grounding.", "This approach presumes a sequence of cluster IDs is a 'ciphertext' and seeks a POS tag-to-cluster ID mapping that will reveal the POS sequence.", "We show intrinsically that, despite the difficulty of the task, we obtain reasonable performance across a variety of languages.", "We also show extrinsically that incorporating our POS tagger into a name tagger leads to state-of-the-art tagging performance in Sinhalese and Kinyarwanda, two languages with nearly no labeled POS data available.", "We further demonstrate our tagger's utility by incorporating it into a true 'zero-resource' variant of the MALOPA (Ammar et al., 2016) dependency parser model, removing the current reliance on multilingual resources and gold POS tags for new languages.", "Experiments show that including our tagger makes up much of the accuracy lost when gold POS tags are unavailable.", "While cellular, satellite, and hardware advances have ensured that sophisticated NLP technology can reach all corners of the earth, the language barrier upon reaching remote locales still remains.", "As an example, when international aid organizations respond to new disasters, they are often unable to deploy technology to understand local reports detailing specific events (Munro and Manning, 2012; Lewis et al., 2011).", "An inability to communicate with partner governments or civilian populations in a timely manner leads to preventable casualties.", "[Figure 1: Overview of our approach to grounded POS tagging.]", "The lack of adequate labeled training data has been the major obstacle to expanding NLP's outreach more multilingually.", "Developments in unsupervised techniques that require only monolingual corpora (Lample et al., 2018a; Artetxe et al., 2018) and the ability to leverage labeled resources in other languages have been proposed to address this issue (Das and Petrov, 2011; Duong et al., 2014; Ammar et al., 2016).", "Unfortunately, these methods either do not work in practice in true low-resource cases or unrealistically assume the availability of some amount of supervision.", "Consider syntactic parsing as a prime example.", "Past editions of the CoNLL Shared Task on Multilingual Parsing (Zeman et al., 2017, 2018) featured a category of target languages for which either little or no training data was provided.", "However, even in the 'no-resource' scenario that most closely matches our use case, gold part-of-speech (POS) tags for test data were provided for the participants to use.",
(2016) proposed a variant of their main model, MALOPA , that was meant to produce reasonable parses for languages under zero-resource conditions.", "In order to function, however, the model requires users to provide gold POS tags and word mappings from these languages into a common semantic space, using approaches that require parallel data (Guo et al., 2015).", "Indeed, the compulsion to use POS tag-labeled data in zero-resource circumstances extends to the vast, varied lines of research in unsupervised POS tagging itself!", "Every approach explored so far ultimately requires POS-annotated resources for the language being studied in order to produce a fi-nal, grounded output.", "Even the most conservative strategies (Goldwater and Griffiths, 2007; Berg-Kirkpatrick et al., 2010; Stratos et al., 2016) that do not require any supervised signal during training still ultimately produce only ungrounded clusters, and require a reference annotated corpus to map the inferred clusters or states to actual POS tags.", "Making matters worse, evaluation is generally offered in terms of the many-to-one' or one-to-one' analyses Johnson (2007).", "These metrics use a reference corpus to determine the optimal mapping of clusters to tags.", "While this evaluation approach is intuitively sensible for measuring cluster purity, to actually use such an output, an entire annotated training corpus is required.", "1 It is not enough to simply rely on ungrounded clusters in real-world systems; grounded labels offer a sort of universal API between other resources such as rule-based modules that operate on certain word types or between resources built from other annotated high-resource language data.", "new languages are often unavailable or unreliable, we make the following contributions to ensure the surprise of a new language does not immobilize us: We introduce a decipherment-based approach to POS grounding, which yields fully grounded output and does not require any annotated data or parallel corpora in the language to be analyzed.", "The approach uses preexisting human-labeled POS tag sequences from high-resource parent languages (PL) but no labeled data or sequences for the target, or child language (CL).", "An overview of the approach is shown in Figure 1.", "We demonstrate our approach by evaluating over a variety of languages spanning 4 families and 8 genera (Germanic, Romance, Slavic, Japanese, Semitic, Iranian, Indic, and Bantoid), and show across-the-board reasonable intrinsic performance, given the difficulty of the task and the stringency (straight-forward accuracy) in comparison to other unsupervised evaluation strategies.", "We test the utility of our grounded tags in a name tagging task, obtaining state-of-the-art performance for Sinhalese and Kiryarwanda, two languages with nearly no labeled POS or named entity resources.", "We further pare down the annotated resources required in an existing zero-resource' dependency parser model and show that our unsupervised and grounded tags are helpful at closing the gap between a nihilistic tag-free setting and an unrealistic gold tag setting.", "We release our code so that others may create zero-resource syntactic analysis and information extraction systems at the onset of the next new emergency.", "2 2 POS Grounding as Decipherment We consider the task of POS induction as a two-step pipeline: from word sequence w to POS tag sequence p via cluster sequence c .", "Formally, our conditional probability model is 2 https://github.com/isi-nlp/ 
universal-cipher-pos-tagging.git argmax p P ( p | w ) = argmax p (cid:88) c C | w | P ( p, c | w ) = argmax p (cid:88) c C | w | P ( p | c, w ) P ( c | w ) where C is the cluster vocabulary and parameterizes our probability model.", "If we assume a deterministic pipelined clustering of words and a tag labeling model that does not depend on words, then for chosen c , this becomes argmax p (cid:88) c C | w | P ( p | c, w ) P ( c | w ) = argmax p P ( p | c ) = argmax p P ( c | p ) P ( p ) (1) We call this model the cipher grounder .", "As presented it requires an estimate for P ( p ) for the CL, which requires POS training data.", "Under the zero-resource scenario, we instead approximate P ( p ) by the tag distribution of a PL.", "Then, the cipher table P ( c | p ) can be trained using a noisy-channel, expectation-maximization (EM)-based approach as in Ravi and Knight (2011).", "We approach the search for optimal components in the two-step pipeline outlined in Section 2 in a cascaded manner.", "First, an optimal word clustering is determined by means of the many-to-one evaluation method.", "This method is explained well by Johnson (2007): ...deterministically map each hidden state to the POS tag it co-occurs most frequently with, and return the proportion of the resulting POS tags that are the same as the POS tags of the gold-standard corpus.", "While unrealistic for POS tagger performance purposes, many-to-one is a good choice for determining cluster purity' and provides a reasonable grounding upper bound.", "As the calculation of many-to-one does require labeled data, we constrain the use of these labels for development and will evaluate extrinsically using languages for which we do not have any training data; see Section 5.2.", "Secondly, we search for the best approach to ground the chosen clusters, given several possible PL options.", "After the optimal components and parameters are determined, we validate POS tag quality intrinsically via tag accuracy on reference data where it exists, and then extrinsically on two downstream tasks.", "We investigate a simulated no-resource scenarios in the task of dependency parsing, and a real low-resource scenario in name tagging.", "For intrinsic evaluation and optimization of the tagging pipeline, including all preliminary experiments, we use annotated corpora from Universal Dependencies (UD) v2.2 3 for the following languages: English (en), German (de), French (fr), Italian (it), Spanish (es), Japanese (ja), Czech (cs), Russian (ru), Arabic (ar), and Farsi (fa).", "For Swahili (sw), we use the Helsinki Corpus of Swahili 2.0.", "4 Overall in these experiments we cover 11 languages and 4 language families.", "In our dependency parsing experiments, we use the Universal Treebank v2.0 (McDonald et al., 2013) for en, de, fr, es, it, Portuguese (pt), and Swedish (sv).", "This set of treebanks is chosen instead of UD in order to obtain results comparable to those of previous work on simulated zero-resource parsing scenarios (Ammar et al., 2016; Zhang and Barzilay, 2015; Rasooli and Collins, 2015).", "In our name tagging experiments, we use monolingual texts for Sinhalese (si) and Kinyarwanda (rw) provided by DARPA's Low Resource Languages for Emergent Incidents (LORELEI) Program during the 2018 Low Resource Human Languages Technologies (LoReHLT) evaluation.", "In this step we compare two approaches to unsupervised ungrounded labeling.", "The first strategy is to cluster by word types and thus label each token with its cluster ID independently of its 
context.", "5 We consider Brown's hierarchical clustering algo-3 http://universaldependencies.org/ 4 http://urn.fi/urn:nbn:fi: lb-2016011301 5 We refer to ungrounded POS tag labels as clusters' even though not all methods induce a clustering.", "rithm, (Brown et al., 1992) 6 BROWN ; Brown's exchange algorithm, 7 (Martin et al., 1998) MARLIN ; and k-means clustering of monolingual word embeddings of dimension size 100, trained using fastText (Joulin et al., 2016), E-KMEANS .", "The second labeling strategy is context-sensitive; it uses the Bayesian HMM tagger proposed by Stratos et al. (2016), which we call A-HMM .", "As noted previously, we evaluate unsupervised labeling extrinsically, via the many-to-one approach, and use the best performing labeling in the complete two-step grounded tagging pipeline.", "In preliminary experiments, we vary the number of clusters and hidden states ( | C | ) between 17 and 500.", "We initially sought to create one cluster per UD POS tag and then choose the proper 1:1 assignment of cluster to tag, following the approach of Stratos et al. (2016).", "However, cluster purity is low when only 17 clusters are allowed (i.e. each cluster has words with a variety of POS tags).", "Naturally, as the number of clusters is raised, the purity of each cluster improves.", "We ultimately fix the cluster limit at 500, which gives a good tradeoff between overall cluster quality for all the ungrounded tagging methods, and size small enough to allow EM-based decipherment to be tractable.", "Given this setting, we evaluate our four labeling strategies using the many-to-one approach, as presented in Table 1.", "Due to the larger number of clusters, the results presented here are higher than and not comparable to the original literature describing the methods.", "8 We can, nevertheless, make relative judgements.", "In all cases, clustering by type with Brown-based algorithms works better than using a sophisticated tagger such as A-HMM .", "Since BROWN and MARLIN obtain similar results, with no consistently dominant model, in all subsequent experiments we use the BROWN labeler with 500 clusters.", "We now seek an appropriate method for grounding the clusters generated in Section 3.2.", "We experi-6 https://github.com/percyliang/ brown-cluster 7 Optimized and implemented by M uller and Schuetze (2015).", "Available at http://cistern.cis.lmu.de/ marlin/ 8 As noted by Clark (2003) and Johnson (2007), in the limit, keeping each type (or, in the case of A-HMM , TOKEN in its own cluster will result in the maximum possible many-to-one (polysemic types prevent perfect accuracy when type clustering).", "ment with en, fr, fa, and sw as CLs.", "For each CL t , we instantiate our model following Equation 1, using the Carmel toolkit (Graehl, 1997) and forming the cipher table as a one-state transducer.", "We train these models using EM for 500 iterations or until convergence, and we select the model with the lowest perplexity from among 70 random restarts.", "Yet unspecified is the nature of the POS language model P ( p ) .", "We begin by training bigram models of POS tag sequences with additive smoothing using the SRILM toolkit (Stolcke, 2002) for each PL s S = { en, de, fr, it, es, ja, ar, cs, ru, sw } .", "But which PL's POS tag data to use for each CL?", "We explore two initial criteria for choosing a single suitable PL s : confidence of the model during decoding (perplexity, PPL), and typological similarity.", "For the first criterion, the PL whose cipher grounder s t yields the better 
performance is chosen.", "For the second criterion, the most similar language to CL t is chosen according to the cosine similarity between typological features vectors.", "We employ 102 features obtained from WALS 9 related to word order and morphosyntactic alignment, further reduced to 50 dimensions using PCA.", "However, none these criteria correlates significantly to tagging accuracy, as we elaborate in Section 5.1.", "We instead try a combined approach.", "The likelihood of cluster ID replacement, P ( c i | p j ) , c i C, p j in the tagset, is replaced by P avg ( c i | p j ) (cid:80) s S ,s (cid:54) = t P ( c i | p sj ) | S | 1 where P ( c i | p sj ) is the likelihood of POS tag p j being represented by cluster c i after training with the language s tag distribution.", "Note that the CL is excluded from S for the combination.", "The combined cipher grounder is then defined by argmax p P all ( p ) P avg ( c | p ) (2) where P all ( p ) is a language model trained over the concatenation of POS sequences of all parent languages in S .", "We call this approach CIPHER-AVG .", "We experiment with the LSTM-CNN model proposed by Chiu and Nichols (2016), one of the", "9 https://wals.info/", "state-of-the-art name tagging models, as our baseline model.", "To incorporate POS features, we extend the token representation (word and character embeddings) with a one-hot vector representation of the POS tag.", "Figure 2 presents an outline of the architecture.", "We base our experiments on the no-treebank setup of MALOPA (Ammar et al., 2016), but change the underlying transition-based parser to the graph-based parser proposed by Dozat and Manning (2017) for implementation convenience.", "Following this setup, for each CL except en, we train the parser on the concatenation of treebanks of the other 6 languages as PLs.", "The original MALOPA work enriches the input representation by concatenating pretrained multilingual word embeddings (Guo et al., 2016), multilingual Brown cluster IDs, and POS tag information.", "However, these representations are obtained using parallel corpora and gold POS tags are required for parsing at test time.", "In contrast, we are interested in the realistic scenario in which no resource is available in the child language but raw text.", "It is important to note, however, that our objective is not to beat the state-of-the-art on this benchmark but to investigate parsing performance fluctuation when cross-lingual components (gold POS annotations and supervised multilingual embeddings) are replaced by those obtained in an unsupervised manner.", "We investigate the following variations to each component of the input representation.", "Multilingual word and cluster embeddings.", "The original work of Ammar et al. (2016) uses robustly projected' pre-trained embeddings (Guo et al., 2015) for word embeddings and embeddings learned from English Brown cluster IDs projected through word alignments (Guo et al., 2016) for cluster embeddings; both of these rely on parallel data and we refer to them collectively as GUO .", "We replace these with monolingual fastText embeddings (Bojanowski et al., 2017) projected to a common space using MUSE , the unsupervised method of Lample et al. (2018b).", "For cluster embeddings we start with fastText monolingual embeddings trained over Brown cluster ID sequences instead of word tokens ( | C | = 256 , the same as in Guo et al. 
(2016)).", "Then, unsupervised multilingual embeddings are derived, again using MUSE .", "10 Note that this approach, which we refer to collectively as MUSE , requires no parallel data.", "We compare both MUSE and GUO approaches in Section 5.2 and Table 5.", "POS tag scheme.", "The original work uses gold POS tag data at both train and test time.", "While realistic to have gold POS info from PLs for training, it is unrealistic to have this data available for new CLs at test time.", "We thus compare the original GOLD scenario with the realistic CIPHER scenario, where the training data is still gold, but the test POS tags use the method presented in this work.", "Another realistic scenario dispenses 10 Both cluster and word MUSE embeddings are projected to the corresponding English space.", "with POS disambiguation except for the trivial distinction of punctuation; for compatibility purposes this is done in both train and test data and is labeled NONE .", "We investigate all combinations of { GUO , MUSE } { GOLD , CIPHER , NONE } .", "The results in Table 1 are somewhat at odds with those presented in Stratos et al. (2016), but these are done at different operating points; we use different data, the UD-17 tag set instead of the Universal Treebank 12 tag set, and, perhaps most importantly, generate more clusters.", "We further note that to some degree, choosing Brown clusters based on the results in Table 1 compromises claims of our approach being fully unsupervised' for those six languages, however our subsequent experiments on additional languages are truly unsupervised.", "Table 2 presents the intrinsic performance of the cipher grounder over all PL-CL pairs considered.", "The difference between the best and the worst performing PL for each CL ranges from 24.62 percentage points for Swahili to 48.34 points for French, and an average difference of 34.5 points among all languages.", "The case when PL = CL is also presented in Table 2 as a reference and provides a reliable upper-bound under zero-resource conditions.", "It is worth noting the difference in accuracy when comparing the best performing PL for each CL with its corresponding PL = CL upper-bound.", "Among all CLs, the best cipher grounder for French (es-fr) gets the closest to its upper-bound with just 4.81 percentage points of difference, followed by the English grounder (fr-en) with 13.53 points of difference.", "On the other hand, the best Swahili grounder (ar-sw) is the most distant from its upper-bound with 30.45 points of difference.", "Given such wide performance gaps in the CL set, the choice of a suitable PL becomes crucial for performance; therein the cipher model confidence and typological similarity are explored as possible choice criteria.", "With regards to model confidence, the Pearson correlation between accuracy scores and PPL, expected to be negative, ranges from 0 .", "71 for English to 0.40 for Farsi.", "Since the PPL values for different PLs are not comparable, we first z-normalize PPL per CL and then concatenate the results for all CLs.", "The Pearson correlation of the resulting PPL-accuracy values is -0.13.", "This last result indicates that the most confident model might not be the most accurate, hence this criterion is not suitable for choosing a suitable PL.", "With regards to typological similarity, we find that the Pearson correlation between accuracy scores and cosine similarity of typological feature vectors, expected to be positive, ranges from 0.44 for English to -0.14 for Farsi.", "The total 
correlation is found to be 0.18.", "Again, we find that the most typologically similar s might not be the the most accurate, hence this criterion is not suitable either.", "Hence, it becomes obvious that choosing a single PL is an inefficient strategy that does not leverage the contribution that other PLs could bring.", "In this situation, the combination of cipher grounders for several PLs represents a sound strategy when no prior linguistic information of a certain CL is available.", "As shown in Table 2, this model, CIPHER-AVG , obtains accuracy scores of 56.4, 58.6, 37.4, and 37.8 % for en, fr, fa, and sw, respectively.", "When compared to the best performing PL for each CL (see bold cells in Table 2), it can be noticed that the performance gap ranges from just 1.2 percentage points for Swahili to 13.3 points for French, with an average of 6.1 points among all target languages.", "Let us now compare the performance of CIPHER-AVG with that of a vanilla supervised neural model.", "11 Table 3 shows precision, recall, and F1 scores for the NOUN tag.", "Even though CIPHER-AVG achieved mixed results (mid to low accuracy), the model robustly achieves mid-range performance according to F1-score for all CLs.", "The results are even more optimistic in terms of recall for English and French, and in terms of precision for Farsi and Swahili.", "This gives us hope that CIPHER-AVG can provide a useful, if noisy, signal to downstream tasks that depend on nontrivial performance over specific POS tags, such as name tagging, as exposed in the next section.", "In the name tagging task, our LSTM-CNN baseline obtains 78 .", "76% and 70 .", "76% F1 score for Kinyarwanda and Sinhalese, respectively.", "When enriching the input representation with CIPHERAVG tags, the performance goes up to 80 .", "16% and 71 .", "71% respectively.", "These results suggest that the signal provided by the combined cipher grounder is significant enough for relevant tags such as common, proper nouns and noun modifiers.", "As an example, consider the sentence Kwizera Peace Ndaruhutse , wari wambaye nomero 11.", "The baseline model fails to recognize Kwizera Peace Ndaruhutse as a person name.", "In contrast, with the PROPN tag assigned by CIPHER-AVG to Kwizera , Peace , and Ndaruhutse , our model is able to identify this name.", "Likewise, the utility of CIPHER-AVG tags for dependency parsing under zero-resource scenarios is summarized in Table 4 and Table 5.", "It is important to point out that, even though the MALOPA setup follows the no-treebank setup of Ammar et al. (2016), parsing scores in the first row of Table 4 differ from those reported by them (Table 8 in Ammar et al. 
(2016)).", "Such difference is to be expected since the underlying parser used in our experiments is a graph-based neural parser (Dozat and Manning, 2017) instead of a transition-based one (Dyer et al., 2015).", "12 As mentioned earlier, our objective is to analyze the effect of our tagger's signal on parsing performance under no-resource scenarios, instead of pushing the state-of-the-art for the task.", "We first analyze the effect of POS tag information at test time for the MALOPA setup in Table 4.", "First we remove all POS signal except trivial punctuation information ( NONE row), and, predictably, the scores drop significantly across all target languages.", "Then, we use our cipher tags ( CIPHER row) and see improvements for all languages in LAS and for all but one language in UAS (de).", "This demonstrates the value of our cipher approach.", "We then take the next logical step and remove the parallel data-grounded embeddings, replacing them with fully unsupervised MUSE embeddings.", "Table 5 summarizes these results.", "Let us compare MUSE-NONE setup (no POS signal at train or test time) with MUSE-GOLD (gold POS signal at train and test time).", "It can be observed that POS signal improves performance greatly for all languages when using MUSE embeddings.", "However, consider GUO-GOLD and MUSE-NONE .", "Here we note a mixed result: whilst de, sv, and it do benefit from POS information, the other languages do not, obtaining great improvements from MUSE embed-12 Due to time constraints, we could not experiment with longer training regimes possibly needed given the high block dropout rates in Dozat and Manning (2017).", "dings instead.", "Finally, consider MUSE-CIPHER (gold POS tags during training, cipher tags during testing).", "When compared to MUSE-NONE setup, it can be observed that, unfortunately, the heuristic POS tagger is too noisy and gets in MUSE 's way.", "Our proposed tagging pipeline can be interpreted as first reducing the vocabulary size to a fixed number of clusters, and then finding a cluster POS tag mapping table that best explains the data without any path constraint (a cluster ID could be mapped to any POS tag).", "In this sense, our approach applies EM to simplify the task (e.g. when using Brown clustering (Brown et al., 1992)), followed by another EM run to optimize cipher table parameters.", "Under this lens, the methods closest to our approach are those which attempt to reduce or constrain the parameter search space prior to running EM.", "For instance, Ravi and Knight (2009) explicitly search for the smallest model that explains the data using Integer Programming, and then use EM to set parameter values.", "In a different approach, Goldberg et al. 
(2008) obtain competitive performance with a classic HMM model by initializing the emission probability distribution with a mixture of language-specific, linguistically constrained distributions.", "However, both of these approaches are framed around the task of unsupervised POS disambiguation with a full dictionary (Merialdo, 1994).", "Previous work relaxes the full dictionary constraint by leveraging monolingual lexicons (Haghighi and Klein, 2006; Smith and Eisner, 2005; Merialdo, 1994; Ravi and Knight, 2009), multilingual tagged dictionaries (Li et al., 2012; Fang and Cohn, 2017), and parallel corpora (Duong et al., 2014; Tackstrom et al., 2013; Das and Petrov, 2011).", "In addition, previous work includes sequence models that do not rely on any resource besides raw text during training, namely unsupervised POS induction models.", "These models are based, with few exceptions, on extensions to the standard HMM; most, in the form of appropriate priors over the HMM multinomial parameters (Goldwater and Griffiths, 2007; Johnson, 2007; Ganchev et al., 2009); others, by using logistic distributions instead of multinomial ones (Berg-Kirkpatrick et al., 2010; Stratos et al., 2016).", "However, these models still need to ground or map hidden states to actual POS tags to evaluate, and they inevitably resort to many-to-one or one-to-one accuracy scoring.", "Some previous work has been cautious in pointing out this ill-defined setting (Ravi and Knight, 2009; Christodoulopoulos et al., 2010), and we argue its inappropriateness for scenarios in which the test set is extremely small or even when no annotated reference corpus exists.", "Therefore, the problem of grounding the sequence of states or cluster IDs to POS tags without using any linguistic resource remains unsolved.", "We formulate this task as a decipherment problem.", "Decipherment aims to find a substitution table between alphabets or tokens of an encrypted code and a known language without the need of parallel corpora.", "The task has been successfully applied in alphabet mapping for lost languages (Snyder et al., 2010), and machine translation at the character (Pourdamghani and Knight, 2017) and token level (Ravi and Knight, 2011; Dou et al., 2015).", "For the task of POS tag grounding, the sequence of states or cluster IDs is modeled as an encrypted code to be deciphered back to a POS sequence.", "Furthermore, we tackle the problem from a uni-versal' perspective by allowing the cipher learn from POS sequences from a varied pool of languages.", "Other recent work has declared a radically uni-versal' mantra to language inclusivity.", "Herm-jakob et al. (2018) presents a Romanizer that covers all writing systems known to Unicode.", "Pan et al. (2017) extends name tagging and linking capability to hundreds of languages by leveraging Wikipedia.", "Kirov et al. 
(2016) has semiautomatically built inflectional paradigms for hundreds of languages.", "We present a POS tag grounding strategy based on decipherment that does not require human-labeled data to map states or clusters to actual POS tags and thus can be used in real-world situations requiring grounded POS tags.", "The decipherment model considers state or word cluster IDs of a CL as a cipher text to be deciphered back to a POS sequence.", "The model operates on top of Brown cluster IDs and requires a POS language model trained on annotated corpora of one or more PLs.", "Experimental results over a large and linguistically varied set of PLs show that the choice of which PL to decipher de fr es it pt sv Test Tags UAS LAS UAS LAS UAS LAS UAS LAS UAS LAS UAS LASGOLD 65.57 52.37 71.27 59.80 73.26 63.13 71.46 59.66 63.28 54.93 77.50 64.90 NONE 40.90 18.61 51.14 30.91 43.82 17.67 48.22 33.29 37.89 16.72 38.15 17.96 CIPHER (this work) 38.31 24.72 54.46 41.04 55.56 41.16 54.05 39.78 46.97 36.07 55.06 36.51 Table 4: Impact of grounded unsupervised POS tagging on MALOPA 's zero-resource' condition.", "POS tags from is crucial for performance.", "We explore model confidence, as measured by perplexity and typological similarities, as intuitive criteria for PL choice.", "However, both criteria prove to be not correlated with tagging accuracy scores.", "Thus, we propose a cipher model combination strategy in order to leverage the word-order patterns in several PLs, at the cost of an accuracy drop ranging from just 1.15 percentage points to 13.33 points.", "The resulting combined grounder is completely language agnostic, making it attractive for the analysis of languages new to the academic community.", "Furthermore, analysis over the tasks of name tagging and dependency parsing demonstrate that the tags induced by the combined grounder provide a non-trivial signal for improvement of the downstream task.", "We obtain state-of-the-art results for name tagging in Kinyarwanda and Sinhalese, languages for which POS annotated corpora is nearly non-existent.", "Thanks to Xusen Yin, Nima Pourdamghani, Thamme Gowda, and Nanyun Peng for fruitful discussions.", "This work was sponsored by DARPA LORELEI (HR0011-15-C-0115)." ]
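Once the per-parent cipher tables P(c|p) are trained, the combined grounder (Equations 1 and 2 above) reduces to an argmax over POS tags. Below is a minimal Python sketch of CIPHER-AVG under stated assumptions: the tables are random placeholders rather than EM/Carmel-trained ones, the tagset and cluster count are toy-sized, and a unigram prior stands in for the paper's bigram POS language model (a bigram LM would require Viterbi decoding instead of a per-token argmax).

```python
import numpy as np

# Hedged sketch of the averaged cipher grounder; all names and numbers here
# are illustrative assumptions, not the paper's actual configuration.
rng = np.random.default_rng(0)

TAGS = ["NOUN", "VERB", "ADJ", "PUNCT"]    # toy tagset; the paper uses UD-17
NUM_CLUSTERS = 6                            # |C|; the paper fixes 500
PARENTS = ["en", "de", "fr"]                # parent languages S (CL excluded)

# One cipher table P(c | p) per parent: rows = tags, columns = cluster IDs.
tables = {s: rng.dirichlet(np.ones(NUM_CLUSTERS), size=len(TAGS)) for s in PARENTS}

# P_avg(c | p): elementwise average over the parent tables (CIPHER-AVG).
p_avg = sum(tables.values()) / len(PARENTS)

# Stand-in unigram prior for P_all(p).
p_all = np.full(len(TAGS), 1.0 / len(TAGS))

def ground(cluster_ids):
    """Decode argmax_p P_all(p) * P_avg(c | p) independently per position."""
    return [TAGS[int(np.argmax(p_all * p_avg[:, c]))] for c in cluster_ids]

print(ground([0, 3, 5, 1]))  # four grounded POS tags for four cluster IDs
```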
[ "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "result", "result", "objective", "method", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "method", "abstain", "objective", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "other", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "result", "method", "other", "other", "abstain", "method", "method", "abstain", "method", "abstain", "objective", "abstain", "abstain", "result", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "objective", "method", "method", "other", "other", "other", "other", "other", "other", "other", "method", "other", "method", "other", "other", "other", "method", "other", "other", "other", "other", "method", "abstain", "abstain", "result", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "result", "other", "other" ]
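The many-to-one evaluation quoted from Johnson (2007) in the text above is simple to state in code: map each cluster to the gold POS tag it co-occurs with most frequently, then score the induced tagging against the gold tags. A short sketch on invented toy data:

```python
from collections import Counter, defaultdict

# Toy cluster IDs and gold tags; real evaluations use full treebanks.
clusters = [0, 1, 0, 2, 1, 0]
gold = ["NOUN", "VERB", "NOUN", "ADJ", "NOUN", "NOUN"]

cooc = defaultdict(Counter)
for c, t in zip(clusters, gold):
    cooc[c][t] += 1

# Each cluster maps to its most frequent co-occurring gold tag.
mapping = {c: counts.most_common(1)[0][0] for c, counts in cooc.items()}
accuracy = sum(mapping[c] == t for c, t in zip(clusters, gold)) / len(gold)
print(mapping, accuracy)  # {0: 'NOUN', 1: 'VERB', 2: 'ADJ'} and 5/6 here
```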
[ "Human innovation in language, such as inventing new words, is a challenge for pretrained language models.", "We assess the ability of one large model, GPT-3, to process new words and decide on their meaning.", "We create a set of nonce words and prompt GPT-3 to generate their dictionary definitions.", "We find GPT-3 produces plausible definitions that align with human judgments.", "Moreover, GPT-3's definitions are sometimes preferred to those invented by humans, signaling its intriguing ability not just to adapt, but to add to the evolving vocabulary of the English language.", "Humans are constantly expanding languages with new words.", "How are artificial language models, which are increasingly deployed 'in the wild', to handle the stream of neologisms that are appearing in slang or on social media (Grieve et al., 2018)?", "Today's most advanced language models, including GPT-3 (Brown et al., 2020), use a subword tokenization of input text, rather than consuming it word by word.", "This allows them to process words never seen in their training data.", "For example, the word 'perdetry', which has never been used in English, is treated by GPT-3 as a sequence of two tokens (Fig. 1).", "The subword tokenization algorithm is designed for text compression and does not respect the natural morpheme boundaries.", "We explore GPT-3's understanding of English at the subword level by prompting it to give definitions of nonce words (Fig. 1).", "We use the term 'nonce word' for a new word not used in English.", "Figure 1 examples: per|detry (n.) an instance of inventing words, esp. as a hobby; har|bole|mic (adj.) tending to babble, talking nonsense; sh|out|ze (v.) to laugh through half-open teeth.", "We find in human studies that not only does GPT-3 generate realistic, original meanings for new words, but its definitions are sometimes preferred to those invented by humans.", "This finding sheds light on GPT-3's ability to adapt to and even extend a changing vocabulary.", "While we cannot ascertain GPT-3's exact reasons for assigning meanings to nonce words, our results prove that these reasons are not limited to morphology: many neologisms have no clear roots or derivational origin.", "The meanings of words may be imported by their phonological qualities (more precisely, their orthographic realizations) or by clues to their membership in certain lexical strata.", "Thus, at a high level, our findings suggest that GPT-3 has learned not only its world knowledge and capacity for long-range reasoning in text (Brown et al., 2020), but also the nuances of etymology and the correspondences of sound and meaning that lie at the very base of language understanding.", "Instructions shown to study participants: Below are some pairs of words together with their definitions.", "The goal is to guess, for each pair, which word goes with which definition.", "We will show you two options, and you will decide which of them is a better match.", "The words you'll get are rare, and we do not expect you to know many, or indeed any, of them.", "Make your best guess.", "For some pairs, there is no correct answer.", "We'll show you the expected answers at the end.", "Do not look up the words while doing the task: we are really interested in your gut feeling, right or wrong.", "In his seminal work, de Saussure (1916) rejected this notion.", "Yet, later work identified a large set of English phonesthemes, such as the cluster /gl/ in 'glow', 'glitter', 'gloss', etc., meaning light; a notable list was compiled by Marchand (1959a,b).", "Recent studies found phonosemantic patterns that are common to many languages (Blasi et al., 2016).", "In practice, words are even engineered for subconscious reactions: certain sounds in brand names are correlated with associations such as size (of a gadget) or speed (of a courier) (Klink, 2000).", "Our study suggests that GPT-3 may understand such patterns as well.", "There is a body of work on joint modeling of (orthographic or phonological) word forms and grammatical classes such as noun gender and inflection pattern.", "In a recent study, Williams et al. (2020) used neural models to measure mutual information between meanings and inflection classes of Czech and German nouns, which, for borrowed words, often depend on the language of origin.", "It is plausible that GPT-3 implicitly uses likely source languages of nonce words to generate meanings associated with some lexical strata, e.g., abstract nouns from Norman French, concrete nouns from the Germanic substrate, and artificially constructed terms with Greek or Latinate elements.", "(We direct the interested reader to the lexicon in Appendix C.)", "Other lines of work include mapping word forms into embedding spaces (Bojanowski et al., 2017; Zalmout et al., 2019; Ryskina et al., 2020), and codifying and predicting etymologies (Melo, 2014; Wu and Yarowsky, 2020).", "Others have studied definition generation (Noraset et al., 2017) and the reverse task of mapping definitions to words (Hill et al., 2015), albeit with pretrained embeddings.", "Limited examples of a pretrained model's use of nonce words appear in Brown et al. (2020).", "In this work, we study GPT-3's ability to define words never seen in context.", "We trained an LSTM model (Hochreiter and Schmidhuber, 1997) on a corpus of English words (github.com/dwyl/english-words) with a standard character-level objective, then sampled strings from the LSTM to create nonce words.", "The words were lemmatized and assigned parts of speech (POS): noun (n.), verb (v.), or adjective (adj.).", "To produce definitions for these words, we generated text from GPT-3, primed with input in the format 'word (POS.)'.", "Usually, GPT-3's outputs had the style of a dictionary definition (Fig. 1).", "The definitions were filtered by common-sense criteria and lightly edited for consistency, as explained in Appendix A. By this procedure, we obtained 146 word-definition pairs (67 n., 47 v., 32 adj.).", "For comparison in our study, we also sampled a set of real but rare English words from a corpus.", "Definitions for these words were taken from a dictionary (en.wiktionary.org).", "This resulted in a combined set of 220 words (102 n., 70 v., 48 adj.), with a 2:1 ratio of fake to rare words in each POS.", "See Appendices A and C for the full lexicon and generation details, including all points of human input.", "We performed a study in which human subjects were presented with pairs of words of the same part of speech together with their definitions (generated by GPT-3, for fake words, or extracted from the dictionary, for rare words), but not told which definition matches with which word.", "Some questions contained two fake words, some two rare words, and some one fake and one rare word.", "Users were asked to decide which assignment of definitions to words is a better fit and to rate their confidence (Fig. 2); the choices were converted to a scale of 0 (confident in the incorrect match) to 5 (confident in the correct match).", "Each user received a random pairing of words, but saw each word exactly once.", "We collected 65 sets of annotations for each POS, for a total of 65 × 220/2 = 7150 data points.", "Results.", "Humans prefer the pairing from our lexicon in 68% of cases.", "The scores by the POS and the kind of pair (fake-fake, fake-rare, or rare-rare) are shown in the top rows of Table 1.", "GPT-3's definitions align with human judgments far better than random choice (p-values below floating-point epsilon).", "Notably, humans' performance on pairs containing a fake and a rare word was about the same as on pairs of fake words.", "Correlation in performance between different parts of the word-definition matching task is high.", "Users were not told that some of the definitions were machine-generated.", "Considering only the fake-fake pairs, the score (number of correctly matched pairs) on the noun portion of the task is correlated with the score on verb and adjective pairs with Spearman correlation 0.42; a permutation test on rank correlation gives p of about 0.01.", "The verb and adjective portions are similarly predictive of the other two (p of about 0.05 for both).", "The correlation is even stronger (p < 0.0001 for nouns) when all pairs, not just fake-fake, are considered.", "This indicates that some users can be identified as 'better' at the task, perhaps due to their personal vocabulary, education, or effort.", "(For example, the average score on the fake-fake noun pairs is 70.7%. However, the average score on fake-fake noun pairs among users who scored above median on the fake-fake adjective pairs is 74.2%.)", "This is strong evidence that the values in Table 1 would be higher with a better selection of users.", "There was significant agreement between annotators.", "In cases when the same pair of words was shown to two users, the mean difference between the two users' choices on the 0-5 scale was 1.5, and in 61% of cases the two users preferred the same assignment.", "Remarkably, the latter number is the same for rare-rare, rare-fake, and fake-fake pairs.", "It is possible that the subjects knew some of the rare words, and the tables in Appendix C do suggest this.", "However, assuming that a subject will choose the correct match if they know the meaning of at least one word in a pair, and will do no worse than random guessing on pairs where they know neither word, the last 'human' row is consistent with less than a quarter of the rare words, on average, being known to the subjects.", "Likelihood analysis.", "For each word w and definition d in the lexicon (where d may be the definition of a word different from w), we compute the likelihood under GPT-3 of the definition d to follow word w, p(d|w).", "For each pair of words (w1, w2) of the same POS, with definitions (d1, d2), we compute the difference in log-likelihood between the proper match (w1-d1, w2-d2) and the inverted assignment (w1-d2, w2-d1): LLD(w1, w2) = log [ p(d2|w1) p(d1|w2) / ( p(d1|w1) p(d2|w2) ) ].", "If GPT-3 were to perform the matching task done by our human subjects, it would choose the option with higher total likelihood.", "In other words, it would prefer the correct pairing if LLD(w1, w2) is negative and the inverted pairing if it is positive.", "One would expect GPT-3 to prefer the correct matches for fake-fake pairs, since the definitions of fake words were sampled from the same model of likelihood.", "Indeed, we see this in the bottom rows of Table 1.", "GPT-3's imperfect performance on fake-fake pairs is a byproduct of the sampling used in the generation and perhaps of the edits made in postprocessing.", "To maximize total likelihood of the lexicon, GPT-3 would prefer to enact some post-factum swaps of definitions.", "LLD and human confidence.", "LLD is a good predictor of human judgments: confidence in the correct pairing for fake-fake pairs (w1, w2) is strongly correlated with LLD(w1, w2), a rank correlation test giving p < 0.001 for all POS.", "One may object that this correlation and indeed much of humans' performance is due to the presence of simple disambiguating markers: for example, a word with suffix '-ist' is likely to denote a person, while an '-ism' is probably an abstract noun.", "However, examination of log-likelihood differences shows that this is not the case.", "We stratify the pairs of fake words by LLD and consider the distribution of humans' confidences for pairs with LLD falling in five ranges: [-40, -30), [-30, -20), ..., [0, 10).", "Confidence in the correct matching is inversely correlated with LLD, but humans tend to choose the correct assignment for pairs in all five strata (Table 2).", "For pairs with LLD in the ranges [-10, 0) and [0, 10), which form a majority, there tend to be no revealing morphological markers.", "(Table 2 shows pairs of words with LLD falling into these ranges; Table 7 in the appendix shows more examples.)", "Conclusion.", "Finally, we observe that many of GPT-3's definitions are original: we are not aware of English words that describe the same concepts (see Table 2 and Appendix C).", "Some of the innovated meanings fill plausible lexical gaps ('drobbler'), while others require a degree of creativity ('subacitide').", "This shows that GPT-3 is not simply aligning new words with existing words as in Zalmout et al. (2019), but inventing new meanings.", "We test GPT-3's ability to define new words on a set of human-proposed neologisms from the Dictionary of Obscure Sorrows (dictionaryofobscuresorrows.com).", "Many of these words were created out of real English morphemes.", "We sampled 20 words from this set and got GPT-3 definitions for them.", "Figure 3: A typical question in the definition choice task. A. occhiolism: a belief that personal power increases proportionally with one's height. B. occhiolism: the awareness of the smallness of one's perspective.", "We then ran a study with 25 users, in which each user was given words and both definitions (in random order, without being told how each definition was generated) and asked to pick the better match.", "The responses were converted to a scale of 0 (human-generated is much better) to 5 (GPT-3-generated is much better).", "Each user marked their definition preference for all 20 words (Fig. 3).", "Results.", "Remarkably, users preferred GPT-3's definitions in 40% of cases, despite the fact that a human thought up each of these word-meaning pairs.", "This is not simply the result of random guessing by the workers: the result matrix (Fig. 4) shows a significant amount of structure.", "There are words on which most users agree that the better definition is the one generated either by the human inventor (top rows) or by GPT-3 (bottom rows).", "Users most prefer GPT-3's definition for backmasking, 'the act of disguising messages within recordings via sound effects', to the human definition, 'the instinctive tendency to see someone as you knew them in their youth', while the human definition of lapyear, 'the age at which you become older than your parents were when you were born', is preferred to GPT-3's 'a lazy person; someone of a low-energy lifestyle'.", "User clusters.", "These human-coined neologisms have a bias towards meanings with an existential slant, which results in additional structure in our results, reflecting the population structure of the subjects.", "Indeed, some workers prefer human-made definitions and others prefer GPT-3's definitions, which reflect a mixture of meanings seen in a crawl of the Internet.", "To analyze the significance of such preferences, we perform a randomization test.", "We define the polarization of a user as the absolute difference between the number of words for which they prefer the human-generated definition and the number for which they prefer GPT-3's definition.", "The average polarization over users is greater than that seen in 99% of random preference matrices, indicating that there may indeed be two types of users, with different preferences for the types of meanings they see in words.", "A similar test could be performed taking the confidence into account.", "Here we define polarization as the absolute difference between a user's mean confidence and 2.5.", "In each random sample, we flip a random subset of the entries in the confidence matrix to the opposite preference, while keeping the level of uncertainty the same: 0 <-> 5, 1 <-> 4, 2 <-> 3. This results in a p-value around 0.04.", "5 Conclusion. A character-level model of English words composed with GPT-3 is a complete scheme for generating new words and innovative meanings.", "GPT-3 invents definitions for words it has not seen in training that are seen as reasonable by humans.", "These results have implications for language models' ability to adapt and even add to an evolving vocabulary.", "They can inspire future work on machine understanding of new slang, optimization of words and acronyms, creation of fictitious entries, and automatically generating word games.", "We foresee no immediate negative societal impacts of this work.", "Jack Grieve, Andrea Nini, and Diansheng Guo. 2018. Mapping lexical innovation on American social media. Journal of English Linguistics, 46(4):293-319.", "Felix Hill, Kyunghyun Cho, Anna Korhonen, and Yoshua Bengio. Learning to understand phrases by embedding the dictionary. Transactions of the Association for Computational Linguistics, 4.", "Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.", "Richard R. Klink. 2000. Creating brand names with meaning: The use of sound symbolism. Marketing Letters, 11:5-20.", "Hans Marchand. 1959. Phonetic symbolism in English word-formation. Indogermanische Forschungen, 64:146-168.", "Janis Nuckolls. 1999. The case for sound symbolism. Annual Review of Anthropology, 28:225-252.", "Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543.", "As explained in Appendix B, we followed data privacy and anonymization procedures to the greatest extent possible and fairly compensated human subjects." ]
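The log-likelihood difference LLD(w1, w2) defined above compares the inverted word-definition assignment against the proper one. A small sketch with made-up log-probabilities standing in for GPT-3's scores (the word and definition names are placeholders):

```python
# Hypothetical log-probabilities log p(d | w); in the study these come from
# scoring each definition as a continuation of each word under GPT-3.
logp = {
    ("w1", "d1"): -12.3, ("w1", "d2"): -15.9,
    ("w2", "d1"): -14.1, ("w2", "d2"): -11.7,
}

def lld(lp):
    """LLD = log-likelihood of the inverted assignment minus the proper one."""
    return (lp[("w1", "d2")] + lp[("w2", "d1")]) - (lp[("w1", "d1")] + lp[("w2", "d2")])

print(lld(logp))  # -6.0 here: negative, so the proper match (w1-d1, w2-d2) wins
```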
[ "abstain", "objective", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method" ]
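The polarization randomization test from the user-cluster analysis above can also be sketched briefly. The observed preference matrix below is synthetic (True meaning a user prefers GPT-3's definition); only the shape (25 users, 20 words) follows the study:

```python
import random

random.seed(0)
observed = [[random.random() < 0.2 for _ in range(20)] for _ in range(25)]

def mean_polarization(matrix):
    # |#words preferring GPT-3 - #words preferring human|, averaged over users.
    return sum(abs(2 * sum(row) - len(row)) for row in matrix) / len(matrix)

obs = mean_polarization(observed)
# Null distribution: fully random preference matrices of the same shape.
null = [mean_polarization([[random.random() < 0.5 for _ in range(20)]
                           for _ in range(25)]) for _ in range(1000)]
print(obs, sum(n >= obs for n in null) / len(null))  # observed mean and p-value
```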
[ "We discuss the impact of data bias on abusive language detection.", "We show that classification scores on popular datasets reported in previous work are much lower under realistic settings in which this bias is reduced.", "Such biases are most notably observed on datasets that are created by focused sampling instead of random sampling.", "Datasets with a higher proportion of implicit abuse are more affected than datasets with a lower proportion.", "Abusive or offensive language is commonly de-fined as hurtful, derogatory or obscene utterances made by one person to another person.", "1 Examples are (1)-(3).", "In the literature, closely related terms include hate speech (Waseem and Hovy, 2016) or cyber bullying (Zhong et al., 2016).", "While there may be nuanced differences in meaning, they are all compatible with the general definition above.", "Due to the rise of user-generated web content, in particular on social media networks, the amount of abusive language is also steadily growing.", "NLP methods are required to focus human review efforts towards the most relevant microposts.", "In this paper, we examine the issue of data bias.", "For the creation of manually annotated datasets, randomly sampling microposts from large social media platforms typically results in a too small proportion of abusive comments (Wulczyn et al., 2017; Founta et al., 2018).", "Therefore, more focused sampling strategies have to be applied which 0 Present affiliation: Leibniz ScienceCampus, Heidel-berg/Mannheim, Germany 1 http://thelawdictionary.org/ cause biases in the resulting datasets.", "We show what implications this has on classifiers trained on these datasets: Previous evaluations reported high classification performance on datasets with difficult cases of abusive language, e.g. implicit abuse (2).", "Contrarily, we find that the high classification scores are likely to be the result of modeling the bias in those datasets.", "Although we will explicitly name shortcomings of existing individual datasets, our paper is not intended as a reproach of those who created them.", "On the contrary, we acknowledge the great efforts the researchers have taken to provide these resources.", "Without them, much existing research would not have been possible.", "However, we also noticed a lack of awareness of the special properties of those datasets among researchers using them.", "As we will illustrate with specific examples, this may result in unforeseen results of particular classification approaches.", "One major distinction that has been proposed in the literature is the division into explicitly and implicitly abusive language (Waseem et al., 2017).", "The former are microposts that employ some abusive words (1)-(3) (e.g. dumbass or scum ), while the latter represents the more difficult case in which the abusive nature is conveyed by other means, such as sarcasm, jokes, and particularly the usage of negative stereotypes etc. (4)-(5).", "To determine which of the datasets that we consider in this work contain which type of abusive language, we proceeded as follows.", "On the set of abusive microposts of each dataset, we computed the proportion of microposts that include at least one abusive word according to the lexicon of abusive words from Wiegand et al. 
(2018a).", "Datasets with a high proportion of abusive words typically contain a high amount of explicitly abusive microposts, whereas datasets with a low proportion contain a higher amount of implicitly abusive language.", "The resulting figures, of course, are only a lower bound estimate for explicit language abuse.", "There will also be microposts containing abusive words that are missing from the lexicon.", "However, after manual inspection of a sample of microposts, we are fairly confident that this does not significantly change the relative order of datasets when ranked according to their degree of explicit language abuse.", "Due to the limited space of this paper, we restrict our discussion to frequently cited (publicly available) datasets and datasets from shared tasks.", "Substantial interannotation agreement has also been reported with these datasets.", "As we focus on the detection of abusive language in general, for those datasets containing more fine-grained class inventories describing subtypes of abusive language 2 , we conflate the categories to one general category.", "As a result, there are always only two categories: abuse and no-abuse .", "This merging removes differences between the individual annotation schemes that would otherwise impede a meaningful comparison.", "Table 1 shows a brief summary of the different datasets.", "Among the properties, we list the performance of a text classifier in the right-most column.", "Since in previous work performance on the different datasets was reported on the basis of different types of classifiers and also varying evaluation metrics, we ran the same classifier on all datasets in order to ensure a meaningful comparison.", "We chose FastText, which is an efficient supervised classifier known to produce state-of-the-art performance on many text classification tasks 3 (Joulin et al., 2017) and whose results are easy to reproduce.", "Performance is evaluated in a 10-fold crossvalidation setting using the macro-average F1-score.", "by pure random sampling.", "This would always result in tiny proportions of abusive language.", "For example, Founta et al. (2018) estimate that on Twitter, there are only between 0 .", "1% up to at most 3% abusive tweets.", "What comes closest to random sampling is the procedure followed by Founta et al. (2018), Razavi et al. (2010) and the Kaggle-challenge.", "4 They took a random sample and applied some heuristics in order to boost the proportion of abusive microposts.", "For instance, in the Kaggle-challenge, further microposts from users were added who had been blocked due to being reported to post personal attacks.", "The procedures applied by other researchers are more drastic because, as we show in 4 and 5, they affect more heavily the topic distribution of the dataset.", "These approaches do not even start with a random sample.", "The topic distribution is mostly determined by the creators of the dataset themselves.", "For example, Waseem and Hovy (2016) extract tweets matching query words likely to co-occur with abusive content.", "Kumar et al. 
(2018) choose Facebook-pages covering topics that similarly coincide with abusive language.", "The resulting datasets are far from representing a natural sample of the underlying social-media sites.", "Table 1 shows that datasets that apply biased sampling ( Warner , Waseem , Kumar ) contain a high degree of implicit abuse.", "Boosted random sampling, which provides a more realistic cross section of microposts, on the other hand, captures a larger amount of explicit abuse.", "Future work should explore whether this is due to the predominance of explicit abuse on social media or some other reason, for example, the fact that human annotators more readily detect explicit abuse.", "Intuitively, one would expect that the lower the proportion of explicit abuse is on the set of abusive microposts of a dataset, the lower the F1-score becomes because implicit abuse is not conveyed by lexical cues that are easy to learn.", "Table 1 confirms this notion, yet Waseem is the notable exception.", "We need to find an explanation for this deviation since Waseem is by far the most frequently used dataset for detecting abusive language (Badjatiya et al., 2017; Bourgonje et al., 2017; Pitsilis et al., 2018; Agrawal and Awekar, 2018; Karan and Snajder, 2018; Kshirsagar et al., 4 www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge name publication source microposts %abusive sampling %explicit F1 Kaggle (Wulczyn et al., 2017) Wikipedia 312,737 9.6 boosted random sampling 76.9 88.2 Founta (Founta et al., 2018) Twitter 59,357 14.1 boosted random sampling 75.9 87.3 Razavi (Razavi et al., 2010) diverse 1,525 31.9 boosted random sampling 64.7 83.3 Warner (Warner and Hirschberg, 2012) diverse 3,438 14.3 biased sampling 51.3 71.8 Waseem (Waseem and Hovy, 2016) Twitter 16,165 35.3 biased sampling 44.4 80.5 Kumar (Kumar et al., 2018) Facebook 15,000 58.1 biased sampling 32.7 70.4 Table 1: Properties of the different datasets.", "This investigation is only possible since, fortunately, Waseem is one of the datasets whose creation process has been meticulously documented.", "The Waseem -dataset has been sampled in such a way that it contains a high proportion of microposts discussing the role of women in sports, particularly their suitability as football commentators.", "Such microposts also very often coincide with sexist remarks.", "However, the authors did not make any attempt to debias their dataset.", "As a consequence, domain-specific expressions such as announcer , commentator , football or sport occur very frequently and almost exclusively in abusive tweets.", "Yet intuitively these words should not be representative of abusive language.", "There are many texts on the web including Twitter that contain mentions of these expressions but that are not abusive.", "The current dataset, however, does not reflect that.", "Table 2 illustrates this bias by listing the words with the highest Pointwise Mutual Information (PMI) towards abusive microposts.", "It compares the Founta -dataset (a dataset representing almost random sampling) with the Waseem -dataset (a dataset produced by biased sampling).", "We deliberately chose two datasets sampled from the same social-media site, namely Twitter, as otherwise the difference we report could be ascribed to differences in the underlying text sources.", "Table 2 shows that on the Founta -dataset, abusive words occupy the high ranks.", "Most of the highly ranked words of the Waseem -dataset, however, are not abusive.", "Similar observations can be made on the other datasets 
"Table 2: Top 10 words having strongest correlation with abusive microposts according to PMI on Founta (dataset representing almost random sample) and Waseem (dataset produced by biased sampling).
rank | Founta | Waseem
1 | bitch | commentator
2 | niggas | comedian
3 | motherfucker | football
4 | fucking | announcer
5 | nigga | pedophile
6 | idiot | mankind
7 | asshole | sexist
8 | fuck | sport
9 | fuckin | outlaw
10 | pussy | driver", "In the Warner-dataset, the words CBS and Hollywood are two of the most predictive words.", "They refer to the anti-semitic prejudice that Jews are supposed to control most of the US media.", "On that dataset, the bias of identity terms is also extreme: almost 80% of the 256 mentions of the identity term Jew occur in abusive microposts.", "On the Kumar-dataset, even common Arabic person names, such as Azan or Nahid, strongly correlate with abusive language.", "In order to demonstrate the detrimental effects such biases have, we now report the performance of further classifiers trained on the Waseem-dataset.", "Similar results could be obtained on the Warner- and Kumar-datasets.", "Yet they are most pronounced on the Waseem-dataset, which is also the dataset on which unexpectedly high classification performance has been observed in Table 1.", "Presumably, it is also the most biased dataset.", "Table 4: Distribution of racist and sexist tweets per author on the Waseem-dataset.
racist (author name | freq) || sexist (author name | freq)
Vile Islam | 1915 || Yes You're Sexist | 1320
Yes You're Sexist | 8 || Male Tears #4648 | 948
Standing Up 4 Trump | 5 || Vile Islam | 50
YESMarriageEquality | 1 || LilBeasy91 | 10
LilBeasy91 | 1 ||", "In our first experiment, we tested a FastText-classifier (3) trained on the Waseem-dataset on a random sample of 500 additional tweets that include mentions of the topic words football and sport.", "One would expect a low proportion of these particular tweets to be predicted as abusive.", "However, due to the fact that the abusive training data have such a large topic bias towards sports, the proportion of tweets predicted to be abusive is unreasonably high (i.e., 70%).", "Manual inspection confirmed that only a small proportion (up to 5%) was actually abusive.", "This result shows us that classifiers trained on the Waseem-dataset hardly generalize to the concept of abusive language.", "Difficult tweets on that dataset, e.g. instances of implicit abuse, may be classified correctly just because biased words such as football or sport occur in them.", "In our second experiment, we train and test a classifier on the original Waseem-dataset in 10-fold cross-validation.", "However, we remove either of the two types of biased words from the dataset:", "(i) We remove 25 topic words from the 100 most correlating words that we thought bear no relation towards abusive language (e.g., announcer, commentator, football or sport).", "(ii) We remove the 17 words that were used as a query by Waseem and Hovy (2016) to produce the dataset.",
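A minimal sketch of this kind of word-removal pre-processing (the word sets below are illustrative subsets; the paper's full lists of 25 topic words and 17 query words are not reproduced here):

```python
# Illustrative subsets of the removed words; the full lists are not reproduced here.
TOPIC_WORDS = {"announcer", "commentator", "football", "sport"}
QUERY_WORDS = {"womenagainstfeminism", "gamergate"}

def remove_biased_words(tokens, banned):
    """Drop banned tokens from a micropost before training or evaluation."""
    return [t for t in tokens if t.lower() not in banned]

docs = []  # placeholder corpus of (tokens, label) pairs, as in the PMI sketch above
# Configuration (i): strip topic words; configuration (ii) would use QUERY_WORDS.
debiased_docs = [(remove_biased_words(tokens, TOPIC_WORDS), label)
                 for tokens, label in docs]
```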
"With (i) we want to show how good classifiers are that do not have access to biased words.", "This would be a realistic setting since words such as football or sport only have this bias towards abusive language on the Waseem-dataset.", "Such removal is also necessary since otherwise these biased words cause a huge amount of false positives when testing on other datasets (as shown above).", "With (ii) we want to show that query words themselves are biased, too.", "For example, we observed that the query word WomenAgainstFeminism correlates with abusive tweets while gamergate correlates with non-abusive tweets.", "The purpose of query words is to retrieve tweets that address specific topics.", "The fact that they correlate with the classes of the dataset further proves that the focused sampling process introduces data bias.", "The results of these two configurations are displayed in Table 3.", "It shows that the removal of a very few words (i.e., 0.2% of the overall vocabulary) already causes the classification performance to drop notably.", "Please note that these experiments do not capture the full impact of the bias in this dataset.", "That is, there will be more biased words beyond the 25 words we identified on the list of top 100 words ranked according to PMI, since the cut-off value of 100 was arbitrarily chosen.", "Datasets may also be affected by author bias.", "By that, we mean that information relating to the author of a micropost may artificially boost classification performance.", "Author information can be explicitly derived from meta-information of a micropost, for example, a feature that encodes the user name of a particular tweet that is to be classified.", "However, even if we do not explicitly encode such information, a (lexical) supervised classifier, such as the FastText-classifier from Table 1, may indirectly be affected by author biases.", "If the set of tweets belonging to a certain class predominantly comes from the same author, then a supervised classifier may largely pick up the writing style or the topics addressed by that author.", "Whenever the writing style or those topics are recognized, abusive language is predicted.", "This may work on a biased dataset but not beyond it.", "We found that the distribution of abusive tweets on the Waseem-dataset is highly skewed towards 3 different authors, as shown in Table 4.", "More than 70% of the sexist tweets originate from the two authors Male Tears #4648 and Yes, They're Sexist.", "99% of the racist tweets originate from a single author (i.e., Vile Islam).", "If virtually all racist tweets originate from the same author, a classifier just needs to consider tweets from that author and can predict tweets from every other author as non-racist.",
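A quick diagnostic for this kind of author skew could look as follows (the post schema with 'author' and 'label' fields is an assumption for illustration):

```python
from collections import Counter

def author_skew(posts, target_label):
    """Return the most frequent author of a class and their share of that class."""
    counts = Counter(p["author"] for p in posts if p["label"] == target_label)
    top_author, top_freq = counts.most_common(1)[0]
    return top_author, top_freq / sum(counts.values())

# e.g., author_skew(waseem_posts, "racist") would be expected to return
# a share of roughly 0.99 for a single author on the Waseem-dataset.
```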
"On this particular dataset, such a strategy leads to good results: both Qian et al. (2018) and Mishra et al. (2018a) proposed classification approaches that add author information to common text-level features.", "These approaches were solely evaluated on the Waseem-dataset.", "However, the author distribution on the Waseem-dataset does not reflect reality, where abusive tweets originate from far more than a very few authors.", "In reality, we therefore should expect author information to be less predictive.", "A possible way to prevent classification scores from looking unreasonably good is by applying cross-domain classification, i.e., testing a classifier on a dataset different from the one it was trained on.", "The specific biases we pointed out should be primarily restricted to individual datasets and not carry over to other ones.", "This is illustrated by Table 5.", "Compared to in-domain classification (Table 1), all classifiers perform worse.", "So all classifiers seem to be affected by data bias to some degree.", "Datasets with explicit abuse and less biased sampling (Kaggle, Founta, Razavi) still perform reasonably when trained among each other, i.e., they are not heavily affected, whereas datasets with implicit abuse and biased sampling (Warner, Waseem, Kumar) perform poorly.", "This time this also includes Waseem, which implies that the good performance in in-domain classification (Table 1) was indeed caused by data bias.", "Of course, cross-domain classification may not always be practical, particularly if a specific subtype of language abuse is studied for which there is only one dataset available.", "However, even then, simple methods such as computing the words that highly correlate with the different classes on that dataset, similar to what we did in Table 2, may already indicate that there are biases hidden in the dataset.", "If only a very small number of biased words is identified, then usually it suffices to manually debias the dataset.", "By that, one understands sampling additional microposts containing the words manually detected to be biased (Dixon et al., 2017; Wiegand et al., 2018b).", "For example, in the case of the Waseem-dataset, randomly sampling additional tweets matching the words announcer, commentator, football or sport would reduce the sexism bias we reported in this paper (simply because random tweets are unlikely to contain sexist remarks, unlike the existing tweets from the Waseem-dataset).", "In order to prevent author bias from interfering with classification, one could restrict the number of microposts per author.", "This would result in a more balanced distribution of microposts per author.", "Footnote 5: Please note, however, that in the case of the Waseem-dataset, this form of debiasing would not completely solve the data bias, since this dataset contains biased words beyond the four words mentioned above.",
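Such a per-author restriction could be implemented along these lines (the cap value and the post schema are illustrative assumptions):

```python
from collections import defaultdict

def cap_per_author(posts, max_per_author=50):
    """Keep at most max_per_author microposts from any single author."""
    kept, seen = [], defaultdict(int)
    for post in posts:
        if seen[post["author"]] < max_per_author:
            kept.append(post)
            seen[post["author"]] += 1
    return kept
```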
"Previous work already established that identity terms (e.g., gay, Jew or woman) have a bias to co-occur with abusive language (Dixon et al., 2017; Park et al., 2018).", "In this work, we showed that this problem is not restricted to the small set of identity terms.", "Most biases are introduced by the sampling method used on a dataset, and they have a huge impact on classification performance.", "We examined the impact of data bias on abusive language detection and showed that this problem is closely related to how data have been sampled.", "On the popular Waseem-dataset, we illustrated that under more realistic settings, where such biases would be less prominent, classification performance is much lower than reported in research publications.", "Currently, datasets with a higher degree of implicit abuse are more affected by data bias.", "Such bias often goes unnoticed in in-domain classification, which is why we recommend cross-domain classification.", "Our finding that, under a realistic evaluation, classification performance is actually quite poor, particularly on implicit abuse, is also in line with assessments from industry on the quality of the state of the art, which suggests that there is still a long way to go.", "The authors were partially supported by the German Research Foundation (DFG) under grants RU 1873/2-1 and WI 4204/2-1." ]
[ "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "result", "result", "method", "method", "abstain", "method", "result", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "other", "result", "abstain", "abstain", "method", "result", "other" ]
[ "Since the development and wide use of pretrained language models (PLMs), several approaches have been applied to boost their performance on downstream tasks in specific domains, such as biomedical or scientific domains.", "Additional pre-training with in-domain texts is the most common approach for providing domain-specific knowledge to PLMs.", "However, these pre-training methods require considerable in-domain data and training resources and a longer training time.", "Moreover, the training must be re-performed whenever a new PLM emerges.", "In this study, we propose a domain knowledge transferring (DoKTra) framework for PLMs without additional in-domain pretraining.", "Specifically, we extract the domain knowledge from an existing in-domain pretrained language model and transfer it to other PLMs by applying knowledge distillation.", "In particular, we employ activation boundary distillation, which focuses on the activation of hidden neurons.", "We also apply an entropy regularization term in both teacher training and distillation to encourage the model to generate reliable output probabilities, and thus aid the distillation.", "By applying the proposed DoKTra framework to downstream tasks in the biomedical, clinical, and financial domains, our student models can retain a high percentage of teacher performance and even outperform the teachers in certain tasks.", "Our code is available at https://github.com/DMCB-GIST/DoKTra.", "Recently, transformer (Vaswani et al., 2017)-based language models have been successfully applied in the field of natural language processing (NLP).", "In particular, the two-stage approach of pre-training and fine-tuning, such as BERT (Devlin et al., 2019), has become the standard for NLP applications.", "Generally, a transformer-based model is pretrained with a large amount of text data in an unsuHyunju Lee is the corresponding author.", "pervised manner, and then fine-tuned with a small dataset for several downstream tasks.", "Further, advanced pre-trained language models (PLMs) with improved architectures or training methods continue to emerge, including ALBERT (Lan et al., 2019) or RoBERTa (Liu et al., 2019).", "However, these models must be further improved for tasks requiring domain knowledge, such as those in the biomedical or financial domains, as the pre-training data usually consist of general domain text (e.g., Wikipedia).", "Additional pre-training with in-domain text has been proposed to provide the PLMs with domain-specific knowledge.", "For example, in the biomedical domain, several domain-specific PLMs trained with large biomedical texts, such as BioBERT (Lee et al., 2020), PubMedBERT (Gu et al., 2020) and BlueBERT (Peng et al., 2019), have been successfully used as strong baselines for several downstream tasks.", "Nevertheless, additional pre-training has several limitations, such as the need for sufficient training data and resources, and a longer training time.", "Furthermore, whenever a new PLM emerges, it must be re-trained to create more advanced domain-specific models.", "To address this issue, we propose an efficient domain-knowledge transferring framework that does not require additional pre-training steps.", "Specifically, we focus on the applicability of knowledge distillation (Hinton et al., 2015) as a domain-knowledge transfer method, not only for model compression.", "Knowledge distillation is a wellknown knowledge transfer method that is primarily used for model compression.", "The knowledge from a larger and more effective teacher 
"In this study, we propose a domain knowledge transfer (DoKTra) framework for an advanced PLM via calibrated activation boundary distillation.", "In contrast to the existing in-domain pretraining methods, we transfer domain knowledge to a new language model using only an existing in-domain pre-trained model, and without a time-consuming pre-training on the new model.", "For instance, BioBERT was pre-trained for 23 days on 8 NVIDIA V100 GPUs (Lee et al., 2020).", "We can estimate that if a new, larger language model is pre-trained with a large number of biomedical texts, its training duration would be longer than that of BioBERT.", "However, our framework can be executed in a few hours on a single 24 GB GPU.", "The comparison between our framework and a conventional approach is visualized in Figure 1.", "Specifically, we apply the calibration method to generate a reliable and well-supervising teacher model.", "Then, we apply activation boundary distillation (Heo et al., 2019) to distill the domain knowledge to the student, which is more efficient with a small amount of training data.", "Moreover, by selecting language models more advanced than the teacher as students, we allow the student models to acquire additional domain knowledge while preserving their superiority.", "We apply our framework to the biomedical domain and verify its effectiveness by conducting experiments on several biomedical and clinical downstream tasks.", "By applying our framework to ALBERT and RoBERTa student models, we were able to obtain models that retained most of the teacher model's performance with fewer model parameters (ALBERT), and models with a higher performance than both students and teachers (RoBERTa).", "We also investigate the general applicability of our framework by applying it to a financial domain PLM and downstream tasks.", "The contributions of this study can be summarized as follows: We propose a DoKTra framework for advanced PLMs via calibrated activation boundary distillation, without additional time-consuming pre-training steps.", "We conduct experiments to demonstrate the efficacy of DoKTra, obtaining student models that retain most of the performance of the teacher model while using fewer parameters, or that achieve even higher performance than the teacher model.", "Most modern language models are based on the transformer (Vaswani et al., 2017) architecture.", "The PLMs generally use only the encoder block of the transformer, which consists of two sublayers: a self-attention layer and a feed-forward layer.", "BERT (Devlin et al., 2019) is the most widely used PLM, which consists of several layers of transformer encoders.", "It was pre-trained for 4 days with a large amount of text data, which consisted of 3.3 billion words, using masked language modeling and a next sentence prediction task in an unsupervised manner.", "This pre-trained model can be easily used in various downstream tasks by fine-tuning it with a labeled dataset.", "Following the success of BERT, a variety of similar PLMs have emerged.", "Lan et al. (2019) proposed ALBERT, which outperformed BERT with considerably fewer parameters.",
"ALBERT's architecture is more complex than BERT's; however, by applying factorized embedding parameterization and cross-layer parameter sharing, the number of parameters can be reduced.", "Liu et al. (2019) observed that BERT is significantly undertrained, and proposed RoBERTa, a more robust and better-performing model, which is obtained by a longer pre-training with a larger dataset (approximately 10 times that of BERT) and the removal of next sentence prediction.", "Despite the PLMs' excellent performances in several downstream tasks in the general domain, they have not exhibited a superior performance in specific domain tasks, such as in biomedicine.", "To provide domain-specific knowledge to PLMs, additional pre-training with in-domain data has been applied.", "BioBERT (Lee et al., 2020) further pretrained BERT using biomedical text consisting of 18 billion words, such as literature abstracts.", "Peng et al. (2019) applied a similar approach with both biomedical and clinical text data.", "Differently, Gu et al. (2020) pre-trained BERT from scratch with only biomedical literature.", "The main goal of the DoKTra framework is to produce a task-specific student model for each downstream task in a specific domain by distilling domain knowledge from a fine-tuned teacher model.", "Our framework consists of two main stages: calibrated teacher training and activation boundary distillation.", "In calibrated teacher training, the teacher model is trained to distil its domain-specific and task-specific knowledge into the student model.", "We use an existing in-domain PLM as the initial teacher model.", "For each downstream task in the initial teacher's domain, the teacher model is fine-tuned with its training data.", "In this process, an entropy regularization term, called the confidence penalty loss (Pereyra et al., 2017), is added to the training loss.", "By adding the confidence regularizer, the fine-tuned teacher model can generate more reliable output prediction probabilities for the input data, and thus have a positive effect on distillation.", "In activation boundary distillation, the domain-specific knowledge of the teacher model is transferred to the student model.", "We use an existing PLM as the initial student model, which is only pre-trained in the general domain.", "First, the student model is fine-tuned for a downstream task.", "Subsequently, it mimics the activation pattern of the hidden neurons in the teacher model (Heo et al., 2019).", "By distilling the activation pattern, the activation boundary of the teacher model is transferred more precisely, and the domain-specific knowledge of the teacher is transferred to the student model.", "Additionally, the student model is refined over fewer epochs with a standard classification loss (Romero et al., 2014; Yim et al., 2017; Heo et al., 2019).", "Because the student model is already fine-tuned for the downstream task, any additional refinement may result in overconfidence (Guo et al., 2017; Nixon et al., 2019).", "To address this issue, we also add the confidence regularizer to the refinement step.", "The proposed framework is visualized in Figure 2.",
"3.2 Calibrated teacher training", "In this step, a task-specific teacher model is generated for each in-domain downstream task using a fine-tuning approach.", "Specifically, we choose BioBERT-base (Lee et al., 2020) as the initial teacher model, which has been pre-trained with a large biomedical domain corpus, such as PubMed abstracts.", "Owing to the in-domain pre-training, the BioBERT model outperforms the BERT model in several biomedical downstream tasks.", "Despite their high performance, modern deep neural networks are not well calibrated (Guo et al., 2017), and language models such as BERT are no exception.", "In other words, these models only predict overconfidently and cannot generate a reliable output probability for the given input.", "However, most distillation approaches encourage the use of softened probabilities, because they contain more information and can better support the learning of the student model (Hinton et al., 2015; Cho and Hariharan, 2019).", "Moreover, Menon et al. (2021) demonstrated that a teacher model that estimates good probabilities can better supervise a student model.", "Based on this idea, we apply an entropy-regularizing term that penalizes overconfidence when fine-tuning the teacher model (Pereyra et al., 2017).", "Several previous studies have revealed that a confidence penalty improves both the calibration and performance of biomedical downstream tasks (Choi and Lee, 2020).", "Since an overconfident classification model produces output probabilities close to 0 and 1, its probability distribution has a low entropy value.", "The confidence penalty loss (CPL) addresses this problem by minimizing the negative entropy of the output probability.", "[Figure 2: An overview of the DoKTra framework]", "Formally, the output probability of the model with parameters $\theta$ can be written as a conditional distribution $p_\theta(y|x)$ through the softmax function for classes $y$ and a given input $x$.", "The entropy value of the output probability is given by $H(p_\theta(y|x)) = -\sum_i p_\theta(y_i|x) \log p_\theta(y_i|x)$ (1), where $i$ denotes the class index.",
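A minimal PyTorch sketch of fine-tuning with this confidence penalty (the combination $\mathcal{L}_{cls} = \mathcal{L}_{CE} - \lambda H(p_\theta(y|x))$ and the name `lam` are assumptions based on the description above, not code taken from the paper):

```python
import torch
import torch.nn.functional as F

def confidence_penalty_loss(logits, labels, lam):
    """Cross-entropy with an entropy bonus: L_cls = L_CE - lam * H(p).

    Subtracting the entropy (i.e., minimizing the negative entropy)
    penalizes overconfident, low-entropy output distributions.
    """
    ce = F.cross_entropy(logits, labels)
    probs = F.softmax(logits, dim=-1)
    entropy = -(probs * torch.log(probs + 1e-12)).sum(dim=-1).mean()
    return ce - lam * entropy
```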
"Recently, Heo et al. (2019) proposed a knowledge distillation method that only distils the activation boundary of the hidden representation of a deep neural network.", "Instead of distilling the magnitude of the neurons of the teacher network, Heo et al. (2019) designed the distillation loss to only transfer the activation of neurons and thus allowed the activation boundary to be transferred.", "Since the decision boundary of a model, which consists of a combination of activation boundaries, is critical for the classification task, this method outperformed several distillation methods in image classification.", "Moreover, they also reported that the activation boundary distillation can learn rapidly and more efficiently with a small amount of training data.", "Thus, we select it as the domain-knowledge transferring method for our framework; this is because domain-specific downstream tasks usually have less training data than general domains.", "To apply the activation boundary distillation to PLMs, we use the classification embedding of the teacher and student as the distillation target.", "More precisely, the input sequence of a PLM such as BERT can be written as $[CLS], t_1, t_2, \ldots, [SEP]$, where $t_i$ is the $i$-th token of the example.", "Then, the final output sequence is $h([CLS]), h(t_1), \ldots, h([SEP])$, where $h(t)$ indicates the hidden output of the last layer for the token $t$.", "For the classification task, the output embedding of the first special token ([CLS], also known as the classification token) is generally used as the input of the classification layer.", "Thus, we apply activation boundary distillation to the classification embedding (output embedding of the classification token).", "For an input example $x$, let $T_{[CLS]}(x) \in \mathbb{R}^d$ and $S_{[CLS]}(x) \in \mathbb{R}^d$ be the classification embedding vectors ($h([CLS])$) of the teacher and student model, respectively.", "An element-wise activation indicator function can be defined to express the activation of a neuron: $\rho(x) = \begin{cases} 1, & \text{if } x > 0 \\ 0, & \text{otherwise} \end{cases}$ (3).", "The loss function to transfer the activation of neurons is an $\ell_1$ norm of the difference between activations: $\mathcal{L}(x) = \lVert \rho(T_{[CLS]}(x)) - \rho(S_{[CLS]}(x)) \rVert_1$ (4).", "However, this loss function cannot be minimized using gradient descent because $\rho$ is a discrete function.", "To address this issue, Heo et al. (2019) proposed an alternative loss function similar to the hinge loss (Rosasco et al., 2004) with an activation function $\sigma$: $\mathcal{L}(x) = \lVert \rho(T_{[CLS]}(x)) \odot \sigma(\mu \mathbf{1} - S_{[CLS]}(x)) + (1 - \rho(T_{[CLS]}(x))) \odot \sigma(\mu \mathbf{1} + S_{[CLS]}(x)) \rVert_2^2$ (5), where $\odot$ is the element-wise product and $\mathbf{1}$ is a $d$-dimensional vector with all values equal to 1.", "$\mu$ is the margin, which is a hyperparameter for training stability.", "Specifically, we select two PLMs as the initial student model: ALBERT-xlarge (Lan et al., 2019), which has a smaller number of parameters but performs better than BERT, and RoBERTa-large (Liu et al., 2019), which has a larger number of parameters and is known to outperform BERT significantly on most tasks.", "To distil the knowledge from a teacher model, we first fine-tune the student model to provide initial knowledge about the task.", "Then the student model is trained with $\mathcal{L}_{AT}$.", "We also add a few refinement steps to refine the classification layer of the student model.", "Because the student model is already fine-tuned before the distillation step, this additional refinement may cause overconfidence.", "Thus, we apply a confidence penalty regularization in the refinement step.", "Namely, the student is refined with $\mathcal{L}_{cls}$ after the distillation steps.", "We add a hyperparameter $\gamma \in [0, 1]$, which determines when the training loss is switched from distillation to refinement.",
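A PyTorch sketch of the alternative transfer loss in Equation 5 (using ReLU for $\sigma$ is an assumption, and variable names are illustrative):

```python
import torch
import torch.nn.functional as F

def activation_transfer_loss(t_cls, s_cls, margin=1.0):
    """Equation 5: hinge-style activation boundary transfer (Heo et al., 2019).

    t_cls, s_cls: teacher/student [CLS] embeddings of equal dimension
    (see the projection sketch further below for the mismatched case).
    """
    rho = (t_cls > 0).float()                             # teacher activation indicator
    per_unit = (rho * F.relu(margin - s_cls)              # active units: push s above +margin
                + (1.0 - rho) * F.relu(margin + s_cls))   # inactive units: push s below -margin
    return per_unit.pow(2).sum(dim=-1).mean()
```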
"The procedure of the DoKTra framework is summarized in Algorithm 1.", "Algorithm 1: DoKTra framework
Input: downstream task data $D = \{x_k, y_k\}_{k=1}^{N}$, hyperparameters $\lambda_1$, $\lambda_2$, $\gamma$
1: Fine-tune the teacher T with data D, using $\mathcal{L}_{cls}$ with $\lambda_1$
2: Fine-tune the student S with data D, using $\mathcal{L}_{CE}$
3: epoch_switch = $\gamma \times$ epochs_total
4: for each epoch do
5:   if epoch < epoch_switch then
6:     Train S using $\mathcal{L}_{AT}$
7:   else
8:     Train S using $\mathcal{L}_{cls}$ with $\lambda_2$
9:   end if
10: end for
11: return student model S", "Table 1: Statistics of the pre-processed downstream task datasets.
Dataset | #Train | #Dev | #Test | Metrics | Domain
ChemProt | 17865 | 11263 | 15583 | micro F1 | Biomed.", "The relation extraction task aims to classify the relationship between two entities (e.g., gene, chemical, and disease) that are already annotated.", "The ChemProt (Krallinger et al., 2017) dataset contains PubMed abstracts with 10 types of chemical-protein interaction annotations, and only five of the types are used for evaluation.", "The GAD dataset (Bravo et al., 2015) consists of gene-disease binary relation annotations.", "The DDI (Herrero-Zazo et al., 2013) dataset consists of text from the DrugBank database and Medline abstracts, with four types of drug-drug interaction annotations.", "In the clinical domain, the i2b2 dataset (Uzuner et al., 2011) contains texts from clinical documents, and eight types of relations between medical problems and treatments have been annotated.", "The HoC (Baker et al., 2016) corpus consists of PubMed abstracts with ten types of hallmarks of cancer annotation.", "Note that the HoC dataset is a multi-label document classification task predicting the combination of labels from an input text.", "We pre-process every classification dataset except for GAD in the same manner as the BLUE (Peng et al., 2019) benchmark.", "In particular, entity anonymization is applied to all relation extraction datasets, which replaces the entity mentions with anonymous tokens (e.g., @GENE$, @DISEASE$) to avoid confusion in using complex entity names.", "We use a pre-processed version of the GAD dataset provided by BioBERT, which is split for 10-fold cross-validation.", "The statistics of the pre-processed downstream task datasets are listed in Table 1.",
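The distillation stage of Algorithm 1 could be sketched as the following loop, reusing the two loss functions above (the model interface returning the [CLS] embedding and the logits is an assumption):

```python
import torch

def distill_student(student, teacher, loader, optimizer,
                    total_epochs, gamma, lam2, margin=1.0):
    """Lines 3-10 of Algorithm 1: activation transfer, then calibrated refinement."""
    epoch_switch = int(gamma * total_epochs)
    for epoch in range(total_epochs):
        for batch in loader:
            s_cls, s_logits = student(batch)          # ([CLS] embedding, logits)
            if epoch < epoch_switch:
                with torch.no_grad():
                    t_cls, _ = teacher(batch)
                loss = activation_transfer_loss(t_cls, s_cls, margin)
            else:
                loss = confidence_penalty_loss(s_logits, batch["labels"], lam2)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return student
```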
"We used two pre-trained models as the initial student model: ALBERT-xlarge (L=24, H=2048, A=32) and RoBERTa-large (L=24, H=1024, A=16).", "In the previous description, we have assumed that the embedding dimensions of teachers and students are identical.", "However, because the hidden embedding dimensions of teachers and students are different in our setting, we applied a linear transformation to the teacher's classification embedding to match the dimension with the student model.", "In calibrated teacher training, we trained for 3-10 epochs with a learning rate of 2e-5.", "The hyperparameter $\lambda_1$, the strength of the confidence penalty in teacher training, was chosen from {0, 0.3, 0.5, 0.7}.", "For activation boundary distillation, we first fine-tuned the initial student model for 5-10 epochs with learning rates of {6e-6, 8e-6, 1e-5}.", "Then, we distilled for 10 epochs with learning rates of {6e-6, 8e-6, 1e-5}.", "The confidence penalty strength $\lambda_2$ in the refinement step and the loss switch rate $\gamma$ were chosen from {0, 0.3, 0.5, 0.7} and {0.6, 0.7, 0.8, 0.9}, respectively.", "The margin $\mu$ of the activation transfer loss was set to 1.0.", "Every hyperparameter was tuned on the development set.", "The selected hyperparameters are shown in the Appendix.", "The experiments were run on a single RTX 3090 24 GB GPU, and the training code was implemented in PyTorch.", "All experiments were repeated three times with different random seeds, and the average performances and standard deviations have been reported.", "The fine-tuned student models are in the second and fourth rows, and the DoKTra framework is applied to both, as shown in the third and fifth rows.", "As shown in the third and fifth rows, the classification performances on biomedical and clinical downstream tasks are significantly improved by applying our proposed framework, when compared to the initial student models.", "This implies that distilling the activation patterns of the neurons from the calibrated teacher model can transfer its domain-specific knowledge and thus improve the task performance in the domain on which the student has not yet been pre-trained.", "By applying the DoKTra framework, the ALBERT-xlarge student model was able to retain 99.72% of the teacher model performance on average.", "ALBERT has two advantages: a small number of parameters and high performance (Lan et al., 2019).", "Applying our framework to ALBERT allowed us to obtain a student model with performance comparable to that of the teacher with half the parameters.", "In other words, we successfully transferred domain-specific knowledge to ALBERT while maintaining its existing advantages.", "Consequently, the distilled ALBERT achieved a higher performance than the teacher model on ChemProt and DDI.", "The RoBERTa model that was applied to the proposed framework outperformed the teacher model on average, specifically in four of five downstream tasks (ChemProt, DDI, i2b2, and HoC).", "RoBERTa's performance was already similar to the teacher model in the initial fine-tuning stage, because it was pre-trained with more data than BERT and exhibited a greater robustness.", "The results on RoBERTa imply that our proposed framework can be effectively applied to emerging and advanced pre-trained language models.", "In other words, domain-specific knowledge can be transferred into advanced models without a time-consuming pre-training.",
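The dimension-matching projection mentioned above could be as simple as a single linear layer (a sketch; the teacher-to-student direction follows the description in the text, and the 768-dimensional BERT-base/BioBERT-base hidden size is an assumption):

```python
import torch.nn as nn

# BioBERT-base produces 768-dimensional [CLS] embeddings, while the students
# use 2048 (ALBERT-xlarge) or 1024 (RoBERTa-large) dimensions.
project_to_student = nn.Linear(768, 2048)   # teacher dim -> student dim

# t_cls: teacher [CLS] embeddings for a batch, shape (batch, 768).
# project_to_student(t_cls) would then be compared with the student's s_cls.
```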
"Table 3: Classification performance on five biomedical and clinical tasks.
Dataset | BioBERT-ft | RoBERTa-PM-ft | RoBERTa-DoKTra
ChemProt | 76.20 | 79.00 | 78.04
GAD | 81.59 | 81.16 | 81.38
DDI | 80.05 | 81.39 | 82.25
i2b2 | 74.14 | 78.83 | 75.65
HoC | 84.21 | 86.11 | 85.34", "To compare our approach with the in-domain pre-training method, we used RoBERTa-PM-large (Lewis et al., 2020), which is a RoBERTa-large model additionally pre-trained with a large biomedical and clinical corpus consisting of 14 billion words.", "We fine-tuned the RoBERTa-PM for each task.", "Table 3 shows the classification performance of BioBERT, RoBERTa-PM, and our approach on five biomedical and clinical tasks.", "As mentioned before, our best model outperformed the BioBERT (teacher) model on four of the five tasks.", "Notably, our approach even outperformed RoBERTa-PM on two tasks and demonstrated comparable performances on the others.", "These results are remarkable since our approach spent only a few hours on each task, whereas RoBERTa-PM may require several days and billions of words to be pre-trained.", "Note that RoBERTa-PM has an advantage in the i2b2 task since its pre-training data contains MIMIC-III clinical text data, while our teacher model was pretrained with only biomedical texts.", "In other words, this implies our approach has room for further improvement when a better in-domain model is set as the teacher.", "We also compared our framework with task-adaptive pre-training (TAPT) (Gururangan et al., 2020), an additional pre-training method for PLMs.", "The TAPT approach additionally pre-trains an existing PLM before fine-tuning it with the training samples of each task.", "As both TAPT and DoKTra only utilize the task-specific training data, they can be fairly compared in terms of performance and training resources.", "Table 4: Comparison with task-adaptive pre-training (TAPT).
Dataset | RoBERTa-ft | TAPT | TAPT (3xGPU) | RoBERTa-DoKTra
ChemProt | 75.75 | 73.55 | 75.40 | 78.04
GAD | 80.17 | 81.85 | 81.41 | 84.47
DDI | 80.71 | 73.61 | 78.00 | 82.25
i2b2 | 72.51 | 70.95 | 72.42 | 75.65
HoC | 83.98 | 86.39 | 86.45 | 85.34", "For TAPT, we additionally pre-trained the RoBERTa-large model with each pre-processed downstream task's training data.", "We followed the hyperparameters used in TAPT except for batch size and the maximum sequence length, because we used the same computing resource as DoKTra for a fair comparison.", "The possible maximum pre-training batch size with the given computing resource for the RoBERTa-large model was 36.", "Since the results of the RoBERTa-large model with a small batch size were unstable, we also performed a distributed training with three GPUs, resulting in a batch size of 108.", "The comparison results are shown in Table 4.", "Note that the performance on GAD in Table 4 was evaluated with the first split of a 10-fold cross-validation, while the main result in Table 3 was evaluated with all splits.", "As revealed in the results, even though TAPT showed improved results in the original study with a Google Cloud TPU, it was unstable with the small batch size and sequence length; the performances were even degraded in the general GPU environment.", "Although the TAPT performance improved when the batch size was increased through distributed training, the improvement was inadequate.", "This may be because the batch size was still smaller than that in the TPU environment.", "Moreover, DoKTra required less training time than TAPT, while both methods are task-specific.", "For instance, TAPT required a total of seven hours of training, while DoKTra was completed in only 1.1 hours for the ChemProt task.", "This is because DoKTra leverages the knowledge of an existing in-domain PLM, thus requiring only a few fine-tuning and distillation steps.", "The comparison of TAPT and DoKTra using more advanced computing resources is left as future work.",
"Because the entropy regularizer in calibrated teacher training issues penalties based on the output probability distribution, it is difficult to intuitively understand how it positively affects activation boundary distillation, which uses the hidden representation.", "Thus, we ablate the calibrated teacher training steps in our framework and compare the final performances and loss values.", "Irrespective of the use of the alternative version (Equation 5) during training, the extent to which the activation pattern is distilled can be intuitively observed by calculating the original activation transfer loss (Equation 4).", "The value of Equation 4 directly refers to the number of neurons activated differently than in the teacher model.", "For instance, if $\mathcal{L}_{AT}$ = 500 for an ALBERT model (H=2,048), it indicates that 500 of the 2,048 elements in the hidden representation vector exhibited signs different from those of the teacher.", "Table 5 shows the experimental results on four relation extraction tasks with ALBERT students.", "As shown in Table 5, the application of the calibrated teacher training reduces $\mathcal{L}_{AT}$ and improves the classification performance.", "In other words, calibration in the teacher training clearly aids the supervision of the teacher in activation boundary distillation, even though the output probability information is not directly used in distillation.", "To observe how each component contributed to the proposed framework, we conducted an ablation study.", "We ablated two major components: calibrated teacher training (CTT) and activation boundary distillation (ABD).", "Table 6: Ablation study on the ChemProt dataset.
Models | F1 (%) | Improvement
BioBERT-ft (teacher) | 76.20 ± 0.65 |
ALBERT-ft (student) | 73.67 ± 0.98 |
+KLD | 76.40 ± 0.36 | +2.73
+CTT+KLD | 76.87 ± 0.49 | +3.20
+ABD | 76.20 ± 0.24 | +2.53
+CTT+ABD (proposed method) | 77.42 ± 0.04 | +3.75
ALBERT-ft+CPL | 74.04 ± 0.43 | +0.37", "The experiments were performed on the ChemProt dataset, using the ALBERT-xlarge model as the student architecture.", "To ablate the calibrated teacher training, we trained the teacher model using only $\mathcal{L}_{CE}$.", "We compared the activation boundary distillation with KL-divergence based distillation (KLD), which penalizes the difference between the output probability distributions of the two models.", "Table 6 presents the results of the ablation study.", "As we proposed, applying both calibrated teacher training and activation boundary distillation resulted in a superior performance.", "In particular, the calibrated teacher model was able to distil its activation boundary to the student model much more effectively, thus improving the performance of the student model, as we hypothesized in the previous section.", "Applying KL-divergence-based distillation yielded positive results in terms of classification performance.", "Notably, calibrated teacher training also improved the KL-divergence-based distillation, because it enabled the distillation of a considerably more reliable output probability, as reported in Menon et al. (2021).",
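The Equation-4 diagnostic described above amounts to counting sign disagreements between the two embeddings, e.g. (a sketch; the tensor names are illustrative):

```python
def activation_disagreement(t_cls, s_cls):
    """Equation 4 evaluated directly: how many hidden units are activated
    differently in the student than in the teacher (averaged over a batch)."""
    return ((t_cls > 0) != (s_cls > 0)).float().sum(dim=-1).mean()

# For ALBERT-xlarge (H=2048), a value of 500 means 500 of the 2,048
# [CLS]-embedding units disagree in activation sign with the teacher.
```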
(2021).", "Note that applying the confidence regularizer to the fine-tuning of the student model only slightly improved the performance, suggesting that the observed gains in our model are only partially because of the calibration regularizer.", "To verify the general applicability of our approach, we conducted experiments on financial sentiment classification tasks.", "Financial sentiment analysis 1665 Models #Params FPB FTS Avg.", "aims to classify the polarity of financial-related text, such as financial news or tweets.", "Since financial text usually contains specialized language, several pre-training approaches have emerged (Araci, 2019; Yang et al., 2020; Liu et al., 2021) to fill the gap between the general and financial domains.", "In this study, we selected the FinBERT (Yang et al., 2020) model as a teacher in the DoKTra framework and evaluated our approach on two tasks, the Financial PhraseBank (FPB) and FinTextSen (FTS).", "The Financial PhraseBank (FPB) (Malo et al., 2014) contains sentences from financial news annotated for positive, neutral, and negative sentiments.", "The FinTextSen (FTS) (Cor-tis et al., 2017) consists of financial tweets from Twitter and StockTwits with real-valued sentiment scores.", "To transform it into a classification task, we clustered the sentiment score into a 3-class label, following Daudert et al. (2018).", "The Financial PhraseBank dataset contains 4,846 sentences, and we set 10% of the examples as the test set while preserving the label distribution.", "The FinTextSen originally includes 2,488 tweets, but only 1,700 tweets are available now.", "We set 10% of the entire data as the test set, which is similar to FPB.", "As shown in Table 7, ALBERT-DoKTRa and RoBERTa-DoKTRa outperformed the FinBERT-ft teacher on financial downstream tasks.", "Note that we used the RoBERTa-base model in this section because of the training stability.", "This result suggests that DoKTra can be applied regardless of the domain and can be an efficient alternative to in-domain pre-training.", "In this study, we proposed the DoKTra framework as a domain knowledge transfer method for PLMs.", "The experimental results from the biomedical, clinical, and financial domain downstream tasks demonstrated that our proposed framework could transfer domain-specific knowledge into a PLM, while preserving its own expressive advantages without any further pre-training with additional in-domain data.", "We employed advanced models as the student model and verified the future applicability of our framework to emerging language models by achieving even higher performances than the teacher model.", "However, the limitations of our approach are that it is task-specific and was evaluated only in classification tasks.", "Our future studies would focus on developing the proposed framework as a task-agnostic method and evaluating it on various tasks.", "This research was supported by the Bio-Synergy Research Project (NRF-2016M3A9C4939665) of the Ministry of Science and ICT through the National Research Foundation of Korea (NRF) and the NRF grant funded by the Korean government (Ministry of Science and ICT) (NRF-2018M3C7A1054932), and partly supported by Institute for Information and communications Technology Promotion (IITP) grant funded by the Korea government (MSIP) [No. 2019-0-01842, Artificial Intelligence Graduate School Program (GIST)]." ]
[ "abstain", "abstain", "abstain", "abstain", "objective", "method", "method", "method", "objective", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "method", "method", "abstain", "result", "result", "objective", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "result", "method", "objective", "other" ]
[ "Understanding human preferences, along with cultural and social nuances, lives at the heart of natural language understanding.", "Concretely, we present a new task and corpus for learning alignments between machine and human preferences.", "Our newly introduced problem is concerned with predicting the preferable options from two sentences describing scenarios that may involve social and cultural situations.", "Our problem is framed as a natural language inference task with crowd-sourced preference votes by human players, obtained from a gamified voting platform.", "We benchmark several state-of-the-art neural models, along with BERT and friends on this task.", "Our experimental results show that current state-of-the-art NLP models still leave much room for improvement.", "The ability to understanding social nuances and human preferences is central to natural language understanding.", "This also enables better alignment of machine learning models with human values, eventually leading to better human-compatible AI applications (Peterson et al., 2019; Leslie, 2019; Rosenfeld and Kraus, 2018; Amodei et al., 2016; Russell and Norvig, 2016).", "There exist a plethora of work on studying optimal decision-making under a variety of situations (Edwards, 1954; Bottom, 2004; Plonsky et al., 2019; Peterson et al., 2019).", "On the other hand, cognitive models of human decision-making are usually based on small datasets (Peterson et al., 2019).", "Furthermore, these studies tend to only consider individuals in isolation.", "In contrast, we investigate the influence of cultural and social nuances for choice prediction at scale.", "In other words, we study the social preference as a whole, First two authors contributed equally not those of an individual in isolation, which is arguably more challenging and largely unexplored.", "In this work, we propose a new benchmark dataset with a large number of 200k data points, M achine A lignment with C ultural values and S ocial preferences (MACS), for learning AI alignment with humans.", "Our dataset is based on a popular gamified voting platform, namely the game of would you rather?'.", "In this game, participants are given two choices and vote for the more preferable option.", "Examples from our dataset can be found at Table", "1. 
"To the best of our knowledge, our work is the first to incorporate voting-based language games as a language understanding benchmark.", "In many ways, our benchmark dataset is reminiscent of the natural language inference problem (MacCartney, 2009; Bowman et al., 2015), social commonsense reasoning (Sap et al., 2019) or other natural language understanding problems (Wang et al., 2018; Zellers et al., 2018).", "To this end, our problem is framed in a way that enables convenient benchmarking of existing state-of-the-art NLU models such as BERT (Devlin et al., 2018) or RoBERTa (Liu et al., 2019).", "That said, unlike many NLU datasets that rely on few annotators, the key differentiator lies in the fact that our dataset aggregates across hundreds or thousands of annotators and beyond for each data point.", "Options are also crowd-sourced and gamified, which may encourage less monotonic samples, i.e., encouraging players to come up with questions that are difficult for other players.", "Additionally, our dataset comprises country-level statistics, which enable us to perform cultural-level prediction of preferences.", "We propose a new NLU benchmark based on an online gamified voting platform.", "We propose several ways to formulate the problem, including absolute and relative preference prediction.", "We also introduce a cultural-level NLU problem formulation.", "We investigate state-of-the-art NLU models such as BERT (Devlin et al., 2018), RoBERTa (Liu et al., 2019) and XLNet (Yang et al., 2019) on this dataset.", "Empirical results suggest that our benchmark is reasonably difficult and there is huge room for improvement.", "We look to crowdsourcing platforms to construct our dataset.", "Our dataset is constructed from https://www.rrrather.com/, an online platform for gamified voting.", "The platform is modeled after the famous internet game 'would you rather?', which pits two supposedly comparable choices together.", "Whenever a player votes, their vote is recorded in the system.", "Players generally vote to see how well their vote aligns with the majority and consensus with everyone else.", "We provide samples of the problem space in Table 1.", "We crawled data from the said platform and filtered away posts with less than 500 total votes.", "In total, we amassed 194,525 data points, which we split into train/dev/test splits in an 80/10/10 fashion.", "Dataset statistics are provided in Table 2.", "Table 2: Dataset statistics of the MACS dataset.
 | Train | Dev | Test | Total
Data | 155,621 | 19,452 | 19,452 | 194,525
ℓ_max | 678 | 351 | 298 |
ℓ_mean | 8 | 8 | 8 |
ℓ_min | 1 | 2 | 2 |",
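A minimal sketch of this filtering and splitting step (the post schema with per-option vote counts and the fixed seed are illustrative assumptions):

```python
import random

def build_splits(posts, min_votes=500, seed=0):
    """Drop posts with fewer than min_votes total votes, then split 80/10/10."""
    kept = [p for p in posts if p["votes_a"] + p["votes_b"] >= min_votes]
    random.Random(seed).shuffle(kept)
    n = len(kept)
    train = kept[: int(0.8 * n)]
    dev = kept[int(0.8 * n): int(0.9 * n)]
    test = kept[int(0.9 * n):]
    return train, dev, test
```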
"This section outlines the benefits of our proposed dataset as a language understanding benchmark.", "Footnote 1: The authors have obtained written permission from the owner of the platform to crawl and use their data for academic research.", "The questions, answers or discussions do not represent opinions of the authors in this paper.", "(1) Understanding before Interaction.", "In our dataset and problem formulation, complex understanding of each option text is often required first, before modeling the relative preference between two options.", "This is unlike NLI or question-answering based NLU benchmarks, where matching signals can be used to predict the outcome easily.", "In our dataset and task, word overlap can hardly be used to determine the outcome.", "(2) A good coverage of social preferences.", "Upon closer inspection of our proposed benchmark, we find there is a good representation of samples which cover social and cultural themes.", "Social preferences (such as the preference of brands) are captured in samples such as example (6).", "(3) Completely natural.", "Our MACS dataset completely exists in the wild naturally.", "This is unlike datasets that have to be annotated by mechanical turkers or paid raters.", "In general, there is a lack of incentives for turkers to provide high-quality ratings, which often results in problems such as annotation artifacts.", "Unlike these datasets, the choices in MACS are often created by other human players.", "Hence, in the spirit of competitiveness, this means that the data is meant to be deliberately challenging.", "Moreover, there are at least 500 annotators for each sample, which makes the assigned label less susceptible to noisy raters.",
"Given Q (the prompt), two sentences S1 and S2, and V(·), which computes the absolute votes for each option, we explore different sub-tasks (or variant problem formulations).", "Predicting Preference: This task is primarily concerned with predicting whether V(S1) > V(S2) or otherwise.", "Intuitively, if a model is able to solve this task (perform equivalently to a human player), we consider it to have some fundamental understanding of human values and social preferences.", "We frame this task in two ways.", "The first is a straightforward binary classification problem, i.e., V(S1) > V(S2).", "The second task is a three-way classification problem with a third class predicting if the difference |V(S1) - V(S2)| is less than 5% of the total votes.", "In short, this means that the two options are almost in a draw.", "Table 3: Experimental results on predicting preference (standard and cultural) with BERT (Devlin et al., 2018), XLNet (Yang et al., 2019) and RoBERTa (Liu et al., 2019) on the MACS dataset.
Model | Standard Binary (Dev / Test) | Standard Three-way (Dev / Test) | Cultural Binary (Dev / Test) | Cultural Three-way (Dev / Test)
BERT | 61.02 / 60.38 | 56.71 / 55.85 | 62.42 / 62.88 | 57.42 / 58.21
XLNet | 56.12 / 56.84 | 55.72 / 56.34 | 51.77 / 51.42 | 57.08 / 57.39
RoBERTa | 64.75 / 64.15 | 61.04 / 61.19 | 64.39 / 64.71 | 59.28 / 61.22", "Predicting Cultural Preferences: We consider a variant of the preference prediction problem.", "Our MACS dataset has culture-level preference votes, which are the voting scores with respect to a particular cultural demographic.", "We extend the same setting as Task 1 by requiring the model to produce culture-level predictions.", "In order to do this, we prepend the input sentence with a culture embedding token.", "For example, Input = [Culture] + [Choice A] + [Sep] + [Choice B].", "The task is identical, predicting the greater of Choice A or Choice B, with respect to the cultural ground truth.", "The dataset is augmented at the culture level and the same example is duplicated for each culture, e.g., we duplicate the sample for 'USA' and 'Europe'.", "We consider only culture-level votes with a threshold above 25 votes in the dataset for the train/dev/test sets.", "Predicting Relative Preference: The third variant is a fine-grained regression task where we want to identify if our model is able to learn the extent of preference given by human players.", "This problem is framed as a regression problem that is normalized to [0, 1] with respect to the total number of votes in the data point.", "3 Experiments", "This section outlines our experimental setup and results.", "We implement and run several models on this dataset.", "(1) BERT (Devlin et al., 2018), Deep Bidirectional Transformers, is the state-of-the-art pretrained transformer model for a wide range of NLP tasks.", "(2) XLNet (Yang et al., 2019) is a large pretrained model based on Transformer-XL.", "(3) RoBERTa (Liu et al., 2019) is a robustly optimized improvement over the vanilla BERT model.", "All models were run using the fine-tuning methodology with the standard PyTorch Huggingface repository.", "We train (fine-tune) all models for 3 epochs using the default hyperparameters.", "Metrics: The evaluation metric for classification tasks is the standard accuracy score.", "For regression tasks, we use the correlation, Pearson, and Spearman metrics.", "Table 3 reports our results on binary and three-way classification on the MACS dataset.", "In general, we find that RoBERTa performs the best.", "However, in most cases, the performance of all three models still leaves a lot to be desired.",
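The label construction and input format described above could be sketched as follows (field names and the exact culture-token format are assumptions for illustration):

```python
def make_example(choice_a, choice_b, votes_a, votes_b,
                 culture=None, three_way=False):
    """Build one (input_text, label) pair for the preference tasks."""
    prefix = f"[{culture}] " if culture else ""   # culture token for the cultural task
    text = f"{prefix}{choice_a} [SEP] {choice_b}"
    total = votes_a + votes_b
    if three_way and abs(votes_a - votes_b) < 0.05 * total:
        return text, 2                            # near-draw class (< 5% margin)
    return text, 0 if votes_a > votes_b else 1    # 0: A preferred, 1: B preferred

# Usage: make_example("be rich", "be famous", 700, 300, culture="USA")
```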
"An accuracy of 60%+ shows that state-of-the-art models still struggle at this task.", "On the other hand, results on the regression task are similarly lacklustre, and show that models like BERT and RoBERTa are unable to perform well on this task.", "Footnote 2: https://github.com/huggingface/transformers", "Table 4: Experimental results on predicting relative preference on the MACS dataset.
Model | Dev Correlation | Dev Pearson | Dev Spearman | Test Correlation | Test Pearson | Test Spearman
BERT | 0.234 | 0.256 | 0.214 | 0.229 | 0.250 | 0.208
XLNet | 0.225 | 0.243 | 0.206 | 0.228 | 0.250 | 0.206
RoBERTa | 0.258 | 0.279 | 0.236 | 0.256 | 0.278 | 0.235", "On the whole, it is good to note that RoBERTa performs the best out of the three compared models.", "Overall, this encourages further research on cultural and social commonsense reasoning in the current state-of-the-art in natural language understanding.", "All in all, we hope our benchmark serves as a useful tool for understanding the social capabilities of these models.", "Table 5 reports some samples of our model outputs, shedding light on examples on which our model does well and otherwise.", "We observe that the model often gets the answer wrong even when the ground truth is overwhelmingly swayed towards one side.", "On the other hand, we also occasionally observe that the model can get questionable questions such as (4) and (5) correct despite the tight draw between human voters.", "We propose MACS (Machine Alignment with Cultural and Social Preferences), a new benchmark dataset for learning machine alignment with human cultural and social preferences.", "Solving MACS requires social and cultural reasoning and an overall holistic understanding of humanity.", "It is designed to be challenging: state-of-the-art NLP models still struggle at around 60% accuracy.", "In this paper, we are not promoting the use of https://www.rrrather.com/ as a training source, but rather the study of the alignment of machine learning models with the social preferences of a large population.", "Unfortunately, there might be some issues of bias, fairness and representation due to the curation of the training data from the Internet, which might lead models to give prejudiced or stereotyped outputs.", "Evaluating bias, fairness and representation in language models and the training data is an important research area (Nadeem et al., 2020; Huang et al., 2019).", "As for future work, it is important to characterize and intervene on biases when designing such tasks." ]
[ "abstain", "objective", "method", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "abstain", "abstain", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "result", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "other", "abstain", "abstain", "abstain" ]
[ "We present Knowledge Distillation with Meta Learning (MetaDistil), a simple yet effective alternative to traditional knowledge distillation (KD) methods where the teacher model is fixed during training.", "We show the teacher network can learn to better transfer knowledge to the student network (i.e., learning to teach ) with the feedback from the performance of the distilled student network in a meta learning framework.", "Moreover, we introduce a pilot update mechanism to improve the alignment between the inner-learner and meta-learner in meta learning algorithms that focus on an improved inner-learner.", "Experiments on various benchmarks show that MetaDistil can yield significant improvements compared with traditional KD algorithms and is less sensitive to the choice of different student capacity and hy-perparameters, facilitating the use of KD on different tasks and models.", "1 1 Introduction With the prevalence of large neural networks with millions or billions of parameters, model compression is gaining prominence for facilitating efficient, eco-friendly deployment for machine learning applications.", "Among techniques for compression, knowledge distillation (KD) (Hinton et al., 2015b) has shown effectiveness in both Computer Vision and Natural Language Processing tasks (Hinton et al., 2015b; Romero et al., 2015; Zagoruyko & Komodakis, 2017; Tung & Mori, 2019; Peng et al., 2019; Ahn et al., 2019; Park et al., 2019; Passalis & Tefas, 2018; Heo et al., 2019; Kim et al., 2018; Shi et al., 2021; Sanh et al., 2019; Jiao et al., 2019; Wang et al., 2020b).", "Previous works often train a large model as the teacher; then they fix the teacher and train a student model to mimic the Equal contribution.", "However, this paradigm has the following drawbacks: (1) The teacher is unaware of the student's capacity.", "Recent studies in pedagogy suggest student-centered learning, which considers students' characteristics and learning capability, has shown effectiveness improving students' performance (Cornelius-White, 2007; Wright, 2011).", "However, in conventional knowledge distillation, the student passively accepts knowledge from the teacher, without regard for the student model's learning capability and performance.", "Recent works (Park et al., 2021; Shi et al., 2021) introduce student-aware distillation by jointly training the teacher and the student with task-specific objectives.", "However, there is still space for improvement since: (2) The teacher is not optimized for distillation.", "In previous works, the teacher is often trained to optimize its own inference performance.", "However, the teacher is not aware of the need to transfer its knowledge to a student and thus usually does so suboptimally.", "A real-world analogy is that a PhD student may have enough knowledge to solve problems themselves, but requires additional teaching training to qualify as a professor.", "To address these two drawbacks, we propose Knowledge Distillation with Meta Learning (MetaDistil), a new teacher-student distillation framework using meta learning (Finn et al., 2017) to exploit feedback about the student's learning progress to improve the teacher's knowledge transfer ability throughout the distillation process.", "On the basis of previous formulations of bi-level optimization based meta learning (Finn et al., 2017), we propose a new mechanism called pilot update that aligns the learning of the bi-level learners (i.e., the teacher and the student).", "We illustrate the workflow of MetaDistil in Figure", 
"1. The teacher in MetaDistil is trainable, which enables the teacher to adjust to its student network and also improves its 7037 (1) Teaching experiment (2) Quiz & Meta update (3) Knowledge distillation T S' Training Batches S Copy Update T S' Training Batches Quiz Samples LCE <latexit sha1_base64=\"cWzYMh3AeRJkO+vsctAQMH7i188=\">AAACAnicbVBNS8NAEN3Ur1q/op7ES7AInkpSBT0Wi+DBQwXbCm0Im+22XbrZhN2JUELw4l/x4kERr/4Kb/4bN2kO2vpg4e17M8zM8yPOFNj2t1FaWl5ZXSuvVzY2t7Z3zN29jgpjSWibhDyU9z5WlDNB28CA0/tIUhz4nHb9STPzuw9UKhaKO5hG1A3wSLAhIxi05JkH/QDDmGCe3KRekn8YJM2rNPXMql2zc1iLxClIFRVoeeZXfxCSOKACCMdK9Rw7AjfBEhjhNK30Y0UjTCZ4RHuaChxQ5Sb5Cal1rJWBNQylfgKsXP3dkeBAqWng68psRzXvZeJ/Xi+G4YWbMBHFQAWZDRrG3ILQyvKwBkxSAnyqCSaS6V0tMsYSE9CpVXQIzvzJi6RTrzmntfrtWbVxWcRRRofoCJ0gB52jBrpGLdRGBD2iZ/SK3own48V4Nz5mpSWj6NlHf2B8/gAT8Jfd</latexit> T S Training Batches Update Update Forward pass Backward pass: 1st derivatives 2nd derivatives Pilot update L 0 KD <latexit sha1_base64=\"uziwHVw4qqKfwsWyy8k1x7MuDt4=\">AAACA3icbVDLSsNAFL3xWesr6k43wSK6KkkVdFnQhaCLCvYBbQiT6bQdOpmEmYlQQsCNv+LGhSJu/Ql3/o2TNAttPTBw5px7ufceP2JUKtv+NhYWl5ZXVktr5fWNza1tc2e3JcNYYNLEIQtFx0eSMMpJU1HFSCcSBAU+I21/fJn57QciJA35vZpExA3QkNMBxUhpyTP3ewFSI4xYcnucekn+oyq5uUpTz6zYVTuHNU+cglSgQMMzv3r9EMcB4QozJGXXsSPlJkgoihlJy71YkgjhMRqSrqYcBUS6SX5Dah1ppW8NQqEfV1au/u5IUCDlJPB1ZbajnPUy8T+vG6vBhZtQHsWKcDwdNIiZpUIrC8TqU0GwYhNNEBZU72rhERIIKx1bWYfgzJ48T1q1qnNard2dVeq1Io4SHMAhnIAD51CHa2hAEzA8wjO8wpvxZLwY78bHtHTBKHr24A+Mzx+AZ5gF</latexit> L 0 KD <latexit sha1_base64=\"uziwHVw4qqKfwsWyy8k1x7MuDt4=\">AAACA3icbVDLSsNAFL3xWesr6k43wSK6KkkVdFnQhaCLCvYBbQiT6bQdOpmEmYlQQsCNv+LGhSJu/Ql3/o2TNAttPTBw5px7ufceP2JUKtv+NhYWl5ZXVktr5fWNza1tc2e3JcNYYNLEIQtFx0eSMMpJU1HFSCcSBAU+I21/fJn57QciJA35vZpExA3QkNMBxUhpyTP3ewFSI4xYcnucekn+oyq5uUpTz6zYVTuHNU+cglSgQMMzv3r9EMcB4QozJGXXsSPlJkgoihlJy71YkgjhMRqSrqYcBUS6SX5Dah1ppW8NQqEfV1au/u5IUCDlJPB1ZbajnPUy8T+vG6vBhZtQHsWKcDwdNIiZpUIrC8TqU0GwYhNNEBZU72rhERIIKx1bWYfgzJ48T1q1qnNard2dVeq1Io4SHMAhnIAD51CHa2hAEzA8wjO8wpvxZLwY78bHtHTBKHr24A+Mzx+AZ5gF</latexit> LKD <latexit sha1_base64=\"Ok//7Q4UXi0+yvIX1pX7vfyqMZs=\">AAACAnicbVDLSsNAFJ3UV62vqCtxEyyCq5JUQZcFXQi6qGAf0IYwmU7aoZNJmLkRSghu/BU3LhRx61e482+ctFlo64GBM+fcy733+DFnCmz72ygtLa+srpXXKxubW9s75u5eW0WJJLRFIh7Jro8V5UzQFjDgtBtLikOf044/vsz9zgOVikXiHiYxdUM8FCxgBIOWPPOgH2IYEczT28xLpx8G6c1Vlnlm1a7ZU1iLxClIFRVoeuZXfxCRJKQCCMdK9Rw7BjfFEhjhNKv0E0VjTMZ4SHuaChxS5abTEzLrWCsDK4ikfgKsqfq7I8WhUpPQ15X5jmrey8X/vF4CwYWbMhEnQAWZDQoSbkFk5XlYAyYpAT7RBBPJ9K4WGWGJCejUKjoEZ/7kRdKu15zTWv3urNqoF3GU0SE6QifIQeeoga5RE7UQQY/oGb2iN+PJeDHejY9ZackoevbRHxifPxnSl9Q=</latexit> Figure 1: The workflow of MetaDistil.", "teaching skills.", "Motivated by the idea of student-centered learning, we allow the teacher to adjust its output based on the performance of the student model on a quiz set, which is a separate reserved data split from the original training set.", "For each training step, we first copy the student S to S (cid:48) and update S (cid:48) by a common knowledge distillation loss.", "We call this process a teaching experiment.", "In this way, we can obtain an experimental student S (cid:48) that can be quizzed.", "Then, we sample from the quiz set, and calculate the loss of S (cid:48) on these samples.", "We use this loss as a feedback signal to meta-update the teacher by calculating second derivatives and performing gradient descent (Finn et al., 2017).", "Finally, we discard the experimental subject S (cid:48) and use the updated teacher to distill into the student S on the same training batches.", "The use of meta learning allows the teacher model 
to receive feedback from the student in a completely differentiable way.", "We provide a simple and intuitive approach to explicitly optimize the teacher using the student's quiz performance as a proxy.", "To test the effectiveness of MetaDistil, we conduct extensive experiments on text and image classification tasks.", "MetaDistil outperforms knowledge distillation by a large margin, verifying the effectiveness and versatility of our method.", "Also, our method achieves state-of-the-art performance compressing BERT (Devlin et al., 2019) on the GLUE benchmark (Wang et al., 2019) and shows competitive results compressing ResNet (He et al., 2016) and VGG (Simonyan & Zisserman, 2015) on CIFAR-100 (Krizhevsky et al., 2009).", "Additionally, we design experiments to analyze and explain the improvement.", "Ablation studies show the effectiveness of our proposed pilot update and dynamic distillation.", "Also, compared to conventional KD, MetaDistil is more robust to different student capacities and hyperparameters, which is probably because of its ability to adjust the parameters of the teacher model.", "Knowledge Distillation Recently, many attempts have been made to accelerate large neural networks (Xu et al., 2020, 2021b; Zhou et al., 2020, 2021; Xu & McAuley, 2022).", "Knowledge distillation is a prominent method for training compact networks to achieve comparable performance to a deep network.", "Hinton et al. (2015b) first introduced the idea of knowledge distillation to exploit the dark knowledge (i.e., soft label distribution) from a large teacher model as additional supervision for training a smaller student model.", "Since its introduction, several works (Romero et al., 2015; Zagoruyko & Komodakis, 2017; Tung & Mori, 2019; Park et al., 2019; Sun et al., 2019; Jiao et al., 2019) have investigated methods that align different latent representations between the student and teacher models for better knowledge transfer.", "In the context of knowledge distillation, MetaDistil shares some common ideas with the line of work that utilizes a sequence of intermediate teacher models to make the teacher network better adapt to the capacity of the student model throughout the training process, including teacher assistant knowledge distillation (TAKD) (Mirzadeh et al., 2020) and route constraint optimization (RCO) (Jin et al., 2019).", "However, the intermediate teachers are heuristically selected independently of the training process and the evolution of the teacher network is discrete.", "In contrast, MetaDistil employs meta learning to make the teacher model adapt to the current state of the student model and provide a continuously evolving meta-teacher that can better teach the student.", "Concurrently, Park et al. (2021) and Shi et al. (2021) propose to update the teacher model jointly with the student model with task-specific objectives (e.g., cross-entropy loss) during the KD process and add constraints to keep the student and teacher similar to each other.", "Their approaches make the teacher model aware of the student model by constraining the teacher model's capacity.", "However, the teacher models in their methods are still not optimized for knowledge transfer.", "In addition, Zhang et al. 
(2018) introduced deep mutual learning, where multiple models learn collaboratively and teach each other throughout the training process.", "While it is focused on a different setting where different models have approximately the same capacity and are learned from scratch, it also encourages the teacher model to behave similarly to the student model.", "Different from all aforementioned methods, MetaDistil employs meta learning to explicitly optimize the teacher model for better knowledge transfer ability, and leads to improved performance of the resulting student model.", "Meta Learning The core idea of meta learning is learning to learn, which means taking the optimization process of a learning algorithm into consideration when optimizing the learning algorithm itself.", "Meta learning typically involves a bi-level optimization process where the inner-learner provides feedback for optimization of the meta-learner.", "Successful applications of meta learning include learning better initialization (Finn et al., 2017), architecture search (Liu et al., 2019), learning to optimize the learning rate schedule (Baydin et al., 2018), and learning to optimize (Andrychowicz et al., 2016).", "These works typically aim to obtain an optimized meta-learner (i.e., the teacher model in MetaDistil), while the optimization of the inner-learner (i.e., the student model in MetaDistil) is mainly used to provide the learning signal for the meta optimization process.", "This is different from the objective of knowledge distillation, where an optimized student model is the goal.", "Recently, there have been a few works investigating using this bi-level optimization framework to obtain a better inner-learner.", "For example, meta pseudo labels (Pham et al., 2020) use meta learning to optimize a pseudo label generator for better semi-supervised learning; meta back-translation (Pham et al., 2021) meta-trains a back-translation model to better train a machine translation model.", "These methods adopt the same bi-level optimization process as previous works where the goal is to obtain an optimized meta-learner.", "In these approaches, during each iteration, the meta-learner is optimized for the original inner-learner and then applied to the updated inner-learner in the next iteration.", "This leads to a mismatch between the meta-learner and the inner-learner, and is therefore suboptimal for learning a good inner-learner.", "In this paper, we introduce a pilot update mechanism, a simple and general method for this kind of problem, for the inner-learner to mitigate this issue and make the updated meta-learner better adapted to the inner-learner.", "Meta Knowledge Distillation Recently, some works on KD take a meta approach.", "Pan et al. (2020) proposed a framework to train a meta-teacher across domains that can better fit new domains with meta-learning.", "Then, traditional KD is performed to transfer the knowledge from the meta-teacher to the student.", "Liu et al. 
(2020) proposed a self-distillation network which utilizes meta-learning to train a label-generator as a fusion of deep layers in the network, to generate more compatible soft targets for shallow layers.", "Different from the above, MetaDistil is a general knowledge distillation method that exploits meta-learning to allow the teacher to learn to teach dynamically.", "Instead of merely training a meta-teacher, our method uses meta-learning throughout the procedure of knowledge transfer, making the teacher model compatible with the student model for every training example during each training stage.", "An overview of MetaDistil is presented in Figure", "1. MetaDistil includes two major components.", "First, the meta update enables the teacher model to receive the student model's feedback on the distillation process, allowing the teacher model to learn to teach and provide distillation signals that are more suitable for the student model's current capacity.", "The pilot update mechanism ensures a finer-grained match between the student model and the meta-updated teacher model.", "Knowledge distillation algorithms aim to exploit the hidden knowledge from a large teacher network,", "denoted as T, to guide the training of a shallow student network, denoted as S.", "To help transfer the knowledge from the teacher to the student, apart from the original task-specific objective (e.g., cross-entropy loss), a knowledge distillation objective which aligns the behavior of the student and the teacher is included to train the student network.", "Formally, given a labeled dataset D of N samples D = {(x_1, y_1), ..., (x_N, y_N)}, we can write the loss function of the student network as follows: L_S(D; θ_S; θ_T) = (1/N) Σ_{i=1}^{N} [λ L_T(y_i, S(x_i; θ_S)) + (1 − λ) L_KD(T(x_i; θ_T), S(x_i; θ_S))] (1), where λ is a hyper-parameter to control the relative importance of the two terms; θ_T and θ_S are the parameters of the teacher T and student S, respectively.", "L_T refers to the task-specific loss and L_KD refers to the knowledge distillation loss, which measures the similarity of the student and the teacher.", "Some popular similarity measurements include the KL divergence between the output probability distributions, the mean squared error (MSE) between student and teacher logits, the similarity between the student's and the teacher's attention distributions, etc.", "We do not specify the detailed form of the loss function because MetaDistil is a general framework that can be easily applied to various kinds of KD objectives as long as the objective is differentiable with respect to the teacher parameters.", "In the experiments of this paper, we use the mean squared error between the hidden states of the teacher and the student for both our method and the KD baseline, since a recent study by Kim et al. 
(2021) finds that it is more stable and slightly outperforms KL divergence.", "In meta learning algorithms that involve a bi-level optimization problem (Finn et al., 2017), there exists an inner-learner f_i and a meta-learner f_m.", "The inner-learner is trained to accomplish a task T or a distribution of tasks with help from the meta-learner.", "The training process of f_i on T with the help of f_m is typically called the inner-loop, and we can denote f′_i(f_m) as the updated inner-learner after the inner-loop.", "We can express f′_i as a function of f_m because learning f_i depends on f_m.", "In return, the meta-learner is optimized with a meta objective, which is generally the maximization of the expected performance of the inner-learner after the inner-loop, i.e., f′_i(f_m).", "This learning process is called a meta-loop and is often accomplished by gradient descent with derivatives of L(f′_i(f_m)), the loss of the updated inner-learner on some held-out support set (i.e., the quiz set in our paper).", "3.2.1 Pilot Update In the original formulation of meta learning (Finn et al., 2017), the purpose is to learn a good meta-learner f_m that can generalize to different inner-learners f_i for different tasks.", "In their approach, the meta-learner is optimized for the original inner-learner at the beginning of each iteration and the current batch of training data.", "The updated meta-learner is then applied to the updated inner-learner and a different batch of data in the next iteration.", "This behavior is reasonable if the purpose is to optimize the meta-learner.", "However, in MetaDistil, we only care about the performance of the only inner-learner, i.e., the student.", "In this case, this behavior leads to a mismatch between the meta-learner and the inner-learner, and is therefore suboptimal for learning a good inner-learner.", "Therefore, we need a way to align and synchronize the learning of the meta- and inner-learner, in order to allow an update step of the meta-learner to have an instant effect on the inner-learner.", "This instant reflection prevents the meta-learner from catastrophic forgetting (McCloskey & Cohen, 1989).", "To achieve this, we design a pilot update mechanism.", "For a batch of training data x, we first make a temporary copy of the inner-learner f_i and update both the copy f′_i and the meta-learner f_m on x.", "Then, we discard f′_i and update f_i again with the updated f_m on the same data x.", "This mechanism applies the impact of the data x to both f_m and f_i at the same time, thus aligning the training process.", "Pilot update is a general technique that can potentially be applied to any meta learning application that optimizes the inner-learner's performance.", "We will describe how we apply this mechanism to MetaDistil shortly and empirically verify the effectiveness of pilot update in Section 4.2.", "In MetaDistil, we would like to optimize the teacher model, which is fixed in traditional KD frameworks.", "Different from previous deep mutual learning (Zhang et al., 2018) methods that switch the role between the student and teacher [Algorithm 1: Knowledge Distillation with Meta Learning (MetaDistil). Require: student S, teacher T, train set D, quiz set Q; α, μ: learning rates for the student and the teacher. 1: while not done do; 2: sample a batch of training data x ∼ D; 3: copy the student parameters θ_S to θ′_S; 4: update θ′_S with x and T: θ′_S ← θ′_S − α ∇_{θ′_S} L_S(x; θ′_S; θ_T); 5: sample a batch of quiz data q ∼ Q; 6: update θ_T with q and θ′_S: θ_T ← θ_T − μ ∇_{θ_T} L_T(q, θ′_S(θ_T)); 7: update the original θ_S with x and the updated T: θ_S ← θ_S − α ∇_{θ_S} L_S(x; θ_S; θ_T); 8: end while]
network and train the original teacher model with soft labels generated by the student model, or recent works (Shi et al., 2021; Park et al., 2021) that update the teacher model with a task-specific loss during the KD process, MetaDistil explicitly optimizes the teacher model in a learning to teach fashion, so that it can better transfer its knowledge to the student model.", "Concretely, the optimization objective of the teacher model in the MetaDistil framework is the performance of the student model after distilling from the teacher model.", "This learning to teach paradigm naturally fits the bi-level optimization framework in the meta learning literature.", "In the MetaDistil framework, the student network S is the inner-learner and the teacher network T is the meta-learner.", "For each training step, we first copy the student model parameters θ_S to an experimental student θ′_S.", "Then, given a batch of training examples x and the learning rate α, the experimental student is updated in the same way as in conventional KD algorithms: θ′_S(θ_T) = θ_S − α ∇_{θ_S} L_S(x; θ_S; θ_T) (2).", "To simplify notation, we will consider one gradient update for the rest of this section, but using multiple gradient updates is a straightforward extension.", "We observe that the updated experimental student parameters θ′_S, as well as the student quiz loss l_q = L_T(q, θ′_S(θ_T)) on a batch of quiz samples q sampled from a held-out quiz set Q, is a function of the teacher parameters θ_T.", "Therefore, we can optimize l_q with respect to θ_T with a learning rate μ: θ_T ← θ_T − μ ∇_{θ_T} L_T(q, θ′_S(θ_T)) (3). We evaluate the performance of the experimental student on a separate quiz set to prevent overfitting the validation set, which is preserved for model selection.", "Note that the student is never trained on the quiz set and the teacher only performs meta-updates on the quiz set instead of fitting it.", "We do not use a dynamic quiz set strategy because otherwise the student would have been trained on the quiz set and the loss would not be informative.", "After meta-updating the teacher model, we then update the real student model in the same way as described in Equation", "2. Intuitively, optimizing the teacher network T with Equation 3 is maximizing the expected performance of the student network after being taught by the teacher with the KD objective in the inner-loop.", "This meta-objective allows the teacher model to adjust its parameters to better transfer its knowledge to the student model.", "We apply the pilot update strategy described in Section 3.2.1 to better align the learning of the teacher and student, as shown in Algorithm", "1. 
4 Experiments 4.1 Experimental Setup We evaluate MetaDistil on two commonly used classification benchmarks for knowledge distillation in both Natural Language Processing and Computer Vision (see Appendix A).", "Settings For NLP, we evaluate our proposed approach on the GLUE benchmark (Wang et al., 2019).", "Specifically, we test on MRPC (Dolan & Brockett, 2005), QQP and STS-B (Conneau & Kiela, 2018) for Paraphrase Similarity Matching; SST-2 (Socher et al., 2013) for Sentiment Classification; MNLI (Williams et al., 2018), QNLI (Rajpurkar et al., 2016) and RTE (Wang et al., 2019) for Natural Language Inference; and CoLA (Warstadt et al., 2019) for Linguistic Acceptability.", "Following previous studies (Sun et al., 2019; Jiao et al., 2019; Xu et al., 2020), our goal is to distill BERT-Base (Devlin et al., 2019) into a 6-layer BERT with a hidden size of 768.", "We use the MSE loss between model logits as the distillation objective.", "The reported results are in the same format as on the GLUE leaderboard.", "For MNLI,", "we report the results on MNLI-m and MNLI-mm, respectively.", "For MRPC and QQP, we report both F1 and accuracy.", "For STS-B, we report Pearson and Spearman correlation.", "The metric for CoLA is Matthews correlation.", "The other tasks use accuracy as the metric.", "Following previous works (Sun et al., 2019; Turc et al., 2019; Xu et al., 2020), we evaluate MetaDistil in a task-specific setting where the teacher model is fine-tuned on a downstream task and the student model is trained on the task with the KD loss.", "We do not choose the pretraining distillation setting since it requires significant computational resources.", "We implement MetaDistil based on Hugging Face Transformers (Wolf et al., 2020).", "Baselines For comparison, we report the results of vanilla KD and patient knowledge distillation (Sun et al., 2019).", "We also include the results of progressive module replacing (Xu et al., 2020), a state-of-the-art task-specific compression method for BERT which, like knowledge distillation, also uses a larger teacher model to improve smaller ones.", "In addition, according to Turc et al. 
(2019), the reported performance of current task-specific BERT compression methods is underestimated because the student model is not appropriately initialized.", "To ensure a fair comparison, we re-run the task-specific baselines with student models initialized by a pretrained 6-layer BERT model and report our results in addition to the official numbers in the original papers.", "We also compare against deep mutual learning (DML) (Zhang et al., 2018), teacher assistant knowledge distillation (TAKD) (Mirzadeh et al., 2020), route constraint optimization (RCO) (Jin et al., 2019), proximal knowledge teaching (ProKT) (Shi et al., 2021), and student-friendly teacher network (SFTN) (Park et al., 2021), where the teacher network is not fixed.", "For reference, we also present results of pretraining-distilled models including DistilBERT (Sanh et al., 2019), TinyBERT (Jiao et al., 2019), and MiniLM v1 and v2 (Wang et al., 2020b,a).", "Note that among these baselines, PKD (Sun et al., 2019) and Theseus (Xu et al., 2020) exploit intermediate features, while TinyBERT and the MiniLM family use both intermediate and Transformer-specific features.", "In contrast, MetaDistil uses none of these but the vanilla KD loss (Equation 1).", "Training Details For training hyperparameters, we fix the maximum sequence length to 128 and the temperature to 2 for all tasks.", "For our method and all baselines (except those with officially reported numbers), we perform a grid search over the student learning rate from {1e-5, 2e-5, 3e-5}, the teacher learning rate from {2e-6, 5e-6, 1e-5}, the batch size from {32, 64}, and the KD loss weight λ from {0.4, 0.5, 0.6}.", "We randomly split the original training set into a new training set and the quiz set at a 9:1 ratio.", "For RCO, we select four unconverged teacher checkpoints as the intermediate training targets.", "For TAKD, we use KD to train a teacher assistant model with 10 Transformer layers.", "We report the experimental results on both the development set and test set of the eight GLUE tasks (Wang et al., 2019) in Table", "1. 
MetaDistil achieves state-of-the-art performance under the task-specific setting and outperforms all KD baselines.", "Notably, without using any intermediate or model-specific features in the loss function, MetaDistil outperforms methods with carefully designed features, e.g., PKD and TinyBERT (without data augmentation).", "Compared with other methods with a trainable teacher (Zhang et al., 2018; Mirzadeh et al., 2020; Jin et al., 2019; Shi et al., 2021), our method still demonstrates superior performance.", "As we analyze, with the help of meta learning, MetaDistil is able to directly optimize the teacher's teaching ability, thus yielding a further improvement in terms of student accuracy.", "Also, we observe a performance drop when replacing the pilot update with a normal update.", "This ablation study verifies the effectiveness of our proposed pilot update mechanism.", "Moreover, MetaDistil achieves very competitive results on image classification as well, as described in Section A.2.", "We investigate the effect of the meta-update at each iteration.", "We inspect (1) the validation loss of S′ after the teaching experiment and that of S after the real distillation update, and (2) the KD loss, which describes the discrepancy between student and teacher, before and after the teacher update.", "We find that for 87% of updates, the student model's validation loss after the real update (Line 7 in Algorithm 1) is smaller than that after the teaching experiment (Line 4 in Algorithm 1), which would be the update to the student S in the variant without pilot update.", "This confirms the effectiveness of the pilot update mechanism in better matching the student and teacher models.", "Moreover, we find that in 91% of the first half of the updates, the teacher becomes more similar (in terms of logit distributions) to the student after the meta-update, which indicates that the teacher is learning to adapt to a low-performance student (like an elementary school teacher).", "However, in the second half of MetaDistil, this percentage drops to 63%.", "We suspect this is because in the later training stages, the teacher needs to actively evolve itself beyond the student to guide the student towards further improvement (like a university professor).", "Finally, we try to apply a meta-learned teacher to a conventional static distillation and also to an unfamiliar student.", "We describe the results in detail in Section A.3.", "A motivation of MetaDistil is to enable the teacher to dynamically adjust its knowledge transfer in an optimal way.", "Similar to Adam (Kingma & Ba, 2015) vs. SGD (Sinha & Griscik, 1971; Kiefer et al., 1952) for optimization, with this ability to adjust dynamically, it is natural to expect MetaDistil to be less sensitive and more robust to changes in the settings.", "Here, we evaluate the performance of MetaDistil with students of various capabilities and with different hyperparameters.", "Student Capability To investigate the performance of MetaDistil under different student capacities, we experiment with distilling BERT-Base into BERT-6L, Medium, Small, Mini and Tiny (Turc et al., 2019) with conventional KD and MetaDistil.", "We plot the performance against the student's parameter count in Figure", "2. Additionally, we show results for different compression ratios in Appendix B. 
Loss Weight In KD, tuning the loss weight is nontrivial and often requires hyperparameter search.", "To test the robustness of MetaDistil under different loss weights, we run experiments with different λ (Equation 1).", "As shown in Figure 3, MetaDistil consistently outperforms conventional KD and is less sensitive to different λ.", "Temperature Temperature is a re-scaling trick introduced in Hinton et al. (2015b).", "We try different temperatures and illustrate the performance of KD and MetaDistil in Figure 4.", "MetaDistil shows better performance and robustness compared to KD.", "Like all meta learning algorithms, MetaDistil inevitably requires two rounds of updates involving both first- and second-order derivatives.", "Thus, MetaDistil requires more computational time and memory than a normal KD method, which can be a limitation of our method.", "We compare the [Table 2: Training time (best): PKD (2019) 13 min.; ProKT (2021) 25 min.; MetaDistil (ours) 31 min.]", "computational overheads of MetaDistil with other methods in Table", "2. Although our approach takes more time to achieve its own peak performance, it can match the performance of PKD (Sun et al., 2019) with a similar time cost.", "The memory use of our method is higher than PKD and ProKT (Shi et al., 2021).", "However, this one-off investment can lead to a better student model for inference, and can thus be worthwhile.", "In this paper, we present MetaDistil, a knowledge distillation algorithm powered by meta learning that explicitly optimizes the teacher network to better transfer its knowledge to the student network.", "The extensive experiments verify the effectiveness and robustness of MetaDistil.", "MetaDistil focuses on improving the performance of knowledge distillation and does not introduce extra ethical concerns compared to vanilla KD methods.", "Nevertheless, we would like to point out that, as suggested by Hooker et al. (2020), model compression may lead to biases.", "However, this is not a problem specific to our method but a common risk in model compression, which needs to be addressed in the future.", "We would like to thank the anonymous reviewers and the area chair for their insightful comments.", "This project is partly supported by NSF Award #1750063." ]
[ "method", "result", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "objective", "objective", "abstain", "result", "method", "method", "method", "abstain", "method", "method", "result", "result", "method", "objective", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "other" ]
[ "Several datasets have recently been constructed to expose brittleness in models trained on existing benchmarks.", "While model performance on these challenge datasets is signifi-cantly lower compared to the original benchmark, it is unclear what particular weaknesses they reveal.", "For example, a challenge dataset may be difficult because it targets phenomena that current models cannot capture, or because it simply exploits blind spots in a model's specific training set.", "We introduce inoculation by fine-tuning , a new analysis method for studying challenge datasets by exposing models (the metaphorical patient) to a small amount of data from the challenge dataset (a metaphorical pathogen) and assessing how well they can adapt.", "We apply our method to analyze the NLI stress tests (Naik et al., 2018) and the Adversarial SQuAD dataset (Jia and Liang, 2017).", "We show that after slight exposure, some of these datasets are no longer challenging, while others remain difficult.", "Our results indicate that failures on challenge datasets may lead to very different conclusions about models, training datasets, and the challenge datasets themselves.", "NLP research progresses through the construction of dataset-benchmarks and the development of systems whose performance on them can be fairly compared.", "A recent pattern involves challenges to benchmarks: 1 manipulations to input data that result in severe degradation of system performance, but not human performance.", "These challenges have been used as evidence that current systems are brittle (Belinkov and Bisk, 2018; Mudrakarta et al., 2018; Zhao et al., 2018; Glockner et al., 2018; Ebrahimi et al., 2018; Ribeiro et al., 2018, 1 Often referred to as adversarial datasets or attacks.", "inter alia ).", "For instance, Naik et al. 
(2018) generated natural language inference challenge data by applying simple textual transformations to existing examples from MultiNLI (Williams et al., 2018) and SNLI (Bowman et al., 2015).", "Similarly, Jia and Liang (2017) built an adversarial evaluation dataset for reading comprehension based on SQuAD (Rajpurkar et al., 2016).", "What should we conclude when a system fails on a challenge dataset?", "In some cases, a challenge might exploit blind spots in the design of the original dataset ( dataset weakness ).", "In others, the challenge might expose an inherent inability of a particular model family to handle certain natural language phenomena ( model weakness ).", "These are, of course, not mutually exclusive.", "We introduce inoculation by fine-tuning, a new method for analyzing the effects of challenge datasets (Figure 1).", "Given a model trained on the original dataset, we expose it to a small number of examples from the challenge dataset, allowing learning to continue.", "To the extent that the weakness lies with the original dataset, the inoculated model will perform well on both the original and challenge held-out data (Outcome 1 in Figure 1).", "If the weakness lies with the model, then inoculation will prove ineffective and the model's performance will remain unchanged (Outcome 2).", "Inoculation can also decrease a model's performance on the original dataset (Outcome 3).", "This case is not as clear as the first two, and could result from systematic differences between the original and challenge datasets, due to, e.g., predictive artifacts in either dataset (Gururangan et al., 2018).", "We apply our method to analyze six challenge datasets: the word overlap , negation , spelling errors , length mismatch and numerical reasoning NLI challenge datasets proposed by Naik et al. 
(2018), as well as the Adversarial SQuAD reading comprehension challenge dataset (Jia and Liang, 2017).", "We analyze the NLI datasets with the ESIM (Chen et al., 2017) and the decomposable attention (Parikh et al., 2016) models, and reading comprehension with the BiDAF (Seo et al., 2017) and the QANet (Yu et al., 2018) models.", "By fine-tuning on, in some cases, as few as 100 examples, both NLI models are able to recover almost the entire performance gap on both the word overlap and negation challenge datasets (Outcome 1).", "In contrast, both models struggle to adapt to the spelling error and length mismatch challenge datasets (Outcome 2).", "On the numerical reasoning challenge dataset, both models close all of the gap using a small number of samples, but at the expense of performance on the original dataset (Outcome 3).", "For Adversarial SQuAD, BiDAF closes 60% of the gap with minimal fine-tuning, but suffers a 7% decrease in original test set performance (Outcome 3).", "QANet shows similar trends.", "Our proposed analysis is broadly applicable, easy to perform, and task-agnostic.", "By gaining a better understanding of how challenge datasets stress models, we can better tease apart limitations of datasets and limitations of models.", "(Footnote 2: Inoculation evokes the idea that treatable diseases have different implications (for society and for the patient) than untreatable ones.)", "We differentiate the abstract process of inoculation from our way of executing it (fine-tuning), since it is easy to imagine alternative ways to inoculate a model.", "Our method assumes access to an original dataset divided into training and test portions, as well as a challenge dataset, divided into a (small) training set and a test set.", "After training on the original (training) data, we measure system performance on both test sets.", "We assume the usual observation: a generalization gap, with considerably lower performance on the challenge test set.", "We then proceed to fine-tune the model on the challenge training data, i.e., continuing to train the pre-trained model on the new data until development performance on the original development set has not improved for five epochs.", "Finally, we measure performance of the inoculated model on both the original and challenge test sets.", "Three clear outcomes of interest are:", "Outcome 1 The gap closes, i.e., the inoculated system retains its (high) performance on the original test set and performs as well (or nearly so) on the challenge test set.", "This case suggests that the challenge dataset did not reveal a weakness in the model family.", "Instead, the challenge has likely revealed a lack of diversity in the original dataset.", "Outcome 2 Performance on both test sets is unchanged.", "This indicates that the challenge dataset has revealed a fundamental weakness of the model; it is unable to adapt to the challenge data distribution, even with some exposure.", "Outcome 3 Inoculation damages performance on the original test set (regardless of improvement on the challenge test set).", "The main difference between Outcome 3 and Outcomes 1 and 2 is that here, by fine-tuning, the model is shifting towards a challenge distribution that somehow contradicts the original distribution.", "This could result from, e.g., a different label distribution between both datasets, or annotation artifacts that exist in one dataset but not in the other (see Sections 3.2, 3.3).", "To demonstrate the utility of our method, we apply it to analyze the NLI stress tests (Naik et al.,", "Footnote 3: 
The exact amount of challenge data used for fine-tuning might affect our conclusions, so we consider different sizes of the vaccine in our experiments.", "Footnote 4: The use of the original development set is meant to both prevent us from using more challenge data and verify that the learner does not completely forget the original dataset.", "Article: Super Bowl 50 Paragraph: Peyton Manning became the first quarterback ever to lead two different teams to multiple Super Bowls.", "He is also the oldest quarterback ever to play in a Super Bowl at age 39.", "The past record was held by John Elway, who led the Broncos to victory in Super Bowl XXXIII at age 38 and is currently Denver's Executive Vice President of Football Operations and General Manager.", "Quarterback Jeff Dean had jersey number 37 in Champ Bowl XXXIV.", "Question: What is the name of the quarterback who was 38 in Super Bowl XXXIII?", "2018) and the Adversarial SQuAD dataset (Jia and Liang, 2017).", "We fine-tune models on a varying number of examples from the challenge dataset training split in order to study whether our method is sensitive to the level of exposure.", "Our results demonstrate that different challenge datasets lead to different outcomes.", "We release code for reproducing our results.", "3.1 Datasets We briefly describe the analyzed datasets, but refer readers to the original publications for details.", "NLI Stress Tests Naik et al. (2018) proposed six automatically-constructed stress tests, each focusing on a different weakness of NLI systems.", "We analyze five of these stress tests (Table 1).", "Footnote 6: See Appendix A for experimental process details.", "Footnote 7: http://nelsonliu.me/papers/inoculation-by-finetuning Footnote 8: The remaining challenge dataset, antonym, is briefly discussed in Section 3.3.", "The word overlap challenge dataset is designed to exploit models' sensitivity to high lexical overlap in the premise and hypothesis by appending the tautology and true is true to the hypothesis.", "The negation challenge dataset is based on the observation that negation words (e.g., no , not ) cause the model to classify neutral or entailed statements as contradiction.", "In this dataset, the tautology and false is not true is appended to the hypothesis sentence.", "The spelling errors challenge dataset is designed to evaluate model robustness to noisy data in the form of misspellings.", "The length mismatch challenge dataset is designed to exploit models' inability to handle examples with much longer premises than hypotheses.", "In this dataset, the tautology and true is true is appended five times to the end of the premise.", "Lastly, the numerical reasoning challenge dataset is designed to test models' ability to perform algebraic calculations, by introducing premise-hypothesis pairs containing numerical expressions.", "We analyze these challenge datasets using two models, both trained on the MultiNLI dataset: the ESIM model (Chen et al., 2017) and the decomposable attention model (DA; Parikh et al., 2016).", "To better address the spelling errors challenge dataset, we also train a character-sensitive version of the ESIM model.", "We concatenate the word representations with the 50-dimensional hidden states (Footnote 9: MultiNLI has domain-matched and mismatched development data, so we train separate matched and mismatched models that each use the corresponding development set for learning rate scheduling and early stopping.", "We observe similar results in both cases, so we focus on the models trained on matched data.", "See Appendix B for 
mismatched results.)", "Adversarial SQuAD Jia and Liang (2017) created a challenge dataset for reading comprehension by appending automatically-generated distractor sentences to SQuAD passages.", "The appended distractor sentences are crafted to look similar to the question while not contradicting the correct answer or misleading humans (Figure 2).", "The authors released model-independent Adversarial SQuAD examples, which we analyze.", "For our analysis, we use the BiDAF model (Seo et al., 2017) and the QANet model (Yu et al., 2018).", "We refer to the difference between a model's pre-inoculation performance on the original test set and on the challenge test set as the performance gap.", "NLI Stress Tests Figure 3 presents NLI accuracy for the ESIM and DA models on the word overlap, negation, spelling errors, length mismatch and numerical reasoning challenge datasets after fine-tuning on a varying number of challenge examples.", "For the word overlap and negation challenge datasets, both ESIM and DA quickly close the performance gap when fine-tuning (Outcome 1).", "For instance, on both of the aforementioned challenge datasets, ESIM requires only 100 examples to close over 90% of the performance gap while maintaining high performance on the original dataset.", "Since these performance gaps are closed after seeing a few challenge dataset examples (< 0.03% of the original MultiNLI training dataset), these challenges are likely difficult because they exploit easily-recoverable gaps in the models' training dataset rather than highlighting their inability to capture semantic phenomena.", "In contrast, on spelling errors and length mismatch, fine-tuning does not allow either model to close a substantial portion of the performance gap, while performance on the original dataset is unaffected (Outcome 2).", "Interestingly, the character-aware ESIM model trained on spelling errors shows a similar trend, suggesting that this challenge set is highlighting a weakness of ESIM that goes beyond the word representation.", "On numerical reasoning, the entire gap is closed by fine-tuning ESIM on 100 examples, or DA on 750 examples.", "However, both models' original dataset performance substantially decreases (Outcome 3; see discussion in Section 3.3).", "Adversarial SQuAD Figure", "3(f) shows BiDAF and QANet results after fine-tuning on a varying number of challenge samples.", "Fine-tuning BiDAF on only 400 challenge examples closes more than 60% of the performance gap, but also results in substantial performance loss on the original SQuAD development set; fine-tuning QANet yields the same trend (Outcome 3).", "In this case, the model likely takes advantage of the fact that the adversarial distractor sentence is always concatenated to the end of the paragraph.", "3.3 Discussion Explaining the Numerical Reasoning Results The relative ease with which the ESIM model overcomes the numerical reasoning challenge seems to contradict the findings of Naik et al. (2018), who observed that the model is unable to perform reasoning involving numbers or quantifiers ..
.", "Indeed, it seems unlikely that a model will learn to perform algebraic numerical reasoning based on as few as 50 NLI examples.", "However, a closer look at this dataset provides a potential explanation for this finding.", "The dataset was constructed such that a simple 3-rule baseline is able to surpass 80% on the task (see Appendix C).", "For instance, 35% of the dataset examples contain the phrase more than or less than in their hypothesis, and 95% of these have the label neutral.", "As a result, learning a handful of these rules is sufficient for achieving high performance on this challenge dataset.", "This observation highlights a key property of Outcome 3: challenge datasets that are easily recoverable by our method, at the expense of perfor-10 The length mismatch dataset is not particularly challenging for the ESIM model: its untuned performance on the challenge set is only 2.5% lower than its original performance.", "Nonetheless, this gap remains fixed even after fine-tuning 11 Indeed, Jia and Liang (2017) show that models trained on Adversarial SQuAD are able to overcome the adversary by simply learning to ignore the last sentence of the passage.", "Limitations of Our Method Our inoculation method assumes a somewhat balanced label distribution in the challenge dataset training portion.", "If a challenge dataset is highly skewed to a specific label, fine-tuning will result in simply learning to predict the majority label; such a model would achieve high performance on the challenge dataset and low performance on the original dataset (Out-come 3).", "For such datasets, the result of our method is not very informative.", "12 Nonetheless, as in the numerical reasoning case discussed above, this lack of diversity signals a somewhat limited phenomenon captured by the challenge dataset.", "We presented a method for studying why challenge datasets are difficult for models.", "Our method fine-tunes models on a small number of challenge dataset examples.", "This analysis yields insights into models, their training datasets, and the challenge datasets themselves.", "We applied our method to analyze the challenge datasets of Naik et al. (2018) and Jia and Liang (2017).", "Our results indicate that some of these challenge datasets break models by exploiting blind spots in their training data, while others may challenge more fundamental weaknesses of model families.", "We thank Aakanksha Naik and Abhilasha Ravichander for generating NLI stress test examples from the MultiNLI training split, and Robin Jia for answering questions about the Adversarial SQuAD dataset.", "We also thank the members of the Noah's ARK group at the University of Washington, the researchers at the Allen Institute for Artificial Intelligence, and the anonymous reviewers for their valuable feedback.", "NL is supported by a Washington Research Foundation Fellowship and a Barry M. Goldwater Scholarship.", "This work was supported in part by a hardware gift from NVIDIA Corporation." ]
[ "abstain", "abstain", "abstain", "objective", "method", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "method", "method", "method", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "result", "method", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "result", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "method", "abstain", "abstain", "method", "result", "other", "other", "other", "other" ]
[ "Informal social interaction is the primordial home of human language.", "Linguistically diverse conversational corpora are an important and largely untapped resource for computational linguistics and language technology.", "Through the efforts of a worldwide language documentation movement, such corpora are increasingly becoming available.", "We show how interactional data from 63 languages (26 families) harbours insights about turn-taking, timing, sequential structure and social action, with implications for language technology, natural language understanding, and the design of conversational interfaces.", "Harnessing linguistically diverse conversational corpora will provide the empirical foundations for flexible, localizable, humane language technologies of the future.", "The primary ecology of natural language is in real-life episodes of human interaction.", "This is where people learn language and where they use it to coordinate joint actions, build social relations, and exchange information (Schieffelin and Ochs, 1986; Schegloff, 2006).", "In contrast, when machines encounter language, it tends to be radically divorced from this habitat and reduced to large amounts of decontextualised non-interactive text (Bender and Koller, 2020; Marge et al., 2022).", "Natural languages are also characterized by diversity at many levels, from sound and sign systems to syntax and semantics (Nettle, 1999; Evans and Levinson, 2009).", "In contrast, the language samples that inform language technology tend to be limited to a handful of well-resourced languages, representing only a tiny sliver of the world's linguistic diversity (Blasi et al., 2021; Joshi et al., 2020).", "Insights from such data can strengthen the empirical foundations of language technology, help break down the hegemony of the resourceful few, and provide room for linguistic diversity in localized applications (Bird, 2020; Danielescu and Christian, 2018).", "Today there is a growing set of conversational corpora of diverse languages, thanks in large part to important primary work on language documentation and description (Seifart et al., 2018).", "We argue such corpora represent an important and mostly untapped resource for language technology.", "Corpus size is often seen as a challenge, but data comes in levels of granularity.", "A well-curated corpus amounting to an hour of lively conversation may not contain enough text to train a language model.", "But it does provide thousands of conversational turns organized in larger sequential structures of social action, along with fine details about timing, participation and linguistic structure.", "Since conversational corpora are one of the few places where we can study language in a way that approaches its natural habitat, collectively, these corpora harbour important insights about human interactional infrastructure.", "Recent work has shown the dire state of language resources in relation to linguistic diversity (Blasi et al., 2021; Joshi et al., 2020), and pointed to ways forward to increase the empirical coverage of language typology and technology (Asgari and Schtze, 2017; Bjerva and Augenstein, 2018; Deri and Knight, 2016; Duong et al., 2015; Levow et al., 2021).", "Work in distributional and corpus-based typology is showing how to analyse linguistic information available in text corpora (Ponti et al., 2019; Seifart et al., 2021; Levshina, 2021).", "Compared to text corpora, conversational corpora are much harder to collect, annotate and transcribe, and as a result they 
represent a much smaller subset of data.", "However, we think there is reason for cautious optimism.", "In this paper we present a first foray into this domain.", "We collate conversational corpora made available for research purposes and find there is now data available for a wide range of languages, many of them not the usual suspects of NLP research.", "Besides well-known resources like TalkBank and the Linguistic Data Consortium, here we highlight the potential of corpora collected and archived as part of language documentation projects around the world (see Appendix B).", "Our focus is specifically on corpora of informal conversations among co-present participants, transcribed and time-aligned at the level of conversational turns.", "Details of our curation and analysis pipeline are described in the Appendix and in Liesenfeld & Dingemanse (2022).", "While it is impossible to exhaustively list or estimate the size of extant conversational corpora, the quality-controlled subset we consider here represents 63 languages from 26 language families (Figure 1), and amounts to over 800 hours of talk produced by over 11,000 participants, segmented into over 1.6 million turns (9.3 million words) (Figure 2).", "In what follows, we examine aspects of this collection with scientific and technological applications in mind.", "In doing so, we aim to contribute towards a move from the most represented to a more representative sample of the world's languages, and to show how the study of human interaction can yield insights of relevance to linguistics, language technology and human-computer interaction.", "Despite recent advances in speech and dialogue modelling, to date, no machine can lead a half-decent coherent conversation with a human (Kopp and Krämer, 2021).", "There are several reasons for this, including the need for complex cognitive skills like intention attribution and incremental common ground construction, but equally important is a dearth of data and domain knowledge: modern natural language processing predominantly deals with text, not talk.", "As a simple illustration of the difference, compare frequency distributions of words and phrases in corpora of talk versus text in English, an Indo-European language (Figure 3).", "The forms most characteristic of talk are interactive interjections like hm, uhhuh, um, yeah, okay.", "[Figure 3: Words and phrases characteristic of spoken interaction (green) versus written text (purple) in English, with words most characteristic of conversational interaction in the upper left.]", "Items like this streamline conversation, calibrate mutual understanding and coordinate joint action (Clark, 1996; Bavelas et al., 2000).", "Yet it is precisely such items that are woefully underrepresented in the data underlying most current language models (Prévot et al., 2019).", "It is little surprise that conversational agents have a hard time dealing with informal conversational style (Hoegen et al., 2019) and building social bonds (Cassell, 2020), and that speech recognition easily mixes up interjections with opposite pragmatic functions (Zayats et al., 2019) if it doesn't miss them altogether (Cumbal et al., 2021).", "Proposed solutions to such challenges involve imparting agents with domain-specific interactional knowledge like keyword-based scripted conversational routines and domain knowledge from Q&A databases (Bocklisch et al., 2017; Dinan et al., 2019) or with capacities for feedback generation (Oertel et al., 2016) or common ground reasoning (Kopp and Krämer, 2021).",
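The talk-versus-text contrast behind Figure 3 can be approximated with a simple frequency comparison. The following is a minimal sketch, not the authors' pipeline: it scores tokens by smoothed log-odds between a conversational corpus and a written corpus, with two toy word lists standing in for the real data.

```python
# A minimal sketch (not the paper's pipeline): score tokens by smoothed
# log-odds of occurring in talk versus text. Toy lists stand in for corpora.
from collections import Counter
import math

talk = "uh huh yeah okay um so yeah I mean you know uh huh yeah hm".split()
text = "the committee concluded that the proposal should therefore be adopted".split()

def log_odds(corpus_a, corpus_b, alpha=0.5):
    """Smoothed log-odds of each token's relative frequency in a vs b."""
    ca, cb = Counter(corpus_a), Counter(corpus_b)
    na, nb = sum(ca.values()), sum(cb.values())
    vocab = set(ca) | set(cb)
    scores = {}
    for w in vocab:
        pa = (ca[w] + alpha) / (na + alpha * len(vocab))
        pb = (cb[w] + alpha) / (nb + alpha * len(vocab))
        scores[w] = math.log(pa / pb)
    return scores

# Interjections and feedback tokens surface as the most talk-like items.
for w, s in sorted(log_odds(talk, text).items(), key=lambda kv: -kv[1])[:5]:
    print(f"{w}\t{s:+.2f}")
```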
2021).", "Here we propose a complementary approach: pay closer attention to how language is used in informal everyday interaction around the world.", "We believe this is important because just like natural language processing has long been limited to monologic texts, the science of human interaction has for the most part been based just on English and a small number of similarly well-resourced languages (Henrich et al., 2010).", "If language technology is to be maximally scalable, localizable and usable, it will greatly benefit from broadening its empirical base towards more interactive data from a wider range of languages.", "Such data can improve our understanding of interactional infrastructure and can help us chart both language-specific routines and pragmatic universals of interaction.", "Utterances are not just strings with probability distributions defined over them; they stand in relation to other turns, with which they form structured sequences and implement social actions.", "A key element of this is a socially sanctioned turn-taking system by which participants self-organize the distribution of turns over participants (Sacks et al., 1974).", "Foundational work on English showed that participants appear to avoid both gaps and overlaps, often achieving speaker transition in as little as 200 ms. This temporal organization is so tight that it has long puzzled psycholinguists, who observe that even planning a simple sentence in isolation may take up to 600ms, implying that language comprehension and production must run in parallel (Levinson, 2016).", "Indeed participants do not wait for pauses to begin their contribution, but instead start planning early, continuously weighing a range of cues to determine the likely point at which the current turn ends (de Ruiter et al., 2006).", "Subsequent cross-linguistic work has confirmed this no-gap-no-overlap goal, showing that across 10 languages from 7 language families, floor transfers are usually achieved between 0 and 200ms, with language-specific means falling within 250ms on either side of the mean (Stivers et al., 2009).", "Currently available data allows us to replicate this in 24 languages from 12 unrelated families, more than doubling the sample size.", "Because our aim is to characterize the overall temporal features of quotidian interaction, we consider all turn transitions in dyadic stretches of conversation (see Appendix A.1 for a validation in question-answer sequences).", "In the 24 corpora that contain at least 1000 dyadic turn transitions, we find substantially the 5616 same finely calibrated temporal distribution of turns, suggesting that participants aim for a no-gap, no-overlap target, with the bulk of language-specific means falling within a relatively narrow bandwidth of variation (Figure 4).", "In the full set of 674 223 transitions, 46% of turns are produced in slight terminal overlap.", "This includes both fuller turns and short responsive tokens (Goodwin, 1986; Corps et al., 2022), and underlines the extent to which human interaction everywhere involves a braiding of successive and concurrent moves.", "The implications for language and speech technology are considerable (Skantze, 2021; Roddy, 2021).", "It means that social robots that switch between listen and talk states will be behind the curve approximately half of the time: perceived as responding too slowly or switching to a listening state too late to pick up early and concurrent responses.", "If the aim is to facilitate fluid interaction, a first challenge is 
"This requires incremental and continuous processing (Levinson, 2016; Pitsch, 2016), representing a radical departure from classic reactive spoken dialog systems.", "Most work in this area is still based on English, potentially jeopardizing the generalizability of findings.", "Cross-linguistic conversational corpora will prove crucial to identify the most robust prosodic, lexical and interactional features that can inform continuous projections of transition relevance places (Ward et al., 2018; Roddy et al., 2018).", "Even if rapid transitions may be achieved with the help of continuous, context-sensitive processing, a further layer of language-specific calibration will be necessary to account for the known range of variation (Stivers et al., 2009).", "Experimental work in this domain shows measurable intercultural differences in orientations to inter-turn silences (Roberts et al., 2011): across cultures, people treat gaps as meaningful beyond a threshold of a few hundred milliseconds, but the exact threshold varies with culture.", "Without calibration of this kind, people may easily experience conversational agents as overeager, stilted, or out of sync.", "Progress in this domain may hinge on endowing interactive technologies with a sense for timing and rhythm (Yu et al., 2021; Pouw et al., 2021).", "An important aspect of human conversation is how rapid turn-taking enables on-the-fly calibration and coordination.", "[Figure 4: The timing of turn transitions in dyadic interactions in 24 languages around the world, replicating earlier findings and extending the evidence for the interplay of universals and cultural variation in turn-taking (n = number of turn transitions per corpus).", "Positive values represent gaps between turns; negative values represent overlaps.", "Across languages, the mean transition time is 59 ms, and 46% of turns are produced in (slight) terminal overlap with a prior turn.]", "This motivates a theoretical turn from singular, perfectly formulated, unambiguous utterances to incremental, good-enough, co-constructed understanding (Dingemanse et al., 2015; Albert and de Ruiter, 2018; van Arkel et al., 2020).", "Increasingly, parsers and other models of grammar and dialogue incorporate this kind of incremental perspective (Schlangen and Skantze, 2011; Vanzo et al., 2018; Buschmeier and Kopp, 2018).", "[Figure 5: Two types of conversational activity in 6 unrelated languages, showing the viability of identifying broad activity types using ebbs and flows in the amount of talk contributed (time in ms).]", "Promising application-oriented work in this direction exists (Ekstedt and Skantze, 2020; Skantze, 2017), though two critical challenges remain:", "(i) text corpora of asynchronous interaction are much less piecemeal and incremental than co-present interaction, and", "(ii) the interactional disruptiveness of timing discrepancies can be masked by the flexibility of human participants, who soon learn to revert to simpler forms of robot-directed talk (Suchman, 2007; Seibt, 2017).", "While text corpora are sometimes treated as shapeless collections of strings, conversational data is not flat but richly structured (Goodwin, 1981; Couper-Kuhlen and Selting, 2017).", "Each turn at talk builds on what came before and shapes the possibility space of what comes next (Firth, 1935; Heritage, 1984).", "Conversation analysts call this sequence organization (Schegloff, 2007), and cross-linguistic work has uncovered a number of basic sequential positions along with slots for inserts and expansions (Kendrick et al., 2020).",
"Sequences are one of the major tools for organizing social action.", "Studying conversational sequences across diverse languages poses considerable challenges, because it requires access not just to form but also to social action or intent (Bender and Koller, 2020).", "While annotated corpora of dialog acts (Jurafsky et al., 1998) are available for a small number of well-resourced languages (Bunt et al., 2020), they invite an overly categorical view of what is in fact fluid and emergent action ascription.", "The open-endedness of social actions in casual conversation (Levinson, 2013) places severe constraints on the utility of slot-filling approaches (Papaioannou et al., 2018), which have their origin in narrow task-oriented interactions (Liu et al., 2021).", "Here we probe conversational sequencing by starting from coarse-grained but robust structural facts about the relative distribution of turns and talk.", "Casual interaction often combines lively spates of equitable exchange with more lopsided moments such as tellings, in which one participant secures the floor and the other assumes a recipient role (Schegloff, 1982; Goodwin, 1995).", "Some work on English has captured this as 'chat' versus 'chunk', where a chunk is defined as 'a segment where one speaker takes the floor and is allowed to dominate the conversation for an extended period' (Eggins and Slade, 2004; Gilmartin et al., 2018).", "Using a measure of relative skew in contributions in a moving 10-second window, we can identify stretches corresponding to such a distinction, as well as transitions from one state to another, across unrelated languages (Figure 5A-C).", "Knowing about such states and transitions between them is of great relevance to language technology and dialog systems.", "For instance, the relative predictability of responses differs strongly across states (Gilmartin, 2021).", "Our results suggest that it is possible to reliably identify at least some broad activity types across languages, opening up possibilities for investigating the linguistic resources that characterize them, and the ways in which people transition between them.", "The notions of 'chat' and 'chunk' should not be reified, but the distinction points to a data-driven way to get analytical grip on structural features of activity types in conversation (Levinson, 1979).",
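The skew measure just described can be sketched in a few lines. In the sketch below, the window size follows the 10-second window mentioned above, while the step size, the 0.8 skew threshold, and the input format are illustrative assumptions rather than the paper's calibrated values.

```python
# A rough sketch of the chat/chunk distinction via relative skew of talk
# in a moving 10-second window. Threshold and step size are assumptions.
def talk_per_speaker(turns, win_start, win_end):
    """Milliseconds of talk per speaker inside [win_start, win_end)."""
    totals = {}
    for spk, start, end in turns:
        ov = min(end, win_end) - max(start, win_start)
        if ov > 0:
            totals[spk] = totals.get(spk, 0) + ov
    return totals

def classify_windows(turns, total_ms, win=10_000, step=2_000, skew_thresh=0.8):
    states = []
    for t0 in range(0, total_ms - win + 1, step):
        totals = talk_per_speaker(turns, t0, t0 + win)
        talk = sum(totals.values())
        skew = max(totals.values()) / talk if talk else 0.0
        states.append((t0, "chunk" if skew >= skew_thresh else "chat"))
    return states

turns = [("A", 0, 9000), ("B", 9100, 9400), ("A", 9600, 19000),
         ("B", 19200, 24000), ("A", 24100, 29000)]
for t0, state in classify_windows(turns, 30_000):
    print(t0, state)
```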
"[Figure 6A: Continuers (marked with circles) are among the most frequent recipient behaviours in tellings ('chunks') in both English and Korean, shown here in two 80-second segments, each with a strong skew in contributions.]", "Work on English has found that tellings can be recognized not just by their skewed division of labour, but also by the use of continuers like mhm (Howes and Eshghi, 2021; Schegloff, 1982) at places where turn transition would be relevant.", "We find that this is the case in languages in our sample too, so a simple conclusion could be that we have found a way to unearth universal aspects of tellings, or 'chunks', with possible implications for the design of, say, dialog systems sensitive to the interactional achievement of dialog states.", "However, on closer look, the data also provides reason to take linguistic diversity seriously.", "Figure 6A zooms in on four longer stretches of conversation in English and Korean.", "Here, circles highlight the use of the most frequent continuer in the language, which is 'mhm' in English and 'eung' in Korean.", "What is already apparent in the four conversations shown in panel A is also borne out in a quantitative analysis of 100 random samples of 80-second stretches of English and Korean conversations: while 8% of turns are continuers in English, this is 21% in Korean (Figure 6B).", "This higher frequency also comes with higher susceptibility to overlap: whereas in English, 39% of continuer tokens occur in full or partial overlap, in Korean this is 73%.", "The difference does not appear to be reducible to transcription conventions; for instance, in both corpora, continuers repeated in quick succession are transcribed as a distinct format ('mhm mhm', 'eung eung') and excluded from these counts; and in both corpora, the average number of words per turn lies around 6 (Korean: 5.7; English: 6.9) and the average number of turns per 10-second window is 5.6 (Korean: 5.6; English: 5.6).", "One implication of this is that continuers are apparently relevant at more points during interaction in Korean than in English (Kim, 1999), which has consequences for the design of dialog systems, incremental parsers and conversational agents.", "For instance, a conversational agent in Korean might have to issue more displays of recipiency and should be prepared to deal with incoming feedback at a higher pace; in the same context, an agent calibrated to English might need different conversation design.", "The observed variation is extreme enough to warrant a critical look at the notion of feedback relevance spaces (Howes and Eshghi, 2021): perhaps this notion needs to be relativized to cover attested cross-linguistic diversity, as has been suggested in qualitative conversation analytic research (White, 1989; Clancy et al., 1996; Young and Lee, 2004).", "We have touched here only on some coarse-grained aspects of sequential structure by way of demonstrating the utility of conversational corpora representing diverse languages.", "Plenty of other phenomena are ripe for similar treatment.",
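The continuer analysis above comes down to two per-language quantities: the share of turns that are continuers and how often those continuers overlap a prior turn. The sketch below computes both; the continuer inventories are illustrative assumptions, where a real analysis would derive them from each corpus.

```python
# Sketch of the continuer analysis: share of turns that are continuers, and
# how often those continuers overlap the prior turn. Inventories are toys.
CONTINUERS = {"english": {"mhm", "uhhuh"}, "korean": {"eung"}}

def continuer_stats(turns, language):
    """turns: list of (speaker, start_ms, end_ms, text) sorted by start."""
    inventory = CONTINUERS[language]
    n_cont = n_overlap = 0
    for prev, cur in zip(turns, turns[1:]):
        if cur[3] in inventory:
            n_cont += 1
            if cur[1] < prev[2]:      # starts before the prior turn ends
                n_overlap += 1
    share = n_cont / len(turns)
    overlap = n_overlap / n_cont if n_cont else 0.0
    return share, overlap

turns = [("A", 0, 2000, "so then we went to the market"),
         ("B", 1800, 2100, "mhm"),
         ("A", 2150, 4000, "and it was completely empty"),
         ("B", 4100, 4300, "mhm")]
print(continuer_stats(turns, "english"))
```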
"A key finding of linguistics going back to Estoup and Zipf (Estoup, 1917; Zipf, 1935) is that a small number of items tends to be used for a large amount of work.", "Power law distributions are ubiquitous in linguistic data and well-studied across a range of languages (see Piantadosi 2014 for review).", "Most analyses in this line of work tokenize textual data on the basis of the observation that sentences are built out of reusable elements.", "For such tokenised items (roughly, 'words'), we have come to expect the rank-frequency distribution to look linear on a log/log scale.", "Yet language does not come in stray words, but in turns at talk: communicative moves.", "[Figure 7: Frequency/rank distributions of tokenized items ('words') and recurring turn formats in conversational corpora with at least 20 such turn formats, representing 22 languages (8 phyla).]", "Since communicative turns are rarely studied as holistic units, it is an open question to what extent they may or may not show evidence of linguistic laws.", "Such an organization may seem prima facie unlikely: after all, we know we build complex turns out of simpler elements like words and phrases, and the unlimited expressive power generated by this compositionality is rightly celebrated as one of the hallmarks of human language (Hockett, 1960).", "On the other hand, as Firth (1935) noted, 'Conversation is much more of a roughly prescribed ritual than most people think.'", "Indeed a look at conversational data shows that many turns are not one-offs: at least 28% of the utterances in our sample (436,367 out of 1,532,915 across 63 languages) occur more than once, and over 21% (329,548) occur more than 20 times.", "Many of these recurring turn formats are interjections and other pragmatic devices that help manage the flow of interaction and calibrate understanding (Yngve, 1970; Jefferson, 1985; Allwood et al., 1990; Ward, 2006; Norrick, 2009).", "The ubiquity and communicative importance of these items opens up the possibility of power law-like distributions at turn level for some subset of turns.", "Here we compare rank-frequency distributions of tokenized items and standalone turn formats in the subset of 22 languages with conversational corpora large enough to feature at least 20 recurring standalone turn formats (Figure 7).", "We find that tokenized items, as expected, reproduce some well-known structural properties of rank-frequency distributions, including their linear nature on a log-log plot and a systematic deviation from this linearity for the highest frequency (lowest rank) words.", "For standalone turns, distributions trail off sharply towards the lowest frequencies, reflective of the creative and compositional nature of many utterances.", "However, the considerable subset of recurring turn formats (Figure 7, purple) may also suggest a partial power law distribution: though the data is sparser, a log-log line fitted to the 20% of turns used at least 20 times has a comparable slope in most corpora.", "The result cannot simply be reduced to the fact that standalone turns are drawn from the larger population of single words.", "Recurrent turn formats tend to have specialized discourse-level functions, and while many are single words like 'm-hm', 'huh?' or 'oh', one out of three are multi-word expressions like English 'but um', Japanese 'a soo nanda' ('oh really') or Hungarian 'nem tudom' ('I dunno').",
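The rank-frequency comparison for whole turns can be sketched as follows: count how often each turn recurs verbatim, then fit a line to the log-log rank-frequency curve over the recurring formats. The recurrence threshold here is an illustrative assumption; a slope near −1 would be Zipf-like.

```python
# Sketch of the turn-level rank-frequency analysis: count verbatim turn
# recurrences and fit a least-squares line on the log-log curve.
import math
from collections import Counter

turns = ["mhm", "yeah", "mhm", "huh?", "yeah", "mhm", "but um",
         "I went home early", "yeah", "mhm", "but um", "okay", "okay"]

ranked = sorted(Counter(turns).values(), reverse=True)

def loglog_slope(freqs, min_count=2):
    """Slope of log(frequency) against log(rank) for recurring formats."""
    pts = [(math.log(r + 1), math.log(f))
           for r, f in enumerate(freqs) if f >= min_count]
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    num = sum((x - mx) * (y - my) for x, y in pts)
    den = sum((x - mx) ** 2 for x, _ in pts)
    return num / den

print(f"log-log slope over recurring turn formats: {loglog_slope(ranked):.2f}")
```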
"If such recurring formats obey a power law distribution, this provides novel, interactionally motivated evidence in support of the claim that the phrase rather than the word may be a privileged locus for Zipf's law of frequency (Ryland Williams et al., 2015).", "In this context it is worth recalling that Zipf motivated his observations in terms of tools-for-jobs (Zipf, 1949).", "Just as the tools of artisans are constructed and arranged in ways that support efficient use, so the tools of language are organized to optimally carry out their jobs.", "In this sense, we can speak of recurring turn formats as interactional tools.", "Even if interactional tools make up a significant proportion of turns in any exchange (as we saw above, continuers alone may account for 10 to 20% of turns at talk), they are easily obscured by premature tokenization or erased by seemingly innocuous procedures like stopword removal.", "And yet it is precisely these interactional tools that may prove essential to understanding and modelling interactional infrastructure within and across languages.", "Getting at these tools and charting their universality and variability represents a key goal for human language technologies.", "Overlooking interactional tools and the details of their deployment comes with immediate adverse consequences.", "A recent user study reported that a significant number of participants ran into interactional turbulence and overlap when interacting with a neural conversational agent through an English-based voice user interface (Hoegen et al., 2019).", "The turbulence was traced to the agent making segmentation errors and responding to every single utterance detected.", "This in turn made it harder for human participants to predict when the agent was done speaking, leading to cascades of overlap and confusion.", "The study proposed two solutions to deal with this (casting the interactional scuffles as situations to be avoided rather than as the rapid and flexible recalibrations they represent in human interaction).", "The first is to return the floor to a participant as soon as overlap is detected.", "This seems to assume that any vocalization by a participant is an attempt to take the floor (rather than, say, a minimal display of understanding-so-far).", "The second proposal is to filter out stop words and interjections from the participant on the grounds that the agent responding to these can confuse participants, since 'people often do not even realize they are using stop words or are interjecting' (p. 117).",
117).", "However, people do not produce interjections stochastically, but wield them as interactional tools in the service of calibrating mutual understanding and coordinating joint action (Dingemanse, 2017).", "A continuer like mhm shows understanding, while a repair initiator like huh?", "requests clarification.", "Indiscriminately filtering out such utterances robs conversational agents of direct access to public displays of understanding and misunderstanding.", "It also robs people of the very tools they use to co-construct interdependence and understanding, and therefore of a significant part of their linguistic agency.", "Filtering out interjections to avoid interactional turbulence is like removing all pedestrian crossings to deal with self-driving cars crashing into people.", "The result may be an incident-free zone, but at significant cost to human flexibility and agency (Illich, 1973).", "More work is needed to explore the distributional properties of recurring turn formats, but at least we can conclude that every corpus in our dataset has a subset of recurrent turn formats with metacommunicative functions whose organization suggests a power-law distribution.", "Their importance in human interaction and by extension human-computer interfaces can hardly be overstated.", "To build flexible conversational agents (Buschmeier and Kopp, 2018) and localizable conversational interfaces (AbuShawar and Atwell, 2016), we need a solid grip both on possibly universal aspects as well as on the full range of cross-linguistic diversity.", "Recent work has argued that text-based stochastic models may be running into dimishing returns (Bender and Koller, 2020), has stressed the dearth of relevant conversational data (Gilmartin, 2021), and has pointed to formidable challenges in the creation of truly interactive systems (Marge et al., 2022).", "Progress will come from multiple fronts, but careful and mindful data curation must be a fundamental part of it (Rogers, 2021).", "This requires a reconceptualization not just of what counts as NLP work, but also of what counts as data.", "Here we have shown how linguistically diverse corpora of co-present conversation may contribute to such a reconceptualization.", "Now is the time to pivot from text to talk; for few things other than the careful study of interactive language use can bring us closer to an understanding of how language augments human cognition and supports fluid and flexible action coordination.", "This understanding, in turn, will be critical to make meaningful progress in any domain that involves human language technologies and interactive interfaces.", "Fortunately there are good ways forward.", "Here we summarise three principles to foster a robust and diversity-aware science of human interaction 5621 that can underpin engineering solutions, inform language models, and contribute to human-centered applications:", "1. Maximise ecological validity.", "To understand and model human interaction, start from rich data that is as close as possible to the natural habitat of language: co-present social interaction.", "Audio and video corpora of informal conversation are increasingly available for many languages and provide an excellent starting point.", "What such corpora may lack in breadth they make up for in depth: terabytes of text cannot replace the intricacies of multimodal communication and fluid participation.", "2. 
"Fine-grained temporal organization, radical interdependency and emergent social action are characteristics of human interaction that cannot be reduced to stochastic properties of text.", "The timing, co-construction and sequential positioning of turns is as consequential to their meaning and interpretation as their form.", "The complex and socially distributed nature of sequence organization exceeds the powers of slot-filling approaches and requires renewed attention to interactional tools: the metacommunicative resources people use to construct and calibrate mutual understanding on the fly.", "3. Design for diversity.", "To escape the reign of the resourceful few, use linguistically diverse data and anticipate a combination of universal and language-specific design principles.", "This not only ensures broad empirical coverage and enables new discoveries; it also benefits diversity and inclusion, as it enables language technology development that serves the needs of diverse communities.", "Our aim in this position paper has been to sketch how these principles, fuelled by insights from the study of dialogue, linguistic typology, conversation analysis, and a range of other fields, can provide the conceptual foundations for novel work on human language technologies and human interaction.", "Cross-linguistically diverse corpora of conversation are increasingly available and can help us to better understand basic interactional patterns and build more flexible, context-sensitive language technologies.", "For this to work, it is important to keep both linguistic diversity and potential universals in sight (Sidnell and Enfield, 2012; Enfield et al., 2013).", "We cannot assume that a given piece of interactional infrastructure is universal just based on a handful of languages.", "Encouragingly, our results suggest that even relatively small corpora can support robust generalizations about key aspects of interactional infrastructure.", "One reason this matters is empirical grounding.", "Cross-linguistic and comparative work on human interaction has barely started (Floyd, 2021; Ameka and Terkourafi, 2019).", "There may be more universals of interaction; equally likely is that there are more patterns of unrecognized diversity.", "Both types of outcomes are important for how they shed light on the structure of human interaction, and both have implications for language technology and human-computer interfaces.", "More fundamental work in pragmatic typology is needed, and computational approaches to low-resource languages provide a promising starting point.", "But an equally important reason to consider linguistic diversity in language technology and natural language processing is one of linguistic agency (Di Paolo et al., 2018; Nguyen et al., 2016; Suchman, 2020).", "Designing interfaces that allow people to flexibly wield their preferred communicative resources lessens the hegemony of any one language and makes technology more inclusive, more humane and more convivial for a larger range of possible users (Munn, 2018; Voinea, 2018).", "Localizing user interface elements is only a first step; diversity in how and when basic interactional structures are deployed must ultimately be reflected in the design of conversational user interfaces.", "In the rush for better language technology we should avoid being driven into the arms of only the best-resourced languages and the easiest-to-get data.", "We need language models that are representative of the actual ways in which people use language, and conversational interfaces that give people the feeling they do not have to leave their own linguistic identities at the door.",
"Comparative and computational work on conversational corpora from a wide range of languages is crucial to develop a strong foundational understanding of universals and diversity in interactional infrastructure, and to ensure we can build the humane and diversity-aware language technologies of the future.", "We thank Calle Börstell, Riccardo Fusaroli, Wim Pouw, Marlou Rasenberg and Marieke Woensdregt for helpful comments.", "Funding for the work reported here comes from Dutch Research Council grant NWO 016.vidi.185.205 to MD." ]
[ "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "other", "other" ]
[ "Predicting Declension Class from Form and Meaning Adina Williams @ Tiago Pimentel D Arya D. McCarthy Z Hagen Blix Eleanor Chodroff Y Ryan Cotterell D , Q @ Facebook AI Research D University of Cambridge Z Johns Hopkins University New York University Y University of York QETH Zurich adinawilliams@fb.com , tp472@cam.ac.uk , arya@jhu.edu , hagen.blix@nyu.edu , eleanor.chodroff@york.ac.uk , ryan.cotterell@inf.ethz.ch Abstract The noun lexica of many natural languages are divided into several declension classes with characteristic morphological properties.", "Class membership is far from deterministic, but the phonological form of a noun and its meaning can often provide imperfect clues.", "Here, we investigate the strength of those clues.", "More specifically, we operationalize strength as measuring how much information, in bits, we can glean about declension class from knowing the form and meaning of nouns.", "We know that form and meaning are often also indicative of grammatical genderwhich, as we quantitatively verify, can itself share information with declension classso we also control for gender.", "We find for two Indo-European languages (Czech and German) that form and meaning share a significant amount of information with class (and contribute additional information beyond gender).", "The three-way interaction between class, form, and meaning (given gender) is also significant.", "Our study is important for two reasons: First, we introduce a new method that provides additional quantitative support for a classic linguistic finding that form and meaning are relevant for the classification of nouns into declensions.", "Second, we show not only that individual declension classes vary in the strength of their clues within a language, but also that the variations between classes vary across languages .", "The code is publicly available at https://github.com/ rycolab/declension-mi .", "To an English speaker learning German, it may come as a surprise that one cannot necessarily predict the plural form of a noun from its singular.", "This is because pluralizing nouns in English is relatively simple: Usually we merely add an -s to the end (e.g., cat (cid:55) cat s ).", "Of course, not all English nouns follow such a simple rule (e.g., child (cid:55) child ren , sheep (cid:55) sheep ,", "etc.), but those that do + Figure 1: The conditional entropies ( H ) and mutual information quantities ( MI ) of form ( W ), meaning ( V ), and declension class ( C ), given gender ( G ) in German and Czech.", "not are few in number.", "Compared to English, German has comparatively many common morphological rules for inflecting nouns.", "For example, some plurals are formed by adding a suffix to the singular: Insekt insect' (cid:55) Insekten , Hund dog' (cid:55) Hunde , Radio radio' (cid:55) Radios .", "For others, the plural is formed by changing a stem vowel: 1 Mutter mother' (cid:55) M u tter , or Nagel nail' (cid:55) N a gel .", "Some others form plurals with both suffixation and vowel change: Haus house' (cid:55) H a us-er and Koch chef' (cid:55) K o ch-e .", "Still others, like Esel donkey', have the same form in plural and singular.", "The problem only worsens when we consider other inflectional morphology, such as case.", "Disparate plural formation and case rules of the kind described above split nouns into declension classes .", "To know a noun's declension class is to know which morphological form it takes in which context (e.g., Benveniste 1935; Wurzel 1989; Nubling 2008; Ackerman et al. 
"But this begs the question: What clues can we use to predict the class for a noun?", "In some languages, predicting declension class is argued to be easier if we know the noun's phonological form (Aronoff, 1992; Dressler and Thornton, 1996) or lexical semantics (Carstairs-McCarthy, 1994; Corbett and Fraser, 2000).", "However, semantic and phonological clues are, at best, only very imperfect hints as to class (Wurzel, 1989; Harris, 1991, 1992; Aronoff, 1992; Halle and Marantz, 1994; Corbett and Fraser, 2000; Aronoff, 2007).", "Given this, we quantify how much information a noun's form and meaning share with its class, and determine whether that amount of information is uniform across classes.", "To do this, we measure the mutual information (Cover and Thomas, 2012) both between declension class and meaning (i.e., distributional semantic vector) and between declension class and form (i.e., orthographic form), as in Figure 1. We select two Indo-European languages (Czech and German) that have declension classes.", "We find that form and meaning both share significant amounts of information, in bits, with declension class in both languages.", "We further find that form clues are stronger than meaning clues; for form, we uncover a relatively large effect of 0.5–0.8 bits, while, for lexical semantics, a moderate one of 0.3–0.5 bits.", "We also measure the three-way interaction between form, meaning, and class, finding that phonology and semantics contribute overlapping information about class.", "Finally, we analyze individual inflection classes and uncover that the amount of information they share with form and meaning is not uniform across classes or languages.", "The morphological behavior of declension classes is quite complex.", "Although various factors are undoubtedly relevant, we focus on phonological and lexical semantic ones here.", "We have ample reason to suspect that phonological factors might affect class predictability.", "In the most basic sense, the form of inflectional suffixes is often altered based on the identity of the final segment of the stem.", "For example, the English plural suffix is spelled as -s after most consonants, as in 'cats', but as -es if it appears after s, sh, z, ch, etc., as in 'mosses', 'rushes', 'quizzes', 'beaches'.", "Often differences such as these in the spelling of plural affixes or declension class affixes are due to phonological rules that are noisily realized in orthography; there could also be regularities between form and class that do not correspond to phonological rules but still have an effect.", "For example, statistical regularities over phonological segments in continuous speech guide first-language acquisition (Maye et al., 2002), even over non-adjacent segments (Newport and Aslin, 2004).", "Statistical relationships have also been uncovered between the sounds in a word and the word's syntactic category (Farmer et al., 2006; Monaghan et al., 2007; Sharpe and Marantz, 2017) and between the orthographic form of a word and its argument structure valence (Williams, 2018).", "Thus, we expect the form of a noun to provide clues to declension class.", "Semantic factors too are often relevant for determining certain types of morphologically relevant classes, such as grammatical gender, which is known to be related to declension class.", "It has been claimed that there are only two types of gender systems: semantic systems (where only semantic information is required) and formal systems (where semantic information as well as morphological and phonological factors are relevant) (Corbett and Fraser, 2000, p. 294).",
"Moreover, a large typological survey, Qian et al. (2016), finds that meaning-sensitive grammatical properties, such as gender and animacy, can be decoded well from distributional word representations for some languages, but less well for others.", "These examples suggest that it is worth investigating whether noun semantics provides clues about declension class.", "Lastly, form and meaning might interact with one another, as in the case of phonaesthemes, where the sounds of words provide nonarbitrary clues about their meanings (Sapir, 1929; Wertheimer, 1958; Holland and Wertheimer, 1964; Maurer et al., 2006; Monaghan et al., 2014; D'Onofrio, 2014; Dingemanse et al., 2015; Dingemanse, 2018; Pimentel et al., 2019).", "Therefore, we check whether form and meaning together share information with declension class.", "We motivate an investigation into the relationship between the form of a word and its declension class by appealing, at least partly, to phonological motivations.", "However, we make the simplifying assumption that phonological information is adequately captured by orthographic word forms, i.e., strings of written symbols, which are also known as graphemes.", "In general, one should question this assumption (Vachek, 1945; Luelsdorff, 1987; Sproat, 2000, 2012; Neef et al., 2012).", "For the particular languages we investigate here (Czech and German) it is less problematic, as they have fairly transparent mappings between spelling and pronunciation (Matějček, 1998; Miles, 2000; Caravolas and Volín, 2001), which enables them to achieve higher performance on grapheme-to-phoneme conversion than do English and other opaque orthographic systems (Schlippe et al., 2012).", "These studies suggest that we are justified in taking orthography as a proxy for phonological form.", "Nonetheless, to mitigate against any phonological information being inaccurately represented in the orthographic form (e.g., vowel lengthening in German), several of our authors, who are fluent reader-annotators of our languages, checked our classes for any unexpected phonological variations.", "We exhibit examples in §3.", "We adopt a distributional approach to lexical semantics (Harris 1954; Mitchell and Lapata 2010; Turney and Pantel 2010; Bernardi et al. 2015; Clark 2015; inter alia) that relies on pretrained word embeddings for this paper.",
"We do this for multiple reasons: First, distributional semantic approaches to create word vectors, such as word2vec (Mikolov et al., 2013), have been shown to do well at extracting lexical features such as animacy and taxonomic information (Rubinstein et al., 2015) and can also recognize semantic anomaly (Vecchi et al., 2011).", "Second, the distributional approach to lexical meaning yields a straightforward procedure for extracting meaning from text corpora at scale.", "Grammatical gender has been found to interact with lexical semantics (Schwichtenberg and Schiller, 2004; Williams et al., 2019, 2020), and often can be determined from form (Brooks et al., 1993; Dobrin, 1998; Frigo and McDonald, 1998; Starreveld and La Heij, 2004).", "This means that it cannot be ignored in the present study.", "While the precise nature of the relationship between declension class and gender is far from clear, it is well established that the two should be distinguished (Aronoff 1992; Wiese 2000; Kürschner and Nübling 2011; inter alia).", "We first measure the amount of information shared between gender and class, according to the methods described in §4, to verify that the predicted relationship exists.", "We then verify that gender and class overlap in information in German and Czech to a high degree, but that we cannot reduce one to the other (see Table 3 and §6).", "We proceed to control for gender, and subsequently measure how much additional information form and meaning provide about declension class.", "For our study, we need orthographic forms of nouns, their associated word vectors, and their declension classes.", "Orthographic forms can be found in any large text corpus or dictionary.", "We isolate noun lexemes (i.e., syntactic category-specific representations of words) by language.", "We select Czech nouns from UniMorph (Kirov et al., 2018) and German nouns from CELEX2 (Baayen et al., 1995).", "For lexical semantics, we trained 300-dimensional word2vec vectors on language-specific Wikipedia, using the GENSIM toolkit (Řehůřek and Sojka, 2010).", "We select the nominative singular form as the donor for both orthographic and lexical semantic representations because it is the lemma in Czech and German.", "It is also usually the stem for the rest of the morphological paradigm.", "We restrict our investigation to monomorphemic lexemes because:", "(i) one stem can take several affixes, which would multiply its contribution to the results, and", "(ii) certain affixes come with their own class.", "Compared to form and meaning, declension class is a bit harder to come by, because it requires linguistic annotation.", "We associated lexemes with their classes on a by-language basis by relying on annotations from fluent speaker linguists, either for class determination (for Czech) or for verifying existing dictionary information (for German).", "For Czech, declension classes were derived by an edit distance heuristic over affix forms, which grouped lemmata into subclasses if they received the same inflectional affixes (i.e., they constituted a morphological paradigm).", "If orthographic differences between two sets of suffixes in the lemma form could be accounted for by positing a phonological rule, then the two sets were collapsed into a single set; for example, in the feminine -a declension class, we collapsed forms for which the dative singular suffix surfaces as -e following a coronal continuant consonant (figurka : figurce 'figurine.DAT.SG'), -i following a palatal nasal (piraňa : piraňi 'piranha.DAT.SG'), and as -ě following all other consonants (kráva : krávě 'cow.DAT.SG').",
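For the embedding step described above, a minimal sketch with the GENSIM toolkit could look as follows; the corpus is a stand-in for a tokenized Wikipedia dump, and all hyperparameters other than the 300 dimensions are assumptions rather than the paper's settings.

```python
# A minimal sketch of obtaining 300-dimensional word2vec vectors with
# gensim; the toy corpus stands in for a tokenized Wikipedia dump.
from gensim.models import Word2Vec

corpus = [
    ["der", "hund", "bellt"],
    ["die", "katze", "schläft"],
    ["der", "hund", "schläft", "auch"],
]

model = Word2Vec(sentences=corpus, vector_size=300, window=5,
                 min_count=1, workers=4, seed=0)
vector = model.wv["hund"]   # the lexical-semantic representation v_i
print(vector.shape)         # (300,)
```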
"As for meaning, descriptively, gender is roughly a superset of declension classes in Czech; among the masculine classes, animacy is a critical semantic feature, whereas form seems to matter more for feminine and neuter classes.", "For German, nouns came morphologically parsed and lemmatized, as well as coded for class in CELEX2.", "We also use CELEX2 to isolate monomorphemic noun lexemes and bin them into classes; however, CELEX2 declension classes are more fine-grained than traditional descriptions of declension class; mappings between CELEX2 classes and traditional linguistic descriptions of declension class (Alexiadou and Müller, 2008) are provided in Table 4 in the Appendix.", "The CELEX2 declension class identifier scheme has multiple subparts.", "Each declension class identifier includes:", "(i) the number prefix ('S' for singular, 'P' for plural),", "(ii) the morphological form identifier (zero refers to paradigmatically missing forms, e.g., the plural is zero for singularia tantum nouns, and other numbers refer to a form identifier of particular morphological processes, e.g., the genitive applies an additional suffix for singular masculine nouns, but never for feminines), and", "(iii) an optional 'u' identifier, which refers to vowel umlaut, if present.", "More details of the German preprocessing steps are in the Appendix.", "After associating nouns with forms, meanings, and classes, we perform exclusions: Because frequency affects class entropy (Parker and Sims, 2015), we removed all classes with fewer than 20 lexemes.", "We subsequently removed all lexemes which did not appear in our word2vec models trained on Wikipedia dumps.", "The final tally of Czech yields 2672 nouns in 13 declension classes, and the final tally of German yields 3684 nouns in 16 declension classes, which can be broken into 3 types of singular and 7 types of plural.", "Table 5 in the Appendix provides final lexeme counts by declension class.", "The remaining lexemes were split into 10 folds: one for testing, another for validation, and the remaining eight for training.", "Table 1 shows train–validation–test splits, average length of nouns, and number of declension classes, by language.", "(We ran another version of our models that included all the original classes and observed no notable differences.)", "Notation.", "We define each lexeme in a language as a triple.", "Specifically, the i-th triple consists of an orthographic word form w_i, a distributional semantic vector v_i that encodes the lexeme's semantics, and a declension class c_i.", "We assume these triples follow an (unknown) probability distribution p(w, v, c), which can be marginalized to obtain p(c), for example.", "We take the space of word forms to be the Kleene closure Σ* over a language's alphabet Σ; thus, we have w_i ∈ Σ*.", "Our distributional semantic space is a high-dimensional real vector space ℝ^d, where v_i ∈ ℝ^d.", "The space of declension classes is language-specific and contains as many elements as the language has classes, i.e., C = {1, . . .
, K}, where c_i ∈ C.", "For each noun, a gender g_i from a language-specific space of genders G is associated with the lexeme.", "In both Czech and German, G contains three genders: feminine, masculine, and neuter.", "We also consider four random variables: a Σ*-valued random variable W, an ℝ^d-valued random variable V, a C-valued random variable C, and a G-valued random variable G.", "Bipartite Mutual Information.", "Bipartite MI (or, simply, MI) is a symmetric quantity that measures how much information (in bits) two random variables share.", "In the case of C (declension class) and W (orthographic form), we have MI(C; W) = H(C) − H(C | W); as can be seen, MI is the difference between an unconditional and a conditional entropy.", "The unconditional entropy is defined as H(C) = −∑_{c ∈ C} p(c) log p(c), and the conditional entropy is defined as H(C | W) = −∑_{c ∈ C} ∑_{w ∈ Σ*} p(c, w) log p(c | w).", "The mutual information MI(C; W) naturally encodes how much the orthographic word form tells us about its corresponding lexeme's declension class.", "Likewise, to measure the interaction between declension class and lexical semantics, we also consider the bipartite mutual information MI(C; V).", "Tripartite Mutual Information.", "To consider the interaction between three random variables at once, we need to generalize MI to three variables.", "One can calculate tripartite MI as follows: MI(C; W; V) = MI(C; W) − MI(C; W | V); as can be seen, tripartite MI is the difference between a bipartite MI and a conditional bipartite MI.", "The conditional bipartite MI is defined as MI(C; W | V) = H(C | V) − H(C | W, V); essentially, tripartite MI is the difference between how much C and W interact and how much they interact after controlling for the meaning V.", "Controlling for Gender.", "Working with mutual information also gives us a natural way to control for quantities that we know influence meaning and form.", "We do this by considering conditional MI.", "We consider both bipartite and tripartite conditional mutual information.", "These are defined as follows: MI(C; W | G) = H(C | G) − H(C | W, G), and MI(C; W; V | G) = MI(C; W | G) − MI(C; W | V, G); estimating these quantities tells us how much C and W (and, in the case of tripartite MI, V also) interact after we take G (the grammatical gender) out of the picture.", "Figure 1 provides a graphical summary of this section up to this point.", "Normalization.", "To further contextualize our results, we consider two normalization schemes for MI.", "Normalizing renders MI estimates across languages more directly comparable (Gates et al., 2019).", "We emphasize here the subtle but important typographic distinction between MI(C; W; V) and MI(C; W, V): the difference in notation lies in the comma replacing the semicolon.", "While the first (tripartite MI) measures the amount of (redundant) information shared by the three variables, the second (bipartite) measures the (total) information that class shares with either the form or the lexical semantics.", "We consider the normalized mutual information, i.e., the fraction of the unconditional entropy that is the mutual information: NMI(C; W) = MI(C; W) / min{H(C), H(W)}.", "This yields the percentage of the entropy that the mutual information accounts for: a more interpretable notion of the predictability between class and form or meaning.", "In practice, H(C) ≪ H(W) in most cases, and our normalized mutual information is then termed the uncertainty coefficient (Theil, 1970): U(C | W) = MI(C; W) / H(C).",
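A minimal sketch of the plug-in estimation of these quantities, over a toy list of (class, gender) pairs; it computes H(C), MI(C; G) via the identity MI(C; G) = H(C) + H(G) − H(C, G), and the uncertainty coefficient.

```python
# Plug-in estimates of entropy, MI, and the uncertainty coefficient
# over a toy list of (declension_class, gender) pairs.
import math
from collections import Counter

pairs = [("c1", "fem"), ("c1", "fem"), ("c2", "msc"),
         ("c2", "msc"), ("c3", "neu"), ("c1", "neu")]

def entropy(counts):
    """Plug-in entropy, in bits, of an empirical distribution."""
    n = sum(counts.values())
    return -sum((k / n) * math.log2(k / n) for k in counts.values())

h_c = entropy(Counter(c for c, _ in pairs))
h_g = entropy(Counter(g for _, g in pairs))
h_cg = entropy(Counter(pairs))      # joint entropy H(C, G)
mi = h_c + h_g - h_cg               # MI(C; G) = H(C) + H(G) - H(C, G)
print(f"H(C) = {h_c:.2f} bits; MI(C; G) = {mi:.2f} bits; U = {mi / h_c:.2f}")
```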
"Computation and Approximation. In order to estimate the mutual information quantities of interest per §4, we need to estimate a variety of entropies.", "We derive our mutual information estimates from a corpus D = {(v_i, w_i, c_i)}_{i=1}^N.", "The most straightforward quantity to estimate is H(C).", "Given a corpus, we may use plug-in estimation: We compute the empirical distribution over declension classes from D.", "Then, we plug that empirical distribution over declension classes C into the entropy formula above. This estimator is biased (Paninski, 2003), but is a suitable choice given that we have only a few declension classes and a large amount of data.", "Future work will explore whether choice of estimator (Miller, 1955; Hutter, 2001; Archer et al., 2013, 2014) could affect the conclusions of studies such as this one.", "In contrast, estimating H(C | W) is non-trivial.", "We cannot simply apply plug-in estimation because we cannot compute the infinite sum over Σ* that is required.", "Instead, we follow previous work (Brown et al., 1992; Pimentel et al., 2019) in using the cross-entropy upper bound to approximate H(C | W) with a model.", "More formally, for any probability distribution q(c | w), we have H(C | W) ≤ H_q(C | W) = −∑_{c ∈ C} ∑_{w ∈ Σ*} p(c, w) log q(c | w).", "To circumvent the need for infinite sums, we use a held-out sample D′ = {(v_i, w_i, c_i)}_{i=1}^M, disjoint from D, to approximate the true cross-entropy H_q(C | W) with the quantity Ĥ_q(C | W) = −(1/M) ∑_{i=1}^M log q(c_i | w_i), where we assume the held-out data is distributed according to the true distribution p.", "We note that Ĥ_q(C | W) → H_q(C | W) as M → ∞.", "While the exposition above focuses on learning a distribution q(c | w) for classes and forms to approximate H(C | W), the same methodology can be used to estimate all necessary conditional entropies.", "Form and gender: q(c | w, g).", "We train one LSTM classifier (Hochreiter and Schmidhuber, 1996) for each language.", "The last hidden state of the LSTM models is fed into a linear layer and then a softmax non-linearity to obtain probability distributions over declension classes.", "To condition our model on gender, we embed each gender and feed it into each LSTM's initial hidden state.", "Meaning and gender: q(c | v, g).", "We trained a simple multilayer perceptron (MLP) classifier to predict the declension class from the word2vec representation.", "When conditioning on gender, we again embed each gender class, concatenating these embeddings with the word2vec ones before feeding the result into the MLP.", "Form, meaning, and gender: q(c | w, v, g).", "We again trained two LSTM classifiers, but this time, also conditioned on meaning (i.e., word2vec).", "Before training, we reduce the dimensionality of the word2vec embeddings from 300 to k dimensions by running PCA on each language's embeddings.", "We then linearly transformed them to match the hidden size of the LSTMs, and fed them in.", "To also condition on gender, we followed the same procedures, but used half of each LSTM's initial hidden state for each vector (i.e., word2vec and one-hot gender embeddings).", "Optimization.", "We trained all classifiers using Adam (Kingma and Ba, 2015), and the code was implemented using PyTorch.", "Hyperparameters (number of training epochs, hidden sizes, PCA compression dimension k, and number of layers) were optimized using Bayesian optimization with a Gaussian process prior (Snoek et al., 2012).",
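A condensed sketch of the form-based classifier and the held-out cross-entropy it yields; architecture details (sizes, gender conditioning, batching) are simplified relative to the paper's setup, and the batch here is random toy data.

```python
# Sketch of estimating H(C | W) via the held-out cross-entropy of an LSTM
# classifier q(c | w): mean negative log-probability of the true class.
import math
import torch
import torch.nn as nn

class FormClassifier(nn.Module):
    def __init__(self, n_chars, n_classes, dim=32):
        super().__init__()
        self.emb = nn.Embedding(n_chars, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, n_classes)

    def forward(self, chars):                 # chars: (batch, seq_len)
        h, _ = self.lstm(self.emb(chars))
        return self.out(h[:, -1])             # logits from last hidden state

model = FormClassifier(n_chars=30, n_classes=5)
chars = torch.randint(0, 30, (8, 6))          # toy batch of encoded word forms
labels = torch.randint(0, 5, (8,))            # toy declension classes

loss = nn.CrossEntropyLoss()(model(chars), labels)    # in nats
print(f"cross-entropy upper bound: {loss.item() / math.log(2):.2f} bits")
```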
"We explore a maximum of 50 models for each experiment, maximizing the expected improvement on the validation set.", "With our empirical approximations of the desired entropy measures, we can calculate the desired approximated MI values, e.g., MI(C; W | G) ≈ Ĥ(C | G) − Ĥ_q(C | W, G),", "where Ĥ(C | G) is the plug-in estimation of the entropy.", "Such an approximation, though, is not ideal, since we do not know if the true MI is approximated from above or from below.", "Since we use a plug-in estimator for H(C | G), which underestimates entropy, and since H(C | W, G) is estimated with a cross-entropy upper bound, we have MI(C; W | G) = H(C | G) − H(C | W, G) ≳ Ĥ(C | G) − H(C | W, G) ≳ Ĥ(C | G) − Ĥ_q(C | W, G).", "We note that these are expected lower bounds, i.e., they are exact when taking an expectation under the true distribution p.", "We cannot make a similar statement about tripartite MI, though, since it is computed as the difference of two lower-bound approximations of true mutual information quantities.", "Our main experimental results are presented in Table 2. We find that both form and lexical semantics significantly interact with declension class in both Czech and German (each p < 0.01).", "We observe that our estimates of MI(C; W | G) are larger (0.5–0.8 bits) than our estimates of MI(C; V | G) (0.3–0.5 bits).", "We also observe that the MI estimates in Czech are higher than in German.", "However, we caution that the unnormalized estimates for the two languages are not fully comparable because they hail from models trained on different amounts of data.", "The tripartite MI estimates between class, form, and meaning were relatively small (0.2–0.35 bits) for both languages.", "We interpret this finding as showing that much of the information contributed by form is not redundant with information contributed by meaning, although a substantial amount is.", "(All results in this section were significant for both languages, according to a Welch (1947) t-test, which yielded p < 0.01 after Benjamini and Hochberg's correction.)", "A Welch (1947) t-test differs from a Student (1908) t-test in that the latter assumes equal variances, and the former does not, making it preferable (see Delacre et al. 2017).",
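The significance testing described here maps onto standard tooling: scipy's ttest_ind with equal_var=False performs a Welch t-test, and the Benjamini–Hochberg step-up procedure can then be applied across the resulting p-values. A minimal sketch with toy samples standing in for per-run estimates:

```python
# Welch's t-test (unequal variances) plus a Benjamini-Hochberg correction.
# The two samples are toy stand-ins for per-run MI estimates.
from scipy.stats import ttest_ind

model_a = [0.52, 0.55, 0.49, 0.58, 0.54]
model_b = [0.31, 0.35, 0.28, 0.40, 0.33]

t, p = ttest_ind(model_a, model_b, equal_var=False)   # Welch's t-test

def benjamini_hochberg(pvals, alpha=0.01):
    """Return which hypotheses are rejected under BH at level alpha."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * alpha:
            k_max = rank                      # largest rank passing the test
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k_max:
            reject[i] = True
    return reject

print(f"Welch t = {t:.2f}, p = {p:.4f}")
print(benjamini_hochberg([p, 0.2, 0.003]))
```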
"As a final sanity check, we measure the mutual information between class and gender, MI(C; G) (see Table 3).", "For both languages, the mutual information between declension class and gender is significant.", "Our MI estimates range from approximately 3/4 of a bit in German up to 1.4 bits in Czech, which respectively amount to nearly 25% and nearly 51% of the remaining unconditional entropy.", "Like the quantities discussed in §4, this MI was estimated using simple plug-in estimation.", "Remember, if class were entirely reducible to gender, the conditional entropy of class given gender would be zero.", "This is not the case: Although the conditional entropy of class given gender is lower for Czech (1.35 bits) than for German (2.17 bits), in neither case is declension class informationally equivalent to the language's grammatical gender system.", "Next, we ask whether individual declension classes differ in how idiosyncratic they are, e.g., does any one German declension class share less information with form than the others?", "To address this, we qualitatively inspect per-class half-pointwise mutual information in Figures 2a-2b.", "See Table 5 in the Appendix for the five highest and lowest surprisal examples per model.", "Several qualitative trends were observed:", "(i) classes show a decent amount of variability,", "(ii) unconditional entropy for each class is inversely proportional to the class's size,", "(iii) half-pointwise MI is higher on average for Czech than German, and", "(iv) classes that have high MI(C = c; V | G) usually have high MI(C = c; W | G) (with a few notable exceptions we discuss below).", "Czech.", "In general, declension classes associated with masculine nouns (g = MSC) have smaller MI(C = c; W | G) than classes associated with feminine (g = FEM) and neuter (g = NEU) ones of a comparable size, the exception being 'special, masculine, plural -ata'.", "This class ends exclusively in -e or -ě, which might contribute to that class's higher MI(C = c; W | G).", "That MI(C = c; W | G) is high for feminine and neuter classes suggests that the overall MI(C; W | G) results might be largely driven by these classes, which predominantly end in vowels.", "We also note that the high MI(C = c; W | G) for 'feminine plural -e' might be driven by the many Latin or Greek loanwords present in this class.", "With respect to meaning, masculine declension classes can reflect degrees of animacy: 'animate1' contains nouns referring mostly to humans and a few animals (kocour 'tomcat', čolek 'newt'), 'animate2' contains nouns referring mostly to animals and a few humans (syn 'son', křesťan 'Christian'), 'inanimate1' contains many plants, staple foods (chléb 'bread', ocet 'vinegar') and meaningful places (domov 'home', kostel 'church'), and 'inanimate2' contains many basic inanimate nouns (kámen 'stone').", "Of these masculine classes, 'inanimate1' has a lower MI(C = c; V | G) than its class size alone might lead us to predict.", "Feminine and neuter classes show no clear pattern, although the neuter classes '-ení' and '-o' have comparatively high MI(C = c; V | G).", "For MI(C = c; V; W | G), we observe that 'masculine, inanimate1' is the smallest quantity, followed by most other masculine classes (e.g., masculine animate classes with -ové or -i plurals) for which MI(C = c; W | G) was also low.", "Among non-masculine classes, we observe that feminine [Figure 2: per-class MI(C = c; · | G) and H(C = c | G) over inflection classes, in panels for tripartite MI, both, form, and meaning]",
"'pl -i' and the neuter classes '-o' and '-ení' show higher tripartite MI.", "The latter two classes have relatively high MI across the board.", "German.", "MI(C = c; W | G) for classes containing words with umlautable vowels (i.e., S3/P1u, S1/P1u) or loan words (i.e., S3/loan) tends to be high; in the former case, our models seem able to separate umlautable from non-umlautable vowels, and in the latter case, loan word orthography from native orthography.", "MI(C = c; V | G) quantities are roughly equivalent across classes of different size, with the exception of three classes: S1/P4, S3/P1, and S1/P3.", "S1/P4 consists of highly semantically variable nouns, ranging from relational noun lexemes (e.g., Glied 'member', Weib 'wife', Bild 'picture') to masses (e.g., Reis 'rice'), which perhaps explains its relatively high MI(C = c; V | G).", "For S1/P3 and S3/P1, MI(C = c; V | G) is low, and we observe that both declension classes idiosyncratically group clusters of semantically similar nouns: S1/P3 contains exotic birds (Papagei 'parrot', Pfau 'peacock'), but also nouns ending in -or (Traktor 'tractor', Pastor 'pastor'), whereas S3/P1 contains very few nouns, such as names of months (März 'March', Mai 'May') and names of mythological beasts (e.g., Sphinx, Alp).", "Tripartite MI is fairly idiosyncratic in German: The lowest quantity comes from the smallest class, S1/P2u.", "S1/P3, a class with low MI(C = c; V | G) from above, also has low tripartite MI.", "We speculate that S1/P3 could be a sort of catch-all class with no clear regularities.", "The highest tripartite MI comes from S1/P4, which also had high MI(C = c; V | G).", "The existence of significant tripartite MI results suggests that submorphemic meaning-bearing units, or phonaesthemes, might be present.", "Taking inspiration from Pimentel et al.
(2019), which aims to automatically discover such units, we observe that many words in S1/P4 contain letters {d, e, g, i, l}, often in identically ordered orthographic sequences, such as Bild, Biest, Feld, Geld, Glied, Kind, Leib, Lied, Schild, Viech, Weib, etc.", "While these letters are common in German orthography, their noticeable presence suggests that further elucidation of declension classes in the context of phonaesthemes could be warranted.", "We adduce new evidence that declension class membership is neither wholly idiosyncratic nor fully deterministic based on form or meaning in Czech and German.", "We quantify mutual information and find estimates which range from 0.2 bits to nearly one bit.", "Despite their relatively small magnitudes, our estimates of mutual information between class and form accounted for between 25% and 60% of the class's entropy, even after relevant controls, and MI between class and meaning accounted for between 13% and nearly 40%.", "We analyze results per-class, and find that classes vary in how much information they share with meaning and form.", "We also observe that classes that have high MI(C = c; V | G) often have high MI(C = c; W | G), with a few noted exceptions that have specific orthographic (e.g., German umlauted plurals) or semantic (e.g., Czech masculine animacy) properties.", "In sum, this paper has proposed a new information-theoretic method for quantifying the strength of morphological relationships, and applied it to declension class.", "We verify and build on existing linguistic findings by showing that the mutual information quantities between declension class, orthographic form, and lexical semantics are statistically significant.", "Thanks to Guy Tabachnik for discussions on Czech phonology, to Jacob Eisenstein for useful questions about irregularity, and to Andrea Sims and Jeff Parker for advice on citation forms.", "Thanks to Ana Paula Seraphim for helping beautify Figure 1." ]
[ "other", "abstain", "objective", "method", "result", "objective", "abstain", "objective", "result", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "result", "objective", "objective", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "objective", "result", "abstain", "result", "result", "objective", "result", "other", "other" ]
[ "Open relation extraction aims to cluster relation instances referring to the same underlying relation, which is a critical step for general relation extraction.", "Current OpenRE models are commonly trained on the datasets generated from distant supervision, which often results in instability and makes the model easily collapsed.", "In this paper, we revisit the procedure of OpenRE from a causal view.", "By formulating OpenRE using a structural causal model, we identify that the above-mentioned problems stem from the spurious correlations from entities and context to the relation type.", "To address this issue, we conduct Element Intervention , which intervenes on the context and entities respectively to obtain the underlying causal effects of them.", "We also provide two specific implementations of the interventions based on entity ranking and context contrasting.", "Experimental results on unsupervised relation extraction datasets show that our methods outperform previous state-of-the-art methods and are robust across different datasets 1 .", "Relation extraction (RE) is the task to extract relation between entity pair in plain text.", "For example, when given the entity pair (Obama, the United States) in the sentence Obama was sworn in as the 44th president of the United States , an RE model should accurately predict the relationship President of and extract the corresponding triplet (Obama, President of, the United States) for downstream tasks.", "Despite the success of many RE models (Zeng et al., 2014; Baldini Soares et al., 2019), most previous RE paradigms rely on the pre-defined relation types, which are always unavailable in open domain scenario and thereby limits their capability in real applications.", "Open Relation Extraction (OpenRE), on the other hand, has been proposed to extract relation facts without pre-defined relation types neither annotated data.", "Given a relation instance consisting of two entities and their context, OpenRE aims to identify other instances which mention the same relation.", "To achieve this, OpenRE is commonly formulated as a clustering or pair-matching task.", "Therefore the most critical challenge for OpenRE is how to learn effective representations for relation instances and then cluster them.", "To this end, Yao et al. (2011) adopts topic model (Blei et al., 2003) to generate latent relation type for unlabelled instances.", "Later works start to utilize datasets collected using distant supervision for model training.", "Along this line, Marcheggiani and Titov (2016) utilizes an auto-encoder model and trains the model through self-supervised signals from entity link predictor.", "Hu et al. 
"Hu et al. (2020) encodes each instance with a pretrained language model (Devlin et al., 2019; Baldini Soares et al., 2019) and learns the representation via self-supervised signals from pseudo labels.", "Unfortunately, current OpenRE models are often unstable and prone to collapse (Simon et al., 2019).", "For example, OpenRE models frequently cluster all relation instances with the context was born in into the relation type BORN IN PLACE because they share similar context information.", "However, was born in can also refer to the relation BORN IN TIME.", "Furthermore, current models also tend to cluster two relation instances with the same entities (i.e., relation instances with the same head and tail entities) or the same entity types into one relation.", "This problem can be even more severe if the dataset is generated using distant supervision, because such data severely relies on prototypical context and entity information as supervision signals and therefore lacks diversity.", "In this paper, we attempt to explain and resolve the above-mentioned problem in OpenRE from a causal view.", "Specifically, we formulate the process of OpenRE using a structural causal model (SCM) (Pearl, 2009), as shown in Figure 1. The main assumption behind the SCM is that distant supervision will generate relation instances highly correlated with the original prototypical instance, and there is a strong connection between the generated instance and the prototypical instance through either their entities or their context.", "For example, [Jobs] was born in [California] and [Jobs] was born in [1955] are highly correlated because they share the similar context was born in and the entity Jobs.", "Such connections result in spurious correlations, which appear in the form of backdoor paths in the SCM.", "Then the spurious correlations will mislead OpenRE models, which are trained to capture the connections from entities and context to the relation type.", "Based on the above observations, we propose element intervention, which conducts backdoor adjustment on entities and context respectively to block the backdoor paths.", "However, due to the lack of supervision signals, we cannot directly optimize towards the underlying causal effects.", "To this end, we further propose two surrogate implementations of the adjustments on context and entities, respectively.", "Specifically, we regard the instances in the original datasets as the relation prototypes.", "Then we implement the adjustment on context through a Hierarchy-Based Entity Ranking (Hyber), which fixes the context, samples related entities from an entity hierarchy tree and learns the causal relation through rank-based learning.", "Besides, we implement the adjustment on entities through a Generation-based Context Contrasting (Gcc), which fixes the entities, generates positive and negative contexts from a generation-based model and learns the causal effects through contrastive learning.", "We conduct experiments on different unsupervised relation extraction datasets.", "Experimental results show that our method outperforms previous state-of-the-art methods by a large margin and suffers much less performance discrepancy between different datasets, which demonstrates the effectiveness and robustness of the proposed methods.", "In this section, we formulate OpenRE from the perspective of the Structural Causal Model and give the theoretical justification for the intervention methods that block the backdoor paths from relation elements (i.e., context and entity pair) to the latent relation types.",
"Relation extraction (RE) is the task of extracting the relationship between two given entities in the context.", "Considering the sequence example: S = [ s 0 , ..., s n 1 ] which contains n words, e 1 = [ i, j ] and e 2 = [ k, l ] indicate the entity pair, where 0 i j < k l n 1 , a relation instance X is defined as X = ( S , e 1 , e 2 ) , (i.e. the tuple of entity pair and the corresponding context).", "The element of a relation instance is the entity pair and the corresponding context.", "Traditional RE task is to predict the relations type when given X .", "However, the target relation types are not pre-defined in OpenRE.", "Consequently, OpenRE is commonly formulated as a clustering task or a pair-matching task by considering whether two relation instances X i and X j refer to the same relation.", "Unfortunately, current OpenRE models are often unstable and easily collapsed (Simon et al., 2019).", "In the next section, we formulate OpenRE using a structural causal model and then identify the reasons behind these deficiencies from the SCM.", "Figure 1", "(a) shows the structural causal model for OpenRE.", "The main idea behind the SCM is distant supervision will generate highly correlated relation instances to the original prototypical instance, and there is a strong connection between the generated instance to the prototypical instance through Hugo was born in [ Paris ], [France].", "either their entities or their context.", "Specifically, in the SCM, we describe OpenRE with five critical variables: 1) the prototypical relation instance P , which is a representative relation instance of one relation type cluster; 2) the entity pair E , which encodes the entity information of one relation instance; 3) the context C , which encodes the context information of one relation instance; 4) a relation instance X (which can be generated from distant supervision or other strategies) and 5) the final pair-wise matching result Y , which corresponds to whether instance X and the prototypical relation instance P entail the same relation.", "Given the variables mentioned above, we formulate the process of generating OpenRE instances based on the following causal relations: E P C formulates the process of sampling related entities and context respectively from the prototypical relation instance P .", "E X C formulates the relation instance generating process.", "Given the context C and entities E from the prototypical relation instance P , a new relation instance X is generated based on the information in C and E. 
"Given a prototypical relation instance P, the learning process of OpenRE commonly maximizes the probability P(y, P | X) = P(y, P | E, C).", "However, as can be observed from the SCM, there exists a backdoor path C ← P → E → X when we learn the underlying effects of the context C.", "That is to say, the learned effect of C on Y is confounded by E (through P).", "For example, when we learn the effects of the context was born in on the relation BORN IN PLACE, the backdoor path will lead the model to mistake the contribution of the entities (PERSON, PLACE) for the contribution of the context, and therefore results in spurious correlations.", "The same thing happens when we learn the effects of the entities E, which is influenced by the backdoor path E ← P → C → X.", "As a result, optimizing these spurious correlations will result in an unstable and collapsed OpenRE model.", "To resolve the spurious correlations, we adopt the backdoor adjustment (Pearl, 2009) to block the backdoor paths.", "Specifically, we separately intervene on context C and entities E by applying the do-operation.", "Entity Intervention.", "As shown in Figure 1", "(b), to avoid the spurious correlations of entities to relation types, we conduct the do-operation by intervening on the entities E: P(Y, P | do(E = e_0)) = Σ_{C,X} P(C, P) P(X, Y | e_0, C, P) = Σ_C P(C, P) P(Y | e_0, C, P) = Σ_C P(P) P(C | P) P(Y | e_0, C, P). (1) Since P(P) is uniformly distributed in the real world, this equation can be rewritten as: P(Y, P | do(E = e_0)) ∝ Σ_C P(C | P) P(Y | e_0, C, P). (2)", "This equation means the causal effect from the entities E to the matching result Y can be estimated by considering the corresponding probability of each context given the prototypical relation instance P.", "The detailed implementation will be described in the next section.", "Context Intervention.", "Similarly, we conduct context intervention to avoid the spurious correlations of context to relation types, as shown in Figure 1", "(c): P(Y, P | do(C = c_0)) ∝ Σ_E P(E | P) P(Y | c_0, E, P), (3) which means the causal effect from the context C to the matching result Y can be estimated by considering the corresponding probability of each entity E given P.", "The detailed implementation will also be described in the next section.", "To effectively capture the causal effects of entities E and context C on OpenRE, a matching model P(Y | C, E, P; θ) should be learned by optimizing the causal effects:", "L(θ) = I(X, P) P(Y = 1, P | do(E = e(X))) + I(X, P) P(Y = 1, P | do(C = c(X))) + [1 - I(X, P)] P(Y = 0, P | do(E = e(X))) + [1 - I(X, P)] P(Y = 0, P | do(C = c(X))) (4)", "where e(X) and c(X) represent the entities and context in relation instance X, and I(X, P) is an indicator which represents whether X and P belong to the same relation.", "P(Y | C, E, P; θ) = P(Y | X, P; θ) is a matching model, which is defined using a prototype-based measurement: P(Y | X, P; θ) ∝ D(R(X; θ), R(P; θ)), (5) where D is a distance measurement and R(X; θ) is a representation learning model parametrized by θ, which needs to be optimized during learning.", "In the following, we will use D(X, P) = D(R(X; θ), R(P; θ)) for short.",
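Equations (2)-(3) admit a Monte-Carlo reading: the interventional matching score is the matching model averaged over elements sampled given the prototype. A minimal sketch, assuming a sampler and a trained matching model are available; both are placeholders rather than the paper's released code.

```python
def score_do_entity(e0, prototype, sample_context, match, n_samples=32):
    """Estimate P(Y, P | do(E = e0)), up to a constant, per Eq. (2):
    average the matching model over contexts drawn from P(C | P)."""
    contexts = [sample_context(prototype) for _ in range(n_samples)]
    return sum(match(e0, c, prototype) for c in contexts) / n_samples

def score_do_context(c0, prototype, sample_entities, match, n_samples=32):
    """Estimate P(Y, P | do(C = c0)), up to a constant, per Eq. (3):
    average the matching model over entity pairs drawn from P(E | P)."""
    entity_pairs = [sample_entities(prototype) for _ in range(n_samples)]
    return sum(match(e, c0, prototype) for e in entity_pairs) / n_samples
```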
"However, it is difficult to directly optimize the above loss function because 1) in unsupervised OpenRE, we are unable to know whether the relation instance X generated from (E, C) matches the prototypical relation instance P; 2) we are unable to traverse all possible E and C in Equations (2) and (3).", "To resolve these problems, in the next section, we will describe how we implement the context intervention via hierarchy-based entity ranking and the entity intervention via generation-based context contrasting.", "As we mentioned above, it is difficult to directly optimize the causal effects via Equation (4).", "To tackle this issue, this section provides a detailed implementation to approximate the causal effects.", "Specifically, we regard all relation instances in the original data as prototypical relation instances P, and then generate highly correlated relation instances X from P via hierarchy-based sampling and generation-based contrasting.", "Then we regard structural signals from the entity hierarchy and the confidence score from the generator as distant supervision signals, and learn the causal effects via ranking-based learning and contrastive learning.", "To implement context intervention, we propose to formulate P(E | P) using an entity hierarchy, and approximately learn to optimize the causal effects of P(Y = 1, P | do(C)) and P(Y = 0, P | do(C)) in Equation (4) via a hierarchy-based entity ranking loss.", "Specifically, we first regard all relation instances in the data as prototypical relation instances P.", "Then we formulate the distribution P(E | P) by fixing the context in P and replacing entities by sampling from an entity hierarchy.", "Each sampled entity is regarded as having the same P(E | P).", "Intuitively, an entity closer to the original entities in P tends to generate a relation instance more consistent with P.", "To approximate this semantic similarity, we utilize the meta-information in WikiData (i.e., the instance of and subclass of statements, which describe the basic property and concept of each entity), and construct a hierarchical entity tree for ranking the similarity between entities.", "In this work, we apply a three-level hierarchy through these two statements: Sibling Entities: The entities belonging to the same parent category as the original entity.", "For example, Aube and Paris are sibling entities since they are both child entities of department of France, and both express the concepts of location and GPE.", "These sibling entities can be considered as golden entities to replace with.", "Cousin Entities: The entities belonging to the same grandparent category but a different parent category from the original entity.", "For example, Occitanie and Paris are of the same grandparent category French Administrative Division, but have different parent categories.", "These entities can be considered as silver entities since they are likely to be the same type as the original one, but less likely so than the sibling entities.", "Other Entities: The entities beyond the grandparent category, which are much less likely to be the same type as the original one.", "For the example in Figure 2, the prototypical relation instance Hugo was born in [Paris], [France] is sampled to be intervened on.", "We first fix the context and randomly choose one of the head or tail entities to be replaced.", "In this case, we choose Paris.", "Then, entities that correspond to different hierarchy levels are sampled to replace the original entity.",
"In this case, Aube is sampled as the sibling entity, Occitanie as the cousin entity, and 19th century as the other entity.", "After sampling these intervened instances, we approximately optimize P(Y, P | do(C)) using a rank-based loss function: L_E(θ; X̃) = Σ_{i=1}^{n-1} max(0, D(P, X_i) - D(P, X_{i+1}) + m_E), (6) where θ denotes the model parameters and D(X_i, P) is the distance between the representations of the generated relation instance X_i and the prototypical relation instance P.", "X̃ is the intervened relation instance set, m_E is the margin for the entity ranking loss, and n = 3 is the depth of the entity hierarchy.", "Different from the context intervention that can easily replace entities, it is more difficult to intervene on entities and modify the context.", "Fortunately, the rapid progress in pre-trained language models (Radford et al., 2019; Lewis et al., 2020; Raffel et al., 2020) makes language generation from RDF data (https://www.w3.org/TR/WD-rdf-syntax-971002/) available (Ribeiro et al., 2020).", "So in this work, we take a different paradigm named Generation-based Context Contrasting, which directly generates different relation instances from specifically designed relation triplets, and approximately learns to optimize the causal effects of P(Y = 1, P | do(E)) and P(Y = 0, P | do(E)) in Equation (4) via contrastive learning.", "Specifically, we first sample relation triplets from Wikidata as prototypical relation instances P, and then generate relation triplets with the same entities but different relation contexts using the following strategies: Relation Renaming, which keeps the same entity pair as the original one, but uses an alias relation name to generate a sentence with different expressions.", "Then this instance is considered as a positive sample for the prototypical relation instance.", "Context Expansion, which extends the original relation instance with an additional triplet.", "The added triplet shares the same head/tail entity with the original instance but differs in the relation and the tail/head entity.", "This variety aims to add irrelevant context, which forces the model to focus on the important part of the context, and is also considered as a positive sample for the prototypical relation instance.", "Relation Replacing, which contains the same entity pair as the original one, but with other relations between these two entities.", "This variety aims to avoid spurious correlations based only on the entity pair, and is considered as a negative instance for the prototypical relation instance.", "Then we use the generator to generate texts based on these triplets.", "Specifically, we first wrap the triplets with special markers [H], [T], [R], corresponding to the head entity, tail entity, and relation name.", "Then we input the concatenated texts for relation instance generation.", "In our implementation, we use T5 (Raffel et al., 2020; Ribeiro et al., 2020) as the base generator, and pre-train the generator on WebNLG data (Gardent et al., 2017).", "After sampling these intervened instances, we approximately optimize P(Y, P | do(E)) using the following contrastive loss function: L_C(θ; X̃) = Σ_{X_p ∈ P̃} Σ_{X_n ∈ Ñ} max(D(P, X_p) - D(P, X_n) + m_C, 0), (7) where θ denotes the model parameters, X̃ is the intervened instance set, P̃ is the positive instance set generated from relation renaming and context expansion, Ñ is the negative instance set generated from relation replacing, P is the original prototypical relation instance, and m_C is the margin.",
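The two margin losses in Equations (6) and (7) are a few lines each. The sketch below is an illustrative PyTorch rendering rather than the authors' implementation, assuming distances to the prototype are precomputed; the margin values m_e and m_c are hypothetical.

```python
import torch

def entity_ranking_loss(d, m_e=0.2):
    """Eq. (6): d is a 1-D tensor [n] of distances D(P, X_i) ordered
    sibling -> cousin -> other; closer hierarchy levels should stay closer."""
    return torch.clamp(d[:-1] - d[1:] + m_e, min=0).sum()

def context_contrastive_loss(d_pos, d_neg, m_c=0.2):
    """Eq. (7): every positive (renaming/expansion) should sit closer to the
    prototype than every negative (relation replacing), by margin m_c."""
    diffs = d_pos.unsqueeze(1) - d_neg.unsqueeze(0) + m_c  # [|P|, |N|] pairs
    return torch.clamp(diffs, min=0).sum()

# Dummy example: three ordered hierarchy distances, two positives, two negatives.
loss = entity_ranking_loss(torch.tensor([0.3, 0.6, 1.1])) \
     + context_contrastive_loss(torch.tensor([0.2, 0.4]), torch.tensor([0.9, 1.3]))
```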
"Based on entity ranking and context contrasting, we approximate the causal effects optimized in Equation (4) with the following combined ranking and contrastive loss: L(θ) = L_E(θ; X̃) + L_C(θ; X̃),", "which involves both the entity ranking loss and the context contrastive loss.", "During inference, we first encode each instance into its representation using the learned model.", "Then we apply a clustering algorithm to cluster the relation representations, and the relation for each instance is predicted through the clustering results.", "We conduct experiments on two OpenRE datasets, T-REx SPO and T-REx DS, since these datasets are from the same data source but differ only in construction settings, which makes them very suitable for evaluating the stability of OpenRE methods.", "These datasets are both from T-REx (https://hadyelsahar.github.io/t-rex/) (Elsahar et al., 2018), a dataset consisting of Wikipedia sentences that are distantly aligned with Wikidata relation triplets; these aligned sentences are further collected into T-REx SPO and T-REx DS according to whether they have surface-form relations or not.", "As a result, T-REx SPO contains 763,000 sentences of 615 relations, and T-REx DS contains nearly 12 million sentences of 1189 relations.", "For both datasets, we use 20% for validation and the remainder for model training, following Hu et al. (2020).", "Baseline Methods.", "We compare our model with the following baselines: 1) rel-LDA (Yao et al., 2011), a generative model that considers unsupervised relation extraction as a topic model.", "We choose the full rel-LDA with a total number of 8 features for comparison in our experiment.", "2) March (Marcheggiani and Titov, 2016), a VAE-based model learned via the self-supervised signal of an entity link predictor.", "3) UIE (Simon et al., 2019), a discriminative model that adopts additional regularization to guide model learning.", "It has different versions according to the choice of relation encoding model (e.g., PCNN).", "We report the results of the two versions with the highest performance, UIE-PCNN and UIE-BERT (i.e., using PCNN and BERT as the relation encoding models).", "4) SelfORE (Hu et al., 2020), a self-supervised framework that bootstraps to learn a contextual relation representation through adaptive clustering and pseudo labels.", "Evaluation Metrics.", "We adopt three commonly-used metrics to evaluate different methods: B³ (Bagga and Baldwin, 1998), V-measure (Rosenberg and Hirschberg, 2007) and Adjusted Rand Index (ARI) (Hubert and Arabie, 1985).", "Specifically, B³ contains precision and recall metrics that correspondingly measure the correct rate of putting each sentence in its cluster versus clustering all samples into a single class, which are defined as follows (with c(·) the predicted cluster and g(·) the gold relation): B³-Prec.", "= E_{X,Y} P(g(X) = g(Y) | c(X) = c(Y)) and B³-Rec. = E_{X,Y} P(c(X) = c(Y) | g(X) = g(Y)).", "Then B³ F1 is computed as the harmonic mean of the precision and recall.", "Similar to B³, V-measure focuses more on small impurities in a relatively pure cluster than on a less pure cluster, and uses the homogeneity and completeness metrics.", "ARI is a normalization of the Rand Index, which measures the agreement degree between the predicted clustering and the gold distribution.", "This metric ranges in [-1, 1]; a more accurate clustering gets a higher score.", "Different from the previous metrics, ARI is adjusted for chance. [Table 1: B³ (F1, Prec., Rec.), V-measure, and ARI per dataset and model.]",
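For inference and evaluation, clustering the learned representations and scoring against gold labels can be done with standard tooling. The sketch below is illustrative only; the paper says only "a clustering algorithm", so KMeans here is a stand-in, and the encoder producing the representations is assumed.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import v_measure_score, adjusted_rand_score

def b_cubed(pred, gold):
    """B-cubed precision/recall/F1: per instance, compare the overlap of its
    predicted cluster and its gold class."""
    pred, gold = np.asarray(pred), np.asarray(gold)
    prec = rec = 0.0
    for i in range(len(pred)):
        same_c = pred == pred[i]
        same_g = gold == gold[i]
        overlap = np.logical_and(same_c, same_g).sum()
        prec += overlap / same_c.sum()
        rec += overlap / same_g.sum()
    p, r = prec / len(pred), rec / len(pred)
    return p, r, 2 * p * r / (p + r)

def evaluate(representations, gold_labels, k=10):
    """Cluster instance representations (K = 10, as in the paper) and score."""
    pred = KMeans(n_clusters=k).fit_predict(representations)
    b3_p, b3_r, b3_f1 = b_cubed(pred, gold_labels)
    return {"B3-F1": b3_f1, "B3-Prec": b3_p, "B3-Rec": b3_r,
            "V-measure": v_measure_score(gold_labels, pred),
            "ARI": adjusted_rand_score(gold_labels, pred)}
```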
"In the training period, we manually search the learning-rate hyperparameter in [5e-6, 1e-5, 5e-5] and find 1e-5 is optimal; we search weight decay in [1e-6, 3e-6, 5e-5] and choose 3e-6, and use the other hyperparameters without search: a dropout rate of 0.6, a batch size of 32, and a linear learning-rate schedule with a 0.85 decay rate per 1000 mini-batches.", "In the evaluation period, we simply adopt the pre-trained models for representation extraction, then cluster the evaluation instances based on these representations.", "For clustering, we follow previous work (Simon et al., 2019; Hu et al., 2020) and set K = 10 as the number of clusters.", "Each training epoch takes about one day.", "In our implementation, we adopt the BERT-base-uncased model as the base model for relation extraction and a modified T5-base model for text generation.", "The hierarchical entity tree is constructed based on WikiData and finally contains 589,121 entities.", "The generation set contains about 530,000 triplets, and each triplet corresponds to 5 positive/negative triplets and generated texts.", "We use one Titan RTX for Element Intervention training and four RTX cards for text generation.", "Table 1 shows the overall results on T-REx SPO and T-REx DS.", "From this table, we can see that: 1. Our method outperforms previous OpenRE models and achieves the new state-of-the-art performance.", "Compared with all baseline models, our method achieves significant performance improvements: on T-REx SPO, our method improves the SOTA B³ F1 and V-measure F1 by at least 3.9%, and ARI by 2.9%; on T-REx DS, the improvements are more evident, where the SOTA B³ F1 and V-measure F1 are improved by at least 10.0%, and ARI is improved by 4.9%.", "2. Our methods perform robustly across different datasets.", "Comparing the performances on these two datasets, we can see that almost all baseline methods suffer dramatic performance drops on all these metrics, which verifies that previous OpenRE methods can be easily influenced by the spurious correlations in datasets, as T-REx DS involves many more noisy instances without relation surface forms.", "[Table 3: Quantitative performance of our generator on WebNLG. BLEU: Both 60.9, Seen 65.9, Unseen 54.9; chrF++: Both 76.0, Seen 79.2, Unseen 72.5.]", "In contrast, our methods show marginal performance differences, which indicates both the effectiveness and robustness of our methods.", "Ablation Study.", "To study the effect of the different intervention modules, we conduct an ablation study by ablating each intervention module in turn.", "The other settings remain the same as the main model.", "From Table 1, we can see that, in both T-REx SPO and DS, combining these two modules results in a noticeable performance gain, which demonstrates that both modules are important to the final model performance and that they are complementary in alleviating unnecessary co-dependencies: Hyber aims to alleviate the spurious correlations between the context and the final relation prediction, and Gcc aims to alleviate the spurious correlations between the entity pair and the final relation prediction.", "Besides, in T-REx DS, we can see that Hyber or Gcc alone is effective enough to outperform previous SOTA methods, which indicates that element intervention yields clearly unbiased representations with respect to either the entity pair or the context.", "Entity Ranking on Generated Texts.", "This experiment studies the effect of different data sources for the Hyber module.", "As shown in Table 2, we can see that Hyber based on the T-REx SPO dataset or on the generated texts shows only a marginal difference.", "That means Hyber is robust to the source context.",
context.", "On the other hand, the quality of the generated texts satisfies the demand of this task.", "relation-s).", "This experiment gives a quantitative analysis of the generator used in our work.", "We select WebNLG (Gardent et al., 2017) to test the generator, and adopt the widely-used metrics including BLEU (Papineni et al., 2002) and chrF++ (Popovic, 2017) for evaluation.", "As shown in Table 3, we can Figure 3: Visualization of relation representation learned by element intervention.", "see that our generator is quite effective on seen relation generation.", "Though the generator suffers a performance drop in unseen relations, the scores are still receptible.", "Combined with results from other experiments, the generator is sufficient for this task.", "Visualization of Relation Representations.", "In this experiment, we visual the representations of the validation instances.", "We sample 10 relations from the T-REx SPO validation set and each relation with 200 instances for visualization.", "To reduce the dimension, we use t-sne (van der Maaten and Hinton, 2008) to map each representation to the dimension of 2. For the convenience of comparison, we color each instance with its ground-truth relation label.", "Since the visualization results of only Hyber or Gcc are marginally different from the full model, so we only choose the full model for visualization.", "As shown in Figure 3, we can see that each relation is mostly separate from others.", "However, there still be some instances misclassified due to the overlapping in the representation space.", "Current success of supervised relation extraction methods (Bunescu and Mooney, 2005; Qian et al., 2008; Zeng et al., 2014; Zhou et al., 2016; Velikovi et al., 2018) depends heavily on large amount of annotated data.", "Due to this data bottleneck, some weakly-supervised methods are proposed to learn relation extraction models from distantly labeled datasets (Mintz et al., 2009; Hoffmann et al., 2011; Lin et al., 2016) or few-shot datasets (Han et al., 2018; Baldini Soares et al., 2019; Peng et al., 2020).", "However, these paradigms still require pre-defined relation types and therefore restricts their application to open scenarios.", "Open relation extraction, on the other hand, aims to cluster relation instances referring to the same underlying relation without pre-defined relation types.", "Previous methods for OpenRE can be roughly divided into two categories.", "The generative method (Yao et al., 2011) formulates OpenRE using a topic model, and the latent relations are generated based on the hand-crafted feature representations of entities and context.", "While the discriminative method is first proposed by Marcheggiani and Titov (2016), which learns the model through the self-supervised signal from entity link predictor.", "Along this line, Hu et al. (2020) propose the SelfORE that learns the model through pseudo label and bootstrapping technology.", "However, Simon et al. 
"However, Simon et al. (2019) point out that previous OpenRE methods severely suffer from instability, and they also propose two regularizers to guide the learning procedure.", "But the fundamental cause of the instability remained undiscovered.", "In this paper, we revisit the procedure of OpenRE from a causal view.", "By formulating OpenRE using a structural causal model, we identify the cause of the above-mentioned problems, and alleviate them by Element Intervention.", "There are also some recent studies that try to introduce causal theory to explain the spurious correlations in neural models (Feng et al., 2018; Gururangan et al., 2018; Tang et al., 2020; Qi et al., 2020; Zeng et al., 2020; Wu et al., 2020; Qin et al., 2020; Fu et al., 2020).", "However, to the best of our knowledge, this is the first work to revisit OpenRE from the perspective of causality.", "In this paper, we revisit OpenRE from the perspective of causal theory.", "We find that the strong connections between the generated instance and the prototypical instance, through either their entities or their context, will result in spurious correlations, which appear in the form of backdoor paths in the SCM.", "Then the spurious correlations will mislead OpenRE models.", "Based on the observations, we propose Element Intervention to block the backdoor paths, which intervenes on the context and entities respectively to obtain their underlying causal effects.", "We also provide two specific implementations of the interventions based on entity ranking and context contrasting.", "Experimental results on two OpenRE datasets show that our methods outperform previous methods by a large margin, and suffer the least performance discrepancy between datasets, which indicates both the effectiveness and stability of our methods.", "We thank all reviewers for their insightful suggestions.", "Moreover, this work is supported by the National Key Research and Development Program of China under Grant No. 2019YFC1521200, the National Natural Science Foundation of China under Grants No.", "U1936207 and 61772505, and in part by the Youth Innovation Promotion Association CAS (2018141)." ]
[ "abstain", "abstain", "abstain", "method", "result", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "objective", "method", "objective", "abstain", "method", "method", "method", "objective", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "method", "other", "objective", "abstain", "result", "abstain", "objective", "method", "result", "other", "other", "other" ]
[ "News framing refers to the practice in which aspects of specific issues are highlighted in the news to promote a particular interpretation.", "In NLP, although recent works have studied framing in English news, few have studied how the analysis can be extended to other languages and in a multi-label setting.", "In this work, we explore multilingual transfer learning to detect multiple frames from just the news headline in a genuinely low-resource context where there are few/no frame annotations in the target language.", "We propose a novel method that can leverage elementary resources consisting of a dictionary and few annotations to detect frames in the target language.", "Our method performs comparably or better than translating the entire target language headline to the source language for which we have annotated data.", "This work opens up an exciting new capability of scaling up frame analysis to many languages, even those without existing translation technologies.", "Lastly, we apply our method to detect frames on the issue of U.S. gun violence in multiple languages and obtain exciting insights on the relationship between different frames of the same problem across different countries with different languages.", "The worldwide image of the United States has dropped precipitously during the past few years (Wike et al., 2018).", "Among other factors, the increasing number of gun violence incidents appears to affect the U.S. reputation abroad.", "Whenever a fatal mass shooting happens, it often attracts significant international news attention.", "While the domestic U.S. news media often links gun violence to individual shooters' mental illness (DeFoster and Swalve, 2018; Liu et al., 2019), foreign media may attribute it to U.S. gun policy and its gun culture e.g., (Atkinson, 2019).", "This phenomenon is known as media framing, which is the process of selecting some aspects of a perceived reality and [making] them more salient in a communicating text, in such a way as to promote a particular problem defini-tion, causal interpretation, moral evaluation, and/or treatment recommendation for the item (Entman, 1993).", "When foreign media frame the gun violence issue in a way to depict the U.S. as an unsafe and undesired place, it erodes the country's soft power (Nye Jr, 2004).", "Evaluating how different countries frame the U.S. gun violence issue will enrich our understanding of the U.S. 
soft power in particular and international relations in general.", "In this work, we develop a multilingual approach to automatically detect frames in news coverage of different languages, thus facilitating the analysis of how different countries with different languages frame a particular issue.", "Aside from enabling this understanding of foreign public opinion regarding a certain issue or nation, a multilingual approach is essential in media framing analysis, as it is also an understudied problem in many parts of the world.", "Given frame-annotated news headlines of a particular topic in a source language (e.g., English), our approach uses word-to-word translation to translate keywords that are indicative of the frames in these headlines to a target language.", "Then, we fine-tune a state-of-the-art multilingual language model, MultiBERT (Devlin et al., 2019), to detect frames on these code-switched headlines, combined with a few annotated headlines from the target language.", "The translated keywords and a few-shot examples act as anchors to adapt MultiBERT to detect frames in the target language.", "This approach performs comparably to, if not better than, a model trained on the source language and tested on headlines that are translated from the target language to the source.", "Since our approach requires only simple resources (a dictionary and a few (~40) annotated examples in the target language), it is handy for many languages.", "Moreover, considering the significant improvement gained over zero-shot transfer, the proposed approach is much more reliable for languages without existing translation technologies or expert annotations.", "Due to the subtle nature of framing, it is not uncommon for one news article to involve more than one message.", "Communication researchers have suggested that the association of different constructs, such as issues and frames in the news, will influence how the audience associates these elements, thus determining how they perceive the world (Guo and McCombs, 2015).", "The Network Agenda Setting Model suggests that examining the interrelationships between media elements enables researchers to measure media effects in a more nuanced manner.", "Note that some frames appear more often than others.", "In this work, we formulate our frame detection model to allow for multi-label frame detection while also addressing the imbalance in the frame distribution by adapting focal loss (Lin et al., 2017) to our multi-label setting.", "Our multi-label approach allows for the examination of frame co-occurrence, or associative frames (Schultz et al., 2012), across news articles.", "Overall, the contributions of this work are manifold: (1) We devise a novel code-switch few-shot scheme to train a frame detection model for any language.", "(2) We extend the formulation of the frame classification problem and focal loss to a multi-label setting, allowing the model to predict multiple frames for each instance.", "(3) We use our multilingual multi-label frame detection model to detect frames in news headlines pertaining to the U.S. gun violence issue in multiple countries and languages, and obtain interesting insights on how other countries view the gun violence issue in the U.S.
and how frames are related across news articles in different countries with different languages.", "2 Background and Related Work: Today's international politics not only revolve around military and economic influence but also largely depend on a country's soft power (Nye Jr, 2004).", "(Footnote 1: Code and data are available at https://github.com/feyzaakyurek/newsframing.) For each nation, constructing a positive", "country image to the outside world is crucial to ensure its international competitiveness in this global information society (Buhmann and Ingenhoff, 2015).", "In this light, more and more governments have realized the importance of public diplomacy, making great efforts to promote their countries' values and perspectives to foreign publics (Entman, 2008; Golan and Himelboim, 2016).", "However, these efforts are not always successful.", "Editors of international news media serve as gatekeepers, making decisions which may lead to the framing of a given country contrary to how its government intends.", "In reporting news about a foreign country, news editors and reporters make conscious or unconscious choices to emphasize specific issues, or emphasize certain aspects of a given topic, which may alter the country's image in the minds of their audience.", "A multilingual approach is essential to analyze media framing in different parts of the world, which will shed light on foreign public opinion regarding a particular nation.", "Communication researchers often rely on manual content analysis to examine media framing in news outlets of different languages (de Vreese, 2001).", "One critique of this type of study is that researchers tend to choose countries for review based on the languages spoken in the research team rather than on theoretical rationales.", "This language constraint becomes a more significant challenge in this increasingly globalized media landscape; capturing a holistic picture of international communication would require the analysis of news coverage in a larger number of languages.", "Arguably, an automatic, multilingual approach to framing analysis would greatly benefit the international communication research community.", "In NLP, language models have been effectively fine-tuned or used in downstream tasks such as text classification (Dai and Le, 2015; Howard and Ruder, 2018; Radford et al., 2018).", "Further, the introduction of deep contextual language embeddings such as ELMo (Peters et al., 2018), which uses bi-directional LSTMs, and BERT (Bidirectional Encoder Representations from Transformers) (Devlin et al., 2019) has been another milestone in this line of work.", "BERT is currently one of the state-of-the-art models in language modeling.", "News framing was first brought to the attention of the computational linguistics community by the Media Frames Corpus (Card et al., 2015), which addresses three issues: immigration, tobacco, and same-sex marriage.", "Field et al. (2018) analyzes the framing of the U.S. and agenda-setting in Russian news.", "Our work is similar to Field et al. (2018) in terms of using nPMI to find essential words.", "Furthermore, our work advances previous research by leveraging a multilingual language model, facilitating transfer learning in news framing, and relying on parsimonious resources, that is, 50,000 lexical translations vs.
~350 in our case.", "The current state-of-the-art model (Liu et al., 2019) for frame detection fine-tunes BERT on frame-annotated English news headlines with the standard multiclass focal loss objective (Lin et al., 2017).", "Their approach predicts only a single frame, which is insufficient given the multifaceted nature of news framing, in which multiple frames often co-occur in the same headline.", "Indeed, more than a quarter of the Gun Violence Frame Corpus (GVFC) has more than one frame (Liu et al., 2019).", "In this work, we fine-tune MultiBERT to detect frames in multiple languages' headlines with our multi-label focal loss.", "Our approach can predict (and be evaluated on) multiple frames for each headline, which is a more complex task, while being comparable to their work in terms of average F1 performance.", "Similar to their work, we detect frames on news headlines as they provide the most direct clue to the potential influence of the news coverage.", "GVFC is a dataset of news articles from 21 major U.S. news organizations related to U.S. gun violence that contains news headlines and their domain-expert frame annotations (Liu et al., 2019).", "We extend GVFC to include headlines in other languages by following their process of curating GVFC.", "We first drew our sample of news articles from German-, Turkish-, and Arabic-speaking news websites, using Crimson Hexagon's ForSight social media analytics platform (Hexagon, 2018), retrieving items that had at least one keyword in their headlines from the following list of words {gun, firearm, NRA, 2nd amendment, second amendment, AR15, assault weapon, rifle, Brady act, Brady bill, mass shooting} that have been translated into German, Turkish, and Arabic respectively by native speakers of the languages.", "In curating the multilingual datasets, we used the same set of frames as in GVFC.", "We then trained two native-speaker coders for each language to apply the GVFC codebook protocol for identifying frames and then measured their intercoder reliability (ICR) in annotating a sample of 350, 200, and 210 German, Turkish, and Arabic news headlines, respectively.", "The coders achieve 92.6%, 98.5%, 78.1% agreement rates in identifying the first frame and 78.9%, 97.9%, 74.3% agreement rates for the second frame for German, Turkish, and Arabic samples.", "Additionally, Krippendorff's Alpha for the 1st frame and the 2nd frame are 0.89, 0.66; 0.90, 0.74; and 0.69, 0.26 for German, Turkish, and Arabic, respectively.", "Once a minimum of 70% agreement was reached, one coder of each language continued to code more headlines.", "Annotation resulted in a total of 326, 100, and 388 non-duplicate headlines for German, Turkish, and Arabic.", "The average numbers of labels, i.e., label cardinalities, per headline are 1.4, 1.5, and 1.5 for German, Turkish, and Arabic, whereas it is 1.3 in GVFC, which is in English.", "As we can observe from the agreement rates, the Arabic data has a relatively weaker ICR, while the Turkish data has the best ICR.", "As high ICR values imply that two coders consistently categorized the content similarly, they signal a high validity of the coded results.", "In turn, this is reflected in the performance of our model, as it performs the worst in Arabic (Section 5).", "Nonetheless, the quality of our curated data is substantially higher (the average Krippendorff's alpha is 0.82) than contemporaries such as the MFC (which is only in English), with an average alpha of less than 0.6 (Card et al., 2015).", "In this work, we extend
the current state-of-the-art model on GVFC (Liu et al., 2019), which predicts only the first frame, into a multi-label approach and evaluate it across multiple languages.", "As previous work has showcased that BERT surpasses LSTM- and GRU-based architectures, we shift our focus in this work from architecture optimization to the scalability of news framing analysis across multiple languages in a multi-label setting.", "BERT relies on multiple stacks of the Transformer's encoder blocks (Devlin et al., 2019; Vaswani et al., 2017) to learn vector representations of sentences.", "A single encoder block is composed of a self-attention layer followed by a fully-connected layer.", "When a sentence (a sequence of tokens) is fed into the encoder, it passes through an embedding layer, a self-attention layer, and fully-connected layers before being passed to the upper encoder block.", "The self-attention layer embodies three matrices called W_Q for the query, W_K for the key, and W_V for the value.", "Each of these matrices is of size vocab_size × hidden_size, and thus each token in the vocabulary has its corresponding q, k, and v vectors.", "Representations for each token are contextualized; namely, the representation of a token is the weighted average of all representations in the sequence.", "Therefore, the vector representation for token x_i is given by vec_rep(x_i) = Σ_{j∈S} v_j · Softmax(q_i · k_j / √d), where d is the size of the key vectors in W_K and S is the set of all tokens in the same sequence as x_i, including x_i.", "BERT adds a special token for classification, [CLS], at the beginning of each sequence.", "Then it learns the representation of this token and the other tokens in the sequence by training on the Wikipedia corpus for two language tasks: next sentence prediction and the Masked Language Model (MLM), which was initially inspired by the Cloze task (Taylor, 1953).", "The contextual representation of the [CLS] token encodes the syntactic and semantic constructs of the sequence, and one can fine-tune BERT for various downstream tasks.", "Fine-tuning BERT performs well on new tasks even with small datasets, which can be attributed to the data-efficient deep attention mechanism (Devlin et al., 2019; Vinyals et al., 2015).", "The knowledge encoded within the vector representations of the tokens through pre-training also helps the classifier with the language understanding part of the task, reducing the need for a larger dataset.", "Finally, a multilingual version of pre-trained BERT, MultiBERT, which is trained on the entire Wikipedia dumps of the 104 languages with the largest Wikipedias, has recently been released, making it an excellent candidate for scaling to multiple languages.", "The multilingual pre-training and the utilization of sub-word tokenization allow MultiBERT to represent sequences from any of these 104 languages (Gu et al., 2018) and enable zero-shot classification on any of the languages (i.e., train on one language and test on another).", "In our case, since reproducing in other languages the effort put into GVFC, which was created by highly qualified journalism students, is prohibitive, employing a cross-lingual model such as MultiBERT renders scaling to other languages possible.", "For frame detection purposes, we classify news articles into nine frame categories based on their headlines.",
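The contextualization step described above can be written out directly. The following numpy sketch is illustrative only (single head, no masking or layer normalization), with random matrices standing in for the learned W_Q, W_K, W_V.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, W_q, W_k, W_v):
    """X: [seq_len, hidden]; returns contextualized token representations,
    i.e., vec_rep(x_i) = sum_j softmax(q_i . k_j / sqrt(d))_j * v_j."""
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    d = K.shape[-1]
    A = softmax(Q @ K.T / np.sqrt(d))  # [seq_len, seq_len] attention weights
    return A @ V

# Toy usage with random embeddings and projections (hidden size 8, key size 4):
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))
out = self_attention(X, *(rng.normal(size=(8, 4)) for _ in range(3)))
```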
"Devlin et al. (2019) recommend using the embedding generated for the special token [CLS], which is prepended to the beginning of every sequence.", "All token representations, including that of [CLS], are of length H = 768.", "The representation for [CLS] is generated by attending to every word in the sequence.", "We modify BERT by appending to it a fully connected layer which acts as a classifier, taking in the embedding generated for [CLS] after 12 layers of encoders and mapping it into K = 9 output neurons.", "Hence, the only parameters trained from scratch during fine-tuning are those of the classifier layer, W ∈ R^{H×K}.", "Finally, we use sigmoid activations to obtain nine outputs, each between 0 and 1, which are interpreted as scores for the nine classes.", "During inference, we use a threshold of 0.5 on these scores to binarize the output.", "We fine-tune MultiBERT with two different losses: the standard Binary Cross-Entropy loss and a multi-label variation of the weighted focal loss (Lin et al., 2017).", "We compute the Binary Cross-Entropy (BCE) loss, also known as the Sigmoid Cross-Entropy loss, for a single sample x as BCE(f) = -(1/|K|) Σ_{i=1}^{|K|} ( y^{(i)} log(ŷ^{(i)}) + (1 - y^{(i)}) log(1 - ŷ^{(i)}) ), where the predictions are given by ŷ = [ŷ^{(1)}, ..., ŷ^{(|K|)}] = 1 / (1 + exp(-f(x))), y = [y^{(1)}, ..., y^{(|K|)}] are the gold binary labels, and f is BERT with the classifier.", "Considering the high degree of class imbalance in the GVFC dataset, which deteriorates further within the multilingual datasets we developed, we adopt a multi-label variation of the binary focal loss (Lin et al., 2017).", "As a reminder, the focal loss for a single sample x is defined as FL(f) = -α (1 - p)^2 log(p), where p = (1 - ŷ)(1 - y) + ŷ y, y ∈ {0, 1} is the true label, ŷ = 1 / (1 + exp(-f(x))) ∈ R, and α is the balancing factor, which is usually the normalized inverse class frequency.", "Hence, the smaller the class, the higher the α, and vice versa, which balances the importance of each class's examples, while f is the hypothesis, e.g., a neural network.", "In the multi-label case, we alter the focal loss formulation such that y and ŷ become y ∈ {0, 1}^{|K|} and ŷ ∈ R^{|K|}.", "Moreover, for α we propose using α = ⟨(α_1^{(0)}, α_1^{(1)}), ..., (α_{|K|}^{(0)}, α_{|K|}^{(1)})⟩, where α_k^{(j)} is the normalized inverse frequency of the event y_k = j, with j ∈ {0, 1}.", "In other words, we interpret each class as *two classes*, either 0 or 1, compute inverse class frequencies for all 2|K| classes, and normalize them such that Σ_{k ∈ K} Σ_{j ∈ {0,1}} α_k^{(j)} = 1.", "We observe that this loss matches BCE in F1 scores and surpasses it in the multi-label accuracy score EM-2 (Exact Match for two frames) by a significant 11% margin, as shown in Table 1.",
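To make the loss above concrete, the following is a minimal sketch of the multi-label focal loss, assuming a PyTorch setup; the function name and the precomputed alpha table are illustrative choices, not the authors' released code.

```python
import torch

def multilabel_focal_loss(logits, targets, alpha, gamma=2.0):
    """Multi-label focal loss over |K| binary decisions (a sketch).

    logits:  (batch, K) raw outputs f(x) of BERT with the classifier layer
    targets: (batch, K) gold binary labels y in {0, 1}
    alpha:   (K, 2) tensor of normalized inverse frequencies; alpha[k, j]
             weights the event y_k = j, with all 2K entries summing to 1
    """
    y_hat = torch.sigmoid(logits)                    # predicted probabilities
    # p is the probability assigned to the correct binary outcome per class
    p = targets * y_hat + (1 - targets) * (1 - y_hat)
    # pick alpha[k, 1] when y_k = 1 and alpha[k, 0] when y_k = 0
    a = targets * alpha[:, 1] + (1 - targets) * alpha[:, 0]
    loss = -a * (1 - p) ** gamma * torch.log(p + 1e-8)
    return loss.sum(dim=-1).mean()
```

With gamma set to 0 and a uniform alpha, this reduces to a scaled version of the BCE loss above, which makes the comparison between the two losses a fairly controlled one.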
"We use two Binary Relevance approaches, based on Naïve Bayes and MultiBERT respectively, as our baselines.", "Naïve Bayes is a standard baseline for text classification which leverages Bayes' theorem and utilizes word frequencies as features (McCallum et al., 1998).", "For regularization, we apply add-1 smoothing.", "The standard configuration for Naïve Bayes is multi-class.", "One intuitive technique for tailoring Naïve Bayes to a multi-label problem is called Binary Relevance (BR).", "BR is the method of training |K| one-vs-rest classifiers independently, one for each class k ∈ K, on the same dataset.", "As our second baseline, we train nine binary MultiBERTs in a one-vs-rest manner.", "The GVFC dataset is composed of 1,300 relevant samples for the issue of gun violence and is only available in English.", "For cross-lingual transfer, MultiBERT with the multi-label Focal loss provides the highest accuracy on English samples that have more than one correct class, by a significant 11% margin (62% vs. 51% in EM-2), while maintaining the same level of F1 scores, as given in Table 1.", "Firstly, we explore the zero-shot and few-shot performance of our MultiBERT model with Focal loss, trained on the English dataset, as in rows 2.1 and 2.3 of Table 2.", "We use German (DE), Arabic (AR), and Turkish (TR) as our target languages to explore the cross-lingual performance of our model on a variety of languages for which we have a validation set but no training set.", "In our few-shot models, we use 40 extra samples from the target language, i.e., DE, AR, or TR, and use the same training configurations as in the initial training, which we describe in Section 5.", "Furthermore, since the news framing task is a fairly keyword-driven phenomenon (Field et al., 2018), we developed a set of keywords that occur most frequently in a given frame.", "To this end, we utilize the metric called normalized pointwise mutual information (nPMI), which was suggested by Field et al. (2018).", "The nPMI score for a given frame F and word w is I(F, w) = log( P(w|F) / P(w) ).", "Both P(w) and P(w|F) are estimated from the training corpus.", "We determine the set of important words based on nPMI by selecting, for each frame, the top 250 words that also have nPMI greater than zero, resulting in 358 total words.", "We then use word-to-word translation to code-switch (CS) the English training set with the target language (TL) for these words.", "In other words, we replace all occurrences of the important words with their TL dictionary translations.", "For instance, a sample headline in the training set that was code-switched with German becomes \"Florida Schütze ein troubled loner mit weiß supremacist Bindungen.\", which originally was \"Florida shooter a troubled loner with white supremacist ties\", having both frames mental illness and race/ethnicity.", "We experiment with using the code-switched data for training in both the zero-shot and the few-shot setting, using 40 target-language examples.", "Models based on code-switched training are indicated with CS_TL for target language TL in Table 2.", "Code-switched translation is a way of adapting the model to the target language during training.", "We observed significant improvements or comparable results in both the zero-shot and the few-shot setting over the model that was trained on the original English data, as demonstrated in Table 2 for all three languages.", "Furthermore, we explore the effect of translation direction for the news frame detection task using Google Translate in Table 3.",
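The keyword selection and code-switching procedure described above can be sketched as follows; `bilingual_dict` is a hypothetical word-to-word dictionary standing in for the native-speaker translations, and the tokenization details are assumptions.

```python
import math
from collections import Counter

def npmi_keywords(headlines, frames, top_k=250):
    """Select frame-indicative words via I(F, w) = log(P(w|F) / P(w)).

    headlines: list of tokenized headlines (lists of words);
    frames: parallel list of gold frame sets.
    Returns the union over frames of each frame's top-k positive-score words.
    """
    word_counts = Counter(w for h in headlines for w in h)
    total = sum(word_counts.values())
    keywords = set()
    for frame in {f for fs in frames for f in fs}:
        in_frame = Counter(w for h, fs in zip(headlines, frames)
                           if frame in fs for w in h)
        frame_total = sum(in_frame.values())
        scores = {w: math.log((c / frame_total) / (word_counts[w] / total))
                  for w, c in in_frame.items()}
        top = sorted(scores, key=scores.get, reverse=True)[:top_k]
        keywords.update(w for w in top if scores[w] > 0)
    return keywords

def code_switch(headline, keywords, bilingual_dict):
    """Replace every occurrence of a keyword with its dictionary translation."""
    return [bilingual_dict.get(w, w) if w in keywords else w for w in headline]
```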
Table 2: Comparison of pure-English training and code-switched training in zero-shot and few-shot settings. Each cell lists F1-Macro / F1-Micro / EM-1 / EM-2 / EM-A.

| Setting | DE | AR | TR |
|---|---|---|---|
| (2.1) Zero-shot: Train EN, Test TL | 0.48 / 0.66 / 0.47 / 0.31 / 0.39 | 0.37 / 0.39 / 0.38 / 0.04 / 0.24 | 0.50 / 0.77 / 0.76 / 0.29 / 0.53 |
| (2.2) Zero-shot: Train CS_TL(EN), Test TL | 0.53 / 0.72 / 0.64 / 0.39 / 0.52 | 0.42 / 0.46 / 0.39 / 0.06 / 0.26 | 0.57 / 0.82 / 0.86 / 0.39 / 0.63 |
| (2.3) Few-shot (40 TL samples): Train EN, Test TL | 0.66 / 0.75 / 0.52 / 0.37 / 0.44 | 0.48 / 0.54 / 0.41 / 0.17 / 0.31 | 0.77 / 0.89 / 0.67 / 0.73 / 0.70 |
| (2.4) Few-shot (40 TL samples): Train CS_TL(EN), Test TL | 0.64 / 0.76 / 0.59 / 0.43 / 0.51 | 0.53 / 0.58 / 0.35 / 0.19 / 0.29 | 0.84 / 0.92 / 0.80 / 0.73 / 0.77 |

Table 3: Exploring the effect of translation between target languages and English (the source) in both directions. Each cell lists F1-Macro / F1-Micro / EM-1 / EM-2 / EM-A.

| Setup | DE | AR | TR |
|---|---|---|---|
| (3.1) MultiBERT; Train: EN→TL, Test: TL | 0.59 / 0.72 / 0.67 / 0.33 / 0.50 | 0.45 / 0.49 / 0.36 / 0.11 / 0.26 | 0.69 / 0.88 / 0.82 / 0.65 / 0.74 |
| (3.2) MultiBERT; Train: EN, Test: TL→EN | 0.65 / 0.75 / 0.72 / 0.42 / 0.58 | 0.50 / 0.54 / 0.42 / 0.10 / 0.29 | 0.59 / 0.84 / 0.71 / 0.57 / 0.64 |
| (3.3) EngBERT Uncased; Train: EN, Test: TL→EN | 0.63 / 0.78 / 0.75 / 0.44 / 0.60 | 0.52 / 0.55 / 0.48 / 0.13 / 0.34 | 0.48 / 0.78 / 0.73 / 0.43 / 0.58 |
| (3.4) EngBERT Cased; Train: EN, Test: TL→EN | 0.53 / 0.75 / 0.74 / 0.41 / 0.58 | 0.51 / 0.54 / 0.46 / 0.11 / 0.32 | 0.54 / 0.86 / 0.75 / 0.63 / 0.69 |
| (3.5) Few-shot with the best among (3.2)-(3.4) | 0.61 / 0.79 / 0.62 / 0.50 / 0.56 | 0.62 / 0.66 / 0.48 / 0.29 / 0.40 | 0.70 / 0.84 / 0.63 / 0.57 / 0.60 |

"As input to our models, we follow previous work and rely on news headlines rather than news story content, for the reasons described by Liu et al. (2019).", "To showcase the gains made on top of a multi-class approach by reformulating the problem as multi-label, we reproduce the method described by Liu et al. (2019) with both English BERT and MultiBERT (Table 1).", "In our implementations involving BERT, we use the Adam optimizer with a learning rate of 0.02 and a maximum sequence length of 128, and we train for ten epochs.", "In Table 1, we include experiments that use different configurations of BERT, such as uncased English BERT (EngBERT) and cased Multilingual BERT (MultiBERT), with two different loss functions.", "Casing decisions were based on previous work (Liu et al., 2019) and the recommendations in the BERT code repository (https://github.com/google-research/bert).", "As for losses, we experimented with Binary Cross-Entropy and the multi-label Focal Loss, as described in Section 4.1.", "For evaluation, we follow recent work and report macro- and micro-averaged F1 scores (Wu et al., 2019), as well as exact match (EM) for samples which have single frames (EM-1), two frames (EM-2), and any number of frames (EM-A).", "In Table 1, we also report Top-2 accuracy, which, for a given sample, takes the two most confident predictions of each model, based on the scores for each frame after the last activation layer, and checks whether they comprise the first frame.", "We report this metric to demonstrate that by switching from a multi-class model to a multi-label one, we retain accuracy for the first frame while providing more predictive power with multiple labels.", "Note that, to accommodate multiple languages, we favor a multilingual language model.", "Results in Table 1 show that for our application, there is only an insignificant drop in predictive power from EngBERT to MultiBERT when using the multi-label Focal Loss (ML Focal).", "Moreover, Focal Loss results in higher accuracy in EM-2 while maintaining F1 scores as high as the canonical BCE loss.",
"Considering the purposes of this paper, as well as the label cardinalities in the other-language datasets, we favor the ML Focal loss for multilingual models.", "While being a state-of-the-art machine translation tool, Google Translate is also the practitioner's handy translation guide (Edunov et al., 2018).", "In Table 3, we explore the effect of the direction of translation for detecting frames in German (DE), Arabic (AR), and Turkish (TR) headlines about U.S. gun violence.", "Note that in none of these languages is a sufficient amount of news framing training data available; thus, to extend framing analysis to multiple languages, cross-lingual transfer learning is needed.", "Firstly, we translate GVFC from English to the target language TL ∈ {DE, AR, TR}, train MultiBERT with the ML Focal loss, and test on the TL.", "Secondly, we use the English training set as is and translate the target test sets to English.", "This latter setup lets us use EngBERT as well.", "We experiment with both cased and uncased models and observe that uncased performs better for DE and AR.", "Overall, we note that translating test sets to English results in better performance, which is intuitive, as the model requires clarity in the language it sees during training.", "All models in Tables 2 and 3 use the same loss, and the MultiBERT experiments always use the cased version, following the authors' recommendation.", "We use 40 samples of the target language, translated to English, and include them in the training set to study few-shot performance.", "We only train the best performers, primarily based on F1 scores, among (3.2), (3.3) and (3.4), namely the models (3.3), (3.3) and (3.2) for DE, AR and TR, respectively (Table 3).", "For some of the metrics, the few-shot performance may drop because the new samples come from a different distribution.", "Furthermore, we compare the zero-shot and few-shot performance of MultiBERT when trained on the original English versus the code-switched training sets in Table 2.", "Both models use the same set of samples; the difference is that in the former the headlines are in English, whereas in the latter the \"important\" words are switched with their TL translations.", "In the zero-shot setting, code-switched training (2.2) outperforms English training (2.1) significantly for all three languages (F1-macro and F1-micro scores).", "Considering the few-shot setting, although the improvement gets smaller, the performance of code-switching is on par if not better for all three languages; see (2.3) and (2.4).", "Note that the comparisons we make are primarily based on F1 scores, as the model's capability might shift from predicting single-label cases correctly to predicting more multi-labeled cases correctly, as well as between common and rare classes.", "In German, for instance, code-switched few-shot training improves in F1 scores over zero-shot but remains around the same in terms of EM-A.", "The reason is that the model predicts multi-label cases (EM-2) better by 4 percentage points; see (2.2) and (2.4) in Table 2.",
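Since EM-1, EM-2 and EM-A come up repeatedly, here is a small sketch of how these exact-match scores, and the Top-2 accuracy reported in Table 1, can be computed; representing label sets as Python sets is our own assumption.

```python
import numpy as np

def exact_match(gold, pred, n_frames=None):
    """EM metrics as used above: a sample counts as correct only if the
    predicted label set equals the gold set. gold and pred are lists of sets;
    n_frames=1 gives EM-1, n_frames=2 gives EM-2, None gives EM-A.
    """
    pairs = [(g, p) for g, p in zip(gold, pred)
             if n_frames is None or len(g) == n_frames]
    return sum(g == p for g, p in pairs) / len(pairs) if pairs else 0.0

def top2_accuracy(score_matrix, first_frames):
    """Top-2 accuracy: check whether the gold first frame is among the two
    most confident frames per sample. score_matrix is (N, K)."""
    top2 = np.argsort(score_matrix, axis=1)[:, -2:]
    hits = [f in set(row) for f, row in zip(first_frames, top2)]
    return float(np.mean(hits))
```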
"Notably, considering Tables 2 and 3 together, a simple word-to-word translation of as few as 358 words improves frame detection performance drastically, even to the level of a complete translation of the test set to English.", "For Turkish, code-switched training beats full translation of the test set into English in the few-shot setting; it results in comparable performance for German and slightly worse predictions for Arabic.", "We attribute the overall low performance for Arabic to the relatively low ICR in the annotation process.", "To visualize our multi-label model, we use the visualization tool by Vig (2019) in Figure 1.", "In BERT, every sequence is prepended with a special classification token [CLS].", "The embedding generated for this token is used for classification into 9 classes.", "Figures 1a and 1b demonstrate the attention of this token to the other tokens in the sequence.", "Note that the given sample headline indeed has two frames, i.e., \"Economic Consequences\" as the first and \"Public Opinion\" as the second.", "However, in a multi-class setup in which the model is configured to produce a single label, it learns to disregard the second frame, \"Public Opinion\", while strongly attending to the words \"fargo\" and \"credit\", related to the theme of \"Economic Consequences\".", "On the contrary, a multi-label model correctly attends to all words that are related to both frames, i.e., \"fargo\", \"credit\", \"nuns\" and \"opposition\", and predicts \"Economic Consequences\" and \"Public Opinion\" correctly.", "Another interesting observation is related to bias induced by translation.", "In German, the phrase \"schärferes Waffenrecht\" means stricter gun regulation.", "However, Google Translate translates half of the headlines that include the expression as \"stricter/sharper gun rights\", which makes the model predict \"Gun Rights\" rather than \"Gun Control\" as the frame.", "A discrepancy like this is widely deceptive and jeopardizes the learning, whether it happens in the training or the validation set.", "However, in code-switched training, one has better control over the translation, as one only translates a manageable number of words.", "We observe that code-switched training escapes this bias through the correctly translated keywords \"gun\" and \"laws\" in German.",

Table 4: Code-switch analysis for German.

| Code-switch Technique | Unique Switched Words | Total Switched Words | F1-Macro | F1-Micro | EM-1 | EM-2 | EM-A |
|---|---|---|---|---|---|---|---|
| Zero-shot (Train EN, Test DE) | 0 | 0 | 0.48 | 0.66 | 0.47 | 0.31 | 0.39 |
| Code-switch Omitted Words | 387 | 2121 | 0.54 | 0.70 | 0.53 | 0.27 | 0.40 |
| Code-switch nPMI Words | 358 | 7522 | 0.53 | 0.72 | 0.64 | 0.39 | 0.52 |
| Code-switch nPMI + Omitted Words | 675 | 8129 | 0.60 | 0.70 | 0.65 | 0.29 | 0.47 |

"Additionally, we find our models catching several annotation errors, such as the headline in Turkish \"Obama'dan LGBTİ bireylerin gittiği bir kulüpte 49 kişiyi öldüren Orlando saldırganı hakkında açıklama\", which translates as \"Obama gave a statement about the Orlando shooter who killed 49 in an LGBTI club.\" and is annotated as \"Politics\" only.",
"In contrast, the model predicts Society/Culture and Politics, attending to \"LGBTI\" and \"club\".", "In determining the words to code-switch from English to a target language, we mainly considered the metric called nPMI (Section 4.2), which essentially gives the most frequently used words for each frame.", "In the English dataset (GVFC), we first list the top 250 words for a given frame based on their nPMI scores and take the union of these across frames, which resulted in a total of 358 case-sensitive words to be dictionary-translated into the target language.", "In Table 4, we provide results obtained by using different code-switching methods that use no target-language annotations.", "Note that, since nPMI is a frequency metric, code-switching with nPMI results in a set of words that includes not only frame-indicative words but also many stop words and common words, such as \"a\", \"the\", \"he\" or \"are\".", "An alternative method, which we call omitted words, determines important words by omitting a word from the headline and reapplying the trained classifier to the headline with the missing word (similar to Zhong et al. (2019); Ribeiro et al. (2016)).", "We then compute the drop in probability as an importance measure for word x_j: Importance(x_j) = p(y | x_1, ..., x_n) - p(y | x_1, ..., x_{j-1}, x_{j+1}, ..., x_n), where y is the true label.", "The remaining procedure is similar to nPMI: we determine the set of important words per frame, 45 of them this time, and combine them, which resulted in 387 words.", "Note that this method results in a set of important words that are more disjoint across frames, which in turn makes the words more frame-specific.", "No common or stop words made it into the top 45 for any of the frames.", "Despite resulting in more sophisticated words, using the omitted words to code-switch resulted in scores that are worse than, or at best on par with, nPMI, our primary way of doing code-switching.", "We argue that the reason for nPMI performing better is the much higher number of total words that get translated into the target language.", "In Table 4, note that using dictionary translations for only 358 unique words results in a total of 7,522 words that are in the target language, which is more than 3.5 times what the omitted-words method yields.", "The increased number of words that end up in the target language helped the MultiBERT classifier distinguish frames in the target language better.", "Note that in the last line of Table 4, including translations for the omitted words results in inconsistent improvement, due to the negligible increase in the total number of words that get translated.", "Our experiments show that for code-switching purposes, quantity might override quality, which may suggest that for code-switching to be effective in multilingual transfer, translations of simpler words can outperform translations of domain- and task-specific words, making the resources required to leverage knowledge from the source language in the target language even more parsimonious.",
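A minimal sketch of the leave-one-out importance measure behind the omitted-words method described above; `classifier` is a hypothetical interface that returns per-label probabilities for a tokenized headline.

```python
def word_importance(classifier, headline, true_label):
    """Importance(x_j) = p(y | x_1..x_n) - p(y | x with x_j removed).

    headline is a list of tokens; classifier(tokens) is assumed to return
    a dict mapping each label to its predicted probability.
    """
    base = classifier(headline)[true_label]
    scores = {}
    for j in range(len(headline)):
        reduced = headline[:j] + headline[j + 1:]
        scores[headline[j]] = base - classifier(reduced)[true_label]
    return scores  # a larger drop means a more important word
```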
"The network visualization software NetDraw (Borgatti, 2002) was used to visualize the two frame networks depicted in Figure 6.2, based on the predictions generated on U.S. and German news articles from the years 2016 to 2018 by the best performing models, i.e., uncased English BERT (Table 1) and the code-switched model (Table 2) for English and German, respectively.", "While each node represents a frame, each edge represents the number of times the two corresponding frames co-occurred in a news headline.", "The more central a frame, the more connected it is with other frames.", "The node size was adjusted to reflect the relative frequency of news coverage of the given frame.", "That is, a frame with a larger node size occurs more frequently in the news coverage.", "Several notable patterns emerge by comparing the frame networks of the U.S. and Germany.", "It appears that the U.S. media highly politicized the gun violence issue.", "The frame politics is not only the most salient but also the most central, closely connected with several other frames, reflecting the sensationalism of the U.S. media landscape.", "The U.S. media tends to link all aspects of social reality to the political fight between the two parties, a pattern not followed in foreign media.", "Another important finding is that while the U.S. media broadly framed the gun violence issue from the perspective of mental health, the German media rarely mentions this aspect.", "Rather than blaming individual shooters, the German press paid more attention to U.S. public opinion, manifesting as gun violence protests, and to U.S. gun regulations.", "In other words, compared to the U.S. news coverage, foreign media tended to attribute the responsibility to the U.S. government.", "In the German news coverage, the close association between the frames society and culture and gun rights is also noteworthy.", "Frequently linking the U.S.'s unique culture and people's right to purchase guns in the news presents the U.S. as a bizarre place, which may also lead to a negative perception of the country among Germans.", "In conclusion, the two frame networks illustrate how an issue can be framed differently in the news media of different countries.", "Considering that the U.S. and Germany are close allies, it would be exciting to examine how countries with tense relations with the U.S. framed gun violence issues.", "A large-scale comparative framing study would allow a better understanding of the U.S. global image, which we propose as future work; our multilingual and multi-label tool would make this type of analysis possible.", "In general, our approach is practical for looking at how media in different countries frame an international issue.",
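The frame networks described above can be reproduced with standard graph tooling; below is a sketch using networkx in place of NetDraw, with node counts for sizing and edge weights for co-occurrence.

```python
from itertools import combinations
import networkx as nx

def frame_network(predicted_frames):
    """Build a frame co-occurrence network: nodes are frames sized by how
    often they occur, edges are weighted by how often two frames co-occur
    in the same headline. predicted_frames is a list of per-headline
    frame sets (a sketch, not the original NetDraw workflow)."""
    g = nx.Graph()
    for frames in predicted_frames:
        for f in frames:
            if f in g:
                g.nodes[f]["count"] += 1
            else:
                g.add_node(f, count=1)
        for a, b in combinations(sorted(frames), 2):
            if g.has_edge(a, b):
                g[a][b]["weight"] += 1
            else:
                g.add_edge(a, b, weight=1)
    return g
```

Centrality claims of the kind made above can then be checked directly, for example with nx.degree_centrality(g).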
"We want to acknowledge two additional properties of a given headline which neither this nor previous work in news framing considers (Card et al., 2015; Liu et al., 2019; Field et al., 2018).", "The first is relevance: although rarely, not all headlines that include the keywords specified in Section 3 are actually about U.S. gun violence.", "Second, an article may be about one particular incident or event related to gun violence, i.e., episodic, or it may focus on the issue of gun violence as an ongoing problem, i.e., thematic.", "Moreover, some of the episodic articles may not be tendential enough to have a particular frame.", "Existing work on framing only includes headlines that are both relevant and have frames, whereas, in reality, 48% of the headlines about U.S. gun violence in GVFC do not have a particular frame.", "Media outlets outside of the U.S. have various rates of tendential articles about gun violence in the U.S.", "For instance, among the foreign languages we examined, German articles have the highest rate, with 90% of articles having at least one frame.", "Among Turkish articles that are relevant, only 10% have a frame.", "In our evaluations, we only considered headlines that are relevant and have at least one frame.", "While stressing that determining the frame of an article is the most nuanced task in news framing, addressing the challenges mentioned above is still meaningful and constitutes future work.", "In this work, we present a novel code-switch model for the task of automatic cross-lingual news frame detection and show that it matches, if not surpasses, the performance of full translation.", "Moreover, we leverage an existing dataset by making use of multiple labels, create benchmark news framing test sets for three new languages, and employ a variant of Focal Loss to account for class imbalance in the data.", "In conclusion, while accounting for multiple frames per sample, we demonstrate how a cross-lingual analysis of news framing is informative and insightful in developing a global view surrounding the gun violence problem in the U.S.", "Acknowledgment: This work is supported in part by the U.S. NSF grant 1838193 and DARPA HR001118S0044 (the LwLL program).", "The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes.", "The views and conclusions contained in this publication are those of the authors and should not be interpreted as representing official policies or endorsements of DARPA and the U.S. Government." ]
[ "abstain", "abstain", "objective", "objective", "method", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "objective", "result", "objective", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "other", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "other", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "objective", "objective", "abstain", "abstain" ]
[ "This paper introduces a new task Chinese address parsing the task of mapping Chinese addresses into semantically meaningful chunks.", "While it is possible to model this problem using a conventional sequence labelling approach, our observation is that there exist complex dependencies between labels that cannot be readily captured by a simple linear-chain structure.", "We investigate neural structured prediction models with latent variables to capture such rich structural information within Chinese addresses.", "We create and publicly release a new dataset consisting of 15,000 Chinese addresses, and conduct extensive experiments on the dataset to investigate the model effectiveness and robustness.", "We release our code and data at http:// statnlp.org/research/sp .", "Addresses play an important role in modern society.", "They are typically used as identifiers to locations and entities in the world that can be used to facilitate various social activities, such as business correspondences, meetings and events.", "Recent research efforts show that systems that perform address parsing , the task of automatically parsing addresses into semantically meaningful structures, can be useful for tasks such as building e-commerce or product recommendation systems (Jia et al., 2017; Avvenuti et al., 2018).", "Due to historical reasons, the English addresses come with a standardized format, mostly written in order from most specific to most general.", "Meaningful chunks in an English address are also separated by punctuation or the new-line symbols.", "Such characteristics make parsing English addresses a relatively easy task.", "However, addresses written in eastern Asian languages such as Chinese present several unique 639 1 230 PROVINCE CITY DISTRICT (ZhejiangProvince) (HangzhouCity) (GongshuDistrict) ROAD 639 ROADNO 1 HOUSENO (DengyunRoad) (No. 639) (Unit#1) POI 230 ROOMNO SUBPOI (ElectronicMarket) (FeiyangDianziLLC.) 9 5 9 5 1705 TOWN POI SUBPOI (GuanshaTown) (GuanshaResidence) (GuanshaSub-residence) PERSON 9 HOUSENO 5 CELLNO (AnzhiSector) (Block9) (Unit#5) 9 REDUNDANT 5 REDUNDANT 1705 ROOMNO (Block9) (Unit#5) Figure 1: Two example Chinese addresses and the expected structures after parsing.", "challenges.", "Unlike English addresses, Chinese addresses are typically written in the form of a consecutive sequence of Chinese characters (pos-sibly intermixed with digits and English letters).", "Figure 1 presents two example Chinese addresses and their desired output structures after parsing chunks annotated with their labels indicating semantics (such as province, road, etc).", "The Chinese addressing system is also different from that of English.", "Though it is generally believed that the system uses the opposite ordering starting from most general (e.g., province) and ending with most specific (e.g., room no.), in practice it can be observed that the format is far less rigorous than expected.", "The lack of rigor also leads to other issues the addresses may come with incomplete, redundant or even inaccurate information, as we can see from the second example listed in Figure", "1. 
"Such unique challenges make the design of an effective Chinese address parser non-trivial.", "Parsing a Chinese address into semantically meaningful structures can be regarded as a special type of chunking task (Abney, 1991), where we need to perform address-specific Chinese word segmentation (Xue, 2003; Peng et al., 2004; Zhao et al., 2006) while assigning a semantic label to each chunk.", "However, existing models designed for chunking may not be readily applicable to this task.", "Our observations show that there are a few characteristics associated with the task.", "We found that while generally there exists certain ordering information among the chunks of different labels in the addresses, such ordering information is better preserved among the chunks that appear at the beginning of the addresses.", "For the chunks appearing towards the end of the addresses, chunks of different types often appear in a more flexible order.", "On top of the above observations, we propose a specific model based on neural networks for the task of Chinese address parsing.", "The model is able to encode the regular patterns among chunks that appear at the beginning of a Chinese address, while flexibly capturing the irregular patterns and rich dependencies among the chunks of different types that appear towards the end of the address.", "This is achieved by designing a novel structured representation integrating both a linear structure and a latent-variable tree structure.", "We create and publicly release a new corpus consisting of 15K Chinese address entries fully annotated with chunk boundaries and address labels.", "To the best of our knowledge, this is the first and largest annotated Chinese address corpus.", "We introduce a novel neural approach to Chinese address parsing with latent variables to flexibly capture both prior ordering information and rich dependencies among labels.", "Through extensive experiments, we demonstrate the effectiveness of our approach.", "The experimental results show that our approach outperforms several baselines significantly.", "In this work, we created a Chinese Address corpus.", "To do so, we crawled a large number of publicly available addresses from Chinese websites, including online business directory websites (e.g., b2b.huangye88.com), social media websites (e.g., www.dianping.com), and an online API service translating a geo-location to a Chinese address (lbs.amap.com).", "In order to protect privacy, we discarded sensitive addresses (such as those involving military locations) and randomly altered the digits in the collected addresses.", "Due to the lack of a standard Chinese address format, as well as complicated and different writing preferences in different regions (e.g., people living in southern China prefer a different suffix word for the name of a lane or sub-road from the one widely used in northern China), we create an annotation guideline by summarizing the different writing preferences (the guideline can be found at http://statnlp.org/research/sp).", "We proposed 21 chunk labels, listed in Table 1.",
The meaning 1 The annotation guideline can be found at http:// statnlp.org/research/sp .", "of most labels can be inferred from their names.", "We hire 3 annotators to annotate chunk boundaries and chunk labels for each Chinese address following the annotation guideline.", "In order to maintain high annotation quality, we also hire 2 additional quality controllers to sample 20 sentences from each batch of 1,000 annotated sentences for human evaluation.", "Re-annotation for that batch will be performed should the accuracy of human evaluation fall below 95% .", "We randomly split the annotated data into 3 portions following the ratio of 60% , 20% , and 20% , yielding training, development, and test sets.", "The complete statistics of our data can be found in Table", "1. From the table we can observe that the chunk label POI (point of interest) occurs most frequently.", "Indeed, such a label has a high level of importance.", "This is because location-based information can be extracted from such chunks, which is crucial for recommendation services (Gao et al., 2015; Xie et al., 2016).", "In addition, we report the number of distinctive chunks (unique#) that appear in the data for each label, from which we can see our corpus has a good coverage on PROVINCE , CITY , and DISTRICT 2 .", "We empirically assign each label a order ID indicating its level of specificity.", "For example, the label COUNTRY is used for describing a country, and is the most general concept.", "It is thus assigned the order ID 20, which is the highest among all labels.", "As another example, the label PERSON gets assigned an order ID 3, as it is used to describe one of the most specific concepts.", "Such order ID information will be useful later when designing our models for Chinese address parsing.", "Our objective is to design a model for parsing Chinese addresses into semantically meaningful structures in the form of consecutive chunks, where each chunk is assigned a label as described in the previous section.", "As we have mentioned before, we believe there exist Chinese address-specific characteristics associated with address texts that can be exploited in designing a parsing model.", "Specifically, we argue there are two types of structured information within Chinese addresses that can be exploited when designing our parser the latent tree structures and the regular chain struc-2 We found these numbers are comparable with statistics on www.stats.gov.cn .", "tures .", "The former is used for capturing rich dependencies among chunks that appear towards the end of each address.", "The latter is used for capturing the structural patterns associated with chunks appearing at the beginning of each address.", "We focus our discussions on the latent tree structures first.", "Given a consecutive sequence of labeled chunks, we can construct a binary tree structure whose yield exactly corresponds to the sequence of labeled chunks.", "We build the latent tree structures to capture complex dependencies based on the observation that chunks appearing towards the end of a given address do not follow a rigorous order.", "For instance, as we can see in the second example in Figure 1, chunks towards the end of the address consist of some labels related to numbers as well as the label REDUNDANT .", "These labels are either optional or do not follow some regular patterns in terms of order, which makes capturing dependencies among labels challenging.", "We first introduce auxiliary labels based on the set of original labels we defined in 
"Such auxiliary labels are assigned to the internal nodes within a parse tree.", "Specifically, for each original label X, we introduce the auxiliary label X*.", "For example, the auxiliary label for ROAD would be ROAD*.", "We now illustrate how a latent tree is constructed from a sequence of labeled chunks.", "These chunks will be regarded as a sequence of leaf nodes, each of which contains the corresponding chunk boundary and chunk type information.", "To simplify the construction process, we focus on building a specific type of binary tree in which each non-leaf node contains at least one leaf node as one of its children (preliminary results show that considering arbitrary binary trees would lead to slightly worse results for our task).", "We start the process by selecting any chunk first as one leaf node.", "Next, we take a chunk that is either on the left or on the right of the selected chunk as its binary sibling node, and create a parent node by assigning the two selected leaf nodes as child nodes.", "To determine the label of the newly created parent node, we choose the auxiliary label based on the label with the higher order ID between the two child nodes.", "The newly created parent node will replace the 2 child nodes in the sequence, and the parent node now becomes a selected node.", "We repeat this construction process until a single node covering all the chunks remains.", "Figure 2 shows an example tree that the gold chunks correspond to.", "From the example we can see that the non-leaf node label POI*, which appears twice, has connections to other non-leaf node labels such as ROADNO* and POI.", "Such tree structures will allow us to capture rich Chinese address-specific structural information among labels.", "Since there are many latent trees corresponding to a given address consisting of consecutive labeled chunks, the model is facilitated to learn such complicated patterns, which is potentially beneficial for the address parsing task.", "The latent tree structures allow complex dependencies between different chunks to be captured within a Chinese address.", "Such dependencies would be helpful when there exist irregular patterns within an address.", "However, if we believe there are regular patterns among the labeled chunks, using an alternative assumption on the dependencies to properly capture such patterns may be more desirable.", "For instance, the first example in Figure 1 illustrates a common regular pattern at the beginning of the address, which is the order (PROVINCE, CITY, DISTRICT).", "This motivates us to employ an alternative representation for capturing dependencies within chunks that appear at the beginning of the addresses, which are believed to exhibit more regular patterns.", "Specifically, we employ a chain structure to capture the dependencies between adjacent labeled chunks.", "For example, given a sequence of chunks, we may always consider a right-branching tree structure to connect all these chunks.", "The resulting structure will be able to capture first-order dependencies between adjacent labeled chunks, which allows the regular orders among the labels to be learned.", "For example, consider the first two chunks that appear within the address as illustrated in Figure 2.",
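The construction procedure above can be made concrete with the following sketch; the Node type, the explicit list of left/right merge choices, and the handling of starred labels are our own illustrative encoding of the latent space, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    label: str                       # chunk label, or auxiliary label like "POI*"
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def aux_label(a, b, order_id):
    """Auxiliary parent label: the child label with the higher order ID, starred."""
    pick = a if order_id[a.rstrip("*")] >= order_id[b.rstrip("*")] else b
    return pick.rstrip("*") + "*"

def build_tree(chunks, order_id, start=0, choices=None):
    """Construct one member of the latent tree space over labeled chunks.

    chunks: leaf Nodes in address order; order_id: base label -> order ID;
    choices[i] in {"L", "R"} picks the left or right neighbour at merge i.
    The model scores all such trees; starting from the last chunk and always
    choosing "L" yields the right-branching chain used for the regular prefix.
    """
    nodes = list(chunks)
    sel = start
    choices = choices or ["R"] * (len(nodes) - 1)
    for c in choices:
        go_right = (c == "R" and sel + 1 < len(nodes)) or sel == 0
        keep = sel if go_right else sel - 1
        left, right = nodes[keep], nodes[keep + 1]
        parent = Node(aux_label(left.label, right.label, order_id), left, right)
        nodes[keep:keep + 2] = [parent]
        sel = keep
    return nodes[0]
```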
"The first two chunks form a right-branching tree structure.", "The construction process for such chain structures is similar to that of the latent trees, except that there is a single fixed (right-branching) structure for the given labeled chunks.", "Based on the observation that regular patterns appear mostly at the beginning of an address, we define the space H(x, y, sp) that consists of all latent tree structures that are consistent with the input character sequence x, the gold labeled chunks y, and sp, which determines the split point.", "Figure 2: An example latent tree for the given gold chunks, where sp = POI. (The leaves are the chunks Dengyun Road [ROAD], No. 639 [ROADNO], Electronic Market [POI], Feiyang Dianzi LLC. [SUBPOI] and Room 230 [ROOMNO]; the internal nodes carry auxiliary labels such as ROADNO* and POI*.)", "Formally, we define the split point of a given address, as specified by sp, as the left boundary of the rightmost chunk whose label order ID is larger than or equal to sp.", "The split point divides the chunks into two groups: those appearing on the left of sp will form a chain structure, while those on the right will form a tree structure where the correct construction is latent.", "Both structures are then merged to form a single representation, which is used for building our address parsing model.", "Notice that when sp is set to -1 (denoted as sp = LAST), the split point is on the right of the last chunk.", "In this case the latent structured space H(x, y, sp) consists of only one single right-branching tree.", "On the other hand, when sp is set to its maximal value 20, the label order ID of COUNTRY (denoted as sp = COUNTRY), the latent structured space does not contain any structure that involves a partial regular chain component.", "Different values of sp lead to different interpolations between the two types of structural assumptions, resulting in different variants of our models.", "We will discuss the effect of different sp values in the experiments section.", "A parse tree corresponds to a collection of labeled chunks as leaves.", "We adopt a bi-directional LSTM over a given input to compute the span-level representations.", "At each position i in the original input consisting of a sequence of characters, we use f_i and b_i to denote the outputs of the forward and backward LSTM, respectively.", "We use c_{i,j} = [f_j - f_i; b_i - b_j] to denote the vector representation of the span covering the characters from position i to position j (Wang and Chang, 2016).", "Motivated by Stern et al. (2017), we define the label scores as s(i, j) = F(c_{i,j}), where F is a 2-layer feed-forward neural network with output dimension equal to the number of chunk labels.", "In addition, we denote the score of the span with a specific label l as the value of the l-th element in the vector s(i, j): s(i, j, l) = [s(i, j)]_l.",
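A minimal sketch of the span representation and label scoring described above, assuming PyTorch; the fencepost handling of boundary positions is an assumption, and the dimensions follow the hyperparameters reported later (100-dimensional character embeddings, a 2-layer BiLSTM with hidden size 200, 21 labels).

```python
import torch
import torch.nn as nn

class SpanScorer(nn.Module):
    """Span scores s(i, j) = F(c_{i,j}) with c_{i,j} = [f_j - f_i; b_i - b_j]."""

    def __init__(self, emb_dim=100, hidden=200, num_labels=21):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim, hidden, num_layers=2,
                            bidirectional=True, batch_first=True)
        self.ff = nn.Sequential(                  # the 2-layer feed-forward F
            nn.Linear(2 * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_labels))

    def forward(self, char_embs):
        # char_embs: (1, n, emb_dim) for one address of n characters
        out, _ = self.lstm(char_embs)
        h = out.size(-1) // 2
        zero = torch.zeros(h)
        # fencepost encodings: f[i] / b[i] summarise the prefix / suffix
        f = torch.cat([zero.unsqueeze(0), out[0, :, :h]])
        b = torch.cat([out[0, :, h:], zero.unsqueeze(0)])
        n = char_embs.size(1)
        scores = {}
        for i in range(n):
            for j in range(i + 1, n + 1):
                c_ij = torch.cat([f[j] - f[i], b[i] - b[j]])
                scores[(i, j)] = self.ff(c_ij)    # s(i, j, l) for every label l
        return scores
```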
"Inspired by Stern et al. (2017), we build a chart-based parsing model.", "Unlike that work, however, our model involves latent structures, as mentioned in Section 3.1.", "For a given sequence of labeled chunks, our model considers all possible constituent trees whose yield is exactly the labeled chunks.", "Consider a tree t that can be represented by a set of labeled spans, where each span is uniquely defined by the boundary (i, j) and the label l: t := {(i_n, j_n, l_n) : n = 1, ..., |t|}.", "The score of the tree t can then be defined as the sum of the scores of its labeled spans: S(t) = Σ_{(i,j,l) ∈ t} s(i, j, l).", "Similar to Stern et al. (2017), we use a CKY-style algorithm to calculate the score α(i, j) of the optimal sub-tree that spans the interval (i, j) recursively, using the following formula: α(i, j) = max_l s(i, j, l) + max_k max{ max_l s(i, k, l) + α(k, j), α(i, k) + max_l s(k, j, l) }.", "The base case is when the text span (i, j) corresponds to a leaf node (a chunk) in the tree; in this case we have α(i, j) = max_l s(i, j, l).", "Inspired by structural support vector machines with latent variables (Yu and Joachims, 2009), we employ a (per-instance) hinge loss during training: loss(x, y) = max_{t ∈ H(x)} [ S(t) + Δ(t, t̃) ] - S(t̃), where H(x) refers to the set of all possible trees for the given input x, and t̃ denotes the best tree in the latent space H(x, y, sp): t̃ = argmax_{t ∈ H(x, y, sp)} S(t).", "Here Δ(t, t̃) represents the Hamming loss on labeled spans, measuring the similarity between the predicted tree and the best latent tree that corresponds to the gold chunks.", "During decoding, we aim to obtain the best tree as the prediction t* for a new address x′ among all the possible trees: t* = argmax_{t ∈ H(x′)} S(t).", "The yield of the predicted tree t* gives us the list of labeled chunks.",
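The recursion above amounts to a straightforward chart computation; below is a sketch in which backpointers for recovering the best tree, and the restriction to the latent space H(x, y, sp) used during training, are omitted for brevity.

```python
def cky_best_score(scores, n):
    """Best-tree score for the recursion described above (a sketch).

    scores[(i, j)]: label-score vector s(i, j) for span (i, j), with
    fencepost indices 0 <= i < j <= n (assumed to be plain lists of floats,
    e.g. detached outputs of the span scorer sketched earlier).
    """
    best_label = {span: max(v) for span, v in scores.items()}
    chart = {}
    for length in range(1, n + 1):
        for i in range(n - length + 1):
            j = i + length
            best = best_label[(i, j)]        # base case: the span is one chunk
            for k in range(i + 1, j):        # otherwise one child must be a leaf
                leaf_left = best_label[(i, k)] + chart[(k, j)]
                leaf_right = chart[(i, k)] + best_label[(k, j)]
                best = max(best, best_label[(i, j)] + max(leaf_left, leaf_right))
            chart[(i, j)] = best
    return chart[(0, n)]                     # score of the best full tree
```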
"We call our model Address Parser with Latent Trees (APLT).", "We conducted experiments based on different settings of the sp values, leading to several model variants.", "We describe the baselines, model hyperparameters, and evaluation metrics in this section.", "Baselines: To understand the effectiveness of our models, we build the following baselines.", "ℓCRF is the standard first-order linear-chain CRF model (Lafferty et al., 2001) with discrete features for sequence labeling tasks.", "sCRF is based on the standard semi-Markov CRF (Sarawagi and Cohen, 2004) with discrete features (see the supplementary material for details on the features for ℓCRF and sCRF).", "LSTM is the standard bi-directional LSTM model for sequence labeling tasks.", "For sCRF (and LSTM-sCRF), the maximal chunk length is set to 36, which is the length of the longest chunk appearing in the training set.", "LSTM-ℓCRF is proposed by Lample et al. (2016) and is the state of the art for many sequence labeling tasks.", "LSTM-sCRF is based on the segmental recurrent neural network (Kong et al., 2016), which is the neural-network version of the semi-Markov CRF (Sarawagi and Cohen, 2004).", "TP is a transition-based parser for chunking based on Lample et al. (2016), which makes use of the stack LSTM (Dyer et al., 2015) to encode the representation of the stack.", "Hyperparameters: We conducted all the experiments based on our Chinese Address corpus.", "We pre-trained Chinese character embeddings on the Chinese Gigaword corpus (Graff and Chen, 2005), using the skip-gram model with hierarchical softmax implemented within the word2vec toolkit (Mikolov et al., 2013), where we set the sample rate to 10^-5 and the embedding size to 100.", "We use a 2-layer LSTM (for both directions) with a hidden dimension of 200.", "For optimization, we adopt the Adam optimizer (Kingma and Ba, 2014) with batch size 1 and dropout rate 0.4.", "We randomly replace low-frequency words with the UNK token and normalize all numbers by replacing each digit (including Chinese characters representing the numbers 0-9) with 0.", "We train our model for a maximum of 30 epochs and select the model parameters based on the F1 score on the development set after each epoch.", "The selected model is then applied to the test set for evaluation.", "Our model, as well as the baseline neural models, is implemented using DyNet (Neubig et al., 2017).", "All the neural weights are initialized following the default initialization method used in DyNet.", "Evaluation Metrics: We use the standard evaluation metrics from the CoNLL-2000 shared task (Tjong Kim Sang and Buchholz, 2000), reporting precision (P.), recall (R.), and F1 percentage scores.", "We present our main results in Table 2, where we report the overall performance as well as specific results on the POI label.", "For our model, we report results for sp = 20 and sp = -1 as two special cases: the former learns latent tree structures only, and the latter assumes a single right-branching tree.",
005) significantly.", "Such a result implies the importance of capturing the various Chinese address-specific structural information mentioned above within our model.", "To understand the results better, we conduct detailed analysis of our results.", "Table 3 shows the F 1 scores of each label as well as the percentage of each label in the test data among four 6 We perform the bootstrap resampling significant test.", "models LSTM (cid:96) CRF , LSTM s CRF , APLT ( sp = 1 ) and APLT ( sp = 7 ).", "Note that the results for the top 4 labels POI , DISTRICT , ROAD and CITY , which take up 45% of total chunks, all get improved when using our APLT models.", "Moreover, it achieves better or comparable F 1 scores on 15 labels in the table among the total 21 labels, especially on POI , DISTRICT , REDUNDANT , COMMUNITY and PERSON with at least 1 point improvement in F 1 .", "Interestingly, our models perform worse than LSTM (cid:96) CRF on labels such as ROADNO , ROOMNO , and FLOORNO , which are mostly related to numbers.", "We note that, however, chunks with such labels do not constitute a large proportion of all chunks.", "Results suggest that our models somehow learned to focus on optimization performance for chunks with more prominent labels such as POI and DISTRICT .", "In order to investigate how tree structures affect the final performance, we also conducted experiments with different values for sp , which is used for determining the split point.", "Figure 3 shows the moving-averaged F 1 scores on the test set obtained when choosing sp around specific values (a similar distribution can be observed on the development set).", "From the bottom ( COUNTRY ,20) to the top ( LAST ,-1) along y axis, the lower the sp is, the more constraints are applied to the latent space H ( x, y, sp ) .", "Note that when sp = 1 ( LAST ), 89.0 89.1 89.2 89.3 89.4 89.5 89.6 89.7 89.8 F1 score COUNTRYPROVCITYDISTRICTDEVZONETOWNCOMMUNITYROADSUBROADROADNOSUBROADNOPOISUBPOIHOUSENOCELLNOFLOORNOROOMNOPERSONLAST Figure 3: Effect of sp .", "the gold input only corresponds to a single right-branching tree.", "We exclude the following labels: REDUNDANT , ASSIST and OTHERINFO , because we found these labels may appear at any place within a given address, which make them unsuitable for determining the split point.", "From Figure 3 we can observe that the F 1 score generally increases as we decrease sp , starting from COUNTRY (with order ID 20).", "The performance reaches the maximum when the sp is set to a value within the range [ SUBPOI , CELLNO ] .", "This observation implies that there does exist ordering information among labels, and introducing more constraints on the latent space will have the ben-efit of modeling the regular patterns around the beginning part of a given address.", "After reaching the best value, as we further decrease sp , the performance drops slightly and oscillates around the range [ FLOORNO , LAST ] .", "From here we can observe that the latent trees are able to help capture irregular patterns within labels that appear towards the end of the address.", "Overall, these results suggest the importance of designing a model like ours that is capable of capturing Chinese address-specific characteristics.", "We conduct error analysis on two strongest baselines LSTM(cid:96) CRF and LSTMs CRF as well as two best-performing APLT models respectively.", "We examined the list of top-10 labels with most errors for each model, and found most of the errors come from labels such as POI , SUBPOI and REDUNDANT this 
implies they are the most challenging labels for this task.", "We also found labels such as ROOMNO appear in the list for APLT models, but not for the LSTM (cid:96) CRF model, showing that APLT models are still not good at handling numbers as we discussed above.", "There are two major types of errors.", "The type-Gold POI 9 HOUSENO (HouhuVillageResidence) (Block9) Prediction COMMUNITY 9 HOUSENO Gold TOWN POI (SiJiQing) (OldMarket) Prediction POI Gold POI 124C ROADNO (XiaohongPlaza) (#124C) Prediction POI 124C ROOMNO Figure 4: Example outputs from APLT ( sp = 7) .", "I error refers to the case where the boundary of a chunk is predicted correctly but not its label.", "The type-II error is the case where even the boundary of a predicted chunk is incorrect.", "We found that APLT ( sp = 1 ) and APLT ( sp = 7 ) produce less type-I errors ( 45 . 04% and 42 . 95% respectively) than LSTM (cid:96) CRF and LSTM s CRF ( 49 . 87% and 47 . 26% respectively).", "Moreover, we find that APLT ( sp = 7 ) model produces the least number of type-I errors as well as type-II errors.", "Looking into the type-I errors of both two APLT models, we find chunks with label POI are often incorrectly labeled as COMMUNITY , which is a major source of errors ( 9% of total errors).", "As a typical example, we show a partial prediction in Figure 4, where our model fails to recognize (Houhu Village Residence) as a POI .", "Here the character (Village) is a common suffix for the name of either a village or a residence, hence the confusion.", "The second example in Figure 4 demonstrates another typical kind of errors produced by our models around the POI labels.", "Here, (Si Ji Qing) is actually the name of a town.", "However, as most names of towns end with (Town) as the suffix, our models as well as baseline models all fail to identify the correct chunk boundaries.", "We also investigate the errors around the number labels.", "We choose to look into the results on ROADNO because it is the fifth most popular label in the test data.", "Based on the error analysis, we found that many chunks of label ROADNO were incorrectly assigned other types of number labels.", "As we can see from the third example in Figure 4, the ROADNO 124C is incorrectly predicted as a ROOMNO .", "Indeed, this chunk does look like a room number, though in fact it refers to a road within a plaza ( ) rather than an office within a building (another interpretation of ).", "From these examples we can observe that many ambiguities may not be easily resolved Length % LSTM LSTM APLT APLT (cid:96) CRF s CRF sp = 1 sp = 7 1 09.39 92.16 92.14 91.65 91.77 2 23.73 86.69 86.13 87.16 87.77 3 44.60 92.26 92.04 93.03 93.51 4 13.31 86.49 87.48 88.05 88.18 5 03.70 74.57 76.41 77.55 79.43 6 02.14 68.88 70.19 70.87 73.73 7 01.16 64.61 68.14 67.59 68.22 8 01.97 63.31 62.57 63.33 60.19 Table 4: Results for different chunk lengths.", "We analyze the model robustness by assessing the performance on chunks of different lengths for each of the four models discussed above.", "We group chunks into 8 categories based on their lengths and present the results in Table 4 where the distribution information is also included.", "As we can see, all the models achieve at least a F 1 score of 86 when considering chunks whose lengths are less than 5.", "As the length increases, the performance of all models drop gradually.", "For chunks whose lengths are at least 8, the F 1 score is around 60-63 for all models.", "Considering chunks whose lengths are either 2, 3, or 4 only (such chunks 
constitute over 80% of total chunks), we can observe that APLT ( sp = 7 ) outperforms two baselines sig-nificantly by more than 1 point for each category.", "These results demonstrate the robustness of our model when handling chunks of different lengths.", "Comparing the two APLT models, we can see the model APLT ( sp = 7 ) outperforms APLT ( sp = 1 ) for each chunk category, except for chunks whose lengths are greater than or equal to 8.", "These two models differ in their latent spaces.", "APLT ( sp = 7 ) with a richer latent space appears to be better at handling chunks with short or medium lengths.", "In addition, we conducted a further experiment to understand how each model is able to handle new chunks the chunks that appear in the test set (according to the gold labels) but do not appear in the training set.", "We found empirically there are 31% of the chunks in the test set that are new chunks.", "Such an experiment allows us to assess the robustness of each model when new data is available.", "We report the accuracy for the new chunks in Table 5.", "As we can see, two APLT models outperform two baselines, indicating our APLT models appear to be better at handling new chunks.", "We believe this is due to the tree models LSTM LSTM APLT APLT (cid:96) CRF s CRF sp = 1 sp = 7 80.17 79.92 80.94 80.94 Table 5: Accuracy on test data for the new chunks.", "that we used, which are capable of capturing complex dependencies among chunks.", "While the Chinese address parsing task is new, it is related to the following traditional tasks within the field of natural language processing (NLP) chunking, named entity recognition, word segmentation and parsing.", "We briefly survey research efforts which are most related to our task below.", "Chunking as a fundamental task in NLP has been investigated for decades (Abney, 1991).", "Chunking for Chinese can typically be regarded as a sequence labeling problem solvable by models such as conditional random fields (Chen et al., 2006; Tan et al., 2005; Zhou et al., 2012), hidden Markov models (Li et al., 2003), support vector machines (Tan et al., 2004) and the maximum entropy model (Wu et al., 2005).", "Our task can also be regarded as a chunking task where we need to assign an address-specific label to each chunk.", "Named entity recognition (NER) is another fundamental task close to chunking within the field of NLP, which focuses on the extraction of semantically meaningful entities from the text.", "The state-of-the-art approach by Lample et al. (2016) employs a LSTM-CRF model.", "Ma and Hovy (2016) proposed a LSTM-CNNs-CRF model that utilizes convolutional neural networks (CNNs) to extract character-level features besides word-level features.", "Zhai et al. (2017) suggested a neural chunking model based on pointer networks (Vinyals et al., 2015) to resolve the issue of being diffi-cult to use chunk-level features such as the length of the chunk for segmentation.", "Zhang and Yang (2018) tackled the problem of Chinese NER by deploying a lattice LSTM leveraging lexicons.", "Another task closely related to our task is the Chinese word segmentation task which at least dates back to the 1990s (Sproat et al., 1994).", "The segmentation task is typically casted as a character-based sequence labeling problem (Xue, 2003) which can be solved by CRF based models (Peng et al., 2004; Zhao et al., 2006), their latent-variable variants (Sun et al., 2009), or max-margin based models (Zhang and Clark, 2007).", "Recently, Zhang et al. 
(2016) proposed a neural transition-based segmentation approach by encoding both words and characters as well as the history action sequence.", "Yang et al. (2017) suggested to perform segmentation with a neural transition-based method with rich pre-training.", "Constituent parsing is another line of work that is related to our task.", "The state-of-the-art approaches to parsing include transition-based models (Dyer et al., 2016) and chart-based models (Stern et al., 2017; Kitaev and Klein,", "2018).Our model is motivated by the latter approaches, where we additionally introduce latent variables for capturing complex dependencies among chunks.", "In this work, we introduce a new task Chinese address parsing , which is to segment a given Chinese address text into chunks while assigning each chunk a semantically meaningful label.", "We create and publish a Chinese address corpus that consists of 15K fully labeled Chinese addresses.", "We identify interesting characteristics associated with the task and design a novel neural parsing model with latent variables for this task, which is able to capture Chinese address-specific structural information.", "We conduct extensive experiments and compare our approach with strong baselines through detailed analysis.", "We show that our proposed model outperforms baseline approaches sig-nificantly, due to its ability in capturing rich structural information present in the Chinese addresses.", "Future work includes leveraging external knowledge bases to disambiguate chunks and entities that appear within Chinese addresses, as well as designing algorithms that are able to capture longer-range dependencies among chunks using alternative structures.", "We would like to thank the anonymous reviewers for their constructive comments on this work.", "This work is done under a collaborative agreement between SUTD and Alibaba on an Alibaba Innovative Research (AIR) Program funded by Alibaba, where Alibaba provided data.", "We appreciate Al-ibaba's generosity in the agreement that makes it possible for us to make all data and code in this research publicly available upon acceptance of this paper.", "This work is also partially supported by SUTD project PIE-SGP-AI-2018-01." ]
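The type-I / type-II error taxonomy in the analysis above is easy to operationalize. A minimal Python sketch (ours, not from the paper; the (start, end, label) chunk representation and function names are illustrative):

```python
# Classify predicted chunks against gold chunks using the paper's definitions:
# type-I: boundary predicted correctly but the label is wrong;
# type-II: even the boundary of the predicted chunk is incorrect.

def classify_errors(gold_chunks, pred_chunks):
    gold_by_span = {(s, e): lab for s, e, lab in gold_chunks}
    correct, type_i, type_ii = [], [], []
    for s, e, lab in pred_chunks:
        gold_lab = gold_by_span.get((s, e))
        if gold_lab is None:
            type_ii.append((s, e, lab))   # no gold chunk with this boundary
        elif gold_lab != lab:
            type_i.append((s, e, lab))    # boundary matches, label differs
        else:
            correct.append((s, e, lab))
    return correct, type_i, type_ii

# Example mirroring Figure 4: gold POI over chars 0-3, predicted as COMMUNITY.
gold = [(0, 3, "POI"), (3, 5, "HOUSENO")]
pred = [(0, 3, "COMMUNITY"), (3, 5, "HOUSENO")]
print(classify_errors(gold, pred))  # one correct chunk, one type-I error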
[ "objective", "method", "objective", "objective", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "abstain", "result", "result", "abstain", "objective", "abstain", "abstain", "objective", "objective", "objective", "objective", "result", "method", "abstain", "method", "abstain", "abstain", "other", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other", "other", "abstain", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "abstain", "other", "abstain", "objective", "method", "objective", "method", "objective", "abstain", "other", "other", "other", "other" ]
[ "reubencg@stanford.edu", "ngoodman@stanford.edu", "Abstract", "A desideratum of high-quality translation systems is that they preserve meaning, in the sense that two sentences with different meanings should not translate to one and the same sentence in another language.", "However, state-of-the-art systems often fail in this regard, particularly in cases where the source and target languages partition the meaning space in different ways.", "For instance, I cut my finger. and I cut my finger off. describe different states of the world but are translated to French (by both Fairseq and Google Translate ) as Je me suis coup e le doigt., which is ambiguous as to whether the finger is detached.", "More generally, translation systems are typically many-to-one (non-injective) functions from source to target language, which in many cases results in important distinctions in meaning being lost in translation.", "Building on Bayesian models of informative utterance production, we present a method to define a less ambiguous translation system in terms of an underlying pretrained neural sequence-to-sequence model.", "This method increases injectivity, resulting in greater preservation of meaning as measured by improvement in cycle-consistency, without impeding translation quality (measured by BLEU score).", "Languages differ in what meaning distinctions they must mark explicitly.", "As such, translations risk mapping from a form in one language to a more ambiguous form in another.", "For example, the definite (1) and indefinite (2) both translate (under Fairseq and Google Translate ) to (3) in French, which is ambiguous in definiteness.", "Survey To evaluate the nature of this problem, we explored a corpus 1 of 500 pairs of distinct English sentences which map to a single German one (the evaluation language in section 2.3).", "We identify a number of common causes for the many-to-one maps.", "Two frequent types of verbal distinction lost when translating to German are tense (54 pairs, e.g. ...others { were, have been } introduced .) and modality (16 pairs, e.g. ...prospects for this year { could , might } be better.), where German konnen can express both epistemic and ability modality, distinguished in English with might and could respectively.", "Owing to En-glish's large vocabulary, lexical difference in verb (31 pairs, e.g. arise vs. emerge ), noun (56 pairs, e.g. mystery vs. secret), adjective (47 pairs, e.g. unaffected vs. 
untouched) or deic-tic/pronoun (32 pairs, usually this vs that) are also common.", "A large number of the pairs differ instead either orthographically, or in other ways that do not correspond to a clear semantic distinc-1 Generated by selecting short sentences from the Brown corpus (Ku cera and Francis, 1967), translating them to German, and taking the best two candidate translations back into English, if these two themselves translate to a single German sentence.", "Translation in both directions was done with Fairseq.", "Our approach While languages differ in what distinctions they are required to express, all are usually capable of expressing any given distinction when desired.", "As such, meaning loss of the kind discussed above is, in theory, avoidable.", "To this end, we propose a method to reduce meaning loss by applying the Rational Speech Acts (RSA) model of an informative speaker to translation.", "RSA has been used to model natural language pragmatics (Goodman and Frank, 2016), and re-cent work has shown its applicability to image captioning (Andreas and Klein, 2016; Vedantam et al., 2017; Mao et al., 2016), another sequence-generation NLP task.", "Here we use RSA to define a translator which reduces many-to-one mappings and consequently meaning loss, in terms of a pretrained neural translation model.", "We introduce a strategy for performing inference effi-ciently with this model in the setting of translation, and show gains in cycle-consistency 2 as a result.", "Moreover, we obtain improvements in translation quality (BLEU score), demonstrating that the goal of meaning preservation directly yields improved translations.", "In the RSA framework, speakers and listeners, modeled as Bayesian agents, reason about each other in a nested fashion.", "We refer to listeners and speakers which do not reason about another agent as L 0 and S 0 respectively, and an agent which reasons about another agent as L 1 or S 1 .", "For instance, an informative speaker model S 1 is given a state 2 Formally, say that a pair of functions f : A B , g : B A is cycle-consistent if g f = id , the identity function.", "If f is not one-to-one, then ( f, g ) is not cycle-consistent.", "(Note however that when A and B are infinite, the converse does not hold: even if f and g are both one-toone, ( f, g ) need not be cycle-consistent, since many-to-one maps between infinite sets are not necessarily", "bijective.) 
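For a finite state space W and utterance space U, the nested listener/speaker definitions introduced here amount to alternating column and row normalizations of the S_0 matrix. A toy numpy sketch with invented probabilities (ours, not the authors' code):

```python
import numpy as np

# Rows: states w; columns: utterances u. S0[w, u] = S0(u | w); values are made up.
S0 = np.array([[0.6, 0.4],    # state A
               [0.5, 0.5]])   # state B

# L1(w | u): normalize S0 over states for each utterance (Bayes, uniform prior).
L1 = S0 / S0.sum(axis=0, keepdims=True)

# S1(u | w) proportional to S0(u | w) * L1(w | u): renormalize over utterances.
unnorm = S0 * L1
S1 = unnorm / unnorm.sum(axis=1, keepdims=True)

print(S1)  # probability shifts toward utterances that better identify each state
```

Running this shifts state A's mass from 0.6 to about 0.65 on utterance 0, illustrating how the pragmatic speaker prefers utterances that let the listener pick out the intended state.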
"For instance, an informative speaker model S_1 is given a state w ∈ W, and chooses an utterance u ∈ U to convey w to S_1's model of a listener.", "By contrast, S_0 chooses utterances without a listener model in mind; its behavior might be determined by a semantics, or in our case, by a pretrained neural model.", "For translation, the state space W is a set of source language sentences (sequences of words in the language), while U is a set of target language sentences.", "S_1's goal is to choose a translation u which allows a listener to pick out the source sentence w from among the set of distractors.", "This informative behavior discourages many-to-one maps that a non-informative translation model S_0 might allow.", "S_0 Model: BiLSTMs with attention (Bahdanau et al., 2014), and more recently CNNs (Gehring et al., 2016) and entirely attention-based models (Vaswani et al., 2017), constitute the state-of-the-art architectures in neural machine translation.", "All of these systems, once trained end-to-end on aligned data, can be viewed as a conditional distribution S^WD_0(wd | w, c) (Footnote 3), for a word wd in the target language, a source language sentence w, and a partial sentence c in the target language.", "S^WD_0 yields a distribution S^SNT_0 over full sentences (Footnote 4): S^SNT_0(u | w, c) = ∏_t S^WD_0(u[t] | w, c + u[:t])   (4)", "S^SNT_0 returns a distribution over continuations of c into full target language sentences (Footnote 5).", "[Footnote 3: We use S^WD_0/1 and S^SNT_0/1 respectively to distinguish word- and sentence-level speaker models.] [Footnote 4: Python list indexing conventions are used; + means concatenation of a list with an element or a list.] [Footnote 5: In what follows, we omit c when it is empty, so that S^SNT_0(u | w) is the probability of sentence u given w.]", "To obtain a sentence from S^SNT_0 given a source language sentence s, one can greedily choose the highest probability word from S^WD_0 at each timestep, or explore a beam of possible candidates.", "We implement S^WD_0 (in terms of which all our other models are defined) using Fairseq's publicly available (Footnote 6: https://github.com/pytorch/fairseq) pretrained Transformer models for English-German, and for German-English we train a CNN using Fairseq.", "We first describe a sentence-level, globally pragmatic model S^SNT-GP_1 for the simple case where a source language sentence needs to be distinguished from a presupplied distractor (Footnote 7), as in the pairs shown in figures (2) and (1).", "We use this model as a stepping stone to one which requires an input sentence in the source language only, and no distractors.", "We begin by defining a listener L^SNT_1, which receives a target language sentence u and infers which sentence w ∈ W (a presupplied set, such as the pair (1) and (2)) would have resulted in the pretrained neural model S^SNT_0 producing u: L^SNT_1(w | u) = S^SNT_0(u | w) / Σ_{w′ ∈ W} S^SNT_0(u | w′)   (5)", "This allows S^SNT-GP_1 to be defined in terms of L^SNT_1, where U is the set of all possible target language sentences (Footnote 8): S^SNT-GP_1(u | w) = S^SNT_0(u | w) L^SNT_1(w | u) / Σ_{u′ ∈ U} S^SNT_0(u′ | w) L^SNT_1(w | u′)   (6)", "The key property of this model is that, for W = {A, B}, when translating A, S^SNT-GP_1 prefers translations of A that are unlikely to be good translations of B.", "So for pairs like (1) and (2), S^SNT-GP_1 is compelled to produce a translation for the former that reflects its difference from the latter, and vice versa.", "Inference: Since U is an infinite set, exactly computing the most probable utterance under S^SNT-GP_1(· | w) is intractable.", "Andreas and Klein (2016) and Mao et al. (2016) perform approximate inference by sampling the subset of U produced by a beam search from S^SNT_0.", "Vedantam et al. (2017) and Cohn-Gordon et al. (2018) employ a different method, using an incremental model S^SNT-IP_1 as an approximation of S^SNT-GP_1 on which inference can be tractably performed.", "S^SNT-IP_1 considers informativity not over the whole set of utterances, but instead at each decision of the next word in the target language sentence.", "For this reason, the incremental method avoids the problem of lack of beam diversity encountered when subsampling from S^SNT_0, which becomes especially bad when producing long sequences, as is often the case in translation.", "[Footnote 7: Implementations for all models are available at https://github.com/reubenharry/pragmatic-translation] [Footnote 8: A rationality hyperparameter of S^SNT-GP_1 controls this trade-off; as it increases, the model cares more about being informative and less about producing a reasonable translation.]", "S^SNT-IP_1 is defined as the product of informative decisions, specified by S^WD_1 (itself defined in terms of L^WD_1), which are defined analogously to (6) and (5): L^WD_1(w | wd, c) ∝ S^WD_0(wd | w, c)   (7); S^WD_1(wd | w, c) ∝ S^WD_0(wd | w, c) L^WD_1(w | wd, c)   (8); S^SNT-IP_1(u | w, c) = ∏_t S^WD_1(u[t] | w, c + u[:t])   (9)", "Examples: S^SNT-IP_1 is able to avoid many-to-one mappings by choosing more informative translations.", "For instance, its translation of (1) is Ces animaux courent vite (These animals run fast.).", "See figures (1) and (2) for other examples of many-to-one mappings under S^SNT_0 avoided by S^SNT-IP_1.", "While S^SNT-IP_1 can disambiguate between pairs of sentences, it has two shortcomings.", "First, it requires one (or more) distractors to be provided, so translation is no longer fully automatic.", "Second, because the distractor set W consists of only a pair (or finite set) of sentences, S^SNT-IP_1 only cares about being informative up to the goal of distinguishing between these sentences.", "Intuitively, total meaning preservation is achieved by a translation which distinguishes the source sentence w from every other sentence in the source language which differs in meaning.", "Both of these problems can be addressed by introducing a new cyclic globally pragmatic model S^SNT-CGP_1, which reasons not about L^SNT_1 but about a pretrained translation model from target language to source language, which we term L^SNT_0.", "S^SNT-CGP_1 is like S^SNT-GP_1, but its goal is to produce a translation which allows a listener model (now L^SNT_0) to infer the original sentence, not among a small set of presupplied possibilities, but among all source language sentences.", "As such, an optimal translation u of w under S^SNT-CGP_1 has high probability of being generated by S^SNT_0 and high probability of being back-translated to w by L^SNT_0.", "Incremental Model: Exact inference is again intractable, though as with S^SNT-GP_1, it is possible to approximate by subsampling from S^SNT_0.", "This is very close to the approach taken by Li et al. (2016), who find that reranking a set of outputs by the probability of recovering the input dramatically decreases the rate of dull and generic responses in a question-answering task.", "However, because the subsample is small relative to U, they use this method in conjunction with a diversity-increasing decoding algorithm.", "As in the case with explicit distractors, we instead opt for an incremental model, now S^SNT-CIP_1, which approximates S^SNT-CGP_1.", "The definition of S^SNT-CIP_1 (12) is more complex than the incremental model with explicit distractors (S^SNT-IP_1), since L^WD_0 must receive complete sentences, rather than partial ones like L^WD_1.", "As such, we need to marginalize over continuations k of partial sentences in the target language: S^WD-C_1(wd | w, c) ∝ S^WD_0(wd | w, c) Σ_k (L^SNT_0(w | c + wd + k) S^SNT_0(k | w, c + wd))   (11); S^SNT-CIP_1(u | w, c) = ∏_t S^WD-C_1(u[t] | w, c + u[:t])   (12)", "Since the sum over continuations of c in (11) is intractable to compute exactly, we approximate it by a single continuation, obtained by greedily unrolling S^SNT_0.", "The whole process of generating a new word wd of the translation from a sequence c and a source language sentence w is as follows: first use S^WD_0 to generate a set of candidates for the next word (in practice, we only consider two, for efficiency).", "For each of these, use S^SNT_0 to greedily unroll a full target language sentence from c + wd, namely c + wd + k, and rank each wd by the probability L^SNT_0(w | c + wd + k).", "Our objective is to improve meaning preservation without detracting from translation quality in other regards (e.g. grammaticality).", "We conduct our evaluations on English to German translation, making use of publicly available pre-trained English-German and German-English Fairseq models.", "The pragmatic model we evaluate is S^SNT-CIP_1 since, unlike S^SNT-IP_1, it does not require a hand-supplied distractor set of source language sentences.", "An example of the behavior of S^SNT-CIP_1 and S^SNT_0 on one of our test sentences is shown below; S^SNT-CIP_1 is able to preserve the phrase world's eyes, which S^SNT_0 translates merely as world: Source sentence: Isolation keeps the world's eyes off Papua.", "Reference translation: Isolation hält die Augen der Welt fern von Papua.", "S^SNT_0: Die Isolation hält die Welt von Papua fern.", "S^SNT-CIP_1: Die Isolation hält die Augen der Welt von Papua fern.", "We use cycle-consistency as a measure of meaning preservation, since the ability to recover the original sentence requires meaning distinctions not to be collapsed.", "In evaluating cycle-consistency, it is important to use a separate target-source translation mechanism from the one used to define S^SNT-CIP_1.", "Otherwise, the system has access to the model which evaluates it and may improve cycle-consistency without producing meaningful target language sentences.", "For this reason, we translate German sentences (produced by S^SNT_0 or S^SNT-CIP_1) back to English with Google Translate.", "To measure cycle-consistency, we use the BLEU metric (implemented with sacreBLEU (Post, 2018)), with the original sentence as the reference.", "However, this improvement of cycle-consistency, especially with a high value of the rationality parameter, may come at the cost of translation quality.", "Moreover, it is unclear whether BLEU serves as a good metric for evaluating sentences of a single language.", "To further ensure that translation quality is not compromised by S^SNT-CIP_1, we evaluate BLEU scores of the German sentences it produces.", "This requires evaluation on a corpus of aligned sentences, unlike the sentences collected from the Brown corpus in section 1 (Footnote 9).", "[Footnote 9: While we find that S^SNT-CIP_1 improves cycle-consistency for the Brown corpus over S^SNT_0, we have no way to establish whether this comes at the cost of translation quality.]", "We use greedy unrolling in all models (using beam search is a goal for future work).", "For the rationality parameter (which represents the trade-off between informativity and translation quality) we use 0.1, obtained by tuning on validation data.", "Results: As shown in Table (1), S^SNT-CIP_1 improves over S^SNT_0 not only in cycle-consistency, but in translation quality as well.", "This suggests that the goal of preserving information, in the sense defined by S^SNT-CGP_1 and approximated by S^SNT-CIP_1, is important for translation quality.", "We identify a shortcoming of state-of-the-art translation systems and show that a version of the RSA framework's informative speaker S_1, adapted to the domain of translation, alleviates this problem in a way which improves not only cycle-consistency but translation quality as well.", "The success of S^SNT-CIP_1 on two fairly similar languages raises the question of whether improvements will increase for more distant language pairs, in which larger-scale differences exist in what information is obligatorily represented; this is a direction for future work.", "Thanks to the reviewers for their substantive comments, and to Daniel Fried and Jacob Andreas for many helpful discussions during the development of this project.", "[Footnote 10: Our implementation of S^SNT-CIP_1 was not efficient, and we could not evaluate on more sentences for reasons of time.] [Footnote 11: http://www.statmt.org/wmt18/translation-task.html]", "References: Jacob Andreas and Dan Klein." ]
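The decoding loop just described (two candidate next words from S^WD_0, one greedy continuation each, reranking by back-translation probability) can be sketched as follows. The `s0`/`l0` wrapper interfaces are our assumptions for illustration, not Fairseq's actual API:

```python
# Incremental cyclic pragmatic decoding, following Eqs. (11)-(12) with the
# single-continuation approximation described in the text.

def next_word(s0, l0, source, prefix, n_candidates=2):
    # 1) Base speaker proposes candidate next words (the paper uses two).
    candidates = s0.topk_next_words(source, prefix, k=n_candidates)
    scored = []
    for word, _ in candidates:
        # 2) Greedily unroll one full target-language continuation k.
        completion = s0.greedy_unroll(source, prefix + [word])
        # 3) Rank by how well the backtranslator recovers the source sentence.
        cycle_score = l0.score(source, given=completion)
        scored.append((cycle_score, word))
    return max(scored)[1]

def translate(s0, l0, source, max_len=50):
    prefix = []
    for _ in range(max_len):
        word = next_word(s0, l0, source, prefix)
        prefix.append(word)
        if word == "</s>":
            break
    return prefix
```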
[ "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "result", "abstain", "other", "other", "other" ]
[ "Semi-supervised bootstrapping techniques for relationship extraction from text iteratively expand a set of initial seed instances.", "Due to the lack of labeled data, a key challenge in bootstrapping is semantic drift: if a false positive instance is added during an iteration, then all following iterations are contaminated.", "We introduce BREX, a new bootstrapping method that protects against such contamination by highly effective confidence assessment.", "This is achieved by using entity and template seeds jointly (as opposed to just one as in previous work), by expanding entities and templates in parallel and in a mutually constraining fashion in each iteration and by introducing higher-quality similarity measures for templates.", "Experimental results show that BREX achieves an F 1 that is 0.13 (0.87 vs. 0.74) better than the state of the art for four relationships.", "Traditional semi-supervised bootstrapping relation extractors (REs) such as BREDS (Batista et al., 2015), SnowBall (Agichtein and Gravano, 2000) and DIPRE (Brin, 1998) require an initial set of seed entity pairs for the target binary relation.", "They find occurrences of positive seed entity pairs in the corpus, which are converted into extraction patterns, i.e., extractors , where we define an extractor as a cluster of instances generated from the corpus.", "The initial seed entity pair set is expanded with the relationship entity pairs newly extracted by the extractors from the text iteratively.", "The augmented set is then used to extract new relationships until a stopping criterion is met.", "Due to lack of sufficient labeled data, rule-based systems dominate commercial use (Chiti-cariu et al., 2013).", "Rules are typically defined by creating patterns around the entities (entity extraction) or entity pairs (relation extraction).", "Recently, supervised machine learning, especially deep learning techniques (Gupta et al., 2015; Nguyen and Grishman, 2015; Vu et al., 2016a,b; Gupta et al., 2016), have shown promising results in entity and relation extraction; however, they need sufficient hand-labeled data to train models, which can be costly and time consuming for web-scale extractions.", "Bootstrapping machine-learned rules can make extractions easier on large corpora.", "Thus, open information extraction systems (Carl-son et al., 2010; Fader et al., 2011; Mausam et al., 2012; Mesquita et al., 2013; Angeli et al., 2015) have recently been popular for domain specific or independent pattern learning.", "Hearst (1992) used hand written rules to generate more rules to extract hypernym-hyponym pairs, without distributional similarity.", "For entity extraction, Riloff (1996) used seed entities to generate extractors with heuristic rules and scored them by counting positive extractions.", "Prior work (Lin et al., 2003; Gupta et al., 2014) investigated different extractor scoring measures.", "Gupta and Manning (2014) improved scores by introducing expected number of negative entities.", "Brin (1998) developed the bootstrapping relation extraction system DIPRE that generates extractors by clustering contexts based on string matching.", "SnowBall (Agichtein and Gravano, 2000) is inspired by DIPRE but computes a TF-IDF representation of each context.", "BREDS (Batista et al., 2015) uses word embeddings (Mikolov et al., 2013) to bootstrap relationships.", "Related work investigated adapting extractor scoring measures in bootstrapping entity extraction with either entities or templates (Table 1) as seeds (Table 2).", "The 
state-of-the-art relation extractors bootstrap with only seed entity pairs and suffer due to a surplus of unknown extractions and the lack of labeled data, leading to low confidence extractors.", "This in turn leads to to low confidence in the system output.", "Prior RE sys-26 BREE Bootstrapping Relation Extractor with Entity pair BRET Bootstrapping Relation Extractor with Template BREJ Bootstrapping Relation Extractor in Joint learning type a named entity type, e.g., person typed entity a typed entity, e.g., (cid:160) Obama, person entity pair a pair of two typed entities template a triple of vectors ( ~v (cid:1) 1 , ~v 0 , ~v 1 ) and an entity pair instance entity pair and template (types must be the same) instance set extracted from corpus i a member of , i.e., an instance x p i q the entity pair of instance i x p i q the template of instance i G p a set of positive seed entity pairs G n a set of negative seed entity pairs G p a set of positive seed templates G n a set of negative seed templates G (cid:160) G p ,G n , G p , G n k it number of iterations cat cluster of instances ( extractor ) cat category of extractor NNHC Non-Noisy-High-Confidence extractor (True Positive) NNLC Non-Noisy-Low-Confidence extractor (True Negative) NHC Noisy-High-Confidence extractor (False Positive) NLC Noisy-Low-Confidence extractor (False Negative) Table 1: Notation and definition of key terms tems do not focus on improving the extractors' scores.", "In addition, SnowBall and BREDS used a weighting scheme to incorporate the importance of contexts around entities and compute a similarity score that introduces additional parameters and does not generalize well.", "Contributions.", "(1) We propose a Joint Bootstrapping Machine 1 (JBM), an alternative to the entity-pair-centered bootstrapping for relation extraction that can take advantage of both entity-pair and template-centered methods to jointly learn extractors consisting of instances due to the occurrences of both entity pair and template seeds.", "It scales up the number of positive extractions for non-noisy extractors and boosts their confidence scores.", "We focus on improving the scores for non-noisy-low-confidence extractors, resulting in higher recall .", "The relation extractors bootstrapped with entity pair, template and joint seeds are named as BREE , BRET and BREJ (Table 1), respectively.", "(2) Prior work on embedding-based context comparison has assumed that relations have consistent syntactic expression and has mainly addressed synonymy by using embeddings (e.g.,acquired bought).", "In reality, there is large variation in the syntax of how relations are expressed, e.g., MSFT to acquire NOK for $8B 1 github.com/pgcool/Joint-Bootstrapping-Machines vs. 
MSFT earnings hurt by NOK acquisition.", "We introduce cross-context similarities that compare all parts of the context (e.g., to acquire and acquisition) and show that these perform better (in terms of recall) than measures assuming consistent syntactic expression of relations.", "(3) Experimental results demonstrate a 13% gain in F 1 score on average for four relationships and suggest eliminating four parameters, compared to the state-of-the-art method.", "The motivation and benefits of the proposed JBM for relation extraction is discussed in depth in section 2.3.", "The method is applicable for both entity and relation extraction tasks.", "However, in context of relation extraction , we call it BREJ .", "We first introduce the notation and terms (Table 1).", "Given a relationship like x acquires y , the task is to extract pairs of entities from a corpus for which the relationship is true.", "We assume that the arguments of the relationship are typed, e.g., x and y are organizations.", "We run a named entity tagger in preprocessing, so that the types of all candidate entities are given.", "The objects the bootstrapping algorithm generally handles are therefore typed entities (an entity associated with a type).", "For a particular sentence in a corpus that states that the relationship (e.g., acquires) holds between x and y , a template consists of three vectors that represent the context of x and y .", "~v (cid:1) 1 represents the context before x , ~v 0 the context between x and y and ~v 1 the context after y .", "These vectors are simply sums of the embeddings of the corresponding words.", "A template is typed, i.e., in addition to the three vectors it specifies the types of the two entities.", "An instance joins an entity pair and a template.", "The types of entity pair and template must be the same.", "The first step of bootstrapping is to extract a set of instances from the input corpus.", "We refer to this set as .", "We will use i and j to refer to instances.", "x p i q is the entity pair of instance i and x p i q is the template of instance i .", "A required input to our algorithm are sets of positive and negative seeds for either entity pairs ( G p and G n ) or templates ( G p and G n ) or both.", "We define G to be a tuple of all four seed sets.", "We run our bootstrapping algorithm for k it iterations where k it is a parameter.", "A key notion is the similarity between two instances.", "We will experiment with different similarity measures.", "The baseline is (Batista et al., 2015)'s measure given in Figure 4, first line: the similarity of two instances is given as a weighted sum of the dot products of their before contexts ( ~v (cid:1) 1 ), their between contexts ( ~v 0 ) and their after contexts ( ~v 1 ) where the weights w p are parameters.", "We give this definition for instances, but it also applies to templates since only the context vectors of an instance are used, not the entities.", "The similarity between an instance i and a cluster of instances is defined as the maximum similarity of i with any member of the cluster; see Figure 2, right, Eq.", "5. Again, there is a straightforward extension to a cluster of templates: see Figure 2, right, Eq.", "6. 
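The baseline similarity just described (a weighted sum of dot products over the before/between/after context vectors, extended to clusters via a max) might look like the following sketch; the weight values are illustrative placeholders, not the tuned ones:

```python
import numpy as np

W = {-1: 0.2, 0: 0.6, 1: 0.2}  # w_p for p in {-1, 0, 1}; illustrative values

def sim_match(i, j):
    """i, j: dicts mapping position p in {-1, 0, 1} to context vectors."""
    return sum(W[p] * float(np.dot(i[p], j[p])) for p in (-1, 0, 1))

def sim_to_cluster(i, cluster):
    """Instance-to-cluster similarity: max similarity to any member (Eq. 5)."""
    return max(sim_match(i, j) for j in cluster)
```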
"The extractors can be categorized as follows: NNHC = {λ ∈ Λ | λ → R (non-noisy) ∧ cnf(λ, G) ≥ θ_cnf}   (1); NNLC = {λ ∈ Λ | λ → R ∧ cnf(λ, G) < θ_cnf}   (2); NHC = {λ ∈ Λ | ¬(λ → R) (noisy) ∧ cnf(λ, G) ≥ θ_cnf}   (3); NLC = {λ ∈ Λ | ¬(λ → R) ∧ cnf(λ, G) < θ_cnf}   (4), where R is the relation to be bootstrapped.", "An extractor λ_cat is a member of the set Λ_cat.", "For instance, λ ∈ NNLC is called a non-noisy-low-confidence extractor if it represents the target relation (i.e., λ → R), but with confidence below a certain threshold (θ_cnf).", "Extractors of types NNHC and NLC are desirable, those of types NHC and NNLC undesirable within bootstrapping.", "To describe BREX (Figure 1) in its most general form, we use the term item to refer to an entity pair, a template or both.", "The input to BREX (Figure 2, left, line 01) is a set of instances extracted from a corpus and G_seed, a structure consisting of one set of positive and one set of negative seed items.", "G_yield (line 02) collects the items that BREX extracts in several iterations.", "In each of k_it iterations (line 03), BREX first initializes the cache G_cache (line 04); this cache collects the items that are extracted in this iteration.", "The design of the algorithm balances elements that ensure high recall with elements that ensure high precision.", "High recall is achieved by starting with the seeds and making three hops that consecutively consider order-1, order-2 and order-3 neighbors of the seeds.", "On line 05, we make the first hop: all instances that are similar to a seed are collected, where similarity is defined differently for different BREX configurations (see below).", "The collected instances are then clustered, similar to work on bootstrapping by Agichtein and Gravano (2000) and Batista et al. (2015).", "On line 06, we make the second hop: all instances that are within θ_sim of a hop-1 instance are added; each such instance is only added to one cluster, the closest one; see the definition in Figure 2, Eq. 8.", "On line 07, we make the third hop: we include all instances that are within θ_sim of a hop-2 instance; see the definition in Figure 2, Eq. 7.", "In summary, every instance that can be reached by three hops from a seed is being considered at this point.", "A cluster of hop-2 instances is named an extractor.", "High precision is achieved by imposing, on line 08, a stringent check on each instance before its information is added to the cache.", "The core function of this check is given in Figure 2, Eq. 9.", "This definition is a soft version of the following hard max, which is easier to explain: cnf(i, Λ, G) = max over {λ ∈ Λ | i ∈ λ} of cnf(i, λ, G).", "We are looking for a cluster λ in Λ that licenses the extraction of i with high confidence.", "cnf(i, λ, G) (Figure 2, Eq. 10), the confidence of a single cluster (i.e., extractor) for an instance, is defined as the product of the overall reliability of λ (which is independent of i) and the similarity of i to λ, the second factor in Eq. 10, i.e., sim(i, λ).", "This factor sim(i, λ) prevents an extraction by a cluster whose members are all distant from the instance, even if the cluster itself is highly reliable.", "The first factor in Eq. 10, i.e., cnf(λ, G), assesses the reliability of a cluster λ: we compute the ratio N+(λ, G_n)/N+(λ, G_p), i.e., the ratio between the number of instances in λ that match a negative and a positive gold seed, respectively; see Figure 3, line (i).", "If this ratio is close to zero, then likely false positive extractions are few compared to likely true positive extractions.", "For the simple version of the algorithm (for which we set w_n = 1, w_u = 0), this results in cnf(λ, G) being close to 1, and the reliability measure is not discounted.", "On the other hand, if N+(λ, G_n)/N+(λ, G_p) is larger, meaning that the relative number of likely false positive extractions is high, then cnf(λ, G) shrinks towards 0, resulting in progressive discounting of cnf(λ, G) and leading to a non-noisy-low-confidence extractor, particularly for a reliable λ.", "Due to the lack of labeled data, the scoring mechanism cannot distinguish between noisy and non-noisy extractors.", "Therefore, an extractor is judged by its ability to produce more positive and fewer negative extractions.", "Note that we carefully designed this precision component to give good assessments while at the same time making maximum use of the available seeds.", "The reliability statistics are computed on hop-2 instances (not on hop-3 instances).", "The ratio N+(λ, G_n)/N+(λ, G_p) is computed on instances that directly match a gold seed; this is the most reliable information we have available.", "After all instances have been checked (line 08) and (if they passed muster) added to the cache (line 09), the inner loop ends and the cache is merged into the yield (line 10).", "Then a new loop (lines 03-10) of hop-1, hop-2 and hop-3 extensions and cluster reliability tests starts.", "Thus, the algorithm consists of k_it iterations.", "There is a tradeoff here between θ_sim and k_it.", "We will give two extreme examples, assuming that we want to extract a fixed number of m instances, where m is given.", "We can achieve this goal either by setting k_it = 1 and choosing a small θ_sim, which will result in very large hops.", "Or we can achieve this goal by setting θ_sim to a large value and running the algorithm for a larger number of iterations k_it.", "The flexibility that the two hyperparameters k_it and θ_sim afford is important for good performance.", "The differences and advantages of BREJ over BREE and BRET are: (1) Disjunctive Matching of Instances: The first difference is realized in how the three algorithms match instances with seeds (line 05 in Figure 3).", "BREE checks whether the entity pair of an instance is one of the entity pair seeds, BRET checks whether the template of an instance is one of the template seeds, and BREJ checks whether the disjunction of the two is true.", "The disjunction facilitates a higher hit rate in matching instances with seeds.", "The introduction of a few handcrafted templates along with seed entity pairs allows BREJ to leverage discriminative patterns and learn similar ones via distributional semantics.", "In Figure 1, the joint approach results in hybrid extractors that contain instances due to the occurrences of both entity pair and template seeds.", "(2) Hybrid Augmentation of Seeds: On line 09 in Figure 3, we see that the bootstrapping step is defined in a straightforward fashion: the entity pair of an instance is added for BREE, the template for BRET and both for BREJ.", "Figure 1 demonstrates this.", "[Figure 1: Example for the acquired relation. Seed entity pair = {<Google, DoubleClick>}; seed templates = {[X]'s acquisition of [Y]}; matched instances include I1 (<Google>'s purchase of <DoubleClick> is intriguing.) and I2 (<Google>'s acquisition of <DoubleClick> ...); BREE clusters I1 via the entity-pair seed, BRET clusters I1 and I2 via the template seed, and BREJ clusters both (hybrid).]", "(3) Scaling Up Positives in Extractors: As discussed in section 2.2, a good measure of the quality of an extractor is crucial, and N+, the number of instances in an extractor that match a seed, is an important component of that.", "For BREE and BRET, the definition follows directly from the fact that these are entity-pair-centered and template-centered instantiations of BREX, respectively.", "However, the disjunctive matching of instances for an extractor with entity pair and template seeds in BREJ (Figure 3, line (i)) boosts the likelihood of finding positive instances.", "In Figure 5, we demonstrate computing the count of positive instances N+(λ, G) for an extractor within the three systems.", "[Table 2: Seed entity pairs and templates for each relation. acquired: pairs {Adidas;Reebok}, {Google;DoubleClick}, {Widnes;Warrington}, {Hewlett-Packard;Compaq}; templates {[X] acquire [Y]}, {[X] acquisition [Y]}, {[X] buy [Y]}, {[X] takeover [Y]}, {[X] merger with [Y]}. founder-of: pairs {CNN;Ted Turner}, {Facebook;Mark Zuckerberg}, {Microsoft;Paul Allen}, {Amazon;Jeff Bezos}; templates {[X] founded by [Y]}, {[X] co-founder [Y]}, {[X] started by [Y]}, {[X] founder of [Y]}, {[X] owner of [Y]}. headquartered: pairs {Nokia;Espoo}, {Pfizer;New York}, {United Nations;New York}, {NATO;Brussels}; templates {[X] based in [Y]}, {[X] headquarters in [Y]}, {[X] head office in [Y]}, {[X] main office building in [Y]}, {[X] campus branch in [Y]}. affiliation: pairs {Google;Marissa Mayer}, {Xerox;Ursula Burns}, {Microsoft;Steve Ballmer}, {Microsoft;Bill Gates}; templates {[X] CEO [Y]}, {[X] resign from [Y]}, {[X] founded by [Y]}, {[X] worked for [Y]}, {[X] chairman director [Y]}.]",
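The disjunctive matching that distinguishes BREJ from BREE and BRET (line 05, Figure 3) reduces to a boolean disjunction per instance. A sketch under assumed data structures (the dict-based instance representation and `template_match` helper are ours, for illustration):

```python
def template_match(template, seed_template, sim=None):
    # Placeholder: the real system compares (before, between, after) context
    # embeddings against a similarity threshold; default here is exact match.
    return (sim or (lambda a, b: a == b))(template, seed_template)

def matches_seeds(instance, seed_pairs, seed_templates, mode="BREJ"):
    """BREE matches on entity pairs, BRET on templates, BREJ on either."""
    pair_hit = instance["entity_pair"] in seed_pairs
    templ_hit = any(template_match(instance["template"], t)
                    for t in seed_templates)
    return {"BREE": pair_hit,
            "BRET": templ_hit,
            "BREJ": pair_hit or templ_hit}[mode]
```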
11).", "The former leads to high confidences for noisy extractors by assigning high scores, the latter to low confidences for non-noisy extractors by penalizing them.", "For a simple version of the algorithm in the illustration, we consider them as negatives and set w n (cid:16) 1 .", "Figure 6 shows the three extractors ( ) generated and their confidence scores in BREE, BRET and BREJ.", "Observe that the scaling up of positives in BREJ due to BRET extractions (without w n ) discounts cnf p , G q relatively lower than BREE.", "The discounting results in NNHC in BREJ and NNLC in BREE.", "The discounting in BREJ is adapted for non-noisy extractors facilitated by BRET in generating mostly non-noisy extractors due to stringent checks (Figure 3, line", "(i) and 05).", "Intuitively, the intermixing of non-noisy extractors (i.e., hybrid ) promotes the scaling and boosts recall.", "The before ( ~v (cid:1) 1 ) and after ( ~v 1 ) contexts around the entities are highly sparse due to large variation in the syntax of how relations are expressed.", "SnowBall, DIPRE and BREE assumed that the between ( ~v 0 ) context mostly defines the syntactic expression for a relation and used weighted mechanism on the three contextual similarities in ORG-ORG ORG-PER ORG-LOC count 58,500 75,600 95,900 Table 3: Count of entity-type pairs in corpus Parameter Description/ Search Optimal | v (cid:1) 1 | maximum number of tokens in before context 2 | v 0 | maximum number of tokens in between context 6 | v 1 | maximum number of tokens in after context 2 sim similarity threshold [0.6, 0.7, 0.8] 0.7 cnf instance confidence thresholds [0.6, 0.7, 0.8] 0.7 w n weights to negative extractions [0.0, 0.5, 1.0, 2.0] 0.5 w u weights to unknown extractions [0.0001, 0.00001] 0.0001 k it number of bootstrapping epochs 3 dim emb dimension of embedding vector, V 300 PMIPMI threshold in evaluation 0.5 Entity Pairs Ordered Pairs ( OP ) or Bisets ( BS ) OP Table 4: Hyperparameters in BREE, BRET and BREJ pairs, sim match (Figure 4).", "They assigned higher weights to the similarity in between ( p (cid:16) 0 ) contexts, that resulted in lower recall.", "We introduce attentive ( max ) similarity across all contexts (for example, ~v (cid:1) 1 p i q ~v 0 p j q ) to automatically capture the large variation in the syntax of how relations are expressed, without using any weights.", "We investigate asymmetric (Eq 13) and symmetric (Eq 14 and 15) similarity measures, and name them as cross-context attentive (sim cc ) similarity.", "We re-run BREE (Batista et al., 2015) for baseline with a set of 5.5 million news articles from AFP and APW (Parker et al., 2011).", "We use processed dataset of 1.2 million sentences (released by BREE) containing at least two entities linked to FreebaseEasy (Bast et al., 2014).", "We extract four relationships: acquired (ORG-ORG), founder-of (ORG-PER), headquartered (ORG-LOC) and affiliation (ORG-PER) for Organization (ORG), Person (PER) and Location (LOC) entity types.", "We bootstrap relations in BREE, BRET and BREJ, each with 4 similarity measures using seed entity 31 Relationships # out P R F 1 # out P R F 1 # out P R F 1 # out P R F 1 BREE baseline : BREE+sim match config 2 : BREE+sim asymcc config 3 : BREE+sim sym 1 cc config 4 : BREE+sim sym 2 cc acquired 2687 0.88 0.48 0.62 5771 0.88 0.66 0.76 3471 0.88 0.55 0.68 3279 0.88 0.53 0.66 founder-of 628 0.98 0.70 0.82 9553 0.86 0.95 0.89 1532 0.94 0.84 0.89 1182 0.95 0.81 0.87 headquartered 16786 0.62 0.80 0.69 21299 0.66 0.85 0.74 17301 0.70 0.83 0.76 9842 0.72 0.74 
0.73 affiliation 20948 0.99 0.73 0.84 27424 0.97 0.78 0.87 36797 0.95 0.82 0.88 28416 0.97 0.78 0.87 avg 10262 0.86 0.68 0.74 16011 0.84 0.81 0.82 14475 0.87 0.76 0.80 10680 0.88 0.72 0.78 BRET config 5 : BRET+sim match config 6 : BRET+sim asymcc config 7 : BRET+sim sym 1 cc config 8 : BRET+sim sym 2 cc acquired 4206 0.99 0.62 0.76 15666 0.90 0.85 0.87 18273 0.87 0.86 0.87 14319 0.92 0.84 0.87 founder-of 920 0.97 0.77 0.86 43554 0.81 0.98 0.89 41978 0.81 0.99 0.89 46453 0.81 0.99 0.89 headquartered 3065 0.98 0.55 0.72 39267 0.68 0.92 0.78 36374 0.71 0.91 0.80 56815 0.69 0.94 0.80 affiliation 20726 0.99 0.73 0.85 28822 0.99 0.79 0.88 44946 0.96 0.85 0.90 33938 0.97 0.81 0.89 avg 7229 0.98 0.67 0.80 31827 0.85 0.89 0.86 35393 0.84 0.90 0.86 37881 0.85 0.90 0.86 BREJ config 9 : BREJ+sim match config 10 : BREJ+sim asymcc config 11 : BREJ+sim sym 1 cc config 12 : BREJ+sim sym 2 cc acquired 20186 0.82 0.87 0.84 35553 0.80 0.92 0.86 22975 0.86 0.89 0.87 22808 0.85 0.90 0.88 founder-of 45005 0.81 0.99 0.89 57710 0.81 1.00 0.90 50237 0.81 0.99 0.89 45374 0.82 0.99 0.90 headquartered 47010 0.64 0.93 0.76 66563 0.68 0.96 0.80 60495 0.68 0.94 0.79 57853 0.68 0.94 0.79 affiliation 40959 0.96 0.84 0.89 57301 0.94 0.88 0.91 55811 0.94 0.87 0.91 51638 0.94 0.87 0.90 avg 38290 0.81 0.91 0.85 54282 0.81 0.94 0.87 47380 0.82 0.92 0.87 44418 0.82 0.93 0.87 Table 5: Precision ( P ), Recall ( R ) and F 1 compared to the state-of-the-art ( baseline ).", "pairs and templates (Table 2).", "See Tables 3, 4 and 5 for the count of candidates, hyperparameters and different configurations, respectively.", "Our evaluation is based on Bronzi et al. (2012)'s framework to estimate precision and recall of large-scale RE systems using FreebaseEasy (Bast et al., 2014).", "Also following Bronzi et al. 
(2012), we use Pointwise Mutual Information (PMI) (Tur-ney, 2001) to evaluate our system automatically, in addition to relying on an external knowledge base.", "We consider only extracted relationship instances with confidence scores cnf p i, , G q equal or above 0.5.", "We follow the same approach as BREE (Batista et al., 2015) to detect the correct order of entities in a relational triple, where we try to identify the presence of passive voice using part-of-speech (POS) tags and considering any form of the verb to be, followed by a verb in the past tense or past participle, and ending in the word by'.", "We use GloVe (Pennington et al., 2014) embeddings.", "Table 5 shows the experimental results in the three systems for the different relationships with ordered entity pairs and similarity measures (sim match , sim cc ).", "Observe that BRET (config 5 ) is precision-oriented while BREJ (config 9 ) recall-oriented when compared to BREE (baseline).", "We see the number of output instances # out are also higher in BREJ, therefore the higher recall.", "The BREJ system in the different similarity configurak it # out P R F 1 0.6 1 691 0.99 0.21 0.35 2 11288 0.85 0.79 0.81 0.7 1 610 1.0 0.19 0.32 2 7948 0.93 0.75 0.83 0.8 1 522 1.0 0.17 0.29 2 2969 0.90 0.51 0.65 Table 6: Iterations ( k it ) Vs Scores with thresholds ( ) for relation acquired in BREJ.", "On an average for the four relations, BREJ in configurations config 9 and config 10 results in F 1 that is 0 .", "11 (0.85 vs 0.74) and 0 .", "13 (0.87 vs 0.74) better than the baseline BREE.", "We discover that sim cc improves # out and recall over sim match correspondingly in all three systems.", "Observe that sim cc performs better with BRET than BREE due to non-noisy extractors in BRET.", "The results suggest an alternative to the weighting scheme in sim match and therefore, the state-of-the-art (sim cc ) performance with the 3 parameters ( w (cid:1) 1 , w 0 and w 1 ) ignored in bootstrap-32 acquired founder-of headquartered affiliation BRE X E T J E T J E T J E T J # hit 71 682 743 135 956 1042 715 3447 4023 603 14888 15052 Table 8: Disjunctive matching of Instances.", "Observe that sim asymcc gives higher recall than the two symmetric similarity measures.", "Table 6 shows the performance of BREJ in different iterations trained with different similarity sim and confidence cnf thresholds.", "Table 7 shows a comparative analysis of the three systems, where we consider and evaluate the extracted relationship instances at different confidence scores.", "As discussed in section 2.3, BREJ facilitates disjunctive matching of instances (line 05 Figure 3) with seed entity pairs and templates.", "Table 8 shows # hit in the three systems, where the higher values of # hit in BREJ conform to the desired property.", "Observe that some instances in BREJ are found to be matched in both the seed types.", "We analyze the extractors generated in BREE, BRET and BREJ for the 4 relations to demonstrate the impact of joint bootstrapping.", "Table 9 shows the attributes of .", "We manually annotate the extractors as noisy and non-noisy .", "We compute ANNLC and the lower values in BREJ compared to BREE suggest fewer non-noisy extractors with lower confidence in BREJ due to the scaled confi-Relationships # out P R F 1 BREE acquired 387 0.99 0.13 0.23 founder-of 28 0.96 0.09 0.17 headquartered 672 0.95 0.21 0.34 affiliation 17516 0.99 0.68 0.80 avg 4651 0.97 0.28 0.39 BRET acquired 4031 1.00 0.61 0.76 founder-of 920 0.97 0.77 0.86 headquartered 3522 0.98 0.59 0.73 
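The discounting behavior discussed above (an extractor's reliability shrinks as seed-negative matches grow relative to seed-positive matches, with weights w_n and w_u as in Table 4) can be illustrated with a hypothetical scoring function. The exact functional form of the paper's Eq. 11 is not reproduced here; this is an assumption-labeled sketch of the idea:

```python
def extractor_confidence(n_pos, n_neg, n_unk, w_n=0.5, w_u=0.0001):
    """Reliability rises with seed-positive matches, is discounted by weighted
    seed-negative and unknown matches (w_n, w_u as in Table 4)."""
    if n_pos == 0:
        return 0.0
    return n_pos / (n_pos + w_n * n_neg + w_u * n_unk)

# BREJ can count an instance twice (entity-pair hit and template hit), which
# scales n_pos up and can lift a non-noisy extractor over the threshold:
print(extractor_confidence(n_pos=4, n_neg=2, n_unk=100))  # BREE-like: ~0.80
print(extractor_confidence(n_pos=8, n_neg=2, n_unk=100))  # BREJ-like: ~0.89
```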
affiliation 22062 0.99 0.74 0.85 avg 7634 0.99 0.68 0.80 BREJ acquired 12278 0.87 0.81 0.84 founder-of 23727 0.80 0.99 0.89 headquartered 38737 0.61 0.91 0.73 affiliation 33203 0.98 0.81 0.89 avg 26986 0.82 0.88 0.84 Table 10: BREX+sim match :Scores when w n ignored dence scores.", "ANNE (higher), ANNLC (lower), AP (higher) and AN (lower) collectively indicate that BRET mostly generates NNHC extractors.", "AP and AN indicate an average of N (cid:0) p , G l q (line", "(i) Figure 3) for positive and negative seeds, respectively for P in the three systems.", "Observe the impact of scaling positive extractions ( AP ) in BREJ that shrink N (cid:0) p , G n q N (cid:0) p , G p q i.e., ANP .", "It facilitates NNLC to boost its confidence, i.e., NNHC in BREJ suggested by AES that results in higher # out and recall (Table 5, BREJ).", "As discussed, Table 5 shows the performance of BREE, BRET and BREJ with the parameter w n (cid:16) 0 .", "5 in computing extractors' confidence cnf p , G q (Eq. 11).", "In other words, config 9 (Ta-ble 5) is combination of both weighted negative and scaled positive extractions.", "However, we also investigate ignoring w n p(cid:16) 1 .", "0 q in order to demonstrate the capability of BREJ with only scaling positives and without weighting negatives.", "In Table 10, observe that BREJ outperformed both BREE and BRET for all the relationships due to higher # out and recall.", "In addition, BREJ scores are comparable to config 9 (Table 5) suggesting that the scaling in BREJ is capable enough to remove the parameter w n .", "However, the combination of both weighting negatives and scaling positives results in the state-of-the-art performance.", "Table 11 lists some of the non-noisy extractors (simplified) learned in different configurations to illustrate boosting extractor confidence cnf p , G q .", "Since, an extractor is a cluster of instances, therefore to simplify, we show one in-33 config 1 : BREE+sim match cnf p , G q config 5 : BRET+sim match cnf p , G q config 9 : BREJ+sim match cnf p , G q config 10 : BREJ+sim asymcc cnf p , G q acquired [X]acquired[Y] 0.98 [X]acquired[Y] 1.00 [X]acquired[Y] 1.00 acquiredby[X],[Y] : 0.93 [X]takeoverof[Y] 0.89 [X]takeoverof[Y] 1.00 [X]takeoverof[Y] 0.98 takeoverof[X]wouldboost[Y]'searnings : 0.90 [X]'splannedacquisitionof[Y] 0.87 [X]'splannedacquisitionof[Y] 1.00 [X]'splannedacquisitionof[Y] 0.98 acquisitionof[X]by[Y] : 0.95 [X]acquiring[Y] 0.75 [X]acquiring[Y] 1.00 [X]acquiring[Y] 0.95 [X]acquiring[Y] 0.95 [X]hasownedpartof[Y] 0.67 [X]hasownedpartof[Y] 1.00 [X]hasownedpartof[Y] 0.88 ownedby[X]'sparent[Y] 0.90 [X]tookcontrolof[Y] 0.49 [X]'sownershipof[Y] 1.00 [X]tookcontrolof[Y] 0.91 [X]takescontrolof[Y] 1.00 [X]'sacquisitionof[Y] 0.35 [X]'sacquisitionof[Y] 1.00 [X]'sacquisitionof[Y] 0.95 acquisitionof[X]wouldreduce[Y]'sshare : 0.90 [X]'smergerwith[Y] 0.35 [X]'smergerwith[Y] 1.00 [X]'smergerwith[Y] 0.94 [X]-[Y]mergerbetween : 0.84 [X]'sbidfor[Y] 0.35 [X]'sbidfor[Y] 1.00 [X]'sbidfor[Y] 0.97 partof[X]which[Y]acquired : 0.83 founder-of [X]founder[Y] 0.68 [X]founder[Y] 1.00 [X]founder[Y] 0.99 founderof[X],[Y] : 0.97 [X]CEOandfounder[Y] 0.15 [X]CEOandfounder[Y] 1.00 [X]CEOandfounder[Y] 0.99 co-founderof[X]'smillennialcenter,[Y] : 0.94 [X]'sco-founder[Y] 0.09 [X]owner[Y] 1.00 [X]owner[Y] 1.00 ownedby[X]cofounder[Y] 0.95 [X]cofounder[Y] 1.00 [X]cofounder[Y] 1.00 Gatesco-founded[X]withschoolfriend[Y] : 0.99 [X]startedby[Y] 1.00 [X]startedby[Y] 1.00 whoco-founded[X]with[Y] : 0.95 [X]wasfoundedby[Y] 1.00 [X]wasfoundedby[Y] 0.99 
toco-found[X]withpartner[Y] : 0.68 [X]begunby[Y] 1.00 [X]begunby[Y] 1.00 [X]wasstartedby[Y],cofounder 0.98 [X]hasestablished[Y] 1.00 [X]hasestablished[Y] 0.99 setup[X]withchildhoodfriend[Y] : 0.96 [X]chiefexecutiveandfounder[Y] 1.00 [X]co-founderandbillionaire[Y] (cid:6) 0.99 [X]co-founderandbillionaire[Y] 0.97 headquartered [X]headquartersin[Y] 0.95 [X]headquartersin[Y] 1.00 [X]headquartersin[Y] 0.98 [X]headquartersin[Y] 0.98 [X]relocateditsheadquartersfrom[Y] 0.94 [X]relocateditsheadquartersfrom[Y] 1.00 [X]relocateditsheadquartersfrom[Y] 0.98 basedat[X]'ssuburban[Y]headquarters : 0.98 [X]headofficein[Y] 0.84 [X]headofficein[Y] 1.00 [X]headofficein[Y] 0.87 headof[X]'soperationsin[Y] : 0.65 [X]basedin[Y] 0.75 [X]basedin[Y] 1.00 [X]basedin[Y] 0.98 branchof[X]companybasedin[Y] 0.98 [X]headquartersbuildingin[Y] 0.67 [X]headquartersbuildingin[Y] 1.00 [X]headquartersbuildingin[Y] 0.94 [X]maincampusin[Y] 0.99 [X]headquartersindowntown[Y] 0.64 [X]headquartersindowntown[Y] 1.00 [X]headquartersindowntown[Y] 0.94 [X]headquartersindowntown[Y] 0.96 [X]branchofficesin[Y] 0.54 [X]branchofficesin[Y] 1.00 [X]branchofficesin[Y] 0.98 [X]'s[Y]headquartersrepresented : 0.98 [X]'scorporatecampusin[Y] 0.51 [X]'scorporatecampusin[Y] 1.00 [X]'scorporatecampusin[Y] 0.99 [X]maincampusin[Y] 0.99 [X]'scorporateofficein[Y] 0.51 [X]'scorporateofficein[Y] 1.00 [X]'scorporateofficein[Y] 0.89 [X],[Y]'scorporate : 0.94 affiliation [X]chiefexecutive[Y] 0.92 [X]chiefexecutive[Y] 1.00 [X]chiefexecutive[Y] 0.97 [X]chiefexecutive[Y]resignedmonday 0.94 [X]secretary[Y] 0.88 [X]secretary[Y] 1.00 [X]secretary[Y] 0.94 workedwith[X]manager[Y] 0.85 [X]president[Y] 0.87 [X]president[Y] 1.00 [X]president[Y] 0.96 [X]votedtoretain[Y]asCEO : 0.98 [X]leader[Y] 0.72 [X]leader[Y] 1.00 [X]leader[Y] 0.85 headof[X],[Y] : 0.99 [X]partyleader[Y] 0.67 [X]partyleader[Y] 1.00 [X]partyleader[Y] 0.87 workingwith[X],[Y]suggested : 1.00 [X]hasappointed[Y] 0.63 [X]executiveeditor[Y] 1.00 [X]hasappointed[Y] 0.81 [X]president[Y]wasfired 0.90 [X]player[Y] 0.38 [X]player[Y] 1.00 [X]player[Y] 0.89 [X]'s[Y]wasfired : 0.43 [X]'ssecretary-general[Y] 0.36 [X]'ssecretary-general[Y] 1.00 [X]'ssecretary-general[Y] 0.93 Chairmanof[X],[Y] : 0.88 [X]hired[Y] 0.21 [X]director[Y] 1.00 [X]hired[Y] 0.56 [X]hired[Y]asmanager : 0.85 Table 11: Subset of the non-noisy extractors (simplified) with their confidence scores cnf p , G q learned in different configurations for each relation.", "stance (mostly populated) from every .", "Each cell in Table 11 represents either a simplified representation of or its confidence.", "We demonstrate how the confidence score of a non-noisy extractor in BREE (config 1 ) is increased in BREJ (config 9 and config 10 ).", "For instance, for the relation acquired , an extractor { [X] acquiring [Y] } is generated by BREE, BRET and BREJ; however, its confidence is boosted from 0 .", "75 in BREE (config 1 ) to 0 .", "95 in BREJ (config 9 ).", "Observe that BRET generates high confidence extractors.", "We also show extractors (marked by : ) learned by BREJ with sim cc (config 10 ) but not by config 1 , config 5 and config 9 .", "In Table 5, we use ordered pairs of typed entities.", "Additionally, we also investigate using entity sets and observe improved recall due to higher # out in both BREE and BREJ, comparing correspondingly Table 12 and 5 ( baseline and config 9 ).", "of both entity-pair-centered and template-centered approaches.", "We have demonstrated that the joint approach scales up positive instances that boosts the confidence of NNLC 
"The experiments showed that the cross-context similarity measures improved recall, and suggested removing a total of four parameters.", "We thank our colleagues Bernt Andrassy, Mark Buckley, Stefan Langer, Ulli Waltinger and Usama Yaseen, and the anonymous reviewers for their review comments.", "This research was supported by the Bundeswirtschaftsministerium (bmwi.de), grant 01MD15010A (Smart Data Web), at Siemens AG, CT Machine Intelligence, Munich, Germany." ]
[ "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "objective", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "objective", "abstain", "other", "other" ]
[ "We present BART, a denoising autoencoder for pretraining sequence-to-sequence models.", "BART is trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text.", "It uses a standard Tranformer-based neural machine translation architecture which, despite its simplicity, can be seen as generalizing BERT (due to the bidirectional encoder), GPT (with the left-to-right decoder), and other recent pretraining schemes.", "We evaluate a number of noising approaches, finding the best performance by both randomly shuffling the order of sentences and using a novel in-filling scheme, where spans of text are replaced with a single mask token.", "BART is particularly effective when fine tuned for text generation but also works well for comprehension tasks.", "It matches the performance of RoBERTa on GLUE and SQuAD, and achieves new state-of-the-art results on a range of abstractive dialogue, question answering, and summarization tasks, with gains of up to 3.5 ROUGE.", "BART also provides a 1.1 BLEU increase over a back-translation system for machine translation, with only target language pretraining.", "We also replicate other pretraining schemes within the BART framework, to understand their effect on end-task performance.", "1 1 Introduction Self-supervised methods have achieved remarkable success in a wide range of NLP tasks (Mikolov et al., 2013; Peters et al., 2018; Devlin et al., 2019; Joshi et al., 2019; Yang et al., 2019; Liu et al., 2019).", "The most successful approaches have been variants of masked language models, which are denoising autoen-coders that are trained to reconstruct text where a random subset of the words has been masked out.", "Recent work has shown gains by improving the distribution of 1 Code and pre-trained models for BART are available at https://github.com/pytorch/fairseq and https://huggingface.co/transformers masked tokens (Joshi et al., 2019), the order in which masked tokens are predicted (Yang et al., 2019), and the available context for replacing masked tokens (Dong et al., 2019).", "However, these methods typically focus on particular types of end tasks (e.g. 
span prediction, generation, etc.), limiting their applicability.", "In this paper, we present BART, which pre-trains a model combining Bidirectional and Auto-Regressive Transformers.", "BART is a denoising autoencoder built with a sequence-to-sequence model that is applicable to a very wide range of end tasks.", "Pretraining has two stages: (1) text is corrupted with an arbitrary noising function, and (2) a sequence-to-sequence model is learned to reconstruct the original text.", "BART uses a standard Transformer-based neural machine translation architecture which, despite its simplicity, can be seen as generalizing BERT (due to the bidirectional encoder), GPT (with the left-to-right decoder), and many other more recent pretraining schemes (see Figure 1).", "A key advantage of this setup is the noising flexibility; arbitrary transformations can be applied to the original text, including changing its length.", "We evaluate a number of noising approaches, finding the best performance by both randomly shuffling the order of the original sentences and using a novel in-filling scheme, where arbitrary-length spans of text (including zero length) are replaced with a single mask token.", "This approach generalizes the original word masking and next sentence prediction objectives in BERT by forcing the model to reason more about overall sentence length and make longer-range transformations to the input.", "BART is particularly effective when fine-tuned for text generation but also works well for comprehension tasks.", "It matches the performance of RoBERTa (Liu et al., 2019) with comparable training resources on GLUE (Wang et al., 2018) and SQuAD (Rajpurkar et al., 2016), and achieves new state-of-the-art results on a range of abstractive dialogue, question answering, and summarization tasks.", "For example, it improves performance by 3.5 ROUGE over previous work on XSum (Narayan et al., 2018).", "BART also opens up new ways of thinking about fine-tuning.", "We present a new scheme for machine translation where a BART model is stacked above a few additional transformer layers.", "These layers are trained to essentially translate the foreign language to noised English, by propagation through BART, thereby using BART as a pre-trained target-side language model.", "Figure 1: (a) BERT: Random tokens are replaced with masks, and the document is encoded bidirectionally.", "Missing tokens are predicted independently, so BERT cannot easily be used for generation.", "(b) GPT: Tokens are predicted auto-regressively, meaning GPT can be used for generation.", "However, words can only condition on leftward context, so it cannot learn bidirectional interactions.", "(c) BART: Inputs to the encoder need not be aligned with decoder outputs, allowing arbitrary noise transformations.", "Here, a document has been corrupted by replacing spans of text with mask symbols.", "The corrupted document (left) is encoded with a bidirectional model, and then the likelihood of the original document (right) is calculated with an autoregressive decoder.", "For fine-tuning, an uncorrupted document is input to both the encoder and decoder, and we use representations from the final hidden state of the decoder.", "This approach improves performance over a strong back-translation MT baseline by 1.1 BLEU on the WMT Romanian-English benchmark.", "To better understand these effects, we also report an ablation analysis that replicates other recently proposed training objectives.", "This study allows us to carefully
control for a number of factors, including data and optimization parameters, which have been shown to be as important for overall performance as the selection of training objectives (Liu et al., 2019).", "We find that BART exhibits the most consistently strong performance across the full range of tasks we consider.", "BART is a denoising autoencoder that maps a corrupted document to the original document it was derived from.", "It is implemented as a sequence-to-sequence model with a bidirectional encoder over corrupted text and a left-to-right autoregressive decoder.", "For pre-training, we optimize the negative log likelihood of the original document.", "BART uses the standard sequence-to-sequence Transformer architecture from Vaswani et al. (2017), except, following GPT, that we modify ReLU activation functions to GeLUs (Hendrycks & Gimpel, 2016) and initialise parameters from N(0, 0.02).", "For our base model, we use 6 layers in the encoder and decoder, and for our large model we use 12 layers in each.", "The architecture is closely related to that used in BERT, with the following differences: (1) each layer of the decoder additionally performs cross-attention over the final hidden layer of the encoder (as in the transformer sequence-to-sequence model); and (2) BERT uses an additional feed-forward network before word-prediction, which BART does not.", "In total, BART contains roughly 10% more parameters than the equivalently sized BERT model.", "BART is trained by corrupting documents and then optimizing a reconstruction loss: the cross-entropy between the decoder's output and the original document.", "Unlike existing denoising autoencoders, which are tailored to specific noising schemes, BART allows us to apply any type of document corruption.", "In the extreme case, where all information about the source is lost, BART is equivalent to a language model.", "We experiment with several previously proposed and novel transformations, but we believe there is significant potential for the development of other new alternatives.", "The transformations we used are summarized below, and examples are shown in Figure 2.", "Token Masking: Following BERT (Devlin et al., 2019), random tokens are sampled and replaced with [MASK] elements.", "Token Deletion: Random tokens are deleted from the input.", "In contrast to token masking, the model must decide which positions are missing inputs.", "Text Infilling: A number of text spans are sampled, with span lengths drawn from a Poisson distribution (λ = 3).", "Each span is replaced with a single [MASK] token.", "0-length spans correspond to the insertion of [MASK] tokens.", "Text infilling is inspired by SpanBERT (Joshi et al., 2019), but SpanBERT samples span lengths from a different (clamped geometric) distribution, and replaces each span with a sequence of [MASK] tokens of exactly the same length.", "Text infilling teaches the model to predict how many tokens are missing from a span.", "Sentence Permutation: A document is divided into sentences based on full stops, and these sentences are shuffled in a random order.", "Document Rotation: A token is chosen uniformly at random, and the document is rotated so that it begins with that token.", "This task trains the model to identify the start of the document.",
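To make the text infilling transformation above concrete, here is a minimal Python sketch; the 30% masking budget, the span-placement policy, and the treatment of overlapping spans are simplifying assumptions rather than the reference implementation.

```python
import numpy as np

def text_infilling(tokens, mask_ratio=0.3, lam=3.0, mask_token="[MASK]"):
    """Replace Poisson-length spans with a single [MASK] each (lambda = 3).

    A span of length 0 corresponds to inserting a [MASK] token. Spans are
    placed uniformly at random and may touch previously masked positions;
    a real implementation would track positions more carefully.
    """
    out = list(tokens)
    budget = int(len(tokens) * mask_ratio)  # total tokens to mask
    masked = 0
    while masked < budget:
        span = int(np.random.poisson(lam))
        pos = np.random.randint(0, len(out) + 1)
        if span == 0:
            out.insert(pos, mask_token)        # 0-length span: pure insertion
        else:
            span = min(span, len(out) - pos)
            if span == 0:
                continue                        # sampled past the end; retry
            out[pos:pos + span] = [mask_token]  # whole span -> one [MASK]
            masked += span
    return out

print(text_infilling("the quick brown fox jumps over the lazy dog".split()))
```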
"This approach is related to the CLS token in BERT; however we add the additional token to the end so that representation for the token in the decoder can attend to decoder states from the complete input (Figure 3a).", "For token classification tasks, such as answer endpoint classification for SQuAD, we feed the complete document into the encoder and decoder, and use the top hidden state of the decoder as a representation for each word.", "This representation is used to classify the token.", "Because BART has an autoregressive decoder, it can be directly fine tuned for sequence generation tasks such as abstractive question answering and summarization.", "In both of these tasks, information is copied from the input but manipulated, which is closely related to the denoising pre-training objective.", "Here, the encoder input is the input sequence, and the decoder generates outputs autoregressively.", "We also explore using BART to improve machine translation decoders for translating into English.", "Previous work Edunov et al. (2019) has shown that models can be improved by incorporating pre-trained encoders, but gains from using pre-trained language models in decoders have been limited.", "We show that it is possible to use the entire BART model (both encoder and decoder) as a single pretrained decoder for machine translation, by adding a new set of encoder parameters that are learned from bitext (see Figure 3b).", "More precisely, we replace BART's encoder embedding layer with a new randomly initialized encoder.", "The model is trained end-to-end, which trains the new encoder to map foreign words into an input that BART can de-noise to English.", "The new encoder can use a separate vocabulary from the original BART model.", "We train the source encoder in two steps, in both cases backpropagating the cross-entropy loss from the output of the BART model.", "In the first step, we freeze most of BART parameters and only update the randomly initialized source encoder, the BART positional embeddings, and the self-attention input projection matrix of BART's encoder first layer.", "In the second step, we train all model parameters for a small number of iterations.", "BART supports a much wider range of noising schemes during pre-training than previous work.", "We compare a range of options using base-size models (6 encoder and 6 decoder layers, with a hidden size of 768), evaluated on a representative subset of the tasks we will consider for the full large scale experiments in 5.", "While many pre-training objectives have been proposed, fair comparisons between these have been dif-ficult to perform, at least in part due to differences in training data, training resources, architectural differ-Pre-trained", "(a) To use BART for classification problems, the same input is fed into the encoder and decoder, and the representation from the final output is used.", "(b) For machine translation, we learn a small additional encoder that replaces the word embeddings in BART.", "The new encoder can use a disjoint vocabulary.", "ences between models, and fine-tuning procedures.", "We re-implement strong pre-training approaches recently proposed for discriminative and generation tasks.", "We aim, as much as possible, to control for differences unrelated to the pre-training objective.", "However, we do make minor changes to the learning rate and usage of layer normalisation in order to improve performance (tuning these separately for each objective).", "For reference, we compare our implementations with published 
"BART supports a much wider range of noising schemes during pre-training than previous work.", "We compare a range of options using base-size models (6 encoder and 6 decoder layers, with a hidden size of 768), evaluated on a representative subset of the tasks we will consider for the full large-scale experiments in Section 5.", "While many pre-training objectives have been proposed, fair comparisons between these have been difficult to perform, at least in part due to differences in training data, training resources, architectural differences between models, and fine-tuning procedures.", "Figure 3: (a) To use BART for classification problems, the same input is fed into the encoder and decoder, and the representation from the final output is used.", "(b) For machine translation, we learn a small additional encoder that replaces the word embeddings in BART.", "The new encoder can use a disjoint vocabulary.", "We re-implement strong pre-training approaches recently proposed for discriminative and generation tasks.", "We aim, as much as possible, to control for differences unrelated to the pre-training objective.", "However, we do make minor changes to the learning rate and usage of layer normalisation in order to improve performance (tuning these separately for each objective).", "For reference, we compare our implementations with published numbers from BERT, which was also trained for 1M steps on a combination of books and Wikipedia data.", "We compare the following approaches:", "Language Model: Similarly to GPT (Radford et al., 2018), we train a left-to-right Transformer language model.", "This model is equivalent to the BART decoder, without cross-attention.", "Permuted Language Model: Based on XLNet (Yang et al., 2019), we sample 1/6 of the tokens, and generate them in a random order autoregressively.", "For consistency with other models, we do not implement the relative positional embeddings or attention across segments from XLNet.", "Masked Language Model: Following BERT (Devlin et al., 2019), we replace 15% of tokens with [MASK] symbols, and train the model to independently predict the original tokens.", "Multitask Masked Language Model: As in UniLM (Dong et al., 2019), we train a Masked Language Model with additional self-attention masks.", "Self-attention masks are chosen randomly with the following proportions: 1/6 left-to-right, 1/6 right-to-left, 1/3 unmasked, and 1/3 with the first 50% of tokens unmasked and a left-to-right mask for the remainder.", "Masked Seq-to-Seq: Inspired by MASS (Song et al., 2019), we mask a span containing 50% of tokens, and train a sequence-to-sequence model to predict the masked tokens.", "For the Permuted LM, Masked LM and Multitask Masked LM, we use two-stream attention (Yang et al., 2019) to efficiently compute likelihoods of the output part of the sequence (using a diagonal self-attention mask on the output to predict words left-to-right).", "We experiment with (1) treating the task as a standard sequence-to-sequence problem, where the source is input to the encoder and the target is the decoder output, or (2) adding the source as a prefix to the target in the decoder, with a loss only on the target part of the sequence.", "We find the former works better for BART models, and the latter for other models.", "To most directly compare our models on their ability to model their fine-tuning objective (the log likelihood of the human text), we report perplexity in Table 1.", "SQuAD (Rajpurkar et al., 2016): an extractive question answering task on Wikipedia paragraphs.", "Answers are text spans extracted from a given document context.", "Similar to BERT (Devlin et al., 2019), we use the concatenated question and context as input to the encoder of BART, and additionally pass them to the decoder.", "The model includes classifiers to predict the start and end indices of each token.", "MNLI (Williams et al., 2017): a bitext classification task to predict whether one sentence entails another.", "The fine-tuned model concatenates the two sentences with an appended EOS token, and passes them to both the BART encoder and decoder.", "In contrast to BERT, the representation of the EOS token is used to classify the sentence relations.", "ELI5 (Fan et al., 2019): a long-form abstractive question answering dataset.", "Models generate answers conditioned on the concatenation of a question and supporting documents.", "XSum (Narayan et al., 2018): a news summarization dataset with highly abstractive summaries.", "ConvAI2 (Dinan et al., 2019): a dialogue response generation task, conditioned on context and a persona.", "CNN/DM (Hermann et al., 2015): a news summarization dataset.", "Summaries here are typically closely related to source sentences.", "Performance of pre-training methods varies significantly across tasks: the effectiveness of pre-training methods is highly dependent on the task.", "For example, a simple
language model achieves the best ELI5 performance, but the worst SQuAD results.", "Token masking is crucial: pre-training objectives based on rotating documents or permuting sentences perform poorly in isolation.", "The successful methods either use token deletion or masking, or self-attention masks.", "Deletion appears to outperform masking on generation tasks.", "Left-to-right pre-training improves generation: the Masked Language Model and the Permuted Language Model perform less well than others on generation, and are the only models we consider that do not include left-to-right auto-regressive language modelling during pre-training.", "Bidirectional encoders are crucial for SQuAD: as noted in previous work (Devlin et al., 2019), a purely left-to-right decoder performs poorly on SQuAD, because future context is crucial in classification decisions.", "However, BART achieves similar performance with only half the number of bidirectional layers.", "The pre-training objective is not the only important factor: our Permuted Language Model performs less well than XLNet (Yang et al., 2019).", "Some of this difference is likely due to not including other architectural improvements, such as relative-position embeddings or segment-level recurrence.", "Pure language models perform best on ELI5: the ELI5 dataset is an outlier, with much higher perplexities than other tasks, and is the only generation task where other models outperform BART.", "A pure language model performs best, suggesting that BART is less effective when the output is only loosely constrained by the input.", "BART achieves the most consistently strong performance.", "With the exception of ELI5, BART models using text infilling perform well on all tasks.", "Recent work has shown that downstream performance can dramatically improve when pre-training is scaled to large batch sizes (Yang et al., 2019; Liu et al., 2019) and corpora.", "To test how well BART performs in this regime, and to create a useful model for downstream tasks, we trained BART using the same scale as the RoBERTa model.", "We pre-train a large model with 12 layers in each of the encoder and decoder, and a hidden size of 1024.", "Following RoBERTa (Liu et al., 2019), we use a batch size of 8000, and train the model for 500,000 steps.", "Documents are tokenized with the same byte-pair encoding as GPT-2 (Radford et al., 2019).", "Based on the results in Section 4, we use a combination of text infilling and sentence permutation.", "We mask 30% of tokens in each document, and permute all sentences.",

| Model | MNLI (m/mm) | SST (Acc) | QQP (Acc) | QNLI (Acc) | STS-B (Acc) | RTE (Acc) | MRPC (Acc) | CoLA (Mcc) |
|---|---|---|---|---|---|---|---|---|
| BERT | 86.6/- | 93.2 | 91.3 | 92.3 | 90.0 | 70.4 | 88.0 | 60.6 |
| UniLM | 87.0/85.9 | 94.5 | - | 92.7 | - | 70.9 | - | 61.1 |
| XLNet | 89.8/- | 95.6 | 91.8 | 93.9 | 91.8 | 83.8 | 89.2 | 63.6 |
| RoBERTa | 90.2/90.2 | 96.4 | 92.2 | 94.7 | 92.4 | 86.6 | 90.9 | 68.0 |
| BART | 89.9/90.1 | 96.6 | 92.5 | 94.9 | 91.2 | 87.0 | 90.4 | 62.8 |

"Table 2: Results for large models on GLUE tasks.", "Although sentence permutation only shows significant additive gains on the CNN/DM summarization dataset, we hypothesised that larger pre-trained models may be better able to learn from this task.", "To help the model better fit the data, we disabled dropout for the final 10% of training steps.", "We use the same pre-training data as Liu et al. (2019), consisting of 160GB of news, books, stories, and web text.",
"Tables 2 and 3 compare the performance of BART with several recent approaches on the well-studied SQuAD and GLUE tasks (Warstadt et al., 2018; Socher et al., 2013; Dolan & Brockett, 2005; Agirre et al., 2007; Williams et al., 2017; Dagan et al., 2006; Levesque et al., 2011).", "The most directly comparable baseline is RoBERTa, which was pre-trained with the same resources, but a different objective.", "Overall, BART performs similarly, with only small differences between the models on most tasks, suggesting that BART's improvements on generation tasks do not come at the expense of classification performance.", "We also experiment with several text generation tasks.", "BART is fine-tuned as a standard sequence-to-sequence model from the input to the output text.", "During fine-tuning we use a label-smoothed cross entropy loss (Pereyra et al., 2017), with the smoothing parameter set to 0.1.", "During generation, we set the beam size to 5, remove duplicated trigrams in beam search, and tune the model with min-len, max-len, and length penalty on the validation set (Fan et al., 2017).",
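A minimal PyTorch sketch of the label-smoothed cross entropy used for fine-tuning, assuming the common formulation that mixes the one-hot target with a uniform distribution over the vocabulary (the exact variant, e.g. padding handling, may differ):

```python
import torch
import torch.nn.functional as F

def label_smoothed_cross_entropy(logits, target, eps=0.1):
    """logits: (batch, vocab); target: (batch,) gold token indices."""
    log_probs = F.log_softmax(logits, dim=-1)
    nll = -log_probs.gather(-1, target.unsqueeze(-1)).squeeze(-1)  # gold term
    smooth = -log_probs.mean(dim=-1)        # uniform term over the vocabulary
    return ((1.0 - eps) * nll + eps * smooth).mean()

logits = torch.randn(4, 1000)               # e.g. batch of 4, vocab of 1000
target = torch.randint(0, 1000, (4,))
print(label_smoothed_cross_entropy(logits, target, eps=0.1))
```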
(2016).", "We use a 6-layer transformer source encoder to map Romanian into a representation that BART is able to de-noise into English, following the approach introduced in 3.4.", "summaries are preferred to those from previous work, but not to human-written reference summaries.", "Experiment results are presented in Table 8.", "We compare our results against a baseline Transformer architecture (Vaswani et al., 2017) with Transformer-large settings (the baseline row).", "We show the performance of both steps of our model in the fixed BART and tuned BART rows.", "For each row we experiment on the original WMT16 Romanian-English augmented with back-translation data.", "We use a beam width of 5 and a length penalty of = 1 .", "Preliminary results suggested that our approach was less effective without back-translation data, and prone to overfittingfuture work should explore additional regularization techniques.", "BART shows large improvements on summarization metrics, of up to 3.5 points over the prior state-of-the-art.", "To understand BART's performance beyond automated metrics, we analyse its generations qualitatively.", "Table 9 shows representative example summaries generated by BART, illustrating its main strengths and ELI5 R1 R2 RL Best Extractive 23.5 3.1 17.5 Language Model 27.8 4.7 23.1 Seq2Seq 28.3 5.1 22.8 Seq2Seq Multitask 28.9 5.4 23.1 BART 30.6 6.2 24.3 Table 7: BART achieves state-of-the-art results on the challenging ELI5 abstractive question answering dataset.", "weaknesses.", "Examples are taken from WikiNews articles published after the creation of the pre-training corpus, to eliminate the possibility of the events described being present in the model's training data.", "Following Narayan et al. (2018), we remove the first sentence of the article prior to summarizing it, so there is no easy extractive summary of the document.", "Unsurprisingly, model output is fluent and grammatical English.", "However, outputs are also highly abstractive, with few copied phrases.", "Summaries are generally factually accurate, and integrate supporting evidence from across the input document with background knowledge (for example, correctly completing names, or inferring that PG&E operates in California).", "In the first example, inferring that fish are protecting reefs from some effects of global warming requires nontrivial inference.", "However, the claim that the work was published in Science is not supported by the source Source Document (abbreviated) BART Summary The researchers examined three types of coral in reefs off the coast of Fiji ...", "and, in general, the main limitation of the model is a tendency to hallucinate unsupported information.", "Early methods for pretraining were based on language models.", "GPT (Radford et al., 2018) only models leftward context, which is problematic for some tasks.", "ELMo (Peters et al., 2018) concatenates left-only and right-only representations, but does not pre-train interactions between these features.", "Radford et al. 
(2019) demonstrated that very large language models can act as unsupervised multitask models.", "BERT (Devlin et al., 2019) introduced masked language modelling, which allows pre-training to learn interactions between left and right context words.", "Recent work has shown that very strong performance can be achieved by training for longer (Liu et al., 2019), by tying parameters across layers (Lan et al., 2019), and by masking spans instead of words (Joshi et al., 2019).", "Predictions are not made auto-regressively, reducing the effectiveness of BERT for generation tasks.", "UniLM (Dong et al., 2019) fine-tunes BERT with an ensemble of masks, some of which allow only leftward context.", "Like BART, this allows UniLM to be used for both generative and discriminative tasks.", "A difference is that UniLM predictions are conditionally independent, whereas BART's are autoregressive.", "BART reduces the mismatch between pre-training and generation tasks, because the decoder is always trained on uncorrupted context.", "MASS (Song et al., 2019) is perhaps the most similar model to BART.", "An input sequence where a contiguous span of tokens is masked is mapped to a sequence consisting of the missing tokens.", "BART differs in masking more but shorter spans from the input, and in always predicting the complete output.", "Table 1 shows that in a controlled comparison, BART's pre-training objective outperforms MASS on five out of six tasks.", "XLNet (Yang et al., 2019) extends BERT by predicting masked tokens auto-regressively in a permuted order.", "This objective allows predictions to condition on both left and right context.", "In contrast, the BART decoder works left-to-right during pre-training, matching the setting during generation.", "Concurrently, Raffel et al. (2019) pre-trained a denoising sequence-to-sequence model named T5, experimenting with a similar range of noising tasks.", "BART uses a slightly different objective, in which spans are masked from the input but the complete output is predicted, to improve the decoder's language modelling ability.", "BART achieves higher performance with similar model sizes, particularly on summarization.", "T5 demonstrates that by scaling to very large model sizes, denoising sequence-to-sequence pre-training can achieve new state-of-the-art results on many tasks.", "Several papers have explored using pre-trained representations to improve machine translation.", "The largest improvements have come from pre-training on both source and target languages (Song et al., 2019; Lample & Conneau, 2019), but this requires pretraining on all languages of interest.", "Other work has shown that encoders can be improved using pre-trained representations (Edunov et al., 2019), but gains in decoders are more limited.", "We show how BART can be used to improve machine translation decoders.", "We introduced BART, a pre-training approach that learns to map corrupted documents to the original.", "BART performs comparably to RoBERTa on discriminative tasks, and achieves new state-of-the-art results on several text generation tasks.", "Future work should explore new methods for corrupting documents for pretraining, perhaps tailoring them to specific end tasks." ]
[ "method", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "abstain", "result", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain" ]
[ "Natural language sentences, being hierarchical, can be represented at di erent levels of granularity, like words, subwords, or characters.", "But most neural machine translation systems require the sentence to be represented as a sequence at a single level of granularity.", "It can be di cult to determine which granularity is better for a particular translation task.", "In this paper, we improve the model by incorporating multiple levels of granularity.", "Specifically, we propose (1) an encoder with character attention which augments the (sub)word-level representation with character-level information; (2) a decoder with multiple attentions that enable the representations from di erent levels of granularity to control the translation cooperatively.", "Experiments on three translation tasks demonstrate that our proposed models outperform the standard word-based model, the subword-based model and a strong character-based model.", "Neural machine translation (NMT) models (Britz et al., 2017) learn to map from source language sentences to target language sentences via continuous-space intermediate representations.", "Since word is usually thought of as the ba-sic unit of language communication (Jackendo , 1992), early NMT systems built these representations starting from the word level (Sutskever et al., 2014; Bahdanau et al., 2015; Cho et al., 2014; Weng et al., 2017).", "Later systems tried using smaller units such as subwords to address the problem of out-of-vocabulary (OOV) words (Sen-nrich et al., 2016; Wu et al., 2016).", "of (sub)words are based purely on their contexts, but the potentially rich information inside the unit itself is seldom explored.", "Taking the Chinese word ( bei-da-shang ) as an example, the three characters in this word are a passive voice marker, hit and wound, respectively.", "The meaning of the whole word, to be wounded, is fairly compositional.", "But this compositionality is ignored if the whole word is treated as a single unit.", "Secondly, obtaining the word or sub-word boundaries can be non-trivial.", "For languages like Chinese and Japanese, a word segmentation step is needed, which must usually be trained on labeled data.", "For languages like English and German, word boundaries are easy to detect, but subword boundaries need to be learned by methods like BPE.", "In both cases, the segmentation model is trained only in monolingual data, which may result in units that are not suitable for translation.", "On the other hand, there have been multiple e orts to build models operating purely at the character level (Ling et al., 2015a; Yang et al., 2016; Lee et al., 2017).", "But splitting this finely can increase potential ambiguities.", "For example, the Chinese word ( hong-cha ) means black tea, but the two characters means red and tea, respectively.", "It shows that modeling the character sequence alone may not be able to fully utilize the information at the word or sub-word level, which may also lead to an inaccurate representation.", "A further problem is that character sequences are longer, making them more costly to process with a recurrent neural network model (RNN).", "While both word-level and character-level information can be helpful for generating better representations, current research which tries to exploit both word-level and character-level information only composed the word-level representation by character embeddings with the word boundary information (Ling et al., 2015b; Costa-juss`a and 1284 Fonollosa, 2016) or replaces the word 
representation with its inside characters when encountering out-of-vocabulary words (Luong and Manning, 2016; Wu et al., 2016).", "In this paper, we propose a novel encoder-decoder model that makes use of both character and word information.", "More specifically, we augment the standard encoder to attend to individual characters to generate better source word representations (Section 3.1).", "We also augment the decoder with a second attention that attends to the source-side characters to generate better translations (Section 3.2).", "To demonstrate the effectiveness of the proposed model, we carry out experiments on three translation tasks: Chinese-English, English-Chinese and English-German.", "Our experiments show that: (1) the encoder with character attention achieves significant improvements over the standard word-based attention-based NMT system and a strong character-based NMT system; (2) incorporating source character information into the decoder by our multi-scale attention mechanism yields a further improvement, and (3) our modifications also improve a subword-based NMT model.", "To the best of our knowledge, this is the first work that uses the source-side character information for all the (sub)words in the sentence to enhance a (sub)word-based NMT model in both the encoder and decoder.", "Most NMT systems follow the encoder-decoder framework with attention mechanism proposed by Bahdanau et al. (2015).", "Given a source sentence x = x_1 ... x_l ... x_L and a target sentence y = y_1 ... y_j ... y_J, we aim to directly model the translation probability: $P(y \mid x; \theta) = \prod_{j=1}^{J} P(y_j \mid y_{<j}, x; \theta)$, where θ is a set of parameters and y_{<j} is the sequence of previously generated target words.", "Here, we briefly describe the underlying framework of the encoder-decoder NMT system.", "Following Bahdanau et al. (2015), we use a bidirectional RNN with gated recurrent units (GRUs) (Cho et al., 2014) to encode the source sentence:", "$\overrightarrow{h}_l = \mathrm{GRU}(\overrightarrow{h}_{l-1}, s_l; \overrightarrow{\theta}), \qquad \overleftarrow{h}_l = \mathrm{GRU}(\overleftarrow{h}_{l+1}, s_l; \overleftarrow{\theta})$ (1)", "where s_l is the l-th source word's embedding, GRU is a gated recurrent unit, and $\overrightarrow{\theta}$ and $\overleftarrow{\theta}$ are the parameters of the forward and backward GRU, respectively; see Cho et al. (2014) for a definition.", "Each annotation concatenates the forward and backward hidden states, $h_l = [\overrightarrow{h}_l; \overleftarrow{h}_l]$.", "The whole sequence of these annotations is used by the decoder.", "The decoder is a forward RNN with GRUs predicting the translation y word by word.", "The probability of generating the j-th word y_j is: $P(y_j \mid y_{<j}, x; \theta) = \operatorname{softmax}([t_{j-1}; d_j; c_j])$, where t_{j-1} is the word embedding of the (j-1)-th target word, d_j is the decoder's hidden state at time j, and c_j is the context vector at time j.", "The state d_j is computed as $d_j = \mathrm{GRU}(d_{j-1}, [t_{j-1}; c_j]; \theta_d)$.", "The attention mechanism computes the context vector c_j as a weighted sum of the source annotations: $c_j = \sum_{l=1}^{L} \alpha_{j,l} h_l$ (2), where the attention weight $\alpha_{j,l}$ is $\alpha_{j,l} = \frac{\exp(e_{j,l})}{\sum_{l'=1}^{L} \exp(e_{j,l'})}$ (3) and $e_{j,l} = v_a^{\top} \tanh(W_a d_{j-1} + U_a h_l)$ (4), where v_a, W_a and U_a are the weight matrices of the attention model, and e_{j,l} scores how well d_{j-1} and h_l match.", "With this strategy, the decoder can attend to the source annotations that are most relevant at a given time.", "Figure 1: Forward encoder with character attention at time step l. The encoder alternates between reading word embeddings and character context vectors; $c^I_l$ and $c^O_l$ denote the inside and outside character-level context vectors of the l-th word, respectively.",
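As a concrete reference for Eqs. (2)-(4), here is a small numpy sketch of one attention step; the shapes and parameter names mirror the equations but are otherwise illustrative.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_context(d_prev, H, W_a, U_a, v_a):
    """One step of Eqs. (2)-(4).

    d_prev: previous decoder state d_{j-1}, shape (dec_dim,)
    H:      source annotations h_1..h_L, shape (L, enc_dim)
    Returns the context vector c_j, shape (enc_dim,).
    """
    scores = np.tanh(d_prev @ W_a.T + H @ U_a.T) @ v_a  # e_{j,l}, shape (L,)
    alpha = softmax(scores)                             # Eq. (3)
    return alpha @ H                                    # Eq. (2)

# Tiny smoke test with random parameters:
L, enc_dim, dec_dim, attn_dim = 5, 8, 6, 7
rng = np.random.default_rng(0)
c = attention_context(rng.normal(size=dec_dim),
                      rng.normal(size=(L, enc_dim)),
                      rng.normal(size=(attn_dim, dec_dim)),
                      rng.normal(size=(attn_dim, enc_dim)),
                      rng.normal(size=attn_dim))
print(c.shape)  # (8,)
```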
"In this section, we present models which make use of both character-level and word-level information in the encoder-decoder framework.", "The encoder maps the source sentence to a sequence of representations, which is then used by the attention mechanism.", "The standard encoder operates purely on (sub)words or characters.", "However, we want to encode both, since both levels can be linguistically significant (Xiong et al., 2017).", "To incorporate multiple levels of granularity, we extend the encoder with two character-level attentions.", "For each source word, the characters of the whole sentence can be divided into two parts: those inside the word and those outside the word.", "The inside characters contain information about the internal structure of the word.", "The outside characters may provide information about patterns that cross word boundaries.", "In order to distinguish the influence of the two, we use two separate attentions, one for inside characters and one for outside characters.", "Note that we compute attention directly from the character embedding sequence instead of using an additional RNN layer.", "This helps to avoid the vanishing gradient problem that would arise from increasing the sequence length, and also keeps the computation cost at a low level.", "Figure 1 illustrates the forward encoder with character attentions.", "We write the character embeddings as o = o_1 ... o_k ... o_K.", "Let p_l and q_l be the starting and ending character positions, respectively, of word x_l.", "Then o_{p_l} ... o_{q_l} are the inside characters of word x_l; o_1 ... o_{p_l - 1} and o_{q_l + 1} ... o_K are the outside characters of word x_l.", "The encoder is an RNN that alternates between reading (sub)word embeddings and character-level information.", "At each time step, we first read the word embedding: $h'_l = \mathrm{GRU}(h_{l-1}, s_l; \theta')$ (5).", "Then we use the attention mechanisms to compute character context vectors; for the inside characters: $c^I_l = \sum_{m=p_l}^{q_l} \alpha^I_{l,m} o_m$, with $\alpha^I_{l,m} = \frac{\exp(e_{l,m})}{\sum_{m'=p_l}^{q_l} \exp(e_{l,m'})}$ and $e_{l,m} = v_I^{\top} \tanh(W_I h'_l + U_I o_m)$.", "The outside character context vector $c^O_l$ is calculated in a similar way, using a different set of parameters, i.e. $W_O$, $U_O$, $v_O$ instead of $W_I$, $U_I$, $v_I$.", "The inside and outside character context vectors are combined by a feed-forward layer and fed into the encoder RNN, forming the character-enhanced word representation h_l: $c^C_l = \tanh(W^I c^I_l + W^O c^O_l)$ and $h_l = \mathrm{GRU}(h'_l, c^C_l; \theta)$.", "Note that this GRU does not share parameters with the GRU in (5).", "The backward hidden states are calculated in a similar manner.",
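The inside/outside character attentions can be sketched on top of the same scoring pattern; this numpy snippet is illustrative only (it assumes a given word state h'_l and raw character embeddings, assumes both character sets are non-empty, and reuses the scoring form of Eqs. (2)-(4)).

```python
import numpy as np

def attend(query, chars, W, U, v):
    """Score characters against a query state and return the weighted sum."""
    scores = np.tanh(query @ W.T + chars @ U.T) @ v
    e = np.exp(scores - scores.max())
    return (e / e.sum()) @ chars

def char_context_vectors(h_word, O, p, q, params_in, params_out):
    """Inside/outside character context vectors for word x_l.

    O: (K, char_dim) character embeddings of the whole sentence;
    O[p:q+1] are the characters inside word x_l, the rest are outside.
    params_in / params_out: (W, U, v) triples for the two attentions.
    """
    inside = O[p:q + 1]
    outside = np.concatenate([O[:p], O[q + 1:]], axis=0)
    c_I = attend(h_word, inside, *params_in)    # c^I_l
    c_O = attend(h_word, outside, *params_out)  # c^O_l
    return c_I, c_O
```

The two vectors would then be combined as c^C_l = tanh(W^I c^I_l + W^O c^O_l) and fed to the second GRU, as in the equations above.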
"In order to fully exploit the character-level information, we also make extensions to the decoder, so that the character-level information can be taken into account while generating the translation.", "We propose a multi-scale attention mechanism to get the relevant information for the current decoding step from both word-level and character-level representations.", "This attention mechanism is built from the high-level to the low-level representation, in order to enhance the high-level representation with fine-grained internal structure and context.", "The multi-scale attention mechanism is built (as shown in Figure 2) from word-level to character-level.", "Figure 2: Illustration of the decoder with our multi-scale attention mechanism.", "First, we get the word-level information.", "The context vector $c^w_j$ is calculated following the standard attention model (Eqs. 2-4), and the hidden state is updated: $\tilde{d}_j = \mathrm{GRU}(d_{j-1}, [t_{j-1}; c^w_j]; \theta_d)$.", "Then we attend to the character-level representation, which provides more information about the word's internal structure.", "The context vector $c^c_j$ is calculated based on the updated hidden state above: $c^c_j = \sum_{k=1}^{K} \alpha^c_{j,k} o_k$, with $\alpha^c_{j,k} = \frac{\exp(e_{j,k})}{\sum_{k'=1}^{K} \exp(e_{j,k'})}$ and $e_{j,k} = v_c^{\top} \tanh(W_c \tilde{d}_j + U_c o_k)$.", "The word-level context vector $c^w_j$ and the character-level context vector $c^c_j$ are concatenated: $c_j = [c^w_j; c^c_j]$.", "And the final context vector c_j is used to help predict the next target word: $P(y_j \mid y_{<j}, x; \theta) = \operatorname{softmax}([t_{j-1}; d_j; c_j])$, where $d_j = \mathrm{GRU}(\tilde{d}_j, c^c_j; \theta_d)$.", "With this mechanism, both the (sub)word-level and character-level representations can be used to predict the next translation, which helps to ensure a more robust and reasonable choice.", "It may also help to alleviate the under-translation problem, because the character information can be a complement to the word.",
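A step-by-step sketch of one multi-scale decoding step, building on the `attend` helper above; the toy GRU (biases omitted) and the parameter-dictionary layout are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(h, x, P):
    """One toy GRU step; P holds matrices Wz, Uz, Wr, Ur, Wn, Un."""
    z = sigmoid(P["Wz"] @ x + P["Uz"] @ h)
    r = sigmoid(P["Wr"] @ x + P["Ur"] @ h)
    n = np.tanh(P["Wn"] @ x + P["Un"] @ (r * h))
    return (1.0 - z) * n + z * h

def multi_scale_step(d_prev, t_prev, H, O, params):
    """Word-level attention, state update, then character-level attention.

    H: word annotations (L, enc_dim); O: character embeddings (K, char_dim);
    params["word_attn"] / params["char_attn"] are (W, U, v) triples.
    """
    c_w = attend(d_prev, H, *params["word_attn"])         # word-level context
    d_tilde = gru_step(d_prev, np.concatenate([t_prev, c_w]), params["gru1"])
    c_c = attend(d_tilde, O, *params["char_attn"])        # character context
    d_j = gru_step(d_tilde, c_c, params["gru2"])          # final state
    c_j = np.concatenate([c_w, c_c])                      # c_j = [c^w; c^c]
    return d_j, c_j
```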
(2016).", "BPE: a subword level NMT model, which processes the source side sentence by Byte Pair Encoding (BPE) (Sennrich et al., 2016).", "We used the dl4mt implementation of the attentional model, 4 reimplementing the above models.", "Training For Zh En, we filter out the sentence pairs whose source or target side contain more than 50 words.", "We use a shortlist of the 30,000 most frequent words in each language to train our models, covering approximately 98.2% and 99.5% of the Chinese and English tokens, respectively.", "The word embedding dimension is 512.", "The hidden layer sizes of both forward and backward sequential encoder are 1024.", "For fair comparison, we also set the character embedding size to 512, except for the CNN-Char system.", "For CNN-Char, we follow the standard setting of the original paper (Costa-juss`a and Fonollosa, 2016).", "For En-De, we build the baseline system using joint BPE segmentation (Sennrich et al., 2017).", "The number of joint BPE operations is 90,000.", "We use the total BPE vocabulary for each side.", "We use Adadelta (Zeiler, 2012) for optimization with a mini-batch size of 32 for Zh En and 50 for En-De.", "Decoding and evaluation We use beam search with length-normalization to approximately find the most likely translation.", "We set beam width to 5 for Zh En and 12 for En-De.", "The translations are evaluated by BLEU (Papineni et al., 2002).", "We use the multi-bleu script for Zh En, 5 and the multi-bleu-detok script for En-De.", "6 4 https://github.com/nyu-dl/dl4mt-tutorial 5 https://github.com/moses-smt/mosesdecoder/ blob/master/scripts/generic/multi-bleu.perl 6 https://github.com/EdinburghNLP/nematus/ blob/master/data/multi-bleu-detok.perl 4.4 Results: Encoder with character attention This set of experiments evaluates the e ectiveness of our proposed character enhanced encoder.", "In Table 1, we first compare the encoder with character attention (Char-att) with the baseline word-based model.", "The result shows that our extension of the encoder can obtain significantly better performance ( + 1.58 BLEU).", "Then, in order to investigate whether the improvement comes from the extra parameters in the character layer, we compare our model to a word embedding enhanced encoder.", "When the word embedding enhanced encoder encodes a word, it attends to the word's embedding and other word embedding in the sentence instead of attending to the word's inside and outside character embeddings.", "The results show that the word embedding enhanced encoder (Word-att) only gets a 0.5 BLEU improvement than the baseline, while our model is significantly better ( + 1.58 BLEU).", "This shows that the benefit comes from the augmented character-level information which help the word-based encoder to learn a better source-side representation.", "Finally, we compare our character enhanced model with several types of systems including a strong character-based model proposed by Costa-juss`a and Fonollosa (2016) and a mixed word / character model proposed by Wu et al. (2016).", "In Table 2, rows 2 and 2 0 confirm the find-ing of Yang et al. (2016) that the traditional RNN model performs less well when the input is a sequence of characters.", "Rows 4 and 4 0 indicate that Wu et al. 
"This set of experiments evaluates the effectiveness of our proposed character-enhanced encoder.", "In Table 1, we first compare the encoder with character attention (Char-att) with the baseline word-based model.", "The result shows that our extension of the encoder can obtain significantly better performance (+1.58 BLEU).", "Then, in order to investigate whether the improvement comes from the extra parameters in the character layer, we compare our model to a word embedding enhanced encoder.", "When the word embedding enhanced encoder encodes a word, it attends to the word's embedding and the other word embeddings in the sentence instead of attending to the word's inside and outside character embeddings.", "The results show that the word embedding enhanced encoder (Word-att) only gets a 0.5 BLEU improvement over the baseline, while our model is significantly better (+1.58 BLEU).", "This shows that the benefit comes from the augmented character-level information, which helps the word-based encoder to learn a better source-side representation.", "Finally, we compare our character enhanced model with several types of systems, including a strong character-based model proposed by Costa-jussà and Fonollosa (2016) and a mixed word/character model proposed by Wu et al. (2016).", "In Table 2, rows 2 and 2′ confirm the finding of Yang et al. (2016) that the traditional RNN model performs less well when the input is a sequence of characters.", "Rows 4 and 4′ indicate that Wu et al. (2016)'s scheme to combine words and characters is effective for machine translation.", "Our model (row 5) outperforms other models on the Zh-En task, but only outperforms the word-based model on En-Zh.", "The results may suggest that the CNN and RNN methods are also strong in building the source representation.", "Rows 6 and 6′ in Table 2 verify that our multi-scale attention mechanism can obtain better results than the baseline systems.", "Rows 7 and 7′ in Table 2 show that our proposed multi-scale attention mechanism further improves the performance of our encoder with character attention, yielding a significant improvement over the standard word-based model on both the Zh-En task (+2.02 vs. row 1) and the En-Zh translation task (+2.58 vs. row 1′).", "Compared to the CNN-Char model, our model still gets +1.97 and +1.46 BLEU improvements on Zh-En and En-Zh, respectively.", "Compared to the mixed word/character model proposed by Wu et al. (2016), we find that our best model gives a better result, demonstrating the benefits of exploiting the character-level information during decoding.", "Currently, subword-level NMT models are widely used for achieving open-vocabulary translation.", "Sennrich et al. (2016) introduced a subword-level NMT model using subword-level segmentation based on the byte pair encoding (BPE) algorithm.", "In this section, we investigate the effectiveness of our character enhanced model on top of the BPE model.", "Table 3 shows the results on the Zh-En and En-Zh translation tasks.",
representation from characters, also cannot capture ( lingtu ).", "However, our model correctly translates the word as occupied territories. (The phrase by Israel in the reference was inserted by the translator.) The word ( dongxifang , east and west) and ( lengzhan , cold war) are deleted by the baseline model, and even the CNN-Char model translates ( dongxifang ) incorrectly.", "By contrast, our model can make use of both words and characters to translate the word ( dongxifang ) reasonably well as eastern and western. 5 Related Work Many recent studies have focused on using character-level information in neural machine translation systems.", "These e orts could be roughly divided into the following two categories.", "The first line of research attempted to build neural machine translation models purely on characters without explicit segmentation.", "Lee et al. (2017) proposed to directly learn the segmentation from characters by using convolution and pooling layers.", "Yang et al. (2016) composed the high-level representation by the character embedding and its surrounding character-level context with a bidirectional and concatenated row convolution network.", "Di erent from their models, our model aims to use characters to enhance words representation instead of depending on characters solely; our model is also much simpler.", "The other line of research attempted to combine character-level information with word-level information in neural machine translation models, which is more similar with our work.", "Ling et al. (2015a) employed a bidirectional LSTM to compose character embeddings to form the word-level information with the help of word boundary information.", "Costa-juss`a and Fonollosa (2016) replaced the word-lookup table with a convolutional network followed by a highway network (Srivas-tava et al., 2015), which learned the word-level representation by its constituent characters.", "Zhao and Zhang (2016) designed a decimator for their encoder, which e ectively uses a RNN to compute a word representation from the characters of the word.", "These approaches only consider word boundary information and ignore the word-level meaning information itself.", "In contrast, our model can make use of both character-level and word-level information.", "Luong and Manning (2016) proposed a hybrid scheme that consults character-level information whenever the model encounters an OOV word.", "Wu et al. 
"Wu et al. (2016) converted the OOV words in the word-based model into sequences of their constituent characters.", "These methods only focus on dealing with OOV words by augmenting the character-level information.", "In our work, we augment the character information for all the words.", "In this paper, we have investigated the potential of using character-level information in word-based and subword-based NMT models by proposing a novel character-aware encoder-decoder framework.", "First, we extended the encoder with a character attention mechanism for learning better source-side representations.", "Then, we incorporated information about source-side characters into the decoder with a multi-scale attention, so that the character-level information can cooperate with the word-level information to better control the translation.", "The experiments have demonstrated the effectiveness of our models.", "Our analysis showed that both OOV words and frequent words benefit from the character-level information.", "Our current work only uses the character-level information on the source side.", "For future work, it will be interesting to make use of finer-grained information on the target side as well.", "The authors would like to thank the anonymous reviewers for their valuable comments.", "This work is supported by the National Science Foundation of China (No. 61772261 and 61672277) and the Jiangsu Provincial Research Foundation for Basic Research (No. BK20170074).", "Part of Huadong Chen's contribution was made while visiting the University of Notre Dame.", "His visit was supported by the joint PhD program of the China Scholarship Council." ]
[ "abstain", "abstain", "abstain", "result", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "objective", "objective", "objective", "objective", "result", "objective", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "method", "objective", "result", "method", "abstain", "other", "other", "other", "other" ]