diff --git "a/abstractive/val.jsonl" "b/abstractive/val.jsonl" new file mode 100644--- /dev/null +++ "b/abstractive/val.jsonl" @@ -0,0 +1,50 @@ +{"id": "2021.eacl-main.84", "document": "Identifying named entities in written text is an essential component of the text processing pipeline used in applications such as text editors to gain a better understanding of the semantics of the text . However , the typical experimental setup for evaluating Named Entity Recognition ( NER ) systems is not directly applicable to systems that process text in real time as the text is being typed . Evaluation is performed on a sentence level assuming the end-user is willing to wait until the entire sentence is typed for entities to be identified and further linked to identifiers or coreferenced . We introduce a novel experimental setup for NER systems for applications where decisions about named entity boundaries need to be performed in an online fashion . We study how state-of-the-art methods perform under this setup in multiple languages and propose adaptations to these models to suit this new experimental setup . Experimental results show that the best systems that are evaluated on each token after its typed , reach performance within 1 - 5 F 1 points of systems that are evaluated at the end of the sentence . These show that entity recognition can be performed in this setup and open up the development of other NLP tools in a similar setup . Automatically identifying named entities such as organizations , people and locations is a key component in processing written text as it aids with understanding the semantics of the text . Named entity recognition is used as a pre-processing step to subsequent tasks such as linking named entities to concepts in a knowledge graph , identifying the salience of an entity to the text , identifying coreferential mentions , computing sentiment towards an entity , in question answering or for extracting relations . Figure 1 : An example of the proposed task and evaluation setup . After the word ' Foreign ' is typed , the model immediately predicts an NER label for this word , only using left context ( ' A spokesman for ' ) and the word itself . The prediction is then compared against the gold label to compute token-level F 1 score . This token 's prediction will not be changed , even if the model 's internal prediction for it can be revised later as more tokens are typed . Identifying named entities as they are typed benefits any system that processes text on the fly . Examples of such applications include : a ) News editorswhere named entities can be highlighted , suggested ( auto-completion ) , co-referenced or linked as the editor is typing ; b ) auto-correct -where named entities that are just typed are less likely to need correction as they may come from a different language or be out-of-vocabulary ( OOV ) ; c ) simultaneous machine translation -where translation of OOV named entities requires different approaches ; d ) live speech-to-text ( e.g. , TV shows ) -where named entities are more likely to be OOV , hence the transcription should focus more on the phonetic transcription rather than on n-gram language modelling . This paper introduces a novel experimental setup of Named Entity Recognition systems illustrated in Figure 1 . In this setup , inference about the span and type of named entities is performed for each token , immediately after it was typed . The sentence level tag sequence is composed through appending all individual token predictions as they were made . 
The current named entity recognition systems that are trained and evaluated to predict full sentences are likely to under-perform in this experimental setup as they : expect that right context is available , are faced with unseen types of inputs in the form of truncated sentences and can not reconcile the final sentence-level tag sequence across the entire sentence as the result may not be a valid sequence . The goal of this study is to present a comprehensive analysis of the task of NER in the as-you-type scenario , with the following contributions : a ) A novel experimental setup for conducting named entity recognition experiments , denoted as the as-you-type scenario ; b ) Experiments with state-of-the-art sentence-level approaches to named entity recognition in the as-you-type setup across three languages , which indicate a 1 - 5 F 1 points decrease compared to sentence-level inference ; c ) Tailored methods for as-you-type entity recognition models which reduce the gap to entire sentence-level inference by 9 - 23 % compared to regular approaches ; d ) An extensive analysis of existing data sets in the context of this task and model error analysis , which highlight future modelling opportunities . This paper introduced and motivated the as-youtype experimental setup for the popular NER task . We presented results across three different languages , which show the extent to which sentencelevel state-of-the-art models degrade in this setup . Through insights gained from data analysis , we proposed modelling improvements to further reduce the gap to the regular sentence-level performance . Our error analysis highlights the cases that pose challenges to the as-you-type scenario and uncovers insights into way to further improve the modelling of this task . This setup is tailored for end-applications such as text editors , speech-to-text , machine translation , auto-completion , or auto-correct . For text editors , the editor would be able to receive suggestions for entities inline , right after they type the entity , which can further be coupled with a linking algorithm . This would increase the user experience and efficiency of the editor , as they can make selections about entities inline ( similar to a phone 's autocorrect ) , rather than having to go back over the entire sentence after it was completed . Another avenue of future work would be to couple the NER as-you-type with ASR data and using methods that adapt NER to noisy ASR input ( Benton and Dredze , 2015 ) for building an end-to-end live speech to entities system .", "challenge": "The current experimental setup and evaluation for named entity recognition systems run on complete sentences limiting the real-time applications that process texts as being typed.", "approach": "They propose an experimental setup where systems are evaluated in an online fashion and perform analysis of state-of-the-art methods in multiple languages.", "outcome": "They find that models degrade performance by 1 to 5 F1 points in their setup and the proposed tailored methods can reduce the gaps."} +{"id": "D14-1082", "document": "Almost all current dependency parsers classify based on millions of sparse indicator features . Not only do these features generalize poorly , but the cost of feature computation restricts parsing speed significantly . In this work , we propose a novel way of learning a neural network classifier for use in a greedy , transition-based dependency parser . 
Because this classifier learns and uses just a small number of dense features , it can work very fast , while achieving an about 2 % improvement in unlabeled and labeled attachment scores on both English and Chinese datasets . Concretely , our parser is able to parse more than 1000 sentences per second at 92.2 % unlabeled attachment score on the English Penn Treebank . In recent years , enormous parsing success has been achieved by the use of feature-based discriminative dependency parsers ( K\u00fcbler et al . , 2009 ) . In particular , for practical applications , the speed of the subclass of transition-based dependency parsers has been very appealing . However , these parsers are not perfect . First , from a statistical perspective , these parsers suffer from the use of millions of mainly poorly estimated feature weights . While in aggregate both lexicalized features and higher-order interaction term features are very important in improving the performance of these systems , nevertheless , there is insufficient data to correctly weight most such features . For this reason , techniques for introducing higher-support features such as word class features have also been very successful in improving parsing performance ( Koo et al . , 2008 ) . Second , almost all existing parsers rely on a manually designed set of feature templates , which require a lot of expertise and are usually incomplete . Third , the use of many feature templates cause a less studied problem : in modern dependency parsers , most of the runtime is consumed not by the core parsing algorithm but in the feature extraction step ( He et al . , 2013 ) . For instance , Bohnet ( 2010 ) reports that his baseline parser spends 99 % of its time doing feature extraction , despite that being done in standard efficient ways . In this work , we address all of these problems by using dense features in place of the sparse indicator features . This is inspired by the recent success of distributed word representations in many NLP tasks , e.g. , POS tagging ( Collobert et al . , 2011 ) , machine translation ( Devlin et al . , 2014 ) , and constituency parsing ( Socher et al . , 2013 ) . Low-dimensional , dense word embeddings can effectively alleviate sparsity by sharing statistical strength between similar words , and can provide us a good starting point to construct features of words and their interactions . Nevertheless , there remain challenging problems of how to encode all the available information from the configuration and how to model higher-order features based on the dense representations . In this paper , we train a neural network classifier to make parsing decisions within a transition-based dependency parser . The neural network learns compact dense vector representations of words , part-of-speech ( POS ) tags , and dependency labels . This results in a fast , compact classifier , which uses only 200 learned dense features while yielding good gains in parsing accuracy and speed on two languages ( English and Chinese ) and two different dependency representations ( CoNLL and Stanford dependencies ) . The main contributions of this work are : ( i ) showing the usefulness of dense representations that are learned within the parsing task , ( ii ) developing a neural network architecture that gives good accuracy and speed , and ( iii ) introducing a novel acti-vation function for the neural network that better captures higher-order interaction features . We have presented a novel dependency parser using neural networks . 
Experimental evaluations show that our parser outperforms other greedy parsers using sparse indicator features in both accuracy and speed . This is achieved by representing all words , POS tags and arc labels as dense vectors , and modeling their interactions through a novel cube activation function . Our model only relies on dense features , and is able to automatically learn the most useful feature conjunctions for making predictions . An interesting line of future work is to combine our neural network based classifier with searchbased models to further improve accuracy . Also , there is still room for improvement in our architecture , such as better capturing word conjunctions , or adding richer features ( e.g. , distance , valency ) .", "challenge": "Existing dependency parsers are based on millions of parse indicator features which have poor generalization ability and restrictions on parsing speed.", "approach": "They propose to represent words, part-of-speech tags, and dependency labels and train a neural network classifier with an activation function which performs transition-based dependency parsing.", "outcome": "The proposed parser can parse more than 1000 sentences per second on the English Penn Treebank and also achieves both speed and accuracy in Chinese."} +{"id": "N07-1040", "document": "Scientific papers revolve around citations , and for many discourse level tasks one needs to know whose work is being talked about at any point in the discourse . In this paper , we introduce the scientific attribution task , which links different linguistic expressions to citations . We discuss the suitability of different evaluation metrics and evaluate our classification approach to deciding attribution both intrinsically and in an extrinsic evaluation where information about scientific attribution is shown to improve performance on Argumentative Zoning , a rhetorical classification task . In the recent past , there has been a focus on information management from scientific literature . In the genetics domain , for instance , information extraction of genes and gene-protein interactions helps geneticists scan large amounts of information ( e.g. , as explored in the TREC Genomics track ( Hersh et al . , 2004 ) ) . Elsewhere , citation indexes ( Garfield , 1979 ) provide bibliometric data about the frequency with which particular papers are cited . The success of citation indexers such as CiteSeer ( Giles et al . , 1998 ) and Google Scholar relies on the robust detection of formal citations in arbitrary text . In bibliographic information retrieval , anchor text , i.e. , the context of a citation can be used to characterise ( index ) the cited paper using terms outside of that paper ( Bradshaw , 2003 ) ; O'Connor ( 1982 ) presents an approach for identifying the area around citations where the text focuses on that citation . And automatic citation classification ( Nanba and Okumura , 1999 ; Teufel et al . , 2006 ) determines the function that a citation plays in the discourse . For such information access and retrieval purposes , the relevance of a citation within a paper is often crucial . One can estimate how important a citation is by simply counting how often it occurs in the paper . But as Kim and Webber ( 2006 ) argue , this ignores many expressions in text which refer to the cited author 's work but which are not as easy to recognise as citations . 
They address the resolution of instances of the third person personal pronoun \" they \" in astronomy papers : it can either refer to a citation or to some entities that are part of research within the paper ( e.g. , planets or galaxies ) . Several applications should profit in principle from detecting connections between referring expressions and citations . For instance , in citation function classification , the task is to find out if a citation is described as flawed or as useful . Consider : Most computational models of discourse are based primarily on an analysis of the intentions of the speakers [ Cohen and Perrault , 1979 ] [ Allen and Perrault , 1980 ] [ Grosz and Sidner , 1986 ] WEAK . The speaker will form intentions based on his goals and then act on these intentions , producing utterances . The hearer will then reconstruct a model of the speaker 's intentions upon hearing the utterance . This approach has many strong points , but does not provide a very satisfactory account of the adherence to discourse conventions in dialogue . The three citations above are described as flawed ( detectable by \" does not provide a very satisfactory account \" ) , and thus receive the label Weak . However , in order to detect this , one must first realise that \" this approach \" refers to the three cited papers . A contrasting hypothesis could be that the citations are used ( thus deserving the label Use ; the cue phrase \" based on \" might make us think so ( as in the context \" our work is based on \" ) . This , however , can be ruled out if we know that \" the speaker \" is not referring to some aspect of the current paper . We have described a new reference task -deciding scientific attribution , and demonstrated high human agreement ( \u03b1 > 0.8 ) on this task . Our machine learning solution using shallow features achieves an agreement of \u03b1 M = 0.68 with the human gold standard , increasing to \u03b1 M = 0.71 if only pronouns need to be resolved . We have also demonstrated that information about scientific attribution improves results for a discourse classification task ( Argumentative Zoning ) . We believe that similar improvements can be achieved on other discourse annotation tasks in the scientific literature domain . In particular , we plan to investigate the use of scientific attribution information for the citation function classification task .", "challenge": "Relevance of a citation in a paper to the cited paper is crucial information for retrieval purposes but computing occurrences would result in low recalls.", "approach": "They propose the scientific attribution task which links different linguistic expressions to citations and explore both intrinsic and extrinsic evaluation setups.", "outcome": "They show that high human agreement on the proposed task and information about scientific attributions can improve the performance of discourse classification."} +{"id": "P13-1174", "document": "Understanding the connotation of words plays an important role in interpreting subtle shades of sentiment beyond denotative or surface meaning of text , as seemingly objective statements often allude nuanced sentiment of the writer , and even purposefully conjure emotion from the readers ' minds . The focus of this paper is drawing nuanced , connotative sentiments from even those words that are objective on the surface , such as \" intelligence \" , \" human \" , and \" cheesecake \" . 
We propose induction algorithms encoding a diverse set of linguistic insights ( semantic prosody , distributional similarity , semantic parallelism of coordination ) and prior knowledge drawn from lexical resources , resulting in the first broad-coverage connotation lexicon . There has been a substantial body of research in sentiment analysis over the last decade ( Pang and Lee , 2008 ) , where a considerable amount of work has focused on recognizing sentiment that is generally explicit and pronounced rather than implied and subdued . However in many real-world texts , even seemingly objective statements can be opinion-laden in that they often allude nuanced sentiment of the writer ( Greene and Resnik , 2009 ) , or purposefully conjure emotion from the readers ' minds ( Mohammad and Turney , 2010 ) . Although some researchers have explored formal and statistical treatments of those implicit and implied sentiments ( e.g. Wiebe et al . ( 2005 ) , Esuli and Sebastiani ( 2006 ) , Greene and Resnik ( 2009 ) , Davidov et al . ( 2010 ) ) , automatic analysis of them largely remains as a big challenge . In this paper , we concentrate on understanding the connotative sentiments of words , as they play an important role in interpreting subtle shades of sentiment beyond denotative or surface meaning of text . For instance , consider the following : Geothermal replaces oil-heating ; it helps reducing greenhouse emissions.1 Although this sentence could be considered as a factual statement from the general standpoint , the subtle effect of this sentence may not be entirely objective : this sentence is likely to have an influence on readers ' minds in regard to their opinion toward \" geothermal \" . In order to sense the subtle overtone of sentiments , one needs to know that the word \" emissions \" has generally negative connotation , which geothermal reduces . In fact , depending on the pragmatic contexts , it could be precisely the intention of the author to transfer his opinion into the readers ' minds . The main contribution of this paper is a broadcoverage connotation lexicon that determines the connotative polarity of even those words with ever so subtle connotation beneath their surface meaning , such as \" Literature \" , \" Mediterranean \" , and \" wine \" . Although there has been a number of previous work that constructed sentiment lexicons ( e.g. , Esuli and Sebastiani ( 2006 ) , Wilson et al . ( 2005a ) , Kaji and Kitsuregawa ( 2007 ) , Qiu et al . ( 2009 ) ) , which seem to be increasingly and inevitably expanding over words with ( strongly ) connotative sentiments rather than explicit sentiments alone ( e.g. , \" gun \" ) , little prior work has directly tackled this problem of learning connotation , 2 and much of the subtle connotation of many seemingly objective words is yet to be determined . POSITIVE NEGATIVE FEMA , Mandela , Intel , Google , Python , Sony , Pulitzer , Harvard , Duke , Einstein , Shakespeare , Elizabeth , Clooney , Hoover , Goldman , Swarovski , Hawaii , Yellowstone Katrina , Monsanto , Halliburton , Enron , Teflon , Hiroshima , Holocaust , Afghanistan , Mugabe , Hutu , Saddam , Osama , Qaeda , Kosovo , Helicobacter , HIV A central premise to our approach is that it is collocational statistics of words that affect and shape the polarity of connotation . Indeed , the etymology of \" connotation \" is from the Latin \" com- \" ( \" together or with \" ) and \" notare \" ( \" to mark \" ) . 
It is important to clarify , however , that we do not simply assume that words that collocate share the same polarity of connotation . Although such an assumption played a key role in previous work for the analogous task of learning sentiment lexicon ( Velikovich et al . , 2010 ) , we expect that the same assumption would be less reliable in drawing subtle connotative sentiments of words . As one example , the predicate \" cure \" , which has a positive connotation typically takes arguments with negative connotation , e.g. , \" disease \" , when used as the \" relieve \" sense . 3 Therefore , in order to attain a broad coverage lexicon while maintaining good precision , we guide the induction algorithm with multiple , carefully selected linguistic insights : [ 1 ] distributional similarity , [ 2 ] semantic parallelism of coordination , [ 3 ] selectional preference , and [ 4 ] semantic prosody ( e.g. , Sinclair ( 1991 ) , Louw ( 1993 ) , Stubbs ( 1995 ) , Stefanowitsch and Gries ( 2003 ) ) ) , and also exploit existing lexical resources as an additional inductive bias . We cast the connotation lexicon induction task as a collective inference problem , and consider approaches based on three distinct types of algorithmic framework that have been shown successful for conventional sentiment lexicon induction : Random walk based on HITS / PageRank ( e.g. , Kleinberg ( 1999 ) Label / Graph propagation ( e.g. , Zhu and Ghahra-(2011 ) but with practical limitations . See \u00a7 3 for detailed discussion . 3 Note that when \" cure \" is used as the \" preserve \" sense , it expects objects with non-negative connotation . Hence wordsense-disambiguation ( WSD ) presents a challenge , though not unexpectedly . In this work , we assume the general connotation of each word over statistically prevailing senses , leaving a more cautious handling of WSD as future work . mani ( 2002 ) , Velikovich et al . ( 2010 ) ) Constraint optimization ( e.g. , Roth and Yih ( 2004 ) , Choi and Cardie ( 2009 ) , Lu et al . ( 2011 ) ) . We provide comparative empirical results over several variants of these approaches with comprehensive evaluations including lexicon-based , human judgments , and extrinsic evaluations . It is worthwhile to note that not all words have connotative meanings that are distinct from denotational meanings , and in some cases , it can be difficult to determine whether the overall sentiment is drawn from denotational or connotative meanings exclusively , or both . Therefore , we encompass any sentiment from either type of meanings into the lexicon , where non-neutral polarity prevails over neutral one if some meanings lead to neutral while others to non-neutral . 4Our work results in the first broad-coverage connotation lexicon,5 significantly improving both the coverage and the precision of Feng et al . ( 2011 ) . As an interesting by-product , our algorithm can be also used as a proxy to measure the general connotation of real-world named entities based on their collocational statistics . Table 1 highlights some example proper nouns included in the final lexicon . The rest of the paper is structured as follows . In \u00a7 2 we describe three types of induction algorithms followed by evaluation in \u00a7 3 . Then we revisit the induction algorithms based on constraint optimization in \u00a7 4 to enhance quality and scalability . \u00a7 5 presents comprehensive evaluation with human judges and extrinsic evaluations . Related work and conclusion are in \u00a7 6 and \u00a7 7 . 
We presented a broad-coverage connotation lexicon that determines the subtle nuanced sentiment of even those words that are objective on the surface , including the general connotation of realworld named entities . Via a comprehensive evaluation , we provided empirical insights into three different types of induction algorithms , and proposed one with good precision , coverage , and efficiency .", "challenge": "Most works on automatic sentiment analysis focus on explicit expression while subtle implicit and implied sentiments have not been investigated.", "approach": "They regard the connotation lexicon induction task as a collective inference problem and propose induction algorithms that encode a diverse set of linguistic insights.", "outcome": "They show their algorithms can significantly improve the coverage and the precision from the previous method and also measure named entities on the collocational statistics."} +{"id": "D10-1041", "document": "For resource-limited language pairs , coverage of the test set by the parallel corpus is an important factor that affects translation quality in two respects : 1 ) out of vocabulary words ; 2 ) the same information in an input sentence can be expressed in different ways , while current phrase-based SMT systems can not automatically select an alternative way to transfer the same information . Therefore , given limited data , in order to facilitate translation from the input side , this paper proposes a novel method to reduce the translation difficulty using source-side lattice-based paraphrases . We utilise the original phrases from the input sentence and the corresponding paraphrases to build a lattice with estimated weights for each edge to improve translation quality . Compared to the baseline system , our method achieves relative improvements of 7.07 % , 6.78 % and 3.63 % in terms of BLEU score on small , medium and largescale English-to-Chinese translation tasks respectively . The results show that the proposed method is effective not only for resourcelimited language pairs , but also for resourcesufficient pairs to some extent . In recent years , statistical MT systems have been easy to develop due to the rapid explosion in data availability , especially parallel data . However , in reality there are still many language pairs which lack parallel data , such as Urdu-English , Chinese-Italian , where large amounts of speakers exist for both languages ; of course , the problem is far worse for pairs such as Catalan-Irish . For such resourcelimited language pairs , sparse amounts of parallel data would cause the word alignment to be inaccurate , which would in turn lead to an inaccurate phrase alignment , and bad translations would result . Callison-Burch et al . ( 2006 ) argue that limited amounts of parallel training data can lead to the problem of low coverage in that many phrases encountered at run-time are not observed in the training data and so their translations will not be learned . Thus , in recent years , research on addressing the problem of unknown words or phrases has become more and more evident for resource-limited language pairs . Callison-Burch et al . ( 2006 ) proposed a novel method which substitutes a paraphrase for an unknown source word or phrase in the input sentence , and then proceeds to use the translation of that paraphrase in the production of the target-language result . 
Their experiments showed that by translating paraphrases a marked improvement was achieved in coverage and translation quality , especially in the case of unknown words which previously had been left untranslated . However , on a large-scale data set , they did not achieve improvements in terms of automatic evaluation . Nakov ( 2008 ) proposed another way to use paraphrases in SMT . He generates nearly-equivalent syntactic paraphrases of the source-side training sentences , then pairs each paraphrased sentence with the target translation associated with the original sentence in the training data . Essentially , this method generates new training data using paraphrases to train a new model and obtain more useful phrase pairs . However , he reported that this method results in bad system performance . By contrast , real improvements can be achieved by merging the phrase tables of the paraphrase model and the original model , giving priority to the latter . Schroeder et al . ( 2009 ) presented the use of word lattices for multi-source translation , in which the multiple source input texts are compiled into a compact lattice , over which a single decoding pass is then performed . This lattice-based method achieved positive results across all data conditions . In this paper , we propose a novel method using paraphrases to facilitate translation , especially for resource-limited languages . Our method does not distinguish unknown words in the input sentence , but uses paraphrases of all possible words and phrases in the source input sentence to build a source-side lattice to provide a diverse and flexible list of source-side candidates to the SMT decoder so that it can search for a best path and deliver the translation with the highest probability . In this case , we neither need to change the phrase table , nor add new features in the log-linear model , nor add new sentences in the training data . The remainder of this paper is organised as follows . In Section 2 , we define the \" translation difficulty \" from the perspective of the source side , and then examine how well the test set is covered by the phrase table and the parallel training data . Section 3 describes our paraphrase lattice method and discusses how to set the weights for the edges in the lattice network . In Section 4 , we report comparative experiments conducted on small , medium and largescale English-to-Chinese data sets . In Section 5 , we analyse the influence of our paraphrase lattice method . Section 6 concludes and gives avenues for future work . 2 What Makes Translation Difficult ? In this paper , we proposed a novel method using paraphrase lattices to facilitate the translation process in SMT . Given an input sentence , our method firstly discovers all possible paraphrases from a paraphrase database for N -grams ( 1 < = N < = 10 ) in the test set , and then filters out the paraphrases which do not appear in the phrase table in order to avoid adding new unknown words on the input side . We then use the original words and the paraphrases to build a word lattice , and set the weights to prioritise the original edges and penalise the paraphrase edges . Finally , we import the lattice into the decoder to perform lattice decoding . The experiments are conducted on English-to-Chinese translation using the FBIS data set with small and medium-sized amounts of data , and on a large-scale corpus of 2.1 million sentence pairs . 
We also performed comparative experiments for the baseline , the \" Para-Sub \" system and our paraphrase lattice-based system . The experimental results show that our proposed system significantly outperforms the baseline and the \" Para-Sub \" system , and the effectiveness is consistent on the small , medium and large-scale data sets . As for future work , firstly we plan to propose a pruning algorithm for the duplicate paths in the lattice , which will track the edge generation with respect to the path span , and thus eliminate duplicate paths . Secondly , we plan to experiment with another feature function in the log-linear model to discount words derived from paraphrases , and use MERT to assign an appropriate weight to this feature function .", "challenge": "Vocabulary coverage is crucial for SMT systems, especially for resource-lean language pairs, however; current systems cannot find alternative solutions.", "approach": "They propose to use paraphrasing the source side of inputs at test time listing all the possible alternatives to expand the lattice.", "outcome": "They show that their system outperforms the baseline system in the English-Chinese translation task for both resource-lean and sufficient setups."} +{"id": "H05-1062", "document": "Traditional approaches to Information Extraction ( IE ) from speech input simply consist in applying text based methods to the output of an Automatic Speech Recognition ( ASR ) system . If it gives satisfaction with low Word Error Rate ( WER ) transcripts , we believe that a tighter integration of the IE and ASR modules can increase the IE performance in more difficult conditions . More specifically this paper focuses on the robust extraction of Named Entities from speech input where a temporal mismatch between training and test corpora occurs . We describe a Named Entity Recognition ( NER ) system , developed within the French Rich Broadcast News Transcription program ESTER , which is specifically optimized to process ASR transcripts and can be integrated into the search process of the ASR modules . Finally we show how some metadata information can be collected in order to adapt NER and ASR models to new conditions and how they can be used in a task of Named Entity indexation of spoken archives . Named Entity Recognition ( NER ) is a crucial step in many Information Extraction ( IE ) tasks . It has been a specific task in several evaluation pro-grams such as the Message Understanding Conferences ( MUC ) , the Conferences on Natural Language Learning ( CoNLL ) , the DARPA HUB-5 program or more recently the French ESTER Rich Transcription program on Broadcast News data . Most of these conferences have studied the impact of using transcripts generated by an Automatic Speech Recognition ( ASR ) system rather than written texts . It appears from these studies that unlike other IE tasks , NER performance is greatly affected by the Word Error Rate ( WER ) of the transcripts processed . To tackle this problem , different ideas have been proposed : modeling explicitly the ASR errors ( Palmer and Ostendorf , 2001 ) or using the ASR system alternate hypotheses found in word lattices ( Saraclar and Sproat , 2004 ) . However performance in NER decreases dramatically when processing high WER transcripts like the ones that are obtained with unmatched conditions between the ASR training model and the data to process . 
This paper investigates this phenomenon in the framework of the NER task of the French Rich Transcription program of Broadcast News ESTER ( Gravier et al . , 2004 ) . Several issues are addressed : \u2022 how to jointly optimize the ASR and the NER models ? \u2022 what is the impact in term of ASR and NER performance of a temporal mismatch between the corpora used to train and test the models and how can it be recovered by means of metadata information ? \u2022 Can metadata information be used for indexing large spoken archives ? After a quick overview of related works in IE from speech input , we present the ESTER evaluation program ; then we introduce a NER system tightly integrated to the ASR process and show how it can successfully index high WER spoken databases thanks to metadata . We have presented in this paper a robust Named Entity Recognition system dedicated to process ASR transcripts . The FSM-based approach allows us to control the generalization capabilities of the system while the statistical tagger provides good labeling decisions . The main feature of this system is its ability to extract n-best lists of NE hypothesis from word lattices leaving the decision strategy choosing to either emphasize the recall or the precision of the extraction , according to the task targeted . A comparison between this approach and a standard approach based on the NLP tools Lingpipe validates our hypotheses . This integration of the ASR and the NER processes is particularly important in difficult conditions like those that can be found in large spoken archives where the training corpus does not match all the documents to process . A study of the use of metadata information in order to adapt the ASR and NER models to a specific situation showed that if the overall improvement is small , some salient information related to the metadata added can be better extracted by means of this adaptation .", "challenge": "Existing named entity recognition models for speech input process the output from automatic speech recognition disjointly making the process sensitive to the word error rate.", "approach": "They propose a system which is optimized to process transcripts and can be integrated into the search process of the automatic speech recognition modules.", "outcome": "The proposed method allows the control over generalization capabilities and comparison with a standard approach shows its effectiveness."} +{"id": "N16-1124", "document": "This paper presents an study of the use of interlocking phrases in phrase-based statistical machine translation . We examine the effect on translation quality when the translation units used in the translation hypotheses are allowed to overlap on the source side , on the target side and on both sides . A large-scale evaluation on 380 language pairs was conducted . Our results show that overall the use of overlapping phrases improved translation quality by 0.3 BLEU points on average . Further analysis revealed that language pairs requiring a larger amount of re-ordering benefited the most from our approach . When the evaluation was restricted to such pairs , the average improvement increased to up to 0.75 BLEU points with over 97 % of the pairs improving . Our approach requires only a simple modification to the decoding algorithm and we believe it should be generally applicable to improve the performance of phrase-based decoders . 
In this paper we examine the effect on machine translation quality of using interlocking phrases to during the decoding process in phrase-based statistical machine translation ( PBSMT ) . The motivation for this is two-fold . Firstly , during the phrase-pair extraction process that occurs in the training of a typical PBSMT system , all possible alternative phrase-pairs are extracted that are consistent with a set of alignment points . As a consequence , the source and target sides of these extracted phrase pairs may over-lap . However , in contrast to this , the decoding process traditionally proceeds by concatenating disjoint translation units ; the process relies on the language model to eliminate awkward hypotheses with repeated words produced by sequences of translation units that overlap . Secondly , the transduction process in PBSMT is carried out by generating hypotheses that are composed of sequences of translation units . These sequences are normally generated independently , as modeling the dependencies between them is difficult due to the data sparseness issues arising from modeling with word sequences . The process of interlocking is a way of introducing a form of dependency between translation units , effectively producing larger units from pairs of compatible units . 2 Related Work ( Karimova et al . , 2014 ) presented a method to extract overlapping phrases offline for hierarchical phrase based SMT . They used the CDEC SMT decoder ( Dyer et al . , 2010 ) that offers several learners for discriminative tuning of weights for the new phrases . Their results showed improvements of 0.3 to 0.6 BLEU points over discriminatively trained hierarchical phrase-based SMT systems on two datasets for German-to-English translation . ( Tribble and et al . , 2003 ) proposed a method to generate longer new phrases by merging existing phraselevel alignments that have overlaping words on both source and target sides . Their experiments on translating Arabic-English text from the news domain were encouraging . ( Roth and McCallum , 2010 ) proposed a conditional-random-field approach to discriminatively train phrase based machine translation in which training and decoding are both cast in a sampling framework . Different with traditional PB-SMT decoding that infers both a Viterbi alignment and the target sentence , their approach produced a rich overlapping phrase alignment . Their approach leveraged arbitrary features of the entire source sentence , target sentence and alignment . ( K\u00e4\u00e4ri\u00e4inen , 2009 ) proposed a novel phrase-based conditional exponential family translation model for SMT . The model operates on a feature representation in which sentence level translations are represented by enumerating all the known phrase level translations that occur inside them . The model automatically takes into account information provided by phrase overlaps . Although both of the latter two approaches were innovative the translation performance was lower than tranditional PBSMT baselines . Our proposed approach is most similar to that of ( Tribble and et al . , 2003 ) . Our approach differs in the interlocking process is less constrained ; phrase pairs can interlock independently on source and target sides , and the interlocking process performed during the decoding process itself , rather than by augmenting the phrase-table . 
In this paper we propose and evaluate a simple technique for improving the performance of phrasebased statistical machine translation decoders , that can be implemented with only minor modifications to the decoder . In the proposed method phrases are allowed to interlock freely on both the source and target side during decoding . The experimental results , based on a large-scale study involving 380 language pairs provide strong evidence that our approach is genuinely effective in improving the machine translation quality . The translation quality improved for 77 % of the language pairs tested , and this was increased to over 97 % when the set of language pairs was filtered according to Kendall 's tau distance . The translation quality improved by an average of up to 0.75 BLEU points on this subset . This value represents a lower bound on what is possible with this technique and in future work we intend to study the introduction of additional features into the log-linear model to encourage or discourage the use of interlocking phrases during decoding , and investigate the effect of increasing the number of interlocked words .", "challenge": "Existing phrase-based statistical machine translation systems can generate the same words repeatedly because there can be overlaps in extracted phrases.", "approach": "They first analyze the effect of overlapping phrases and propose a simple modification to the decoder to handle the interlocking phrases.", "outcome": "A large-scale evaluation including 380 language pairs reveals that their modification effectively improves the translation quality."} +{"id": "2021.eacl-main.102", "document": "Existing approaches for table annotation with entities and types either capture the structure of table using graphical models , or learn embeddings of table entries without accounting for the complete syntactic structure . We propose TabGCN , which uses Graph Convolutional Networks to capture the complete structure of tables , knowledge graph and the training annotations , and jointly learns embeddings for table elements as well as the entities and types . To account for knowledge incompleteness , TabGCN 's embeddings can be used to discover new entities and types . Using experiments on 5 benchmark datasets , we show that TabGCN significantly outperforms multiple state-of-the-art baselines for table annotation , while showing promising performance on downstream table-related applications . Table data abounds in webpages and organizational documents . Annotation of table entries , such as columns , cells and rows , using available background knowledge ( e.g. Yago , DBPedia , Freebase , etc . ) , such as knowledge of entities and their types , helps in better understanding and semantic interpretation of such tabular data . The challenge , however , is that such web tables do not adhere to any standard format , schema or convention ( Limaye et al . , 2010 ) . Additionally , knowledge graphs are typically incomplete -entities and types mentioned in tables may not always exist in the knowledge graph . Therefore , it becomes necessary to expand the knowledge graph with new entities ( Zhang et al . , 2020 ) and types for annotating tables . Initial research on table annotation ( Limaye et al . , 2010 ; Takeoka et al . , 2019 ; Bhagavatula et al . , 2015 ) used probabilistic graphical models to capture the complete row-column structure of tables and also the knowledge graph for collective annotation . More recent approaches using embeddings ( Gentile et al . 
, 2017 ; Zhang and Balog , 2018 ; Zhang et al . , 2019 ; Chen et al . , 2019 ; Yin et al . , 2020 ) only partly capture the syntactic structure of tables , and also ignore the structure of the knowledge graph . The problem of incompleteness of the knowledge representation ( Zhang et al . , 2020 ) is mostly not addressed . In this work , we propose the TabGCN model that uses a Graph Convolutional Network ( GCN ) ( Kipf and Welling , 2017 ) to unify the complete syntactic structure of tables ( rows , columns and cells ) and that of the knowledge graph ( entities and types ) via available annotations . The embeddings of the table elements as well as knowledge graph entities and types are trained jointly and end-to-end . While GCNs have been used for learning embeddings for many NLP tasks using the syntactic and semantic structure of natural language sentences ( Marcheggiani and Titov , 2017 ; Vashishth et al . , 2019 ) , encoding tabular structure using GCNs has not been addressed before . The model and embeddings thus trained are used to annotate new tables with known entities and types , while discovering hitherto unseen entities and types . Additionally , we use the trained embeddings for tables and rows for downstream table-related tasks -identifying similar tables , and identifying the appropriate table for any row . We demonstrate these capabilities of TabGCN using experiments on 5 benchmark web table datasets comparing against 5 existing models . We show that WebGCN significantly improves performance for entity and type annotation . For the other tasks , we show that the same embeddings show impressive performance . No existing model can perform all of these tasks . Our contributions are as follows : ( a ) We propose a model called TabGCN based on the GCN architecture that captures the complete syntactic structure of tables as well as the knowledge representation , and learns embeddings of tables , rows , columns and cells , as well as entities and types jointly and in an end-to-end fashion . ( b ) TabGCN addresses incompleteness in the knowledge representation by discovering new entities and types . ( c ) TabGCN significantly outperforms 5 existing approaches in 5 different benchmark datasets for the task of table annotation . ( d ) The trained embeddings show impressive performance in downstream tasks such as identifying similar tables and assignment of rows to appropriate tables . We have proposed a model for that jointly learns representations of tables , rows , columns and cell , as well as entities and types by capturing the complete syntactic structure of all tables , the relevant entities and types and the available annotations using the Graph Convolutional Network . As a result , TabGCN unifies the benefits of probabilistic graphical model based approaches and embedding based approaches for table annotation . 
Using these embeddings , TabGCN significantly outperforms existing approaches for table annotation , as well as entity and type discovery .", "challenge": "Existing embedding-based approaches for table annotation only partially capture the syntactic structure of tables while ignoring the structure of the knowledge graph.", "approach": "They propose a graph convolutional network that jointly learns embeddings for table structures, knowledge graphs, and annotations.", "outcome": "The proposed model outperforms 5 state-of-the-art baselines for table annotation and other table-related downstream applications on 5 benchmark datasets"} +{"id": "E09-1015", "document": "This paper describes a method using morphological rules and heuristics , for the automatic extraction of large-coverage lexicons of stems and root word-forms from a raw text corpus . We cast the problem of high-coverage lexicon extraction as one of stemming followed by root word-form selection . We examine the use of POS tagging to improve precision and recall of stemming and thereby the coverage of the lexicon . We present accuracy , precision and recall scores for the system on a Hindi corpus . Large-coverage morphological lexicons are an essential component of morphological analysers . Morphological analysers find application in language processing systems for tasks like tagging , parsing and machine translation . While raw text is an abundant and easily accessible linguistic resource , high-coverage morphological lexicons are scarce or unavailable in Hindi as in many other languages ( Cl\u00e9ment et al . , 2004 ) . Thus , the development of better algorithms for the extraction of morphological lexicons from raw text corpora is a task of considerable importance . A root word-form lexicon is an intermediate stage in the creation of a morphological lexicon . In this paper , we consider the problem of extracting a large-coverage root word-form lexicon for the Hindi language , a highly inflectional and moderately agglutinative Indo-European language spoken widely in South Asia . Since a POS tagger , another basic tool , was available along with POS tagged data to train it , and since the error patterns indicated that POS tagging could greatly improve the accuracy of the lexicon , we used the POS tagger in our experiments on lexicon extraction . Previous work in morphological lexicon extraction from a raw corpus often does not achieve very high precision and recall ( de Lima , 1998 ; Oliver and Tadi\u0107 , 2004 ) . In some previous work the process of lexicon extraction involves incremental or post-construction manual validation of the entire lexicon ( Cl\u00e9ment et al . , 2004 ; Sagot , 2005 ; Forsberg et al . , 2006 ; Sagot et al . , 2006 ; Sagot , 2007 ) . Our method attempts to improve on and extend the previous work by increasing the precision and recall of the system to such a point that manual validation might even be rendered unnecessary . Yet another difference , to our knowledge , is that in our method we cast the problem of lexicon extraction as two subproblems : that of stemming and following it , that of root word-form selection . The input resources for our system are as follows : a ) raw text corpus , b ) morphological rules , c ) POS tagger and d ) word-segmentation labelled data . We output a stem lexicon and a root wordform lexicon . We take as input a raw text corpus and a set of morphological rules . 
We first run a stemming algorithm that uses the morphological rules and some heuristics to obtain a stem dictionary . We then create a root dictionary from the stem dictionary . The last two input resources are optional but when a POS tagger is utilized , the F-score ( harmonic mean of precision and recall ) of the root lexicon can be as high as 94.6 % . In the rest of the paper , we provide a brief overview of the morphological features of the Hindi language , followed by a description of our method including the specification of rules , the corpora and the heuristics for stemming and root word-form selection . We then evaluate the system with and without the POS tagger . We have described a system for automatically constructing a root word-form lexicon from a raw text corpus . The system is rule-based and utilizes a POS tagger . Though preliminary , our results demonstrate that it is possible , using this method , to extract a high-precision and high-recall root word-form lexicon . Specifically , we show that with a POS tagger capable of labelling wordforms with POS categories at an accuracy of about 88 % , we can extract root word-forms with an accuracy of about 87 % and a precision and recall of 94.1 % and 95.3 % respectively . Though the system has been evaluated on Hindi , the techniques described herein can probably be applied to other inflectional languages . The rules selected by the system and applied to the wordforms also contain information that can be used to determine the paradigm membership of each root word-form . Further work could evaluate the accuracy with which we can accomplish this task .", "challenge": "High-coverage morphological lexicons are scarce or unavailable in Hindi calling algorithms that extract morphological lexicons from raw corpora with high recall and precision.", "approach": "They propose to use morphological rules and heuristics to automatically extract lexicons of stems and root word forms from a raw corpus.", "outcome": "They show that high-precision and high-recall root word-form lexicon can be extracted from raw corpora in Hindi, and a POS tagger improves results."} +{"id": "D10-1092", "document": "Automatic evaluation of Machine Translation ( MT ) quality is essential to developing highquality MT systems . Various evaluation metrics have been proposed , and BLEU is now used as the de facto standard metric . However , when we consider translation between distant language pairs such as Japanese and English , most popular metrics ( e.g. , BLEU , NIST , PER , and TER ) do not work well . It is well known that Japanese and English have completely different word orders , and special care must be paid to word order in translation . Otherwise , translations with wrong word order often lead to misunderstanding and incomprehensibility . For instance , SMT-based Japanese-to-English translators tend to translate ' A because B ' as ' B because A. ' Thus , word order is the most important problem for distant language translation . However , conventional evaluation metrics do not significantly penalize such word order mistakes . Therefore , locally optimizing these metrics leads to inadequate translations . In this paper , we propose an automatic evaluation metric based on rank correlation coefficients modified with precision . Our meta-evaluation of the NTCIR-7 PATMT JE task data shows that this metric outperforms conventional metrics . SMT systems thus fail to find ( R0 ) . 
Consequently , the global word order is essential for translation between distant language pairs , and wrong word order can easily lead to misunderstanding or incomprehensibility . Perhaps , some readers do not understand why we emphasize word order from this example alone . A few more examples will clarify what happens when SMT is applied to Japanese-to-English translation . Even the most famous SMT service available on the web failed to translate the following very simple sentence at the time of writing this paper . Japanese : meari wa jon wo koroshita . Reference : Mary killed John . SMT output : John killed Mary . Since it can not translate such a simple sentence , it obviously can not translate more complex sentences correctly . Japanese : bobu ga katta hon wo jon wa yonda . Reference : John read a book that Bob bought . SMT output : Bob read the book John bought . Japanese : bobu wa meari ni yubiwa wo kau tameni , jon no mise ni itta . Reference : Bob went to John 's store to buy a ring for Mary . SMT output : Bob Mary to buy the ring , John went to the store . Automatic evaluation of machine translation ( MT ) quality is essential to developing high-quality machine translation systems because human evaluation is time consuming , expensive , and irreproducible . If we have a perfect automatic evaluation metric , we can tune our translation system for the metric . BLEU ( Papineni et al . , 2002b ; Papineni et al . , 2002a ) showed high correlation with human judgments and is still used as the de facto standard automatic evaluation metric . However , Callison-Burch et al . ( 2006 ) argued that the MT community is overly reliant on BLEU by showing examples of poor performance . For Japanese-to-English ( JE ) translation , Echizen-ya et al . ( 2009 ) showed that the popular BLEU and NIST do not work well by using the system outputs of the NTCIR-7 PATMT ( patent translation ) JE task ( Fujii et al . , 2008 ) . On the other hand , ROUGE-L ( Lin and Hovy , 2003 ) , Word Error Rate ( WER ) , and IMPACT ( Echizen-ya and Araki , 2007 ) worked better . In these studies , Pearson 's correlation coefficient and Spearman 's rank correlation \u03c1 with human evaluation scores are used to measure how closely an automatic evaluation method correlates with human evaluation . This evaluation of automatic evaluation methods is called meta-evaluation . In human evaluation , people judge the adequacy and the fluency of each translation . Denoual and Lepage ( 2005 ) pointed out that BLEU assumes word boundaries , which is ambiguous in Japanese and Chinese . Here , we assume the word boundaries given by ChaSen , one of the standard morphological analyzers ( http://chasenlegacy.sourceforge.jp/ ) following Fujii et al . ( 2008 ) In JE translation , most Statistical Machine Translation ( SMT ) systems translate the Japanese sentence ( J0 ) kare wa sono hon wo yonda node sekaishi ni kyoumi ga atta which means ( R0 ) he was interested in world history because he read the book into an English sentence such as ( H0 ) he read the book because he was interested in world history in which the cause and the effect are swapped . Why does this happen ? The former half of ( J0 ) means \" He read the book , \" and the latter half means \" ( he ) was interested in world history . \" The middle word \" node \" between them corresponds to \" because . \" Therefore , SMT systems output sentences like ( H0 ) . On the other hand , Rule-based Machine Translation ( RBMT ) systems correctly give ( R0 ) . 
In order to find ( R0 ) , SMT systems have to search a very large space because we can not restrict its search space with a small distortion limit . Most In this way , this SMT service usually gives incomprehensible or misleading translations , and thus people prefer RBMT services . Other SMT systems also tend to make similar word order mistakes , and special care should be paid to the translation between distant language pairs such as Japanese and English . Even Japanese people can not solve this word order problem easily : It is well known that Japanese people are not good at speaking English . From this point of view , conventional automatic evaluation metrics of translation quality disregard word order mistakes too much . Single-reference BLEU is defined by a geometrical mean of n-gram precisions p n and is modified by Brevity Penalty ( BP ) min(1 , exp(1 -r / h ) ) , where r is the length of the reference and h is the length of the hypothesis . BLEU = BP \u00d7 ( p 1 p 2 p 3 p 4 ) 1/4 . Its range is [ 0 , 1 ] . The BLEU score of ( H0 ) with reference ( R0 ) is 1.0\u00d7(11/11\u00d79/10\u00d76/9\u00d74/8 ) 1/4 = 0.740 . Therefore , BLEU gives a very good score to this inadequate translation because it checks only ngrams and does not regard global word order . Since ( R0 ) and ( H0 ) look similar in terms of fluency , adequacy is more important than fluency in the translation between distant language pairs . Similarly , other popular scores such as NIST , PER , and TER ( Snover et al . , 2006 ) also give relatively good scores to this translation . NIST also considers only local word orders ( n-grams ) . PER ( Position-Independent Word Error Rate ) was designed to disregard word order completely . TER ( Snover et al . , 2006 ) was designed to allow phrase movements without large penalties . Therefore , these standard metrics are not optimal for evaluating translation between distant language pairs . In this paper , we propose an alternative automatic evaluation metric appropriate for distant language pairs . Our method is based on rank correlation coefficients . We use them to compare the word ranks in the reference with those in the hypothesis . There are two popular rank correlation coefficients : Spearman 's \u03c1 and Kendall 's \u03c4 ( Kendall , 1975 ) . In Isozaki et al . ( 2010 ) , we used Kendall 's \u03c4 to measure the effectiveness of our Head Finalization rule as a preprocessor for English-to-Japanese translation , but we measured the quality of translation by using conventional metrics . It is not clear how well \u03c4 works as an automatic evaluation metric of translation quality . Moreover , Spearman 's \u03c1 might work better than Kendall 's \u03c4 . As we discuss later , \u03c4 considers only the direction of the rank change , whereas \u03c1 considers the distance of the change . The first objective of this paper is to examine which is the better metric for distant language pairs . The second objective is to find improvements of these rank correlation-metrics . Spearman 's \u03c1 is based on Pearson 's correlation coefficients . Suppose we have two lists of numbers x = [ 0.1 , 0.4 , 0.2 , 0.6 ] , y = [ 0.9 , 0.6 , 0.2 , 0.7 ] . To obtain Pearson 's coefficients between x and y , we use the raw values in these lists . If we substitute their ranks for their raw values , we get x = [ 1 , 3 , 2 , 4 ] and y = [ 4 , 2 , 1 , 3 ] . Then , Spearman 's \u03c1 between x and y is given by Pearson 's coefficients between x and y . 
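The worked ( R0 ) / ( H0 ) example above, and the rank-based view of Spearman's rho, can be checked with a short script. This is a minimal sketch, not the paper's implementation: it assumes a greedy first-unused alignment for the repeated word "he", and it computes single-sentence BLEU exactly as defined above (geometric mean of 1-4-gram precisions with the brevity penalty).

```python
# Sketch (not the paper's code): reproduce the ( R0 ) / ( H0 ) example above.
# Assumptions: hypothesis words are aligned to the first unused matching
# reference position; BLEU is the sentence-level geometric mean of 1-4-gram
# precisions with the brevity penalty, as defined in the text.
from collections import Counter
import math

R0 = "he was interested in world history because he read the book".split()
H0 = "he read the book because he was interested in world history".split()

def ngram_precision(hyp, ref, n):
    hyp_ngrams = Counter(tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1))
    ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
    clipped = sum(min(c, ref_ngrams[g]) for g, c in hyp_ngrams.items())
    return clipped / max(1, sum(hyp_ngrams.values()))

def sentence_bleu(hyp, ref):
    bp = min(1.0, math.exp(1 - len(ref) / len(hyp)))
    precisions = [ngram_precision(hyp, ref, n) for n in range(1, 5)]
    return bp * math.prod(precisions) ** 0.25

def kendall_tau(hyp, ref):
    # Align each hypothesis token to the first unused reference position,
    # then count concordant vs. discordant position pairs.
    used, positions = set(), []
    for w in hyp:
        for i, r in enumerate(ref):
            if r == w and i not in used:
                used.add(i)
                positions.append(i)
                break
    pairs = [(positions[i], positions[j])
             for i in range(len(positions)) for j in range(i + 1, len(positions))]
    concordant = sum(a < b for a, b in pairs)
    return (concordant - (len(pairs) - concordant)) / len(pairs)

print(round(sentence_bleu(H0, R0), 3))  # 0.74: BLEU barely penalizes the swap
print(round(kendall_tau(H0, R0), 3))    # about -0.13: global order is clearly wrong

# Spearman's rho for the x, y example above, computed on ranks.
def ranks(values):
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

x = [0.1, 0.4, 0.2, 0.6]
y = [0.9, 0.6, 0.2, 0.7]
rx, ry = ranks(x), ranks(y)                      # [1, 3, 2, 4], [4, 2, 1, 3]
d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
rho = 1 - 6 * d2 / (len(x) * (len(x) ** 2 - 1))  # -0.2
```

The contrast is the point of the example: the swapped translation keeps almost all local n-grams (BLEU near 0.74) while its word-position rank correlation is near zero or negative.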
This \u03c1 can be rewritten as follows when there is no tie : \u03c1 = 1 -i d 2 i n+1 C 3 . Here , d i indicates the difference in the ranks of the i-th element . Rank distances are squared in this formula . Because of this square , we expect that \u03c1 decreases drastically when there is an element that significantly changes in rank . But we are also afraid that \u03c1 may be too severe for alternative good translations . Since Pearson 's correlation metric assumes linearity , nonlinear monotonic functions can change its score . On the other hand , Spearman 's \u03c1 and Kendall 's \u03c4 uses ranks instead of raw evaluation scores , and simple application of monotonic functions can not change them ( use of other operations such as averaging sentence scores can change them ) . When Statistical Machine Translation is applied to distant language pairs such as Japanese and English , word order becomes an important problem . SMT systems often fail to find an appropriate translation because of a large search space . Therefore , they often output misleading or incomprehensible sentences such as \" A because B \" vs. \" B because A. \" To penalize such inadequate translations , we presented an automatic evaluation method based on rank correlation . There were two questions for this approach . First , which correlation coefficient should we use : Spearman 's \u03c1 or Kendall 's \u03c4 ? Second , how should we solve the overestimation problem caused by the nature of one-to-one correspondence ? We answered these questions through our experiments using the NTCIR-7 PATMT JE translation data . For the first question , \u03c4 was slightly better than \u03c1 , but \u03c1 was improved by precision . For the second question , it turned out that BLEU 's Brevity Penalty was counter-productive . A precision-based penalty gave a better solution . With this precisionbased penalty , both \u03c1 and \u03c4 worked well and they outperformed conventional methods for NTCIR-7 data . For similar language pairs , our method was comparable to conventional evaluation methods . Fu-", "challenge": "Existing automatic evaluation metrics do not perform well on two distant language pairs where word order is important.", "approach": "They propose to use rank correlation coefficients for the evaluation of distant language pairs which can penalize wrongly ordered outputs.", "outcome": "The proposed methods outperform existing methods for distant language pairs and are comparable to similar language pairs."} +{"id": "D16-1165", "document": "We address the problem of answering new questions in community forums , by selecting suitable answers to already asked questions . We approach the task as an answer ranking problem , adopting a pairwise neural network architecture that selects which of two competing answers is better . We focus on the utility of the three types of similarities occurring in the triangle formed by the original question , the related question , and an answer to the related comment , which we call relevance , relatedness , and appropriateness . Our proposed neural network models the interactions among all input components using syntactic and semantic embeddings , lexical matching , and domain-specific features . It achieves state-of-the-art results , showing that the three similarities are important and need to be modeled together . 
Our experiments demonstrate that all feature types are relevant , but the most important ones are the lexical similarity features , the domain-specific features , and the syntactic and semantic embeddings . In recent years , community Question Answering ( cQA ) forums , such as StackOverflow , Quora , Qatar Living , etc . , have gained a lot of popularity as a source of knowledge and information . These forums typically organize their content in the form of multiple topic-oriented question-comment threads , where a question posed by a user is followed by a list of other users ' comments , which intend to answer the question . Many of such on-line forums are not moderated , which often results in ( a ) noisy and ( b ) redundant content , as users tend to deviate from the question and start asking new questions or engage in conversations , fights , etc . Web forums try to solve problem ( a ) in various ways , most often by allowing users to up / downvote answers according to their perceived usefulness , which makes it easier to retrieve useful answers in the future . Unfortunately , this negatively penalizes recent comments , which might be the most relevant and updated ones . This is due to the time it takes for a comment to accumulate votes . Moreover , voting is prone to abuse by forum trolls ( Mihaylov et al . , 2015 ; Mihaylov and Nakov , 2016a ) . Problem ( b ) is harder to solve , as it requires that users verify that their question has not been asked before , possibly in a slightly different way . This search can be hard , especially for less experienced users as most sites only offer basic search , e.g. , a site search by Google . Yet , solving problem ( b ) automatically is important both for site owners , as they want to prevent question duplication as much as possible , and for users , as finding an answer to their questions without posting means immediate satisfaction of their information needs . In this paper , we address the general problem of finding good answers to a given new question ( referred to as original question ) in one such community-created forum . More specifically , we use a pairwise deep neural network to rank comments retrieved from different question-comment threads according to their relevance as answers to the original question being asked . A key feature of our approach is that we investigate the contribution of the edges in the triangle formed by the pairwise interactions between the original question , the related question , and the related comments to rank comments in a unified fashion . Additionally , we use three different sets of features that capture such similarity : lexical , distributed ( semantics / syntax ) , and domain-specific knowledge . The experimental results show that addressing the answer ranking task directly , i.e. , modelling only the similarity between the original question and the answer-candidate comments , yields very low results . The other two edges of the triangle are needed to obtain good results , i.e. , the similarity between the original question and the related question and the similarity between the related question and the related comments . Both aspects add significant and cumulative improvements to the overall performance . Finally , we show that the full network , including the three pairs of similarities , outperforms the state-of-the-art on a benchmark dataset . 
The rest of the paper is organized as follows : Section 2 discusses the similarity triangle in answer ranking for cQA , Section 3 presents our pairwise neural network model for answering new questions in community forums , which integrates multiple levels of interaction , Section 4 describes the features we used , Section 5 presents our evaluation setup , the experiments and the results , Section 6 discusses some related work , and Section 7 wraps up the paper with a brief summary of the contributions and some possible directions for future work . 2 The Similarity Triangle in cQA Figure 1 presents an example illustrating the similarity triangle that we use when solving the answer ranking problem in cQA . In the figure , q stands for the new question , q is an existing related question , and c is a comment within the thread of question q . The edge qc relates to the main cQA task addressed in this paper , i.e. , deciding whether a comment for a potentially related question is a good answer to the original question . We will say that the relation captures the relevance of c for q. The edge qq represents the similarity between the original and the related questions . We will call this relation relatedness . We presented a neural-based approach to a novel problem in cQA , where given a new question , the task is to rank comments from related questionthreads according to their relevance as answers to the original question . We explored the utility of three types of similarities between the original question , the related question , and the related comment . We adopted a pairwise feed-forward neural network architecture , which takes as input the original question and two comments together with their corresponding related questions . This allowed us to study the impact and the interaction effects of the question-question relatedness and commentto-related question appropriateness relations when solving the primary cQA relevance task . The large performance gains obtained from using relatedness features show that question-question similarity plays a crucial role in finding relevant comments ( +30 MAP points ) . Yet , including appropriateness relations is needed to achieve state-of-the-art results ( +3.3 MAP ) on benchmark datasets . We also studied the impact of several types of features , especially domain-specific features , but also lexical features and syntactic embeddings . We observed that lexical similarity MTE features prove the most important , followed by domain-specific features , and syntactic and semantic embeddings . Overall , they all showed to be necessary to achieve state-of-the-art results . In future work , we plan to use the labels for subtasks A and B , which are provided in the datasets in order to pre-train the corresponding components of the full network for answer ranking . 
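A minimal sketch of the pairwise ranking idea described above: each candidate comment is represented by features over the three triangle edges (relevance q-c, relatedness q-q', appropriateness q'-c), and a feed-forward network predicts which of two candidates is the better answer. The feature extraction, layer sizes and training loop here are illustrative assumptions, not the paper's architecture.

```python
# Illustrative sketch of a pairwise ranker over the similarity triangle.
# Feature choices, layer sizes and the toy training loop are assumptions.
import torch
import torch.nn as nn

class PairwiseRanker(nn.Module):
    def __init__(self, feat_dim):
        super().__init__()
        # One feature vector per candidate: [rel(q,c), related(q,q'), appr(q',c), ...]
        self.scorer = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(),
            nn.Linear(64, 1))

    def forward(self, feats_c1, feats_c2):
        # Positive logit -> candidate 1 is the better answer to the new question.
        return self.scorer(feats_c1) - self.scorer(feats_c2)

model = PairwiseRanker(feat_dim=3)
loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Toy batch: 3 triangle similarities per candidate, label 1 if c1 is better.
f1 = torch.rand(8, 3)
f2 = torch.rand(8, 3)
y = (f1.sum(dim=1) > f2.sum(dim=1)).float().unsqueeze(1)
for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(model(f1, f2), y)
    loss.backward()
    opt.step()
```

At ranking time, comments from all retrieved threads can simply be sorted by the single-branch score `model.scorer(feats_c)`.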
We further want to apply a similar network to other semantic similarity problems , such as textual entailment .", "challenge": "Detecting duplicated questions in forums is difficult while it offers merits for owners to prevent them and users from finding answers to questions without asking.", "approach": "They propose to feed related questions and the corresponding answers coupled with the original question to a pairwise model to approach an answer ranking problem.", "outcome": "The proposed model achieves the state-of-the-art and they find that lexical similarity contributes more to the final performance than other similarity measures."} +{"id": "D13-1204", "document": "Many statistical learning problems in NLP call for local model search methods . But accuracy tends to suffer with current techniques , which often explore either too narrowly or too broadly : hill-climbers can get stuck in local optima , whereas samplers may be inefficient . We propose to arrange individual local optimizers into organized networks . Our building blocks are operators of two types : ( i ) transform , which suggests new places to search , via non-random restarts from already-found local optima ; and ( ii ) join , which merges candidate solutions to find better optima . Experiments on grammar induction show that pursuing different transforms ( e.g. , discarding parts of a learned model or ignoring portions of training data ) results in improvements . Groups of locally-optimal solutions can be further perturbed jointly , by constructing mixtures . Using these tools , we designed several modular dependency grammar induction networks of increasing complexity . Our complete system achieves 48.6 % accuracy ( directed dependency macro-average over all 19 languages in the 2006/7 CoNLL data ) -more than 5 % higher than the previous state-of-the-art . Statistical methods for grammar induction often boil down to solving non-convex optimization problems . Early work attempted to locally maximize the likelihood of a corpus , using EM to estimate probabilities of dependency arcs between word bigrams ( Paskin 2001a ; 2001b ) . That parsing model has since been extended to make unsupervised learning more feasible ( Klein and Manning , 2004 ; Headden et al . , 2009 ; Spitkovsky et al . , 2012b ) . But even the latest techniques can be quite error-prone and sensitive to initialization , because of approximate , local search . In theory , global optima can be found by enumerating all parse forests that derive a corpus , though this is usually prohibitively expensive in practice . A preferable brute force approach is sampling , as in Markov-chain Monte Carlo ( MCMC ) and random restarts ( Hu et al . , 1994 ) , which hit exact solutions eventually . Restarts can be giant steps in a parameter space that undo all previous work . At the other extreme , MCMC may cling to a neighborhood , rejecting most proposed moves that would escape a local attractor . Sampling methods thus take unbounded time to solve a problem ( and ca n't certify optimality ) but are useful for finding approximate solutions to grammar induction ( Cohn et al . , 2011 ; Mare\u010dek and \u017dabokrtsk\u00fd , 2011 ; Naseem and Barzilay , 2011 ) . We propose an alternative ( deterministic ) search heuristic that combines local optimization via EM with non-random restarts . Its new starting places are informed by previously found solutions , unlike conventional restarts , but may not resemble their predecessors , unlike typical MCMC moves . 
We show that one good way to construct such steps in a parameter space is by forgetting some aspects of a learned model . Another is by merging promising solutions , since even simple interpolation ( Jelinek and Mercer , 1980 ) of local optima may be superior to all of the originals . Informed restarts can make it possible to explore a combinatorial search space more rapidly and thoroughly than with traditional methods alone . We proposed several simple algorithms for combining grammars and showed their usefulness in merging the outputs of iterative and static grammar induction systems . Unlike conventional system combination methods , e.g. , in machine translation ( Xiao et al . , 2010 ) , ours do not require incoming models to be of similar quality to make improvements . We exploited these properties of the combiners to reconcile grammars induced by different views of data ( Blum and Mitchell , 1998 ) . One such view retains just the simple sentences , making it easier to recognize root words . Another splits text into many inter-punctuation fragments , helping learn word associations . The induced dependency trees can themselves also be viewed not only as directed structures but also as skeleton parses , facilitating the recovery of correct polarities for unlabeled dependency arcs . By reusing templates , as in dynamic Bayesian network ( DBN ) frameworks ( Koller and Friedman , 11 The so-called Yarowsky-cautious modification of the original algorithm for unsupervised word-sense disambiguation . 2009 , \u00a7 6.2.2 ) , we managed to specify relatively \" deep \" learning architectures without sacrificing ( too much ) clarity or simplicity . On a still more speculative note , we see two ( admittedly , tenuous ) connections to human cognition . First , the benefits of not normalizing probabilities , when symmetrizing , might be related to human language processing through the base-rate fallacy ( Bar-Hillel , 1980 ; Kahneman and Tversky , 1982 ) and the availability heuristic ( Chapman , 1967 ; Tversky and Kahneman , 1973 ) , since people are notoriously bad at probability ( Attneave , 1953 ; Kahneman and Tversky , 1972 ; Kahneman and Tversky , 1973 ) . And second , intermittent \" unlearning \" -though perhaps not of the kind that takes place inside of our transformsis an adaptation that can be essential to cognitive development in general , as evidenced by neuronal pruning in mammals ( Craik and Bialystok , 2006 ; Low and Cheng , 2006 ) . \" Forgetful EM \" strategies that reset subsets of parameters may thus , possibly , be no less relevant to unsupervised learning than is \" partial EM , \" which only suppresses updates , other EM variants ( Neal and Hinton , 1999 ) , or \" dropout training \" ( Hinton et al . , 2012 ; Wang and Manning , 2013 ) , which is important in supervised settings . Future parsing models , in grammar induction , may benefit by modeling head-dependent relations separately from direction . As frequently employed in tasks like semantic role labeling ( Carreras and M\u00e0rquez , 2005 ) and relation extraction ( Sun et al . , 2011 ) , it may be easier to first establish existence , before trying to understand its nature . Other key next steps may include exploring more intelligent ways of combining systems ( Surdeanu and Manning , 2010 ; Petrov , 2010 ) and automating the operator discovery process . 
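The two operator types described above can be sketched schematically on a multinomial parameter table: a transform that "forgets" (re-uniformizes) a subset of learned rows before re-running the local optimizer, and a join that linearly interpolates two locally optimal models. The `run_em` function, the forgetting fraction and the table shape are placeholders, not the paper's grammar-induction machinery.

```python
# Schematic sketch of "transform" and "join" operators over a probability
# table theta (e.g., dependency-arc probabilities). run_em is a placeholder
# for the actual local optimizer; all constants are illustrative.
import numpy as np

def normalize(theta):
    return theta / theta.sum(axis=-1, keepdims=True)

def transform_forget(theta, forget_frac=0.3, rng=np.random.default_rng(0)):
    """Non-random restart: reset a subset of learned rows to uniform."""
    theta = theta.copy()
    rows = rng.choice(theta.shape[0], int(forget_frac * theta.shape[0]),
                      replace=False)
    theta[rows] = 1.0 / theta.shape[1]
    return normalize(theta)

def join_interpolate(theta_a, theta_b, lam=0.5):
    """Merge two local optima by simple linear interpolation."""
    return normalize(lam * theta_a + (1.0 - lam) * theta_b)

def run_em(theta):           # placeholder local optimizer
    return normalize(theta)  # in practice: EM to convergence on the corpus

# One step of an operator network: optimize, perturb by forgetting,
# re-optimize, then join the two resulting local optima.
theta0 = normalize(np.random.default_rng(1).random((50, 20)))
theta1 = run_em(theta0)
theta2 = run_em(transform_forget(theta1))
theta3 = run_em(join_interpolate(theta1, theta2))
```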
Furthermore , we are optimistic that both count transforms and model recombination could be usefully incorporated into sampling methods : although symmetrized models may have higher cross-entropies , hence prone to rejection in vanilla MCMC , they could work well as seeds in multi-chain designs ; existing algorithms , such as MCMCMC ( Geyer , 1991 ) , which switch contents of adjacent chains running at different temperatures , may also benefit from introducing the option to combine solutions , in addition to just swapping them .", "challenge": "Existing local model search methods tend to explore too narrowly or broadly because they are error-prone and initialization sensitive, and finding global optima is expensive.", "approach": "They propose a search heuristic that combines local optimization via EM with non-random restarts which may not resemble their predecessors by using previously found solutions.", "outcome": "The proposed methods show improvements in grammar induction experiments and a newly designed system outperforms the previous state-of-the-art by more than 5%."} +{"id": "N01-1024", "document": "We propose an algorithm to automatically induce the morphology of inflectional languages using only text corpora and no human input . Our algorithm combines cues from orthography , semantics , and syntactic distributions to induce morphological relationships in German , Dutch , and English . Using CELEX as a gold standard for evaluation , we show our algorithm to be an improvement over any knowledge-free algorithm yet proposed . Many NLP tasks , such as building machine-readable dictionaries , are dependent on the results of morphological analysis . While morphological analyzers have existed since the early 1960s , current algorithms require human labor to build rules for morphological structure . In an attempt to avoid this labor-intensive process , recent work has focused on machine-learning approaches to induce morphological structure using large corpora . In this paper , we propose a knowledge-free algorithm to automatically induce the morphology structures of a language . Our algorithm takes as input a large corpus and produces as output a set of conflation sets indicating the various inflected and derived forms for each word in the language . As an example , the conflation set of the word \" abuse \" would contain \" abuse \" , \" abused \" , \" abuses \" , \" abusive \" , \" abusively \" , and so forth . Our algorithm extends earlier approaches to morphology induction by combining various induced information sources : the semantic relatedness of the affixed forms using a Latent Semantic Analysis approach to corpusbased semantics ( Schone and Jurafsky , 2000 ) , affix frequency , syntactic context , and transitive closure . Using the hand-labeled CELEX lexicon ( Baayen , et al . , 1993 ) as our gold standard , the current version of our algorithm achieves an F-score of 88.1 % on the task of identifying conflation sets in English , outperforming earlier algorithms . Our algorithm is also applied to German and Dutch and evaluated on its ability to find prefixes , suffixes , and circumfixes in these languages . To our knowledge , this serves as the first evaluation of complete regular morphological induction of German or Dutch ( although researchers such as Nakisa and Hahn ( 1996 ) have evaluated induction algorithms on morphological sub-problems in German ) . We have illustrated three extensions to our earlier morphology induction work ( Schone and Jurafsky ( 2000 ) ) . 
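The conflation sets described in the abstract above can be assembled as a transitive closure (union-find) over word pairs accepted by some combination of cues. The pair-acceptance test below, a shared-prefix check, is only a stand-in for the combined orthographic, semantic and syntactic scores used in the paper.

```python
# Sketch: build conflation sets as the transitive closure over accepted
# morphological pairs. The shared-prefix test is a stand-in for the real
# combined orthographic/semantic/syntactic scoring.
parent = {}

def find(w):
    parent.setdefault(w, w)
    while parent[w] != w:
        parent[w] = parent[parent[w]]
        w = parent[w]
    return w

def union(a, b):
    ra, rb = find(a), find(b)
    if ra != rb:
        parent[ra] = rb

def related(a, b, min_prefix=4):          # stand-in acceptance rule
    return a[:min_prefix] == b[:min_prefix]

words = ["abuse", "abused", "abuses", "abusive", "abusively",
         "concern", "concerned"]
for i, a in enumerate(words):
    for b in words[i + 1:]:
        if related(a, b):
            union(a, b)

conflation_sets = {}
for w in words:
    conflation_sets.setdefault(find(w), set()).add(w)
print(list(conflation_sets.values()))
# two sets: the "abuse" family and the "concern" family
```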
In addition to induced semantics , we incorporated induced orthographic , syntactic , and transitive information resulting in almost a 20 % relative reduction in overall induction error . We have also extended the work by illustrating performance in German and Dutch where , to our knowledge , complete morphology induction performance measures have not previously been obtained . Lastly , we showed a mechanism whereby circumfixes as well as combinations of prefixing and suffixing can be induced in lieu of the suffixonly strategies prevailing in most previous research . For the future , we expect improvements could be derived by coupling this work , which focuses primarily on inducing regular morphology , with that of Yarowsky and Wicentowski ( 2000 ) , who assume some information about regular morphology in order to induce irregular morphology . We also believe that some findings of this work can benefit other areas of linguistic induction , such as part of speech .", "challenge": "To avoid human labor to build rules, machine-leaning approaches have been studied to induce morphological structures using large corpora.", "approach": "They propose a knowledge-free algorithm that induces the morphology of inflectional languages by combining cues from orthography, semantics, and syntactic distributions.", "outcome": "Their method is applied to German, Dutch, and English, and improves over any knowledge-free algorithm using the CELEX lexicon as a gold standard in English."} +{"id": "2021.emnlp-main.767", "document": "Chatbot is increasingly thriving in different domains , however , because of unexpected discourse complexity and training data sparseness , its potential distrust hatches vital apprehension . Recently , Machine-Human Chatting Handoff ( MHCH ) , predicting chatbot failure and enabling human-algorithm collaboration to enhance chatbot quality , has attracted increasing attention from industry and academia . In this study , we propose a novel model , Role-Selected Sharing Network ( RSSN ) , which integrates both dialogue satisfaction estimation and handoff prediction in one multi-task learning framework . Unlike prior efforts in dialog mining , by utilizing local user satisfaction as a bridge , global satisfaction detector and handoff predictor can effectively exchange critical information . Specifically , we decouple the relation and interaction between the two tasks by the role information after the shared encoder . Extensive experiments on two public datasets demonstrate the effectiveness of our model . Chatbot , as one of the recent palpable AI excitements , has been widely adopted to reduce the cost of customer service ( Qiu et al . , 2017 ; Ram et al . , 2018 ; Zhou et al . , 2020 ) . However , due to the complexity of human conversation , auto-chatbot can hardly meet all users ' needs , while its potential failure perceives skepticism . AI-enabled customer service , for instance , may trigger unexpected business losses because of chatbot failures ( Radziwill and Benton , 2017 ; Rajendran et al . , 2019 ) . Moreover , for chatbot adoption in sensitive areas , such as healthcare ( Chung and Park , 2019 ) and criminal justice ( Wang et al . , 2020a ) , any subtle statistical miscalculation may trigger serious health and legal * Corresponding authors . What a business ! It has been a week ! utter 3 We will ship goods about a week after placing the order . Please be patient . utter 2 Sorry , my dear customer . 
We have pushed the warehouse to ship as soon as possible , and we will compensate the freight for you . utter 7 We will try our best to improve your shopping experience . Thank you for your understanding and patience . consequences . To address this problem , recently , scholars proposed new dialog mining tasks to autoassess dialogue satisfaction , a.k.a . Service Satisfaction Analysis ( SSA ) at dialogue-level ( Song et al . , 2019 ) , and to predict potential chatbot failure via machine-human chatting handoff ( MHCH ) at utterance-level ( Huang et al . , 2018 ; Liu et al . , 2021 ) . In a MHCH context , algorithm can transfer an ongoing auto-dialogue to the human agent when the current utterance is confusing . Figure 1 depicts an exemplar dialogue of online customer service . In this dialogue , the chatbot gives an unsatisfied answer about shipping , thus causing the customer 's complaint ( local dissatisfaction utter 2 and utter 3 ) . Ideally , chatbot should be able to detect the negative ( local ) emotion ( utter 3 ) and tries to appease complaints , but this problem remains unresolved . If chatbot continues , the customer may cancel the deal and give a negative rating ( dialogue global dissatisfaction ) . With MHCH ( detects the risks of utter 2 and utter 3 ) , the dialogue can be transferred to the human agent , who is better at handling , compensating , and comforting the customer and enhance customer satisfaction . This example illustrates the cross-impact between handoff and dialogue ( local+global ) satisfaction . Intuitively , MHCH and SSA tasks can be compatible and complementary given a dialogue discourse , i.e. , the local satisfaction is related to the quality of the conversation ( Bodigutla et al . , 2019a ( Bodigutla et al . , , 2020 ) ) , which can support the handoff judgment and ultimately affect the overall satisfaction . On the one hand , handoff labels of utterances are highly pertinent to local satisfaction , e.g. , one can utilize single handoff information to enhance local satisfaction prediction , which ultimately contributes to the overall satisfaction estimation . On the other hand , the overall satisfaction is obtained by combining local satisfactions , which reflects the quality in terms of answer generation , language understanding , and emotion perception , and subsequently helps to facilitate handoff judgment . In recent years , researchers ( Bodigutla et al . , 2019a , b ; Ultes , 2019 ; Bodigutla et al . , 2020 ) explore joint evaluation of turn and dialogue level qualities in spoken dialogue systems . In terms of general dialogue system , to improve the efficiency of dialogue management , Qin et al . ( 2020 ) propose a co-interactive relation layer to explicitly examine the cross-impact and model the interaction between sentiment classification and dialog act recognition , which are relevant tasks at the same level ( utterancelevel ) . However , MHCH ( utterance-level ) and SSA ( dialogue-level ) target satisfaction at different levels . More importantly , handoff labels of utterances are more comprehensive and pertinent to local satisfaction than sentiment polarities . Meanwhile , customer utterances have significant impacts on the overall satisfaction ( Song et al . , 2019 ) , which motivates us that the role information can be critical for knowledge transfer of these two tasks . 
To address the aforementioned issues , we propose an innovative Role-Selected Sharing Network ( RSSN ) for handoff prediction and dialogue satisfaction estimation , which utilizes role information to selectively characterize complex relations and interactions between two tasks . To the best of our knowledge , it is the pioneer investigation to leverage the multi-task learning approach for integrating MHCH and SSA . In practice , we first adopt a shared encoder to obtain the shared representations of utterances . Inspired by the co-attention mechanism ( Xiong et al . , 2016 ; Qin et al . , 2020 ) , the shared representations are then fed into the roleselected sharing module , which consists of two directional interactions : MHCH to SSA and SSA to MHCH . This module is used to get the fusion of MHCH and SSA representations . We propose the role-selected sharing module based on the hypothesis that the role information can benefit the tasks ' performances . The satisfaction distributions of utterances from different roles ( agent and customer ) are different , and the effects for the tasks are also different . Specifically , the satisfaction of agent is non-negative . The utterances from agent can enrich the context of customer 's utterances and indirectly affect satisfaction polarity . Thus , directly employing local satisfaction of agent into the interaction with handoff may introduce noise . In the proposed role-selected sharing module , we adopt local satisfaction based on the role information : only the local satisfaction from customer can be adopted to interact with handoff information . By this means , we can control knowledge transfer for both tasks and make our framework more explainable . The final integrated outputs are then fed to separate decoders for handoff and satisfaction predictions . To summarize , our contributions are mainly as follows : ( 1 ) We introduce a novel multi-task learning framework for combining machine-human chatting handoff and service satisfaction analysis . ( 2 ) We propose a Role-Selected Sharing Network for handoff prediction and satisfaction rating estimation , which can utilize different role information to control knowledge transfer for both tasks and enhance model performance and explainability . ( 3 ) The experimental results demonstrate that our model outperforms a series of baselines that consists of the state-of-the-art ( SOTA ) models on each task and multi-task learning models for both tasks . To assist other scholars in reproducing the experiment outcomes , we release the codes and the annotated dataset1 . In this paper , we propose an innovative multi-task framework for service satisfaction analysis and machine-human chatting handoff , which deliberately establishes the mutual interrelation for each other . Specifically , we propose a Role-Selected Sharing Network for joint handoff prediction and satisfaction estimation , utilizing role and positional information to control knowledge transfer for both tasks . Extensive experiments and analyses reveal that explicitly modeling the interrelation between the two tasks can boost the performance mutually . However , our model has not been calibrated to account for user preferences and biases , which we plan to address in future work . 
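A schematic sketch of the multi-task setup just described: a shared utterance encoder, a role mask so that only customer turns feed local-satisfaction information into the handoff branch, an utterance-level handoff head and a dialogue-level satisfaction head. The GRU encoder, mean pooling, dimensions and the masked additive fusion are simplifying assumptions; the paper's co-attention interaction is not reproduced here.

```python
# Schematic multi-task sketch (GRU encoder, dimensions and the masked additive
# fusion are assumptions; the co-attention interaction is simplified away).
import torch
import torch.nn as nn

class JointHandoffSatisfaction(nn.Module):
    def __init__(self, utt_dim=128, hid=64, n_handoff=2, n_sat=3):
        super().__init__()
        self.shared = nn.GRU(utt_dim, hid, batch_first=True, bidirectional=True)
        self.sat_local = nn.Linear(2 * hid, 2 * hid)        # local-satisfaction states
        self.handoff_head = nn.Linear(2 * hid, n_handoff)   # per-utterance handoff
        self.sat_head = nn.Linear(2 * hid, n_sat)           # dialogue satisfaction

    def forward(self, utt_embs, customer_mask):
        # utt_embs: (batch, n_utt, utt_dim); customer_mask: (batch, n_utt), 1 = customer turn
        h, _ = self.shared(utt_embs)
        sat_states = torch.tanh(self.sat_local(h))
        # Role selection: only customer turns feed satisfaction info into handoff.
        fused = h + customer_mask.unsqueeze(-1) * sat_states
        handoff_logits = self.handoff_head(fused)            # (batch, n_utt, n_handoff)
        sat_logits = self.sat_head(sat_states.mean(dim=1))   # (batch, n_sat)
        return handoff_logits, sat_logits

model = JointHandoffSatisfaction()
x = torch.randn(2, 7, 128)                       # toy encoded utterances
mask = torch.randint(0, 2, (2, 7)).float()       # toy role mask
handoff_logits, sat_logits = model(x, mask)
loss = (nn.CrossEntropyLoss()(handoff_logits.reshape(-1, 2),
                              torch.zeros(2 * 7, dtype=torch.long))
        + nn.CrossEntropyLoss()(sat_logits, torch.zeros(2, dtype=torch.long)))
loss.backward()
```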
Moreover , we will further explore how to adjust the handoff priority with the assistance of personalized information .", "challenge": "For chatbots to achieve satisfactory user experience, a system needs to model user satisfaction well but existing systems model local uttrance-level and global dialogue-level disjointly. ", "approach": "They propose a Role-Selected Sharing Network which models local dialogue satisfaction estimation and global handoff prediction in one multi-task learning framework.", "outcome": "Experiments with two datasets show the proposed model outperforms existing models for both tasks modeled in multi-task learning showing that interrelation helps performance improvements."} +{"id": "D16-1135", "document": "The state-of-the-art named entity recognition ( NER ) systems are statistical machine learning models that have strong generalization capability ( i.e. , can recognize unseen entities that do not appear in training data ) based on lexical and contextual information . However , such a model could still make mistakes if its features favor a wrong entity type . In this paper , we utilize Wikipedia as an open knowledge base to improve multilingual NER systems . Central to our approach is the construction of high-accuracy , highcoverage multilingual Wikipedia entity type mappings . These mappings are built from weakly annotated data and can be extended to new languages with no human annotation or language-dependent knowledge involved . Based on these mappings , we develop several approaches to improve an NER system . We evaluate the performance of the approaches via experiments on NER systems trained for 6 languages . Experimental results show that the proposed approaches are effective in improving the accuracy of such systems on unseen entities , especially when a system is applied to a new domain or it is trained with little training data ( up to 18.3 F 1 score improvement ) . Named entity recognition ( NER ) is an important NLP task that automatically detects entities in text and classifies them into pre-defined entity types such as persons , organizations , geopolitical entities , locations , events , etc . NER is a fundamental component of many information extraction and knowledge discovery applications , including relation extraction , entity linking , question answering and data mining . The state-of-the-art NER systems are usually statistical machine learning models that are trained with human-annotated data . Popular models include maximum entropy Markov models ( MEMM ) ( McCallum et al . , 2000 ) , conditional random fields ( CRF ) ( Lafferty et al . , 2001 ) and neural networks ( Collobert et al . , 2011 ; Lample et al . , 2016 ) . Such models have strong generalization capability to recognize unseen entities1 based on lexical and contextual information ( features ) . However , a model could still make mistakes if its features favor a wrong entity type , which happens more frequently for unseen entities as we have observed in our experiments . Wikipedia is an open-access , free-content Internet encyclopedia , which has become the de facto on-line source for general reference . A Wikipedia page about an entity normally includes both structured information and unstructured text information , and such information can be used to help determine the entity type of the referred entity . So far there are two classes of approaches that exploit Wikipedia to improve NER . The first class of approaches use Wikipedia to generate features for NER systems , e.g. 
, ( Kazama and Torisawa , 2007 ; Ratinov and Roth , 2009 ; Radford et al . , 2015 ) . Kazama and Torisawa ( 2007 ) try to find the Wikipedia entity for each candidate word sequence and then extract a category label from the first sentence of the Wikipedia entity page . A part-of-speech ( POS ) tagger is used to extract the category label features in the training and decoding phase . Ratinov and Roth ( 2009 ) aggregate several Wikipedia categories into higher-level concept and build a gazetteer on top of it . The two approaches were shown to be able to improve an English NER system . Both approaches , however , are language-dependent because ( Kazama and Torisawa , 2007 ) requires a POS tagger and ( Ratinov and Roth , 2009 ) requires manual category aggregation by inspection of the annotation guidelines and the training set . Radford et al . ( 2015 ) assume that document-specific knowledge base ( e.g. , Wikipedia ) tags for each document are provided , and they use those tags to build gazetteer type features for improving an English NER system . The second class of approaches use Wikipedia to generate weakly annotated data for training multilingual NER systems , e.g. , ( Richman and Schone , 2008 ; Nothman et al . , 2013 ) . The motivation is that annotating multilingual NER data by human is both expensive and time-consuming . Richman and Schone ( 2008 ) utilize the category information of Wikipedia to determine the entity type of an entity based on manually constructed rules ( e.g. , category phrase \" Living People \" is mapped to entity type PERSON ) . Such a rule-based entity type mapping is limited both in accuracy and coverage , e.g. , ( Toral and Muoz , 2006 ) . Nothman et al . ( 2013 ) train a Wikipedia entity type classifier using human-annotated Wikipedia pages . Such a supervised-learning based approach has better accuracy and coverage , e.g. , ( Dakka and Cucerzan , 2008 ) . A number of heuristic rules are developed in both works to label the Wikipedia text to create weakly annotated NER training data . The NER systems trained with the weakly annotated data may achieve similar accuracy compared with systems trained with little human-annotated data ( e.g. , up to 40 K tokens as in ( Richman and Schone , 2008 ) ) , but they are still significantly worse than well-trained systems ( e.g. , a drop of 23.9 F 1 score on the CoNLL data and a drop of 19.6 F 1 score on the BBN data as in ( Nothman et al . , 2013 ) ) . In this paper , we propose a new class of approaches that utilize Wikipedia to improve multilingual NER systems . Central to our approaches is the construction of high-accuracy , high-coverage multilingual Wikipedia entity type mappings . We use weakly annotated data to train an English Wikipedia entity type classifier , as opposed to using humanannotated data as in ( Dakka and Cucerzan , 2008 ; Nothman et al . , 2013 ) . The accuracy of the classifier is further improved via self-training . We apply the classifier on all the English Wikipedia pages and construct an English Wikipedia entity type mapping that includes entities with high classification confidence scores . To build multilingual Wikipedia entity type mappings , we generate weakly annotated classifier training data for another language via projection using the inter-language links of Wikipedia . This approach requires no human annotation or language-dependent knowledge , and thus can be easily applied to new languages . Our goal is to utilize the Wikipedia entity type mappings to improve NER systems . 
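The mapping-construction step described above can be sketched as a simple projection: keep only English Wikipedia titles whose predicted entity type has high classifier confidence, then carry the type over to the target-language title through inter-language links. The classifier scores, link table and the 0.9 threshold below are toy placeholders, not the paper's actual data.

```python
# Sketch: project a high-confidence English Wikipedia entity-type mapping to a
# target language through inter-language links. Scores, links and the threshold
# are toy placeholders.
CONF_THRESHOLD = 0.9

# English title -> (predicted type, classifier confidence)
en_predictions = {
    "Barack Obama": ("PERSON", 0.99),
    "United Nations": ("ORGANIZATION", 0.97),
    "Amazon River": ("LOCATION", 0.95),
    "Amazon (company)": ("ORGANIZATION", 0.55),   # low confidence: dropped
}

# English title -> target-language title (from Wikipedia inter-language links)
interlanguage_links = {
    "Barack Obama": "バラク・オバマ",
    "United Nations": "国際連合",
    "Amazon River": "アマゾン川",
}

en_mapping = {title: t for title, (t, conf) in en_predictions.items()
              if conf >= CONF_THRESHOLD}
target_mapping = {interlanguage_links[title]: t
                  for title, t in en_mapping.items()
                  if title in interlanguage_links}
print(target_mapping)
# {'バラク・オバマ': 'PERSON', '国際連合': 'ORGANIZATION', 'アマゾン川': 'LOCATION'}
```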
A natural approach is to use a mapping to create dictionary type features for training an NER system . In addition , we develop several other approaches . The first approach applies an entity type mapping as a decoding constraint for an NER system . The second approach uses a mapping to post-process the output of an NER system . We also design a robust joint approach that combines the decoding constraint approach and the post-processing approach in a smart way . We evaluate the performance of the Wikipediabased approaches on NER systems trained for 6 languages . We find that when a system is well trained ( e.g. , with 200 K to 300 K tokens of human-annotated data ) , the dictionary feature approach achieves the best improvement over the baseline system ; while when a system is trained with little human-annotated training data ( e.g. , 20 K to 30 K tokens ) , a more aggressive decoding constraint approach achieves the best improvement . In both scenarios , the Wikipediabased approaches are effective in improving the accuracy on unseen entities , especially when a system is applied to a new domain ( 3.6 F 1 score improvement on political party articles / English NER ) or it is trained with little training data ( 18.3 F 1 score improvement on Japanese NER ) . We organize the paper as follows . We describe how to build English Wikipedia entity type mapping in Section 2 and extend it to multilingual mappings in Section 3 . We present several Wikipedia-based approaches for improving NER systems in Section 4 and evaluate their performance in Section 5 . We conclude the paper in Section 6 . In this paper , we proposed and evaluated several approaches that utilize high-accuracy , high-coverage Wikipedia entity type mappings to improve multilingual NER systems . These mappings are built from weakly annotated data , and can be easily extended to new languages with no human annotation or language-dependent knowledge involved . Experimental results show that the Wikipediabased approaches are effective in improving the generalization capability of NER systems . When a system is well trained , the dictionary feature approach achieves the best improvement over the baseline system ; while when a system is trained with little human-annotated training data , a more aggressive decoding constraint approach achieves the best improvement . The improvements are larger on unseen entities , and the approaches are especially useful when a system is applied to a new domain or it is trained with little training data .", "challenge": "Even with strong generalization capability, existing state-of-the-art named entity recognition still favors a wrong entity type and makes mistakes on unseen entities.", "approach": "They propose to use entity type mappings obtained by a weakly supervised classifier based on Wikipedia to improve multilingual named entity recognition systems.", "outcome": "Evaluation in 6 languages shows the proposed method improves over baselines especially on unseen entities and in new domains or is trained with little data."} +{"id": "N19-1327", "document": "We address relation extraction as an analogy problem by proposing a novel approach to learn representations of relations expressed by their textual mentions . In our assumption , if two pairs of entities belong to the same relation , then those two pairs are analogous . Following this idea , we collect a large set of analogous pairs by matching triples in knowledge bases with web-scale corpora through distant supervision . 
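The distant-supervision step mentioned just above (matching knowledge-base triples against a corpus to collect analogous entity pairs) can be sketched as follows. Simple substring matching and the toy triples and corpus are simplifying assumptions; the paper works with web-scale data and proper entity matching.

```python
# Sketch of the distant-supervision step: for each knowledge-base triple,
# collect sentences in which both entities co-occur; these sentences form the
# mention set of the entity pair. Substring matching is a simplification.
from collections import defaultdict

triples = [
    ("Rome", "capitalOf", "Italy"),
    ("Paris", "capitalOf", "France"),
    ("Robert Plant", "memberOf", "Led Zeppelin"),
]
corpus = [
    "Rome is the capital of Italy .",
    "The capital of France is Paris .",
    "Robert Plant is the singer of the band Led Zeppelin .",
    "Paris hosted the 1900 Olympic Games .",
]

mention_sets = defaultdict(list)   # (subject, object) -> list of mentions
pair_relations = {}                # (subject, object) -> relation label
for subj, rel, obj in triples:
    pair_relations[(subj, obj)] = rel
    for sentence in corpus:
        if subj in sentence and obj in sentence:
            mention_sets[(subj, obj)].append(sentence)

# Two pairs sharing a relation (here capitalOf) are treated as analogous,
# giving weakly supervised training examples for the analogy model.
analogies = [(p1, p2) for p1 in mention_sets for p2 in mention_sets
             if p1 != p2 and pair_relations[p1] == pair_relations[p2]]
```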
We leverage this dataset to train a hierarchical siamese network in order to learn entity-entity embeddings which encode relational information through the different linguistic paraphrasing expressing the same relation . We evaluate our model in a one-shot learning task by showing a promising generalization capability in order to classify unseen relation types , which makes this approach suitable to perform automatic knowledge base population with minimal supervision . Moreover , the model can be used to generate pretrained embeddings which provide a valuable signal when integrated into an existing neuralbased model by outperforming the state-ofthe-art methods on a downstream relation extraction task . The task of identifying semantic relationships between entities in unstructured textual corpora , namely Relation Extraction ( RE ) , is often a prerequisite for many other natural language understanding tasks , e.g. automatic knowledge base population , question answering , etc . RE is commonly addressed as a classification task ( Bunescu et al . , 2005 ) , where a model is trained to classify relation mentions in text among a predefined set of relation types . For instance , given the sentence \" Robert Plant is the singer of the band Led Zeppelin \" , an effective RE system might extract the triple memberOf(ROBERT PLANT , LED ZEP-PELIN ) , where memberOf is a relation label expressed by the linguistic context \" is the singer of the band \" . Since a given relation can be expressed using different textual patterns surrounding entities , the state-of-the-art RE models which follow this approach need a considerable amount of examples for each relation to reach satisfactory performance . Distant supervision ( Mintz et al . , 2009 ) instead uses training examples from a knowledge base , guaranteeing a large amount of ( popular ) relation examples without human intervention , which can be used effectively by neural networks ( Lin et al . , 2016 ; Glass et al . , 2018 ) . However , even with this technique , approaching RE as a classification task presents several limitations : ( 1 ) distant supervision models are not accurate in extracting relations with a long-tailed distribution , because they typically have a small set of instances in knowledge bases ; ( 2 ) in most domains , relation types are very specific and only a few examples of each relation are available ; ( 3 ) these models can not be applied to recognize new relation types not observed during training . In this paper , we address RE from a different perspective by reducing it to an analogy problem . Our assumption states that if two pairs of entities , ( A , B ) and ( C , D ) , have at least one relation in common r , then those two pairs are analogous . Viceversa , solving proportional analogies , such as A : B = C : D , consists of identifying the implicit relations shared between two pairs of entities . For example , ROME : ITALY = PARIS : FRANCE is a valid analogy because capitalOf is a relation in common . Based on this idea , we propose an end-to-end neural model able to measure the degree of analogical similarity between two entity pairs , instead of predicting a confidence score for each relation type . An entity pair is represented through its mentions in a textual corpus , sequences of sentences where entities in the pair co-occur . If a mention represents a specific relation type , then this relationship is expressed by the linguistic context surrounding the two entities . E.g. 
, \" Rome is the capital of Italy \" or \" The capital of France is Paris \" referring to the example above . Thus , given two analogous entity pairs represented by their textual mentions sets as input , the model is trained to minimize the difference between the representations of relations having the same linguistic patterns . In other words , the model learns the different paraphrases expressing the same relation . In our research hypothesis , a model trained in such way is able to recognize analogies between unseen entity pairs belonging to new unseen relation types by : ( 1 ) generalizing over the sequence of words in the mentions ; ( 2 ) projecting the sequence of words in the mentions into a vector space representing relational semantics . This approach poses several research questions : ( RQ1 ) How to collect and organize a dataset for training ? ( RQ2 ) What kinds of models are effective for this task ? ( RQ3 ) How should the model be evaluated ? Knowledge bases , such as Wikidata or DBpedia , consist of large relational data sources organized in the form of triples , predicate(SUBJECT , OBJECT ) . We exploit this information to build a reliable set of analogous facts used as ground truth . Then , we adopt distant supervision to retrieve relation mentions in web-scale textual corpora by matching the subject-object entities which co-occur in the same sentences ( Riedel et al . , 2010 ; ElSahar et al . , 2018 ; Glass and Gliozzo , 2018a ) . Through this technique we can train our model on millions of analogy examples without human supervision . Since our goal is to train a model able to compute the relational similarity given two sets of textual mentions , we use siamese networks to learn discriminative features between those two instances ( Hadsell et al . , 2006 ) . This kind of neural network has been used in both computer vision ( Koch et al . , 2015 ) and natural language processing ( Mueller and Thyagarajan , 2016 ; Neculoiu et al . , 2016 ) in order to map two similar instances close in a feature space . However , in our setting each instance consists of a set of mentions , therefore it is inherently a multi-instance learning task 1 . We propose a hierarchical siamese network 1 Due to the weak supervision , the whole set of mentions with an attention mechanism at both word level ( Yang et al . , 2016 ) and at the set level ( Ilse et al . , 2018 ) in order to select the textual mention which better describes the relation . To the best of our knowledge , this is the first application of a siamese network by pairing sets of instances , so it can be considered a novelty of this work . We evaluate the generalization capability of our model in recognizing unseen relation types through an one-shot relational classification task introduced in this paper . We train the parameters of the model on a subset of most frequent relations of one of three different distantly supervised datasets used in our experiments . Then , we evaluate it on the long-tailed relations of each dataset . During the test phase , only a single example for each unseen relation is provided . This example is not used to update the parameters of the model as in a classification task , but rather to produce the vector representation of the relation itself . Entity pairs having mention sets close to this representation are more likely to be analogous . 
The experiments show promising results of our approach on this task , compared with the recent deep models commonly used for encoding textual representations ( Conneau et al . , 2017 ) . However , when the number of the unseen relation types increases , the performance of our model become far from the results obtained in the one-shot image classification ( Koch et al . , 2015 ) , opening an interesting challenge for future work . Finally , our model shows a transfer capability in other tasks through the use of its pre-trained vectors . Indeed , a branch of the hierarchical siamese network can be used to generate entity-entity representations given sets of mentions as input , that we call analogy embeddings . In our experiments , we integrate those representations into an existing end-to-end model based on convolutional networks ( Glass and Gliozzo , 2018b ) , outperfoming the state-of-the-art systems on two shared datasets commonly used for distantly supervised relation extraction . In this paper , we proposed a novel approach to learn representations of relations in text . Alignments between knowledge bases and textual corpora are used as ground truth in order to collect a set of analogies between entity pairs . We designed a hierarchical siamese network trained to recognize those analogies . The experiments showed the two main advantages of our approach . First , the model can generalize on new unseen relation types , obtaining promising results in one-shot learning compared with the state-of-the-art sentence encoders . Second , the model can generate low-rank representations can help existing neuralbased models designed for other tasks . As future work , we plan to continue our investigation by extending the method with other ideas . For instance , the use of positional embeddings , as well as the use of placeholders replacing the entities in the textual mentions are promising future directions . Finally , we plan also to explore the use of analogy embeddings in other tasks , such as question answering and knowledge base population .", "challenge": "Current extraction-based models require many training samples for relation extraction tasks, and classification-based approaches also have several limitations such as non-accurate extraction for long-tailed distributions.", "approach": "They propose to regard the relation extraction task as an analogy problem and develop a hierarchical siamese network that measures analogical similarity between two entities.", "outcome": "The proposed model trained by newly collected analogy pairs outperforms existing models with minimal supervision however it degrates when the number of unseen types increases."} +{"id": "2021.emnlp-main.284", "document": "Biomedical Concept Normalization ( BCN ) is widely used in biomedical text processing as a fundamental module . Owing to numerous surface variants of biomedical concepts , BCN still remains challenging and unsolved . In this paper , we exploit biomedical concept hypernyms to facilitate BCN . We propose Biomedical Concept Normalizer with Hypernyms ( BCNH ) , a novel framework that adopts list-wise training to make use of both hypernyms and synonyms , and also employs norm constraint on the representation of hypernym-hyponym entity pairs . The experimental results show that BCNH outperform the previous state-of-the-art model on the NCBI dataset . Code will be available at https://github.com/ yan-cheng / BCNH . 
Biomedical Concept Normalization ( BCN ) plays an important and prerequisite role in biomedical text processing . The goal of BCN is to link the entity mention in the context to its normalized CUI ( Unique Concept Identifier ) in the biomedical dictionaries such as UMLS ( Bodenreider , 2004 ) , SNOMED-CT ( Spackman et al . , 1997 ) and MedDRA ( Brown et al . , 1999 ) . Figure 1 is an example of BCN from NCBI dataset ( Dogan et al . , 2014 ) , the mention B-cell non-Hodgkins lymphomas should be linked to D016393 Lymphoma , B-Cell in the MEDIC ( Davis et al . , 2012 ) dictionary . Recent works on BCN usually adopt encoders like CNN ( Li et al . , 2017 ) , LSTM ( Phan et al . , 2019 ) , ELMo ( Peters et al . , 2018 ; Schumacher et al . , 2020 ) or BioBERT ( Lee et al . , 2020 ; Fakhraei et al . , 2019 ; Ji et al . , 2020 ) to embed both the mention and the concept 's name entities , and then feed the representations to the following classifier or ranking network to determine the corresponding concept in the biomedical dictionary . However , biomedical dictionaries are generally sparse in nature : a concept is usually provided with only CUI , referred name ( recommended concept name string ) , synonyms ( acceptable name variants , synonyms ) , and related concepts ( mainly hypernym concepts ) . Therefore , effectively using the limited information in the biomedical dictionary where the candidate entities came from is paramount for the BCN task . For concept 's synonym entities , recent BNE ( Phan et al . , 2019 ) and BIOSYN ( Sung et al . , 2020 ) tries to make full use of them by synonym marginalization to enhance biomedical entity representation and achieved consistent performance improvement . Unfortunately , previous works generally ignore concept hypernym hierarchy structure , which is exactly the initial motivation of biomedical dictionary : organization of thousands of concepts under a unified and multi-level hierarchical classification schema . We believe that leveraging hypernym information in the biomedical dictionary can improve the BCN performance based on two intuitions . First , hard negative sampling ( Fakhraei et al . , 2019 ; Phan et al . , 2019 ) is vital for the BCN model 's discriminating ability and a hypernym is a hard negative example for its hyponym naturally . Second , injecting the hypernym hierarchy information during the training process is beneficial for encoders , since currently used encoders like BioBERT only encodes the context semantics in biomedical corpora instead of the biomedical concept structural information . To this end , we propose Biomedical Concept Normalizer with Hypernyms ( BCNH ) , a novel framework combining the list-wise cross entropy loss with norm constraint on hypernym-hyponym entity pairs . Concretely , we reformulate the candidate target list as a three-level relevance list to consider both synonyms and hypernyms , and apply the list-wise cross entropy loss . On the one hand , synonyms help to encode surface name variants , on the other hand , hypernyms help encode hierarchical structural information . We also apply the norm constraint on the embedding of hypernym-hyponym entity pairs to further preserve the principal hypernym relation . Specifically , for a hypernym-hyponym entity pair ( e hyper , e hypo ) , we constraint that the norm of hypernym entity e hyper is larger than that of e hyper in a multi-task manner . We conduct experiments on the NCBI dataset and outperforms the previous state-of-the-art model . 
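The two training signals described above can be sketched directly: a list-wise cross entropy against a three-level relevance list over the candidate entities, plus a hinge-style norm constraint pushing the hypernym embedding norm above the hyponym norm. The margin, the soft relevance weighting and the toy tensors are illustrative assumptions rather than the paper's exact formulation.

```python
# Sketch of the two training signals: list-wise loss over a three-level
# relevance list plus the hypernym-hyponym norm constraint. Margin, weighting
# and toy tensors are assumptions for illustration.
import torch
import torch.nn.functional as F

def listwise_loss(scores, relevance):
    # scores: (n_candidates,) similarity of the mention to each candidate entity
    # relevance: (n_candidates,) e.g. 2 = synonym of gold, 1 = hypernym, 0 = other
    target = relevance.float() / relevance.sum()   # soft relevance distribution
    return -(target * F.log_softmax(scores, dim=-1)).sum()

def norm_constraint_loss(e_hyper, e_hypo, margin=0.1):
    # Encourage the hypernym embedding to have the larger norm.
    return F.relu(margin + e_hypo.norm(dim=-1) - e_hyper.norm(dim=-1)).mean()

scores = torch.randn(10, requires_grad=True)
relevance = torch.tensor([2, 1, 1, 0, 0, 0, 0, 0, 0, 0])
e_hyper = torch.randn(4, 128, requires_grad=True)
e_hypo = torch.randn(4, 128, requires_grad=True)

loss = listwise_loss(scores, relevance) + 0.5 * norm_constraint_loss(e_hyper, e_hypo)
loss.backward()
```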
To sum up , the contributions of this paper are as follows . First , for the first time , we reformulate the candidate target list as a three-level relevance list and apply the list-wise loss to attend to all candidate entities . Second , we innovatively use a norm constraint to model the hypernym-hyponym relation , preserving the hierarchical structure information inside the entity representation . The proposed BCNH outperforms the previous state-of-the-art model on the NCBI dataset , leading to an improvement of 0.73 % on top-1 accuracy . In this paper , we propose BCNH to leverage hypernyms in the biomedical concept normalization task . We adopt both list-wise training and a norm constraint with the help of hypernym information . The experimental results on the NCBI dataset show that BCNH outperforms previous state-of-the-art models .", "challenge": "Previous works on Biomedical Concept Normalization, which link entity mentions in context to their normalized concept identifiers in biomedical dictionaries, ignore the concept hypernym hierarchy structure.", "approach": "They propose to adopt list-wise training to make use of both hypernyms and synonyms and also employ norm constraints on representations of hypernym-hyponym entity pairs.", "outcome": "The proposed method outperforms the state-of-the-art model on the NCBI dataset by 0.73% on top-1 accuracy."} +{"id": "P15-1042", "document": "Sentiment classification performance relies on high-quality sentiment resources . However , these resources are imbalanced across different languages . Cross-language sentiment classification ( CLSC ) can leverage the rich resources in one language ( source language ) for sentiment classification in a resource-scarce language ( target language ) . Bilingual embeddings could eliminate the semantic gap between two languages for CLSC , but ignore the sentiment information of text . This paper proposes an approach to learning bilingual sentiment word embeddings ( BSWE ) for English-Chinese CLSC . The proposed BSWE incorporate sentiment information of text into bilingual embeddings . Furthermore , we can learn high-quality BSWE by simply employing labeled corpora and their translations , without relying on large-scale parallel corpora . Experiments on the NLP&CC 2013 CLSC dataset show that our approach outperforms the state-of-the-art systems . Sentiment classification is the task of predicting the sentiment polarity of text , which has attracted considerable interest in the NLP field . To date , a number of corpus-based approaches ( Pang et al . , 2002 ; Pang and Lee , 2004 ; Kennedy and Inkpen , 2006 ) have been developed for sentiment classification . These approaches heavily rely on the quality and quantity of the labeled corpora , which are considered the most valuable resources in the sentiment classification task . However , such sentiment resources are imbalanced across different languages . To leverage resources in the source language to improve the sentiment classification performance in the target language , cross-language sentiment classification ( CLSC ) approaches have been investigated . The traditional CLSC approaches employ machine translation ( MT ) systems to translate corpora in the source language into the target language , and train the sentiment classifiers in the target language ( Banea et al . , 2008 ) . Directly employing the translated resources for sentiment classification in the target language is simple and could get acceptable results . 
However , the gap between the source language and target language inevitably impacts the performance of sentiment classification . To improve the classification accuracy , multiview approaches have been proposed . In these approaches , the resources in the source language and their translations in the target language are both used to train sentiment classifiers in two independent views ( Wan , 2009 ; Gui et al . , 2013 ; Zhou et al . , 2014a ) . The final results are determined by ensemble classifiers in these two views to overcome the weakness of monolingual classifiers . However , learning language-specific classifiers in each view fails to capture the common sentiment information of two languages during training process . With the revival of interest in deep learning ( Hinton and Salakhutdinov , 2006 ) , shared deep representations ( or embeddings ) ( Bengio et al . , 2013 ) are employed for CLSC ( Chandar A P et al . , 2013 ) . Usually , paired sentences from parallel corpora are used to learn word embeddings across languages ( Chandar A P et al . , 2013 ; Chandar A P et al . , 2014 ) , eliminating the need of MT systems . The learned bilingual embeddings could easily project the training data and test data into a common space , where training and testing are performed . However , high-quality bilingual embeddings rely on the large-scale task-related parallel corpora , which are not always readily available . Meanwhile , though semantic similarities across languages are captured during bilingual embedding learning process , sentiment information of text is ignored . That is , bilingual embeddings learned from unlabeled parallel corpora are not effective enough for CLSC because of a lack of explicit sentiment information . Tang and Wan ( 2014 ) first proposed a bilingual sentiment embedding model using the original training data and the corresponding translations through a linear mapping rather than deep learning technique . This paper proposes a denoising autoencoder based approach to learning bilingual sentiment word embeddings ( BSWE ) for CLSC , which incorporates sentiment polarities of text into the bilingual embeddings . The proposed approach learns BSWE with the original labeled documents and their translations instead of parallel corpora . The BSWE learning process consists of two phases : the unsupervised phase of semantic learning and the supervised phase of sentiment learning . In the unsupervised phase , sentiment words and their negation features are extracted from the source training data and their translations to represent paired documents . These features are used as inputs for a denoising autoencoder to learn the bilingual embeddings . In the supervised phase , sentiment polarity labels of documents are used to guide BSWE learning for incorporating sentiment information into the bilingual embeddings . The learned BSWE are applied to project English training data and Chinese test data into a common space . In this space , a linear support vector machine ( SVM ) is used to perform training and testing . The experiments are carried on NLP&CC 2013 CLSC dataset , including book , DVD and music categories . Experimental results show that our approach achieves 80.68 % average accuracy , which outperforms the state-of-the-art systems on this dataset . Although the BSWE are only evaluated on English-Chinese CLSC here , it can be popularized to many other languages . 
The major contributions of this work can be summarized as follows : \u2022 We propose bilingual sentiment word embeddings ( BSWE ) for CLSC based on deep learning techniques . Experimental results show that the proposed BSWE significantly outperform the bilingual embeddings by incorporating sentiment information . \u2022 Instead of large-scale parallel corpora , only the labeled English corpora and English-to-Chinese translations are required for BSWE learning . It is proved that , in spite of the small scale of the training set , our approach outperforms the state-of-the-art systems in the NLP&CC 2013 CLSC shared task . \u2022 We employ sentiment words and their negation features rather than all words in documents to learn sentiment-specific embeddings , which significantly reduces the dimension of the input vectors as well as improves sentiment classification performance . This paper proposes an approach to learning BSWE by incorporating sentiment information into the bilingual embeddings for CLSC . The proposed approach learns BSWE with the labeled documents and their translations rather than parallel corpora . In addition , BDR is proposed to enhance the sentiment expression ability , which combines English and Chinese representations . Experiments on the NLP&CC 2013 CLSC dataset show that our approach outperforms the previous state-of-the-art systems as well as traditional bilingual embedding systems . The proposed BSWE are only evaluated on English-Chinese CLSC in this paper , but they can be popularized to other languages . Both semantic and sentiment information play an important role in sentiment classification . In the following work , we will further investigate the relationship between semantic and sentiment information for CLSC , and balance their functions to optimize their combination for CLSC .", "challenge": "Bilingual word embeddings trained on unlabeled corpora lack explicit sentiment information for sentiment classification, and task-related parallel corpora are not always available.", "approach": "They propose to first extract sentiment words from the training data and their translations, and then use the results to guide word embeddings to incorporate sentiment information.", "outcome": "The proposed method achieves 80.68% average accuracy on the NLP&CC 2013 CLSC dataset, which outperforms the state-of-the-art and traditional bilingual embedding systems."} +{"id": "P13-1096", "document": "During real-life interactions , people naturally gesture and modulate their voice to emphasize specific points or to express their emotions . With the recent growth of social websites such as YouTube , Facebook , and Amazon , video reviews are emerging as a new source of multimodal and natural opinions that has been left almost untapped by automatic opinion analysis techniques . This paper presents a method for multimodal sentiment classification , which can identify the sentiment expressed in utterance-level visual datastreams . Using a new multimodal dataset consisting of sentiment annotated utterances extracted from video reviews , we show that multimodal sentiment analysis can be effectively performed , and that the joint use of visual , acoustic , and linguistic modalities can lead to error rate reductions of up to 10.5 % as compared to the best performing individual modality . Video reviews represent a growing source of consumer information that has gained increasing interest from companies , researchers , and consumers . 
Popular web platforms such as YouTube , Amazon , Facebook , and ExpoTV have reported a significant increase in the number of consumer reviews in video format over the past five years . Compared to traditional text reviews , video reviews provide a more natural experience as they allow the viewer to better sense the reviewer 's emotions , beliefs , and intentions through richer channels such as intonations , facial expressions , and body language . Much of the work to date on opinion analysis has focused on textual data , and a number of resources have been created including lexicons ( Wiebe and Riloff , 2005 ; Esuli and Sebastiani , 2006 ) or large annotated datasets ( Maas et al . , 2011 ) . Given the accelerated growth of other media on the Web and elsewhere , which includes massive collections of videos ( e.g. , YouTube , Vimeo , VideoLectures ) , images ( e.g. , Flickr , Picasa ) , audio clips ( e.g. , podcasts ) , the ability to address the identification of opinions in the presence of diverse modalities is becoming increasingly important . This has motivated researchers to start exploring multimodal clues for the detection of sentiment and emotions in video content ( Morency et al . , 2011 ; Wagner et al . , 2011 ) . In this paper , we explore the addition of speech and visual modalities to text analysis in order to identify the sentiment expressed in video reviews . Given the non homogeneous nature of full-video reviews , which typically include a mixture of positive , negative , and neutral statements , we decided to perform our experiments and analyses at the utterance level . This is in line with earlier work on text-based sentiment analysis , where it has been observed that full-document reviews often contain both positive and negative comments , which led to a number of methods addressing opinion analysis at sentence level . Our results show that relying on the joint use of linguistic , acoustic , and visual modalities allows us to better sense the sentiment being expressed as compared to the use of only one modality at a time . Another important aspect of this paper is the introduction of a new multimodal opinion database annotated at the utterance level which is , to our knowledge , the first of its kind . In our work , this dataset enabled a wide range of multimodal sentiment analysis experiments , addressing the relative importance of modalities and individual features . The following section presents related work in text-based sentiment analysis and audio-visual emotion recognition . Section 3 describes our new multimodal datasets with utterance-level sentiment annotations . Section 4 presents our multimodal sen-timent analysis approach , including details about our linguistic , acoustic , and visual features . Our experiments and results on multimodal sentiment classification are presented in Section 5 , with a detailed discussion and analysis in Section 6 . In this paper , we presented a multimodal approach for utterance-level sentiment classification . We introduced a new multimodal dataset consisting AU6 AU12 AU45 AUs 1,1 + 4 of sentiment annotated utterances extracted from video reviews , where each utterance is associated with a video , acoustic , and linguistic datastream . Our experiments show that sentiment annotation of utterance-level visual datastreams can be effectively performed , and that the use of multiple modalities can lead to error rate reductions of up to 10.5 % as compared to the use of one modality at a time . 
In future work , we plan to explore alternative multimodal fusion methods , such as decision-level and meta-level fusion , to improve the integration of the visual , acoustic , and linguistic modalities .", "challenge": "The growth of media with large collections of videos, images, and audio clips calls for the ability to identify opinions in such diverse modalities.", "approach": "They propose a method for multimodal sentiment classification that identifies the sentiment expressed in utterance-level visual datastreams, and a dataset of sentiment-annotated utterances from video reviews.", "outcome": "They show that combining visual, acoustic, and linguistic modalities can reduce error rates by up to 10.5% compared to the best performing single-modality model."} +{"id": "D10-1060", "document": "In modern machine translation practice , a statistical phrasal or hierarchical translation system usually relies on a huge set of translation rules extracted from bi-lingual training data . This approach not only results in space and efficiency issues , but also suffers from the sparse data problem . In this paper , we propose to use factorized grammars , an idea widely accepted in the field of linguistic grammar construction , to generalize translation rules , so as to solve these two problems . We designed a method to take advantage of the XTAG English Grammar to facilitate the extraction of factorized rules . We experimented on various setups of low-resource language translation , and showed consistent significant improvement in BLEU over state-of-the-art string-to-dependency baseline systems with 200 K words of bi-lingual training data . A statistical phrasal ( Koehn et al . , 2003 ; Och and Ney , 2004 ) or hierarchical ( Chiang , 2005 ; Marcu et al . , 2006 ) machine translation system usually relies on a very large set of translation rules extracted from bi-lingual training data with heuristic methods on word alignment results . According to our own experience , we obtain about 200 GB of rules from training data of about 50 M words on each side . This immediately becomes an engineering challenge on space and search efficiency . A common practice to circumvent this problem is to filter the rules based on development sets in the step of rule extraction or before the decoding phase , instead of building a real distributed system . However , this strategy only works for research systems , for which the segments for translation are always fixed . However , do we really need such a large rule set to represent information from training data of much smaller size ? Linguists in the grammar construction field already showed us a perfect solution to a similar problem . The answer is to use a factorized grammar . Linguists decompose lexicalized linguistic structures into two parts , ( unlexicalized ) templates and lexical items . Templates are further organized into families . Each family is associated with a set of lexical items which can be used to lexicalize all the templates in this family . For example , the XTAG English Grammar ( XTAG-Group , 2001 ) , a hand-crafted grammar based on the Tree Adjoining Grammar ( TAG ) ( Joshi and Schabes , 1997 ) formalism , is a grammar of this kind , which employs factorization with LTAG e-tree templates and lexical items . Factorized grammars not only relieve the burden on space and search , but also alleviate the sparse data problem , especially for low-resource language translation with little training data . 
With a factored model , we do not need to observe exact \" template - lexical item \" occurrences in training . New rules can be generated from template families and lexical items either offline or on the fly , explicitly or implicitly . In fact , the factorization approach has been successfully applied on the morphological level in a previous study on MT ( Koehn and Hoang , 2007 ) . In this work , we will go further to investigate the factorization of rule structures by exploiting the rich XTAG English Grammar . We evaluate the effect of using factorized translation grammars on various setups of low-resource language translation , since low-resource MT suffers greatly from the poor generalization capability of translation rules . With the help of high-level linguistic knowledge for generalization , factorized grammars provide consistent significant improvement in BLEU ( Papineni et al . , 2001 ) over string-to-dependency baseline systems with 200 K words of bi-lingual training data . This work also closes the gap between compact hand-crafted translation rules and large-scale unorganized automatic rules . This may lead to a more effective and efficient statistical translation model that could better leverage generic linguistic knowledge in MT . In the rest of this paper , we will first provide a short description of our baseline system in Section 2 . Then , we will introduce factorized translation grammars in Section 3 . We will illustrate the use of the XTAG English Grammar to facilitate the extraction of factorized rules in Section 4 . Implementation details are provided in Section 5 . Experimental results are reported in Section 6 . In this paper , we proposed a novel statistical machine translation model using a factorized structure-based translation grammar . This model not only alleviates the sparse data problem but also relieves the burden on space and search , both of which are imminent issues for the popular phrasal and/or hierarchical MT systems . We took low-resource language translation , especially X-to-English translation tasks , as a case study . We designed a method to exploit family information in the XTAG English Grammar to facilitate the extraction of factorized rules . We tested the new model on low-resource translation , and the use of factorized models showed significant improvement in BLEU on systems with 200 K words of bi-lingual training data of various language pairs and genres . The factorized translation grammar proposed here shows an interesting way of using richer syntactic resources , with high potential for future research . In the future , we will explore various learning methods for better estimation of families , templates and lexical items . The target linguistic knowledge that we used in this paper will provide a nice starting point for unsupervised learning algorithms . We will also try to further exploit the factorized representation with discriminative learning . 
Features defined on templates and families will have good generalization capability .", "challenge": "Existing statistical phrasal or hierarchical machine translation system relies on a large set of translation rules which results in engineering challenges.", "approach": "They propose to use factorized grammar from the field of linguistics as more general translation rules from XTAG English Grammar.", "outcome": "The proposed method consistently outperforms existing methods in BLEU on various low-resource language translation tasks with less training data."} +{"id": "P06-1071", "document": "Recent developments in statistical modeling of various linguistic phenomena have shown that additional features give consistent performance improvements . Quite often , improvements are limited by the number of features a system is able to explore . This paper describes a novel progressive training algorithm that selects features from virtually unlimited feature spaces for conditional maximum entropy ( CME ) modeling . Experimental results in edit region identification demonstrate the benefits of the progressive feature selection ( PFS ) algorithm : the PFS algorithm maintains the same accuracy performance as previous CME feature selection algorithms ( e.g. , Zhou et al . , 2003 ) when the same feature spaces are used . When additional features and their combinations are used , the PFS gives 17.66 % relative improvement over the previously reported best result in edit region identification on Switchboard corpus ( Kahn et al . , 2005 ) , which leads to a 20 % relative error reduction in parsing the Switchboard corpus when gold edits are used as the upper bound . Conditional Maximum Entropy ( CME ) modeling has received a great amount of attention within natural language processing community for the past decade ( e.g. , Berger et al . , 1996 ; Reynar and Ratnaparkhi , 1997 ; Koeling , 2000 ; Malouf , 2002 ; Zhou et al . , 2003 ; Riezler and Vasserman , 2004 ) . One of the main advantages of CME modeling is the ability to incorporate a variety of features in a uniform framework with a sound mathematical foundation . Recent improvements on the original incremental feature selection ( IFS ) algorithm , such as Malouf ( 2002 ) and Zhou et al . ( 2003 ) , greatly speed up the feature selection process . However , like many other statistical modeling algorithms , such as boosting ( Schapire and Singer , 1999 ) and support vector machine ( Vapnik 1995 ) , the algorithm is limited by the size of the defined feature space . Past results show that larger feature spaces tend to give better results . However , finding a way to include an unlimited amount of features is still an open research problem . In this paper , we propose a novel progressive feature selection ( PFS ) algorithm that addresses the feature space size limitation . The algorithm is implemented on top of the Selective Gain Computation ( SGC ) algorithm ( Zhou et al . , 2003 ) , which offers fast training and high quality models . Theoretically , the new algorithm is able to explore an unlimited amount of features . Because of the improved capability of the CME algorithm , we are able to consider many new features and feature combinations during model construction . To demonstrate the effectiveness of our new algorithm , we conducted a number of experiments on the task of identifying edit regions , a practical task in spoken language processing . 
Based on the convention from Shriberg ( 1994 ) and Charniak and Johnson ( 2001 ) , a disfluent spoken utterance is divided into three parts : the reparandum , the part that is repaired ; the inter-regnum , which can be filler words or empty ; and the repair / repeat , the part that replaces or repeats the reparandum . The first two parts combined are called an edit or edit region . An example is shown below : interregnum It is , you know , this is a tough problem . This paper presents our progressive feature selection algorithm that greatly extends the feature space for conditional maximum entropy modeling . The new algorithm is able to select features from feature space in the order of tens of millions in practice , i.e. , 8 times the maximal size previous algorithms are able to process , and unlimited space size in theory . Experiments on edit region identification task have shown that the increased feature space leads to 17.66 % relative improvement ( or 3.85 % absolute ) over the best result reported by Kahn et al . ( 2005 ) , and 10.65 % relative improvement ( or 2.14 % absolute ) over the new baseline SGC algorithm with all the variables from Zhang and Weng ( 2005 ) . We also show that symbolic prosody labels together with confidence scores are useful in edit region identification task . In addition , the improvements in the edit identification lead to a relative 20 % error reduction in parsing disfluent sentences when gold edits are used as the upper bound .", "challenge": "Although it is known models with a larger number of features can achieve better performance, applicable feature space is limited with most models.", "approach": "They propose a progressive feature selection algorithm that allows a conditional maximum entropy model to be efficiently trained with an unlimited number of features.", "outcome": "The proposed training algorithm can improve over baseline models by increasing feature space on the edit region identification task."} +{"id": "N15-1065", "document": "This study tackles the problem of paraphrase acquisition : achieving high coverage as well as accuracy . Our method first induces paraphrase patterns from given seed paraphrases , exploiting the generality of paraphrases exhibited by pairs of lexical variants , e.g. , \" amendment \" and \" amending , \" in a fully empirical way . It then searches monolingual corpora for new paraphrases that match the patterns . This can extract paraphrases comprising words that are completely different from those of the given seeds . In experiments , our method expanded seed sets by factors of 42 to 206 , gaining 84 % to 208 % more coverage than a previous method that generalizes only identical word forms . Human evaluation through a paraphrase substitution test demonstrated that the newly acquired paraphrases retained reasonable quality , given substantially high-quality seeds . One of the characteristics of human languages is that the same semantic content can be expressed using several different linguistic expressions , i.e. , paraphrases . Dealing with paraphrases is an important issue in a broad range of natural language processing ( NLP ) tasks ( Madnani and Dorr , 2010 ; Androutsopoulos and Malakasiotis , 2010 ) . To adequately and robustly deal with paraphrases , a large-scale knowledge base containing words and phrases having approximately the same meaning is indispensable . 
Thus , the task of automatically creating such large-scale paraphrase lexicons has been drawing the attention of many researchers ( see Section 2 for details ) . The challenge is to en-sure substantial coverage along with high accuracy despite the natural tension between these factors . Among the different types of language resources , monolingual corpora1 offer the largest coverage , but the quality of the extracted candidates is generally rather low . The difficulty lies in the manner of distinguishing paraphrases from expressions that stand in different semantic relations , e.g. , antonyms and sibling words , using only the statistics estimated from such corpora . In contrast , highly accurate paraphrases can be extracted from parallel or comparable corpora , but their coverage is limited owing to the limited availability of such corpora for most languages . This study aims to improve coverage while maintaining accuracy . To that end , we propose a method that exploits the generality exhibited by pairs of lexical variants . Given a seed set of paraphrase pairs , our method first induces paraphrase patterns by generalizing not only identical word forms ( Fujita et al . , 2012 ) but also pairs of lexical variants . For instance , from a seed pair ( 1a ) , a pattern ( 1b ) is acquired , where the pair of lexical variants ( \" amendment \" , \" amending \" ) and the shared word form \" regulation \" are generalized . ( 1 ) a. amendment of regulation \u21d4 amending regulation b. X : ment of Y : \u03d5 \u21d4 X : ing Y : \u03d5 With such patterns , new paraphrase pairs that would have been missed using only the surface forms are extracted from a monolingual corpus . Obtainable pairs can include those comprising words that are completely different from those of the seed paraphrases , e.g. , ( 2a ) and ( 2b ) . ( 2 ) a. investment of resources \u21d4 investing resources b. recruitment of engineers \u21d4 recruiting engineers While the generality underlying paraphrases has been exploited either by handcrafted rules ( Harris , 1957 ; Mel'\u010duk and Polgu\u00e8re , 1987 ; Jacquemin , 1999 ; Fujita et al . , 2007 ) or by data-driven techniques ( Ganitkevitch et al . , 2011 ; Fujita et al . , 2012 ) , we still lack a robust and accurate way of identifying various types of lexical variants . Our method tackles this issue using affix patterns that are also acquired from high-quality seed paraphrases in a fully empirical way . Consequently , our method has the potential to apply to many languages . We proposed a method for expanding given paraphrase lexicons by first inducing paraphrase patterns and then searching monolingual corpora with these patterns for new paraphrase pairs . To the best of our knowledge , this is the first attempt to exploit various types of lexical variants for acquiring paraphrases in a completely empirical way . Our method requires minimal language-dependent resources , i.e. , stoplists and tokenizers , other than raw corpora . We demonstrated the quantitative impact of our method and confirmed the potential quality of the expanded paraphrase lexicon . Our future work is four-fold . ( i ) Paraphrase lexicons created by different methods and sources have different properties . Designing an overall model to harmonize such heterogeneous lexicons is an important issue . ( ii ) We aim to investigate an extensive collection of corpora : there are far more corpora than those we used in this experiment . 
We are also interested in expanding paraphrase lexicons created by a method other than bilingual pivoting ; for instance , those extracted from a Web-harvested monolingual comparable corpus ( Hashimoto et al . , 2011 ; Yan et al . , 2013 ) . ( iii ) We will apply our method to various languages for demonstrating its applicability , extending it for a wider range of lexical variants depending on the targeted language . ( iv ) Paraphrases are the fundamental linguistic phenomena that affect a wide range of NLP tasks . We are therefore interested in determining to what extent our paraphrase lexicons can improve the performance of application tasks such as machine translation , text summarization , and text simplification .", "challenge": "For paraphrase acquisition, monolingual corpus achieves high coverage but quality is low and parallel or comparable corpus can keep quality high but coverage is low.", "approach": "They propose to first induce paraphrase patterns from high quality seed paraphrases and then perform a search over a monolingual corpus with obtained patterns.", "outcome": "The proposed method achieves up to 208% more coverage than a previous method and human evaluation shows the quality is reasonably high."} +{"id": "D15-1311", "document": "Social media is a rich source of rumours and corresponding community reactions . Rumours reflect different characteristics , some shared and some individual . We formulate the problem of classifying tweet level judgements of rumours as a supervised learning task . Both supervised and unsupervised domain adaptation are considered , in which tweets from a rumour are classified on the basis of other annotated rumours . We demonstrate how multi-task learning helps achieve good results on rumours from the 2011 England riots . There is an increasing need to interpret and act upon rumours spreading quickly through social media , especially in circumstances where their veracity is hard to establish . For instance , during an earthquake in Chile rumours spread through Twitter that a volcano had become active and that there was a tsunami warning in Valparaiso ( Mendoza et al . , 2010 ) . Other examples , from the riots in England in 2011 , were that rioters were going to attack Birmingham 's children hospital and that animals had escaped from the zoo ( Procter et al . , 2013 ) . Social scientists ( Procter et al . , 2013 ) analysed manually a sample of tweets expressing different judgements towards rumours and categorised them manually in supporting , denying or questioning . The goal here is to carry out tweet-level judgement classification automatically , in order to assist in ( near ) real-time rumour monitoring by journalists and authorities ( Procter et al . , 2013 ) . In addition , information about tweet-level judgements has been used as a first step for early rumour detection by ( Zhao et al . , 2015 ) . The focus here is on tweet- ( Qazvinian et al . , 2011 ) or proposed regular expressions as a solution ( Zhao et al . , 2015 ) . We expect posts expressing similar opinions to exhibit many similar characteristics across different rumours . Based on the assumption of a common underlying linguistic signal , we build a transfer learning system that labels newly emerging rumours for which we have little or no annotated data . Results demonstrate that Gaussian Processbased multi task learning allows for significantly improved performance . The novel contributions of this paper are : 1 . 
Formulating the problem of classifying judgements of rumours in both supervised and unsupervised domain adaptation settings . 2 . Showing how a multi-task learning approach outperforms singletask methods . This paper investigated the problem of classifying judgements expressed in tweets about rumours . First , we considered a setting where no training data from target rumour is available ( LOO ) . Without access to annotated examples of the target rumour the learning problem becomes very difficult . We showed that in the supervised domain adaptation setting ( LPO ) even annotating a small number of tweets helps to achieve better results . Moreover , we demonstrated the benefits of a multi task learning approach , as well as that Brown cluster features are more useful for the task than simple bag of words . Judgement estimation is undoubtedly of great value e.g. for marketing , politics and journalism , helping to target widely believed topics . Although the focus here is on classifying community reactions , Castillo et al . ( 2013 ) showed that community reaction is correlated with actual rumour veracity . Consequently our classification methods may prove useful in the broader and more challenging task of annotating veracity . An interesting direction for future work would be adding non-textual features . For example , the rumour diffusion pattern ( Lukasik et al . , 2015 ) may be a useful cue for judgement classification .", "challenge": "Currently, social scientists manually judge and categorize tweets about rumours to analyze, and building automatic systems is challenging without annotated examples.", "approach": "They regard the tweet rumour judgement classification problem as a supervised learning task and apply domain adaptation with annotated data from other domains.", "outcome": "The evaluation with rumours of England riots shows that the proposed multi-task learning performs well and providing few annotated tweets helps improve results."} +{"id": "P18-1164", "document": "In neural machine translation , a source sequence of words is encoded into a vector from which a target sequence is generated in the decoding phase . Differently from statistical machine translation , the associations between source words and their possible target counterparts are not explicitly stored . Source and target words are at the two ends of a long information processing procedure , mediated by hidden states at both the source encoding and the target decoding phases . This makes it possible that a source word is incorrectly translated into a target word that is not any of its admissible equivalent counterparts in the target language . Neural machine translation ( NMT ) is an endto-end approach to machine translation that has achieved competitive results vis-a-vis statistical machine translation ( SMT ) on various language pairs ( Bahdanau et al . , 2015 ; Cho et al . , 2014 ; Sutskever et al . , 2014 ; Luong and Manning , 2015 ) . In NMT , the sequence-to-sequence ( seq2seq ) model learns word embeddings for both source and target words synchronously . However , as illustrated in Figure 1 , source and target word embeddings are at the two ends of a long information processing procedure . The individual associations between them will gradually become loose due to the separation of source-side hidden states ( represented by h 1 , . . . , h T in Fig . 1 ) and a target- side hidden state ( represented by s t in Fig . 1 ) . 
As a result , in the absence of a more tight interaction between source and target word pairs , the seq2seq model in NMT produces tentative translations that contain incorrect alignments of source words with target counterparts that are non-admissible equivalents in any possible translation context . Differently from SMT , in NMT an attention model is adopted to help align output with input words . The attention model is based on the estimation of a probability distribution over all input words for each target word . Word alignments with attention weights can then be easily deduced from such distributions and support the translation . Nevertheless , sometimes one finds translations by NMT that contain surprisingly wrong word alignments , that would unlikely occur in SMT . For instance , Figure 2 shows two Chineseto-English translation examples by NMT . In the top example , the NMT seq2seq model incorrectly aligns the target side end of sentence mark eos to \u4e0b\u65ec / late with a high attention weight ( 0.80 in this example ) due to the failure of appropriately capturing the similarity , or the lack of it , between the source word \u4e0b\u65ec / late and the target eos . It is also worth noting that , as \u672c / this and \u6708 / month end up not being translated in this example , inappropriate alignment of target side eos is likely the responsible factor for under translation in NMT as the decoding process ends once a target eos is generated . Statistics on our development data show that as much as 50 % of target side eos do not properly align to source side eos . The second example in Figure 2 shows another case where source words are translated into target items that are not their possible translations in that or in any other context . In particular , \u51ac\u5965 \u4f1a / winter olympics is incorrectly translated into a target comma \" , \" and \u8f7d\u8a89 / honors into have . In this paper , to address the problem illustrated above , we seek to shorten the distance within the seq2seq NMT information processing procedure between source and target word embeddings . This is a method we term as bridging , and can be conceived as strengthening the focus of the attention mechanism into more translation-plausible source and target word alignments . In doing so , we hope that the seq2seq model is able to learn more appropriate word alignments between source and target words . We propose three simple yet effective strategies to bridge between word embeddings . The inspiring insight in all these three models is to move source word embeddings closer to target word embeddings along the seq2seq NMT information processing procedure . We categorize these strategies in terms of how close the source and target word embeddings are along that procedure , schematically depicted in Fig . 1 . ( 1 ) Source-side bridging model : Our first strategy for bridging , which we call source-side bridging , is to move source word embeddings just one step closer to the target end . Each source word embedding is concatenated with the respective source hidden state at the same position so that the attention model can more closely benefit from source word embeddings to produce word alignments . ( 2 ) Target-side bridging model : In a second more bold strategy , we seek to incorporate relevant source word embeddings more closely into the prediction of the next target hidden state . 
In particular , the most appropriate source words are selected according to their attention weights and they are made to more closely interact with target hidden states . ( 3 ) Direct bridging model : The third model consists of directly bridging between source and target word embeddings . The training objective is optimized towards minimizing the distance between target word embeddings and their most relevant source word embeddings , selected according to the attention model . Experiments on Chinese-English translation with extensive analysis demonstrate that directly bridging word embeddings at the two ends can produce better word alignments and thus achieve better translation . We have presented three models to bridge source and target word embeddings for NMT . The three models seek to shorten the distance between source and target word embeddings along the extensive information procedure in the encoderdecoder neural network . Experiments on Chinese to English translation shows that the proposed models can significantly improve the translation quality . Further in-depth analysis demonstrate that our models are able ( 1 ) to learn better word alignments than the baseline NMT , ( 2 ) to alleviate the notorious problems of over and under translation in NMT , and ( 3 ) to learn direct mappings between source and target words . In future work , we will explore further strategies to bridge the source and target side for sequence-to-sequence and tree-based NMT . Additionally , we also intend to apply these methods to other sequence-to-sequence tasks , including natural language conversation .", "challenge": "Neural machine translation models do not explicitly keep associations between source and possible target counterparts but implicitly through latent representations causing wrong word alignments.", "approach": "They propose simple methods to bridge the source and target word embeddings for neural machine translation models in three different levels", "outcome": "The proposed methods that bridge source and target word embeddings significantly improve translation quality and also mitigate over and under translation."} +{"id": "D07-1056", "document": "Reordering model is important for the statistical machine translation ( SMT ) . Current phrase-based SMT technologies are good at capturing local reordering but not global reordering . This paper introduces syntactic knowledge to improve global reordering capability of SMT system . Syntactic knowledge such as boundary words , POS information and dependencies is used to guide phrase reordering . Not only constraints in syntax tree are proposed to avoid the reordering errors , but also the modification of syntax tree is made to strengthen the capability of capturing phrase reordering . Furthermore , the combination of parse trees can compensate for the reordering errors caused by single parse tree . Finally , experimental results show that the performance of our system is superior to that of the state-of-the-art phrase-based SMT system . In the last decade , statistical machine translation ( SMT ) has been widely studied and achieved good translation results . Two kinds of SMT system have been developed , one is phrase-based SMT and the other is syntax-based SMT . In phrase-based SMT systems ( Koehn et al . , 2003 ; Koehn , 2004 ) , foreign sentences are firstly segmented into phrases which consists of adjacent words . 
Then source phrases are translated into target phrases respectively according to knowledge usually learned from bilingual parallel corpus . Fi-nally the most likely target sentence based on a certain statistical model is inferred by combining and reordering the target phrases with the aid of search algorithm . On the other hand , syntax-based SMT systems ( Liu et al . , 2006 ; Yamada et al . , 2001 ) mainly depend on parse trees to complete the translation of source sentence . As studied in previous SMT projects , language model , translation model and reordering model are the three major components in current SMT systems . Due to the difference between the source and target languages , the order of target phrases in the target sentence may differ from the order of source phrases in the source sentence . To make the translation results be closer to the target language style , a mathematic model based on the statistic theory is constructed to reorder the target phrases . This statistic model is called as reordering model . As shown in Figure 1 , the order of the translations of \" \u6b27\u5143 \" and \" \u7684 \" is changed . The order of the translation of \" \u6b27\u5143 / \u7684 \" and \" \u5927\u5e45 / \u5347\u503c \" is altered as well . The former reordering case with the smaller distance is usually referred as local reordering and the latter with the longer distance reordering as global reordering . Phrase-based SMT system can effectively capture the local word reordering information which is common enough to be observed in training data . But it is hard to model global phrase reordering . Although syntactic knowledge used in syntax-based SMT systems can help reorder phrases , the resulting model is usually much more complicated than a phrase-based system . There have been considerable amount of efforts to improve the reordering model in SMT systems , ranging from the fundamental distance-based distortion model ( Och and Ney , 2004 ; Koehn et al . , 2003 ) , flat reordering model ( Wu , 1996 ; Zens et al . , 2004 ; Kumar et al . , 2005 ) , to lexicalized reordering model ( Tillmann , 2004 ; Kumar et al . , 2005 ; Koehn et al . , 2005 ) , hierarchical phrase-based model ( Chiang , 2005 ) , and maximum entropy-based phrase reordering model ( Xiong et al . , 2006 ) . Due to the absence of syntactic knowledge in these systems , the ability to capture global reordering knowledge is not powerful . Although syntax-based SMT systems ( Yamada et al . , 2001 ; Quirk et al . , 2005 ; Liu et al . , 2006 ) are good at modeling global reordering , their performance is subject to parsing errors to a large extent . In this paper , we propose a new method to improve reordering model by introducing syntactic information . Syntactic knowledge such as boundary of sub-trees , part-of-speech ( POS ) and dependency relation is incorporated into the SMT system to strengthen the ability to handle global phrase reordering . Our method is different from previous syntax-based SMT systems in which the translation process was modeled based on specific syntactic structures , either phrase structures or dependency relations . In our system , syntactic knowledge is used just to decide where we should combine adjacent phrases and what their reordering probability is . For example , according to the syntactic information in Figure 1 , the phrase translation combination should take place between \" \u5927\u5e45 \" and \" \u5347\u503c \" rather than between \" \u7684 \" and \" \u5927\u5e45 \" . 
Moreover , the non-monotone phrase reordering should occur between \" \u6b27\u5143 / \u7684 \" and \" \u5927\u5e45 / \u5347\u503c \" rather than between \" \u6b27\u5143 / \u7684 \" and \" \u5927\u5e45 \" . We train a maxi-mum entropy model , which is able to integrate rich syntactic knowledge , to estimate phrase reordering probabilities . To enhance the performance of phrase reordering model , some modification on the syntax trees are also made to relax the phrase reordering constraints . Additionally , the combination of other kinds of syntax trees is introduced to overcome the deficiency of single parse tree . The experimental results show that the performance of our system is superior to that of the state-of-art phrasebased SMT system . The roadmap of this paper is : Section 2 gives the related work . Section 3 introduces our model . Section 4 explains the generalization of reordering knowledge . The procedures of training and decoding are described in Section 5 and Section 6 respectively . The experimental results are shown in Section 7 . Section 8 concludes the paper . In this paper , syntactic knowledge is introduced to capture global reordering of SMT system . This method can not only inherit the advantage of local reordering ability of standard phrase-based SMT system , but also capture the global reordering as the syntax-based SMT system . The experimental results showed the effectiveness of our method . In the future work , we plan to improve the reordering model by introducing N-best syntax trees and exploiting richer syntactic knowledge .", "challenge": "Existing phrase-based statistical machine translation systems work well for local reordering which is commonly in training data however they fail on the global phrasal level.", "approach": "They propose to train a maximum entropy model with syntactic knowledge such as boundary words, POS information and dependencies to guide global phrase reordering.", "outcome": "The proposed model outperforms the state-of-the-art statistical machine translation model and found that multiple syntax trees compensate for the errors caused by a single tree."} +{"id": "D14-1036", "document": "We introduce the task of incremental semantic role labeling ( iSRL ) , in which semantic roles are assigned to incomplete input ( sentence prefixes ) . iSRL is the semantic equivalent of incremental parsing , and is useful for language modeling , sentence completion , machine translation , and psycholinguistic modeling . We propose an iSRL system that combines an incremental TAG parser with a semantically enriched lexicon , a role propagation algorithm , and a cascade of classifiers . Our approach achieves an SRL Fscore of 78.38 % on the standard CoNLL 2009 dataset . It substantially outperforms a strong baseline that combines gold-standard syntactic dependencies with heuristic role assignment , as well as a baseline based on Nivre 's incremental dependency parser . Humans are able to assign semantic roles such as agent , patient , and theme to an incoming sentence before it is complete , i.e. , they incrementally build up a partial semantic representation of a sentence prefix . As an example , consider : ( 1 ) The athlete realized [ her goals ] PATIENT / THEME were out of reach . When reaching the noun phrase her goals , the human language processor is faced with a semantic role ambiguity : her goals can either be the PA-TIENT of the verb realize , or it can be the THEME of a subsequent verb that has not been encountered yet . 
Experimental evidence shows that the human language processor initially prefers the PA-TIENT role , but switches its preference to the theme role when it reaches the subordinate verb were . Such semantic garden paths occur because human language processing occurs word-by-word , and are well attested in the psycholinguistic literature ( e.g. , Pickering et al . , 2000 ) . Computational systems for performing semantic role labeling ( SRL ) , on the other hand , proceed non-incrementally . They require the whole sentence ( typically together with its complete syntactic structure ) as input and assign all semantic roles at once . The reason for this is that most features used by current SRL systems are defined globally , and can not be computed on sentence prefixes . In this paper , we propose incremental SRL ( iSRL ) as a new computational task that mimics human semantic role assignment . The aim of an iSRL system is to determine semantic roles while the input unfolds : given a sentence prefix and its partial syntactic structure ( typically generated by an incremental parser ) , we need to ( a ) identify which words in the input participate in the semantic roles as arguments and predicates ( the task of role identification ) , and ( b ) assign correct semantic labels to these predicate / argument pairs ( the task of role labeling ) . Performing these two tasks incrementally is substantially harder than doing it non-incrementally , as the processor needs to commit to a role assignment on the basis of incomplete syntactic and semantic information . As an example , take ( 1 ): on reaching athlete , the processor should assign this word the AGENT role , even though it has not seen the corresponding predicate yet . Similarly , upon reaching realized , the processor can complete the AGENT role , but it should also predict that this verb also has a PATIENT role , even though it has not yet encountered the argument that fills this role . A system that performs SRL in a fully incremental fashion therefore needs to be able to assign incomplete semantic roles , unlike existing full-sentence SRL models . The uses of incremental SRL mirror the applications of incremental parsing : iSRL models can be used in language modeling to assign better string probabilities , in sentence completion systems to provide semantically informed completions , in any real time application systems , such as dialog processing , and to incrementalize applications such as machine translation ( e.g. , in speech-tospeech MT ) . Crucially , any comprehensive model of human language understanding needs to combine an incremental parser with an incremental semantic processor ( Pad\u00f3 et al . , 2009 ; Keller , 2010 ) . The present work takes inspiration from the psycholinguistic modeling literature by proposing an iSRL system that is built on top of a cognitively motivated incremental parser , viz . , the Psycholinguistically Motivated Tree Adjoining Grammar parser of Demberg et al . ( 2013 ) . This parser includes a predictive component , i.e. , it predicts syntactic structure for upcoming input during incremental processing . This makes PLTAG particularly suitable for iSRL , allowing it to predict incomplete semantic roles as the input string unfolds . Competing approaches , such as iSRL based on an incremental dependency parser , do not share this advantage , as we will discuss in Section 4.3 . 
In this paper , we introduced the new task of incremental semantic role labeling and proposed a system that solves this task by combining an incremental TAG parser with a semantically enriched lexicon , a role propagation algorithm , and a cascade of classifiers . This system achieved a fullsentence SRL F-score of 78.38 % on the standard CoNLL dataset . Not only is the full-sentence score considerably higher than the Majority-Baseline ( which is a strong baseline , as it uses gold-standard syntactic dependencies ) , but we also observe that our iSRL system performs well incrementally , i.e. , it predicts both complete and incomplete semantic role triples correctly early on in the sentence . We attributed this to the fact that our TAG-based architecture makes it possible to predict upcoming syntactic structure together with the corresponding semantic roles .", "challenge": "While humans can assign semantic roles to incomplete sentences as they unfold, current systems require whole sentences and process non-incrementally because of the global features.", "approach": "They propose the incremental semantic role labelling task and an incremental TAG parser with a lexicon, a role propagation algorithm, and a cascade of classifiers.", "outcome": "The proposed system outperforms a baseline with gold-standard syntactic dependencies on the CoNLL 2009 dataset and successfully predicts complete and incomplete semantic role triples."} +{"id": "P19-1080", "document": "Generating fluent natural language responses from structured semantic representations is a critical step in task-oriented conversational systems . Avenues like the E2E NLG Challenge have encouraged the development of neural approaches , particularly sequence-tosequence ( Seq2Seq ) models for this problem . The semantic representations used , however , are often underspecified , which places a higher burden on the generation model for sentence planning , and also limits the extent to which generated responses can be controlled in a live system . In this paper , we ( 1 ) propose using tree-structured semantic representations , like those used in traditional rule-based NLG systems , for better discourse-level structuring and sentence-level planning ; ( 2 ) introduce a challenging dataset using this representation for the weather domain ; ( 3 ) introduce a constrained decoding approach for Seq2Seq models that leverages this representation to improve semantic correctness ; and ( 4 ) demonstrate promising results on our dataset and the E2E dataset . Generating fluent natural language responses from structured semantic representations is a critical step in task-oriented conversational systems . With their end-to-end trainability , neural approaches to natural language generation ( NNLG ) , particularly sequence-to-sequence ( Seq2Seq ) models , have been promoted with great fanfare in recent years ( Wen et al . , 2015 ( Wen et al . , , 2016 ; ; Mei et al . , 2016 ; Kiddon et al . , 2016 ; Du\u0161ek and Jurcicek , 2016 ) , and avenues like the recent E2E NLG challenge ( Du\u0161ek et al . , 2018 ( Du\u0161ek et al . , , 2019 ) ) have made available large datasets to promote the development of these models . Nevertheless , current NNLG models arguably remain inadequate for most real-world task-oriented dialogue systems , given their inability to ( i ) reliably perform common sentence planning and discourse structuring operations ( Reed et al . , 2018 ) , ( ii ) generalize to complex inputs ( Wiseman et al . 
, 2017 ) , and ( 3 ) avoid generating texts with semantic errors including hallucinated content ( Du\u0161ek et al . , 2018 ( Du\u0161ek et al . , , 2019 ) ) . 1In this paper , we explore the extent to which these issues can be addressed by incorporating lessons from pre-neural NLG systems into a neural framework . We begin by arguing in favor of enriching the input to neural generators to include discourse relations -long taken to be central in traditional NLG -and underscore the importance of exerting control over these relations when generating text , particularly when using user models to structure responses . In a closely related work , Reed et al . ( 2018 ) , the authors add control tokens ( to indicate contrast and sentence structure ) to a flat input MR , and show that these can be effectively used to control structure . However , their methods are only able to control the presence or absence of these relations , without more fine-grained control over their structure . We thus go beyond their approach and propose using full tree structures as inputs , and generating treestructured outputs as well . This allows us to define a novel method of constrained decoding for standard sequence-to-sequence models for generation , which helps ensure that the generated text contains all and only the specified content , as in classic approaches to surface realization . On the E2E dataset , our experiments demonstrate much better control over CONTRAST relations than using Reed et al . 's method , and also show improved diversity and expressiveness over standard baselines . We also release a new dataset of responses in the weather domain , which includes the JUSTIFY , JOIN and CONTRAST rela- tions , and where discourse-level structures come into play . On both E2E and weather datasets , we show that constrained decoding over our enriched inputs results in higher semantic correctness as well as better generalizability and data efficiency . The rest of this paper is organized as follows : Section 2 describes the motivation for using compositional inputs organized around discourse relations . Section 3 explains our data collection approach and dataset . 2 Section 4 shows how to incorporate compositional inputs into NNLG and describes our constrained decoding algorithm . Section 5 presents our experimental setup and results . 2 Towards More Expressive Meaning Representations We show that using rich tree-structured meaning representations can improve expressiveness and semantic correctness in generation . We also propose a constrained decoding technique that leverages tree-structured MRs to exert precise control over the discourse structure and semantic correctness of the generated text . We release a challenging new dataset for the weather domain and an enriched E2E dataset that include tree-structured MRs . 
Our experiments show that constrained decoding , together with tree-structured MRs , can greatly improve semantic correctness as well as enhance data efficiency and generalizability .", "challenge": "Existing neural natural language generation models remain inadequate for real-world task-oriented dialogue systems due to their inability to reliably plan, model complex inputs and avoid hallucinations.", "approach": "They propose to use tree-structured semantic representations coupled with a constrained decoding method which generates tree-structured outputs, and release a dataset in the weather domain.", "outcome": "Evaluation on the existing and proposed datasets shows that the proposed model achieves better controllability, expressiveness and semantic correctness."} +{"id": "2022.naacl-main.215", "document": "Large language models ( LM ) based on Transformers allow to generate plausible long texts . In this paper , we explore how this generation can be further controlled at decoding time to satisfy certain constraints ( e.g. being nontoxic , conveying certain emotions , using a specific writing style , etc . ) without fine-tuning the LM . Precisely , we formalize constrained generation as a tree exploration process guided by a discriminator that indicates how well the associated sequence respects the constraint . This approach , in addition to being easier and cheaper to train than fine-tuning the LM , allows to apply the constraint more finely and dynamically . We propose several original methods to search this generation tree , notably the Monte Carlo Tree Search ( MCTS ) which provides theoretical guarantees on the search efficiency , but also simpler methods based on reranking a pool of diverse sequences using the discriminator scores . These methods are evaluated , with automatic and human-based metrics , on two types of constraints and languages : review polarity and emotion control in French and English . We show that discriminatorguided MCTS decoding achieves state-of-theart results without having to tune the language model , in both tasks and languages . We also demonstrate that other proposed decoding methods based on re-ranking can be really effective when diversity among the generated propositions is encouraged . Generative language models exist for a long time , but with advent of the transformer architecture ( Vaswani et al . , 2017 ) and increasing computing capabilities , they are now able to generate well written and long texts . In particular , large models , such as the well known GPT-2 ( Radford et al . , 2019 ) and GPT-3 ( Brown et al . , 2020 ) , have been used successfully for various applications : assisting writers , summarizing , augmentating data for subsequent NLP tasks , generating fake news ( Kumar et al . , 2020 ; Papanikolaou and Pierleoni , 2020 ; Zellers et al . , 2019 ) . Yet , beside the prompt used to initiate the generation process , there are few options to have control on the generation process . Being able to add some constraints on the generated texts is useful for various situations . For example , it allows to create texts that follow a certain writing style , convey a certain emotion or polarity or to ensure that a generated summary contains correct information . More critically , it can be used to prevent the inherent toxicity of language models trained on the internet , or to not reproduce gender or race stereotypes . So far , most methods necessitate to fine-tune the LM , so that it specifically learns to model this constraint , i.e.
the constraint is -hopefully-incorporated in the LM . This finetuning approach has several drawbacks . It implies to train multiple specific LMs ( one per constraint ) , which is costly , when even possible given the size of current state-of-the-art LM , and results in several models . In this paper , we propose new approaches to add such additional constraints on the texts but at decoding time . We exploit a discriminator that is trained to determine if a text follows a given constraint or not ; its output provides information to guide the generation toward texts that satisfy this expected constraint . In order to make the most of the discriminator information , we propose an original method based on the Monte Carlo Tree Search ( MCTS ) algorithm ( Coulom , 2006 ) , namely Plug and Play Language -Monte Carlo Tree Search ( PPL-MCTS ) . We also propose simpler methods based on re-ranking to fulfil this goal . Both approaches do not require to fine-tune the LM ; adding a new constraint can thus simply be done by providing a discriminator verifying if a text complies with what is expected . More precisely , our main contributions are the following ones : 1 . we propose to use MCTS as a decoding strat-egy to implement constrained generation and we show , on 3 datasets and 2 languages , that it yields state-of-the-art results while offering more flexibility ; 2 . we also explore simpler generation methods based on re-ranking and show that this kind of approach , with low computational costs , can also be competitive if the diversity within propositions to re-rank is encouraged ; 3 . we provide a fully functional code implementing a batched textual MCTS 1 working with the popular HuggingFace 's Transformers library ( Wolf et al . , 2020 ) 2 Related work The goal of constrained textual generation is to find the sequence of tokens x 1 : T which maximises p(x 1 : T | c ) , given a constraint c. Few methods address the constrained textual generation . Class-conditional language models . Classconditional language models ( CC-LMs ) , as the Conditional Transformer Language ( CTRL ) model ( Keskar et al . , 2019 ) , train or fine-tune the weights \u03b8 of a single neural model directly for controllable generation , by appending a control code in the beginning of a training sequence . The control code indicates the constraint to verify and is related to a class containing texts that satisfy the constraint . For the sake of simplicity , we will denote without distinction the class , the constraint verified by its texts and the associated control code by c. Trained with different control codes , the model learns p \u03b8 ( x 1 : T | c ) = T t=1 p \u03b8 ( x t | x 1 : t-1 , c ) . The constraint can then be applied during generation by appending the corresponding control code to the prompt . While this method gives some kind of control over the generation , the control codes need to be defined upfront and the LM still needs to be trained specifically for each set of control codes . This is an important limitation since the current trend in text generation is the use of large pre-trained models which can hardly be fine-tuned ( for instance , the last version of GPT , GPT-3 , can not be fine-tuned without access to very large hardware resources ) . Discriminator-based methods The general idea of discriminator-guided generation is to combine 1 https://github.com / NohTow / PPL-MCTS a disciminator D with a generative LM . 
The discriminator explicitly models the constraint by calculating the probability p D ( c | x 1 : T ) of the sequence x 1 : T to satisfy the constraint c. This probability is directly related to p(x 1 : T | c ) through Bayes ' rule : p(x 1 : T | c ) \u221d p D ( c | x 1 : T ) p \u03b8 ( x 1 : T ) . Discriminator-based methods alleviate the training cost problem , as discriminators are easier to train than a LM . Moreover , any additional constraint can be defined a posteriori without tuning the LM , only by training another discriminator . The discriminators have been used in different ways to explore the search space . In the work of ( Holtzman et al . , 2018 ; Scialom et al . , 2020 ) , the space is first searched using beam search to generate a pool of proposals with a high likelihood p \u03b8 ( x 1 : T ) , and then the discriminator is used to re-rank them . However , in addition that beam search can miss sequences with high likelihood , it is biased towards the likelihood , while the best sequence might only have an average likelihood , but satisfies the constraint perfectly . Hence , it might be more suitable to take the discriminator probability into account during decoding rather than after generating a whole sequence . In this case , the discriminator is used at each generation step t to get the probability p D ( c | x 1 : t ) for each token of the vocabulary V , and merge it to the likelihood p \u03b8 ( x 1 : t ) to choose which token to emit . In order to reduce the cost of using a discriminator on every possible continuation , GeDi ( Krause et al . , 2020 ) proposes to use CC-LMs as generative discriminators . The method relies on the fact that the CC-LM computes p \u03b8 ( x t | x 1 : t-1 , c ) for all tokens of the vocabulary which can be used to get p \u03b8 ( c | x 1 : t ) for all tokens using Bayes ' equation . This approach is thus at the intersection of tuning the LM and using a discriminator : it tunes a small LM ( the CC-LM ) to guide a bigger one . In Plug And Play Language Model ( PPLM ) ( Dathathri et al . , 2020 ) , the discriminator is used to shift the hidden states of the pre-trained transformer-based LM towards the desired class at every generation step . PPLM can be used on any LM and with any discriminator . However , PPLM needs to access the LM to modify its hidden states , while our approach only requires the output logits . As some LM can only be used through access to logits ( e.g. GPT-3 API ) , this makes our approach more plug and play than PPLM . A common drawback of all these approaches is their lack of a long-term vision of the generation . Indeed , the discriminator probabilities become necessarily more meaningful as the sequence grows and might only be trustable to guide the search when the sequence is ( nearly ) finished . When used in a myopic decoding strategy , classification errors will cause the generation process to deviate further and further . Trying to optimize a score defined in the long horizon by making short term decisions is very similar to common game setups such as chess , where the Monte Carlo Tree Search ( MCTS ) has proven to be really effective ( Silver et al . , 2018 ) , which motivated our approach . In this paper , we show that it is possible to control generation with the help of a discriminator that implements some expected constraints on the text during decoding . This flexible approach is very useful when using very large language models , such as GPT-3 , whose fine-tuning computational costs are prohibitive . 
In contrast , training a discriminator is easier and cheaper . Our proposed methods , that mix the discriminator constraint and the generation , yield performance that is equivalent to the best approaches based on LM tuning at lower training cost . On the other hand , such approaches have an additional cost during inference because of the cost of the discriminator being applied to candidate generations . A study on this additional cost depending on the type of discriminator used can be found in ( Chaffin et al . , 2022 ) . PPL-MCTS offers a solution for cases where training is too costly for the downstream application or the language model is not directly accessible . Seeing text generation as a tree exploration process , an existing approach such as GeDi indeed lowers the cost of width exploration but the depth exploration is still an issue . Using GeDi for constrained generation is thus very similar to a standard maximum likelihood search which still lacks of an optimal search method . On the other hand , Monte Carlo Tree Search provides an efficient way to explore the tree by determining the best local choice in the long run , lowering the cost of depth exploration . Thus , these two methods solve different facets of constrained generation , and the combination of the two is a promising perspective . Moreover , MCTS allows to precisely define the best compromise between cost and quality through the number of iterations and the roll-out size , while ensuring the efficiency of the search theoretically . For reproducibility purposes , our implementation is made available at https : //github.com / NohTow / PPL-MCTS . Several research avenues are opened by this work . For methods yielding high perplexity , it would be interesting to explore how to set the \u03b1 parameter in order to reach the best compromise between accuracy and perplexity . Similarly , the size ( number of tokens considered ) of the rollout in MCTS offers some ways to control the cost / performance compromise . An adaptive rollout size , for example rolling-out until the score of the discriminator is above or below a threshold as in ( Cotarelo et al . , 2021 ) , would seem particularly suited for texts . It should also be noted that finetuning a model and controlling the generation with a discriminator can be used jointly . For instance , one can use PPL-MCTS on a tuned LM , which will most likely result in even better performances because sequences considered during the search will have an overall higher quality for the considered task . Finally , not only can PPL-MCTS be applied to any property that a discriminator can identify , but it can also work using other scoring methods ( human evaluation , regular expressions , heuristic based evaluation , ... ) as long as the score reflects compliance with the expected property .", "challenge": "Fine-tuning different models for different constraints and controlling text generation models to avoid producing toxic outputs is computationally expensive.", "approach": "They propose to guide generation with a discriminative model at decoding time, without fine-tuning the language model, using Monte Carlo Tree Search for efficiency.", "outcome": "The models with guidance at inference time using a discriminator outperform the best models in French and English on automatic and manual metrics."} +{"id": "2020.aacl-main.33", "document": "Unsupervised style transfer in text has previously been explored through the sentiment transfer task .
The task entails inverting the overall sentiment polarity in a given input sentence , while preserving its content . From the Aspect-Based Sentiment Analysis ( ABSA ) task , we know that multiple sentiment polarities can often be present together in a sentence with multiple aspects . In this paper , the task of aspect-level sentiment controllable style transfer is introduced , where each of the aspect-level sentiments can individually be controlled at the output . To achieve this goal , a BERT-based encoder-decoder architecture with saliency weighted polarity injection is proposed , with unsupervised training strategies , such as ABSA masked-languagemodelling . Through both automatic and manual evaluation , we show that the system is successful in controlling aspect-level sentiments . With a rapid increase in the quality of generated text , due to the rise of neural text generation models ( Kalchbrenner and Blunsom , 2013 ; Cho et al . , 2014 ; Sutskever et al . , 2014 ; Vaswani et al . , 2017 ) , controllable text generation is quickly becoming the next frontier in the field of text generation . Controllable text generation is the task of generating realistic sentences whose attributes can be controlled . The attributes to control can be : ( i ) . Stylistic : Like politeness , sentiment , formality etc , ( ii ) . Content : Like information , entities , keywords etc . or ( iii ) . Ordering : Like ordering of information , events , plots etc . Controlling sentence level polarity has been well explored as a style transfer task . Zhang et al . ( 2018 ) used unsupervised machine translation techniques for polarity transfer in sentences . Yang et al . ( 2018 ) \u00a7 equal contribution The service was speedy and the salads were great , but the chicken was bland and stale . The service was slow , but the salads were great and the chicken was tasty and fresh . used language models as discriminators to achieve style ( polarity ) transfer in sentences . Li et al . ( 2018a ) proposed a simpler method where they deleted the attribute markers and devise a method to replace or generate the target attribute-key phrases in the sentence . In this paper we explore a more fine-grained style transfer task , where each aspect 's polarities can be changed individually . Recent interest in Aspect-Based Sentiment Analysis ( ABSA ) ( Pontiki et al . , 2014 ) has shown that sentiment information can vary within a sentence , with differing sentiments expressed towards different aspect terms of target entities ( e.g. ' food ' , ' service ' in a restaurant domain ) . We introduce the task of aspect-level sentiment transfer -the task of rewriting sentences to transfer them from a given set of aspect-term polarities ( such as ' positive sentiment ' towards the service of a restaurant and a ' positive sentiment ' towards the taste of the food ) to a different set of aspect-term polarities ( such as ' negative sentiment ' towards the service of a restaurant and a ' positive ' sentiment towards the taste of the food ) . This is a more challenging task than regular style transfer as the style attributes here are not the overall attributes for the whole sentence , but are localized to specific parts of the sentence , and multiple opposing at-tributes could be present within the same sentence . The target of the transformation made needs to be localized and the other content expressed in the rest of the sentence need to be preserved at the output . An example of the task is shown in Figure 1 . 
For successful manipulation of the generated sentences , a few challenges need to be addressed : ( i ) . The model should learn to associate the right polarities with the right aspects . ( ii ) . The model needs to be able to correctly process the aspectpolarity query and accordingly delete , replace and generate text sequence to satisfy the query . ( iii ) . The polarities of the aspects not in the query should not be affected . ( iv ) . The non-attribute content and fluency of the text should be preserved . We explore this task in an unsupervised setting ( as is common with most style-transfer tasks due to the lack of an aligned parallel corpus ) using only monolingual unaligned corpora . In this work , a novel encoder-decoder architecture is proposed to perform unsupervised aspect-level sentiment transfer . A BERT ( Devlin et al . , 2019 ) based encoder is used that is trained to understand aspect-specific polarity information . We also propose using a ' polarity injection ' method , where saliency-weighted aspect-specific polarity information is added to the hidden representations from the encoder to complete the query for the decoder . In this paper , the task of aspect-level sentiment style transfer has been introduced , where stylistic attributes can be localized to different parts of a sentence . We have proposed a BERT-based encoder-decoder architecture with saliency-based polarity injection and show that it can be successful at the task when trained in an unsupervised setting . The experiments have been conducted on an aspect level polarity tagged benchmark dataset related to the restaurant domain . This work is hopefully an important initial step in developing a fine-grained controllable style transfer system . In the future , we would like to explore the ability to transfer such systems to data-sparse domains , and explore injecting attributes such as emotions to targets attributes in larger pieces of text .", "challenge": "Controllable generation, such as the sentiment transfer task, is the next frontier, yet a sentence with multiple aspects can have multiple sentiment polarities.", "approach": "They first propose the aspect-level sentiment controllable style transfer task, where each aspect-level sentiment can be individually controlled, and a BERT-based encoder-decoder with unsupervised training strategies.", "outcome": "Automatic and manual evaluation with a restaurant dataset shows that the proposed model can successfully perform aspect-level sentiment control when trained in an unsupervised setting."} +{"id": "D11-1055", "document": "We consider the problem of predicting measurable responses to scientific articles based primarily on their text content . Specifically , we consider papers in two fields ( economics and computational linguistics ) and make predictions about downloads and within-community citations . Our approach is based on generalized linear models , allowing interpretability ; a novel extension that captures first-order temporal effects is also presented . We demonstrate that text features significantly improve accuracy of predictions over metadata features like authors , topical categories , and publication venues . Written communication is an essential component of the complex social phenomenon of science . As such , natural language processing is well-positioned to provide tools for understanding the scientific process , by analyzing the textual artifacts ( papers , proceedings , etc . ) that it produces .
This paper is about modeling collections of scientific documents to understand how their textual content relates to how a scientific community responds to them . While past work has often focused on citation structure ( Borner et al . , 2003 ; Qazvinian and Radev , 2008 ) , our emphasis is on the text content , following Ramage et al . ( 2010 ) and Gerrish and Blei ( 2010 ) . Instead of task-independent exploratory data analysis ( e.g. , topic modeling ) or multi-document sum-marization , we consider supervised models of the collective response of a scientific community to a published article . There are many measures of impact of a scientific paper ; ours come from direct measurements of the number of downloads ( from an established website where prominent economists post papers before formal publication ) and citations ( within a fixed scientific community ) . We adopt a discriminative approach based on generalized linear models that can make use of any text or metadata features , and show that simple lexical features offer substantial power in modeling out-ofsample response and in forecasting response for future articles . Realistic forecasting evaluations require methodological care beyond the usual best practices of train / test separation , and we elucidate these issues . In addition , we introduce a new regularization technique that leverages the intuition that the relationship between observable features and response should evolve smoothly over time . This regularizer allows the learner to rely more strongly on more recent evidence , while taking into account a long history of training data . Our time series-inspired regularizer is computationally efficient in learning and is a significant advance over earlier text-driven forecasting models that ignore the time variable altogether ( Kogan et al . , 2009 ; Joshi et al . , 2010 ) . We evaluate our approaches in two novel experimental settings : predicting downloads of economics articles and predicting citation of papers at ACL conferences . Our approaches substantially outper-594 form text-ignorant baselines on ground-truth predictions . Our time series models permit flexibility in features and offer a novel and perhaps more interpretable view of the data than summary statistics . We presented a statistical approach to predicting a scientific community 's response to an article , based on its textual content . To improve the interpretability of the linear model , we developed a novel time series regularizer that encourages gradual changes across time steps . Our experiments showed that text features significantly improve accuracy of predictions over baseline models , and we found that the feature weights learned with the time series regularizer reflect important trends in the literature .", "challenge": "Textual information is ignored for paper citation or download count prediction.", "approach": "They propose to use textual features in scientific articles for supervised generalized linear models coupled with a new regularization technique to predict the community's response.", "outcome": "Using textual features significantly improves on citation count prediction task of economics and NLP papers over baseline models that just use metadata such as authors."} +{"id": "P04-1052", "document": "We present an algorithm for generating referring expressions in open domains . Existing algorithms work at the semantic level and assume the availability of a classification for attributes , which is only feasible for restricted domains . 
Our alternative works at the realisation level , relies on Word-Net synonym and antonym sets , and gives equivalent results on the examples cited in the literature and improved results for examples that prior approaches can not handle . We believe that ours is also the first algorithm that allows for the incremental incorporation of relations . We present a novel corpus-evaluation using referring expressions from the Penn Wall Street Journal Treebank . Referring expression generation has historically been treated as a part of the wider issue of generating text from an underlying semantic representation . The task has therefore traditionally been approached at the semantic level . Entities in the real world are logically represented ; for example ( ignoring quantifiers ) , a big brown dog might be represented as big1(x ) \u2227 brown1(x ) \u2227 dog1(x ) , where the predicates big1 , brown1 and dog1 represent different attributes of the variable ( entity ) x. The task of referring expression generation has traditionally been framed as the identification of the shortest logical description for the referent entity that differentiates it from all other entities in the discourse domain . For example , if there were a small brown dog ( small1(x ) \u2227 brown1(x ) \u2227 dog1(x ) ) in context , the minimal description for the big brown dog would be big1(x ) \u2227 dog1(x)1 . This semantic framework makes it difficult to apply existing referring expression generation algorithms to the many regeneration tasks that are important today ; for example , summarisation , openended question answering and text simplification . Unlike in traditional generation , the starting point in these tasks is unrestricted text , rather than a semantic representation of a small domain . It is difficult to extract the required semantics from unrestricted text ( this task would require sense disambiguation , among other issues ) and even harder to construct a classification for the extracted predicates in the manner that existing approaches require ( cf . , \u00a7 2 ) . In this paper , we present an algorithm for generating referring expressions in open domains . We discuss the literature and detail the problems in applying existing approaches to reference generation to open domains in \u00a7 2 . We then present our approach in \u00a7 3 , contrasting it with existing approaches . We extend our approach to handle relations in \u00a7 3.3 and present a novel corpus-based evaluation on the Penn WSJ Treebank in \u00a7 4 . We have described an algorithm for generating referring expressions that can be used in any domain . Our algorithm selects attributes and relations that are distinctive in context . It does not rely on the availability of an adjective classification scheme and uses WordNet antonym and synonym lists instead . It is also , as far as we know , the first algorithm that allows for the incremental incorporation of relations and the first that handles nominals . In a novel evaluation , our algorithm successfully generates identical referring expressions to those in the Penn WSJ Treebank in over 80 % of cases . In future work , we plan to use this algorithm as part of a system for generation from a database of user opinions on products which has been automatically extracted from newsgroups and similar text . 
This is midway between regeneration and the classical task of generating from a knowledge base because , while the database itself provides structure , many of the field values are strings corresponding to phrases used in the original text . Thus , our lexicalised approach is directly applicable to this task .", "challenge": "Existing semantic frameworks for referring expression generation algorithms work for only one domain and suffer tasks such as summarization because of unrestricted texts.", "approach": "They propose an algorithm that selects distinctive attributes and relations coupled with WordNet without requiring an adjective classification scheme for open domain referring text generation.", "outcome": "They evaluated the algorithm using a new corpus-evaluation from Penn WSJ Treebank and show that it can generate identical expressions in over 80% of cases."} +{"id": "D09-1018", "document": "This work investigates design choices in modeling a discourse scheme for improving opinion polarity classification . For this , two diverse global inference paradigms are used : a supervised collective classification framework and an unsupervised optimization framework . Both approaches perform substantially better than baseline approaches , establishing the efficacy of the methods and the underlying discourse scheme . We also present quantitative and qualitative analyses showing how the improvements are achieved . The importance of discourse in opinion analysis is being increasingly recognized ( Polanyi and Zaenen , 2006 ) . Motivated by the need to enable discourse-based opinion analysis , previous research ( Asher et al . , 2008 ; Somasundaran et al . , 2008 ) developed discourse schemes and created manually annotated corpora . However , it was not known whether and how well these linguistic ideas and schemes can be translated into effective computational implementations . In this paper , we first investigate ways in which an opinion discourse scheme can be computationally modeled , and then how it can be utilized to improve polarity classification . Specifically , the discourse scheme we use is from Somasundaran et al . ( 2008 ) , which was developed to support a global , interdependent polarity interpretation . To achieve discourse-based global inference , we explore two different frameworks . The first is a supervised framework that learns interdependent opinion interpretations from training data . The second is an unsupervised optimization framework which uses constraints to express the ideas of coherent opinion interpretation embodied in the scheme . For the supervised framework , we use Iterative Collective Classification ( ICA ) , which facilitates machine learning using relational information . The unsupervised optimization is implemented as an Integer Linear Programming ( ILP ) problem . Via our implementations , we aim to empirically test if discourse-based approaches to opinion analysis are useful . Our results show that both of our implementations achieve significantly better accuracies in polarity classification than classifiers using local information alone . This confirms the hypothesis that the discourse-based scheme is useful , and also shows that both of our design choices are effective . We also find that there is a difference in the way ICA and ILP achieve improvements , and a simple hybrid approach , which incorporates the strengths of both , is able to achieve significant overall improvements over both . 
Our analyses show that even when our discourse-based methods bootstrap from noisy classifications , they can achieve good improvements . The rest of this paper is organized as follows : we discuss related work in Section 2 and the discourse scheme in Section 3 . We present our discourse-based implementations in Section 4 , experiments in Section 5 , discussions in Section 6 and conclusions in Section 7 . This work focuses on the first step to ascertain whether discourse relations are useful for improving opinion polarity classification , whether they can be modeled and what modeling choices can be used . To this end , we explored two distinct paradigms : the supervised ICA and the unsupervised ILP . We showed that both of our approaches are effective in exploiting discourse relations to significantly improve polarity classification . We found that there is a difference in how ICA and ILP achieve improvements , and that combining the two in a hybrid approach can lead to further overall improvement . Quantitatively , we showed that our approach is able to achieve a large increase in recall of the polar categories without harming the precision , which results in the performance improvements . Qualitatively , we illustrated how , even if the bootstrapping process is noisy , the optimization and discourse constraints effectively rectify the misclassifications . The improvements of our diverse global inference approaches indicate that discourse information can be adapted in different ways to augment and improve existing opinion analysis techniques . The automation of the discourse-relation recognition is the next step in this research . The behavior of ICA and ILP can change , depending on the automation of discourse level recognition . The implementation and comparison of the two methods under full automation is the focus of our future work .", "challenge": "The utility of discourse schemas and annotated corpora motivated by discourse-based opinion analysis for effective computational analysis remains unknown.", "approach": "They investigate the influences of different discourse schemes on opinion polarity classification performance when coupled with supervised and unsupervised inference to achieve discourse-based global inference.", "outcome": "Both approaches investigated outperform baseline approaches showing the efficacy of exploiting discourse relations, and a combination of supervised and unsupervised performs the best."} +{"id": "D15-1205", "document": "Compositional embedding models build a representation ( or embedding ) for a linguistic structure based on its component word embeddings . We propose a Feature-rich Compositional Embedding Model ( FCM ) for relation extraction that is expressive , generalizes to new domains , and is easy-to-implement . The key idea is to combine both ( unlexicalized ) handcrafted features with learned word embeddings . The model is able to directly tackle the difficulties met by traditional compositional embeddings models , such as handling arbitrary types of sentence annotations and utilizing global information for composition . We test the proposed model on two relation extraction tasks , and demonstrate that our model outperforms both previous compositional models and traditional feature rich models on the ACE 2005 relation extraction task , and the SemEval 2010 relation classification task . The combination of our model and a loglinear classifier with hand-crafted features gives state-of-the-art results . 
We made our implementation available for general use 1 . Two common NLP feature types are lexical properties of words and unlexicalized linguistic / structural interactions between words . Prior work on relation extraction has extensively studied how to design such features by combining discrete lexical properties ( e.g. the identity of a word , \u21e4 \u21e4 Gormley and Yu contributed equally . 1 https://github.com / mgormley / pacaya its lemma , its morphological features ) with aspects of a word 's linguistic context ( e.g. whether it lies between two entities or on a dependency path between them ) . While these help learning , they make generalization to unseen words difficult . An alternative approach to capturing lexical information relies on continuous word embeddings2 as representative of words but generalizable to new words . Embedding features have improved many tasks , including NER , chunking , dependency parsing , semantic role labeling , and relation extraction ( Miller et al . , 2004 ; Turian et al . , 2010 ; Koo et al . , 2008 ; Roth and Woodsend , 2014 ; Sun et al . , 2011 ; Plank and Moschitti , 2013 ; Nguyen and Grishman , 2014 ) . Embeddings can capture lexical information , but alone they are insufficient : in state-of-the-art systems , they are used alongside features of the broader linguistic context . In this paper , we introduce a compositional model that combines unlexicalized linguistic context and word embeddings for relation extraction , a task in which contextual feature construction plays a major role in generalizing to unseen data . Our model allows for the composition of embeddings with arbitrary linguistic structure , as expressed by hand crafted features . In the following sections , we begin with a precise construction of compositional embeddings using word embeddings in conjunction with unlexicalized features . Various feature sets used in prior work ( Turian et al . , 2010 ; Nguyen and Grishman , 2014 ; Hermann et al . , 2014 ; Roth and Woodsend , 2014 ) A feature that depends on the embedding for this context word could generalize to other lexical indicators of the same relation ( e.g. \" operating \" ) that do n't appear with ART during training . But lexical information alone is insufficient ; relation extraction requires the identification of lexical roles : where a word appears structurally in the sentence . In ( 2 ) , the word \" of \" between \" suburbs \" and \" Baghdad \" suggests that the first entity is part of the second , yet the earlier occurrence after \" direction \" is of no significance to the relation . Even finer information can be expressed by a word 's role on the dependency path between entities . In ( 3 ) we can distinguish the word \" died \" from other irrelevant words that do n't appear between the entities . tured as special cases of this construction . Adding these compositional embeddings directly to a standard log-linear model yields a special case of our full model . We then treat the word embeddings as parameters giving rise to our powerful , efficient , and easy-to-implement log-bilinear model . The model capitalizes on arbitrary types of linguistic annotations by better utilizing features associated with substructures of those annotations , including global information . We choose features to promote different properties and to distinguish different functions of the input words . The full model involves three stages . First , it decomposes the annotated sentence into substructures ( i.e. 
a word and associated annotations ) . Second , it extracts features for each substructure ( word ) , and combines them with the word 's embedding to form a substructure embedding . Third , we sum over substructure embeddings to form a composed annotated sentence embedding , which is used by a final softmax layer to predict the output label ( relation ) . The result is a state-of-the-art relation extractor for unseen domains from ACE 2005 ( Walker et al . , 2006 ) and the relation classification dataset from SemEval-2010 Task 8 ( Hendrickx et al . , 2010 ) . Contributions This paper makes several contributions , including : 1 . We introduce the Yu and Dredze ( 2015 ) and Yu et al . ( 2015 ) . Additionally , we have extended FCM to incorporate a low-rank embedding of the features ( Yu et al . , 2015 ) , which focuses on fine-grained relation extraction for ACE and ERE . This paper obtains better results than the low-rank extension on ACE coarse-grained relation extraction . We have presented FCM , a new compositional model for deriving sentence-level and substructure embeddings from word embeddings . Compared to existing compositional models , FCM can easily handle arbitrary types of input and handle global information for composition , while remaining easy to implement . We have demonstrated that FCM alone attains near state-of-the-art performances on several relation extraction tasks , and in combination with traditional feature based loglinear models it obtains state-of-the-art results . Our next steps in improving FCM focus on enhancements based on task-specific embeddings or loss functions as in Hashimoto et al . ( 2015 ; dos Santos et al . ( 2015 ) . Moreover , as the model provides a general idea for representing both sentences and sub-structures in language , it has the potential to contribute useful components to various tasks , such as dependency parsing , SRL and paraphrasing . Also as kindly pointed out by one anonymous reviewer , our FCM can be applied to the TAC-KBP ( Ji et al . , 2010 ) tasks , by replacing the training objective to a multi-instance multilabel one ( e.g. Surdeanu et al . ( 2012 ) ) . We plan to explore the above applications of FCM in the future .", "challenge": "Existing methods for relation extraction do not work well on unseen words, and word embeddings alone are not sufficient without the broader linguistic contexts.", "approach": "They propose a compositional embedding model which combines hand-crafted features with word embeddings by summing over substructure embeddings to form a composed annotated sentence embedding.", "outcome": "The proposed model outperforms compositional models and traditional feature rich models on the ACE 2005 relation extraction task and SemEval 2010 relation classification task."} +{"id": "2020.acl-main.547", "document": "Chinese short text matching usually employs word sequences rather than character sequences to get better performance . However , Chinese word segmentation can be erroneous , ambiguous or inconsistent , which consequently hurts the final matching performance . To address this problem , we propose neural graph matching networks , a novel sentence matching framework capable of dealing with multi-granular input information . Instead of a character sequence or a single word sequence , paired word lattices formed from multiple word segmentation hypotheses are used as input and the model learns a graph representation according to an attentive graph matching mechanism . 
Experiments on two Chinese datasets show that our models outperform the state-of-the-art short text matching models . Short text matching ( STM ) is a fundamental task of natural language processing ( NLP ) . It is usually recognized as a paraphrase identification task or a sentence semantic matching task . Given a pair of sentences , a matching model is to predict their semantic similarity . It is widely used in question answer systems and dialogue systems ( Gao et al . , 2019 ; Yu et al . , 2014 ) . The recent years have seen advances in deep learning methods for text matching ( Mueller and Thyagarajan , 2016 ; Gong et al . , 2017 ; Chen et al . , 2017 ; Lan and Xu , 2018 ) . However , almost all of these models are initially proposed for English text matching . Applying them for Chinese text matching , we have two choices . One is to take Chinese characters as the input of models . Another is first to segment each sentence into words , and then to take these words as input tokens . Although character-based models can overcome the * Kai Yu is the corresponding author . problem of data sparsity to some degree ( Li et al . , 2019 ) , the main drawback of these models is that explicit word information is not fully exploited , which can be potentially useful for semantic matching . However , word-based models often suffer some potential issues caused by word segmentation . As shown in Figure 1 , the character sequence \" \u5357 \u4eac \u5e02 \u957f \u6c5f \u5927 \u6865(South Capital City Long River Big Bridge ) \" has two different meanings with different word segmentation . The first one refers to a bridge ( Segment-1 , Segment-2 ) , and the other refers to a person ( Segment-3 ) . The ambiguity may be eliminated with more context . Additionally , the segmentation granularity of different tools is different . For example , \" \u957f\u6c5f\u5927 \u6865(Yangtze River Bridge ) \" in Segment-1 is divided into two words \" \u957f\u6c5f(Yangtze River ) \" and \" \u5927 \u6865(Bridge ) \" in Segment-2 . It has been shown that multi-granularity information is important for text matching ( Lai et al . , 2019 ) . Here we propose a neural graph matching method ( GMN ) for Chinese short text matching . Instead of segmenting each sentence into a word sequence , we keep all possible segmentation paths to form a word lattice graph , as shown in Figure 1 . GMN takes a pair of word lattice graphs as input and updates the representations of nodes according to the graph matching attention mechanism . Also , GMN can be combined with pre-trained language models , e.g. BERT ( Devlin et al . , 2019 ) . It can be regarded as a method to integrate word information in these pre-trained language models during the fine-tuning phase . The experiments on two Chinese Datasets show that our model outperforms not only previous state-of-the-art models but also the pre-trained model BERT as well as some variants of BERT . In this paper , we propose a neural graph matching model for Chinese short text matching . It takes a pair of word lattices as input instead of word or character sequences . The utilization of word lattice can provide more multi-granularity information and avoid the error propagation issue of word segmentation . Additionally , our model and the pre-training model are complementary . It can be regarded as a flexible method to introduce word information into BERT during the fine-tuning phase . 
The experimental results show that our model outperforms the state-of-the-art text matching models as well as some BERT-based models .", "challenge": "Word-based solutions for text matching tasks in Chinese require segmenting texts into words, which can be error-prone and hurt the final performance.", "approach": "They propose neural graph matching networks which can take multiple word segmentation hypotheses into account using a word lattice graph.", "outcome": "The proposed models outperform the state-of-the-art models and BERT-based models on two Chinese text matching datasets."} +{"id": "P11-2085", "document": "Chinese Pinyin input method is very important for Chinese language information processing . Users may make errors when they are typing in Chinese words . In this paper , we are concerned with the reasons that cause the errors . Inspired by the observation that pressing backspace is one of the most common user behaviors to modify the errors , we collect 54 , 309 , 334 error-correction pairs from a realworld data set that contains 2 , 277 , 786 users via backspace operations . In addition , we present a comparative analysis of the data to achieve a better understanding of users ' input behaviors . Comparisons with English typos suggest that some language-specific properties result in a part of Chinese input errors . Unlike western languages , Chinese is unique due to its logographic writing system . Chinese users can not directly type in Chinese words using a QW-ERTY keyboard . Pinyin is the official system to transcribe Chinese characters into the Latin alphabet . Based on this transcription system , Pinyin input methods have been proposed to assist users to type in Chinese words ( Chen , 1997 ) . The typical way to type in Chinese words is in a sequential manner ( Wang et al . , 2001 ) . Assume users want to type in the Chinese word \" \u4ec0 \u4e48(what ) \" . First , they mentally generate and type in corresponding Pinyin \" shenme \" . Then , a Chinese Pinyin input method displays a list of Chinese words which share that Pinyin , as shown in Fig . 1 . Users visually search the target word from candidates and select numeric key \" 1 \" to get the result . The last two steps do not exist in typing process of English words , which indicates that it is more complicated for Chinese users to type in Chinese words . Chinese users may make errors when they are typing in Chinese words . As shown in Fig . 2 , a user may mistype \" shenme \" as \" shenem \" . Typical Chinese Pinyin input method can not return the right word . Users may not realize that an error occurs and select the first candidate word \" \u4ec0\u6076\u9b54 \" ( a meaningless word ) as the result . This greatly limits user experience since users have to identify errors and modify them , or can not get the right word . In this paper , we analyze the reasons that cause errors in Chinese Pinyin input method . This analysis is helpful in enhancing the user experience and the performance of Chinese Pinyin input method . In practice , users press backspace on the keyboard to modify the errors , they delete the mistyped word and re-type in the correct word . Motivated by this ob-servation , we can extract error-correction pairs from backspace operations . These error-correction pairs are of great importance in Chinese spelling correction task which generally relies on sets of confusing words . We extract 54 , 309 , 334 error-correction pairs from user input behaviors and further study them .
Our comparative analysis of Chinese and English typos suggests that some language-specific properties of Chinese lead to a part of input errors . To the best of our knowledge , this paper is the first one which analyzes user input behaviors in Chinese Pinyin input method . The rest of this paper is organized as follows . Section 2 discusses related works . Section 3 introduces how we collect errors in Chinese Pinyin input method . In Section 4 , we investigate the reasons that result in these errors . Section 5 concludes the whole paper and discusses future work . In this paper , we study user input behaviors in Chinese Pinyin input method from backspace operations . We aim at analyzing the reasons that cause these errors . Users signal that they are very likely to make errors if they press backspace on the keyboard . Then they modify the errors and type in the correct words they want . Different from the previous research , we extract abundant Pinyin-correction and Chinese word-correction pairs from backspace operations . Compared with English typos , we observe some language-specific properties in Chinese have impact on errors . All in all , user behaviors ( Zheng et al . , 2009 ; Zheng et al . , 2010 ; Zheng et al . , 2011b ) in Chinese Pinyin input method provide novel perspectives for natural language processing tasks . Below we sketch three possible directions for the future work : ( 1 ) we should consider position features in analyzing Pinyin errors . For example , it is less likely that users make errors in the first letter of an input Pinyin . ( 2 ) we aim at designing a selfadaptive input method that provide error-tolerant features ( Chen and Lee , 2000 ; Zheng et al . , 2011a ) . ( 3 ) we want to build a Chinese spelling correction system based on extracted error-correction pairs .", "challenge": "The logographic writing system in Chinese makes the typing process with a QWERTY keyboard more complicated than in English, limiting user experience.", "approach": "They build a large set of Chinese error-correction pairs by exploiting the action of pressing the backspace key and perform analysis to understand users' input behaviors.", "outcome": "Comparative input-error analysis between Chinese and English shows that there are language-specific error types that only happen in Chinese."} +{"id": "D09-1059", "document": "We present an inexact search algorithm for the problem of predicting a two-layered dependency graph . The algorithm is based on a k-best version of the standard cubictime search algorithm for projective dependency parsing , which is used as the backbone of a beam search procedure . This allows us to handle the complex nonlocal feature dependencies occurring in bistratal parsing if we model the interdependency between the two layers . We apply the algorithm to the syntacticsemantic dependency parsing task of the CoNLL-2008 Shared Task , and we obtain a competitive result equal to the highest published for a system that jointly learns syntactic and semantic structure . Numerous linguistic theories assume a multistratal model of linguistic structure , such as a layer of surface syntax , deep syntax , and shallow semantics . Examples include Meaning-Text Theory ( Mel'\u010duk , 1988 ) , Discontinuous Grammar ( Buch-Kromann , 2006 ) , Extensible Dependency Grammar ( Debusmann et al . , 2004 ) , and the Functional Generative Description ( Sgall et al . , 1986 ) which forms the theoretical foundation of the Prague Dependency Treebank ( Haji\u010d , 1998 ) .
In the statistical NLP community , the most widely used grammatical resource is the Penn Treebank ( Marcus et al . , 1993 ) . This is a purely syntactic resource , but we can also include this treebank in the category of multistratal resources since the PropBank ( Palmer et al . , 2005 ) and NomBank ( Meyers et al . , 2004 ) projects have annotated shallow semantic structures on top of it . Dependency-converted versions of the Penn Treebank , PropBank and NomBank were used in the CoNLL-2008 Shared Task ( Surdeanu et al . , 2008 ) , in which the task of the participants was to produce a bistratal dependency structure consisting of surface syntax and shallow semantics . Producing a consistent multistratal structure is a conceptually and computationally complex task , and most previous methods have employed a purely pipeline-based decomposition of the task . This includes the majority of work on shallow semantic analysis ( Gildea and Jurafsky , 2002 , inter alia ) . Nevertheless , since it is obvious that syntax and semantics are highly interdependent , it has repeatedly been suggested that the problems of syntactic and semantic analysis should be carried out simultaneously rather than in a pipeline , and that modeling the interdependency between syntax and semantics would improve the quality of all the substructures . The purpose of the CoNLL-2008 Shared Task was to study the feasibility of a joint analysis of syntax and semantics , and while most participating systems used a pipeline-based approach to the problem , there were a number of contributions that attempted to take the interdependence between syntax and semantics into account . The top-performing system in the task ( Johansson and Nugues , 2008 ) applied a very simple reranking scheme by means of a k-best syntactic output , similar to previous attempts ( Gildea and Jurafsky , 2002 ; Toutanova et al . , 2005 ) to improve semantic role labeling performance by using mul-tiple parses . The system by Henderson et al . ( 2008 ) extended previous stack-based algorithms for dependency parsing by using two separate stacks to build the syntactic and semantic graphs . Llu\u00eds and M\u00e0rquez ( 2008 ) proposed a model that simultaneously predicts syntactic and semantic links , but since its search algorithm could not take the syntactic-semantic interdependencies into account , a pre-parsing step was still needed . In addition , before the CoNLL-2008 shared task there have been a few attempts to jointly learn syntactic and semantic structure ; for instance , Merlo and Musillo ( 2008 ) appended semantic role labels to the phrase tags in a constituent treebank and applied a conventional constituent parser to predict constituent structure and semantic roles . In this paper , we propose a new approximate search method for bistratal dependency analysis . The search method is based on a beam search procedure that extends a k-best version of the standard cubic-time search algorithm for projective dependency parsing . This is similar to the search method for constituent parsing used by Huang ( 2008 ) , who referred to it as cube pruning , inspired by an idea from machine translation decoding ( Chiang , 2007 ) . The cube pruning approach , which is normally used to solve the arg max problem , was also recently extended to summing problems , which is needed in some learning algorithms ( Gimpel and Smith , 2009 ) . 
We apply the algorithm on the CoNLL-2008 Shared Task data , and obtain the same evaluation score as the best previously published system that simultaneously learns syntactic and semantic structure ( Titov et al . , 2009 ) . In this paper , we have presented a new approximate search method to solve the problem of jointly predicting the two layers in a bistratal dependency graph . The algorithm shows competitive performance on the treebank used in the CoNLL-2008 Shared Task , a bistratal treebank consisting of a surface-syntactic and a shallow semantic layer . In addition to the syntactic-semantic task that we have described in this paper , we believe that our method can be used in other types of multistratal syntactic frameworks , such as a representation of surface and deep syntax as in Meaning-Text Theory ( Mel'\u010duk , 1988 ) . The optimization problem that we set out to solve is intractable , but we have shown that reasonable performance can be achieved with an inexact , beam search-based search method . This is not obvious : it has previously been shown that using an inexact search procedure when the learning algorithm assumes that the search is exact may lead to slow convergence or even divergence ( Kulesza and Pereira , 2008 ) , but this does not seem to be a problem in our case . While we used a beam search method as the method of approximation , other methods are certainly possible . An interesting example is the recent system by Smith and Eisner ( 2008 ) , which used loopy belief propagation in a dependency parser using highly complex features , while still maintaining cubic-time search complexity . An obvious drawback of our approach compared to traditional pipeline-based semantic role labeling methods is that the speed of the algorithm is highly dependent on the size of the interdependency feature representation \u03a6 i . Also , extracting these features is fairly complex , and it is of critical importance to implement the feature extraction procedure efficiently since it is one of the bottlenecks of the algorithm . It is plausible that our performance suffers from the absence of other frequently used syntax-based features such as dependent-of-dependent and voice . It is thus highly dubious that a joint modeling of syntactic and semantic structure is worth the additional implementational effort . So far , no system using tightly integrated syntactic and semantic processing has been competitive with the best systems , which have been either completely pipelinebased ( Che et al . , 2008 ; Ciaramita et al . , 2008 ) or employed only a loose syntactic-semantic coupling ( Johansson and Nugues , 2008 ) . It has been conjectured that modeling the semantics of the sentence would also help in syntactic disambiguation ; however , it is likely that this is already implicitly taken into account by the lexical features present in virtually all modern parsers . In addition , a problem that our beam search method has in common with the constituent parsing method by Huang ( 2008 ) is that highly nonlocal features must be computed late . 
In our case , this means that if there is a long distance between a predicate and an argument , the secondary link between them will be unlikely to influence the final search result .", "challenge": "While it is obvious that syntax and semantics are interdependent, most works focus on developing pipeline-based approaches, hindering quality improvements.", "approach": "They propose an inexact search algorithm that predicts jointly the two layers in a bistratal dependency graph based on a beam search procedure.", "outcome": "The proposed algorithm performs comparably to the best system on the treebank from CoNLL-2008 Shared Task by learning syntactic and semantics jointly."} +{"id": "P08-1082", "document": "This work describes an answer ranking engine for non-factoid questions built using a large online community-generated question-answer collection ( Yahoo ! Answers ) . We show how such collections may be used to effectively set up large supervised learning experiments . Furthermore we investigate a wide range of feature types , some exploiting NLP processors , and demonstrate that using them in combination leads to considerable improvements in accuracy . The problem of Question Answering ( QA ) has received considerable attention in the past few years . Nevertheless , most of the work has focused on the task of factoid QA , where questions match short answers , usually in the form of named or numerical entities . Thanks to international evaluations organized by conferences such as the Text REtrieval Conference ( TREC)1 or the Cross Language Evaluation Forum ( CLEF ) Workshop2 , annotated corpora of questions and answers have become available for several languages , which has facilitated the development of robust machine learning models for the task . The situation is different once one moves beyond the task of factoid QA . Comparatively little research has focused on QA models for non-factoid questions such as causation , manner , or reason questions . Because virtually no training data is available for this problem , most automated systems train either on small hand-annotated corpora built in house ( Higashinaka and Isozaki , 2008 ) or on question-answer pairs harvested from Frequently Asked Questions ( FAQ ) lists or similar resources ( Soricut and Brill , 2006 ) . None of these situations is ideal : the cost of building the training corpus in the former setup is high ; in the latter scenario the data tends to be domain-specific , hence unsuitable for the learning of open-domain models . On the other hand , recent years have seen an explosion of user-generated content ( or social media ) . Of particular interest in our context are communitydriven question-answering sites , such as Yahoo ! Answers 3 , where users answer questions posed by other users and best answers are selected manually either by the asker or by all the participants in the thread . The data generated by these sites has significant advantages over other web resources : ( a ) it has a high growth rate and it is already abundant ; ( b ) it covers a large number of topics , hence it offers a better approximation of open-domain content ; and ( c ) it is available for many languages . Community QA sites , similar to FAQs , provide large number of questionanswer pairs . Nevertheless , this data has a significant drawback : it has high variance of quality , i.e. , answers range from very informative to completely irrelevant or even abusive . Table 1 shows some examples of both high and low quality content . 
In this paper we address the problem of answer ranking for non-factoid questions from social media content . Our research objectives focus on answering the following two questions : 1 . Is it possible to learn an answer ranking model for complex questions from such noisy data ? This is an interesting question because a positive answer indicates that a plethora of training data is readily available to QA researchers and system developers . 2 . Which features are most useful in this scenario ? Are similarity models as effective as models that learn question-to-answer transformations ? Does syntactic and semantic information help ? For generality , we focus only on textual features extracted from the answer text and we ignore all meta data information that is not generally available . Notice that we concentrate on one component of a possible social-media QA system . In addition to answer ranking , a complete system would have to search for similar questions already answered ( Jeon et al . , 2005 ) , and rank content quality using \" social \" features such as the authority of users ( Jeon et al . , 2006 ; Agichtein et al . , 2008 ) . This is not the focus of our work : here we investigate the problem of learning an answer ranking model capable of dealing with complex questions , using a large number of , possible noisy , question-answer pairs . By focusing exclusively on textual content we increase the portability of our approach to other collections where \" social \" features might not available , e.g. , Web search . The paper is organized as follows . We describe our approach , including all the features explored for answer modeling , in Section 2 . We introduce the corpus used in our empirical analysis in Section 3 . We detail our experiments and analyze the results in Section 4 . We overview related work in Section 5 and conclude the paper in Section 6 . In this work we described an answer ranking engine for non-factoid questions built using a large community-generated question-answer collection . On one hand , this study shows that we can effectively exploit large amounts of available Web data to do research on NLP for non-factoid QA systems , without any annotation or evaluation cost . This provides an excellent framework for large-scale experimentation with various models that otherwise might be hard to understand or evaluate . On the other hand , we expect the outcome of this process to help several applications , such as open-domain QA on the Web and retrieval from social media . For example , on the Web our ranking system could be combined with a passage retrieval system to form a QA system for complex questions . On social media , our system should be combined with a component that searches for similar questions already answered ; this output can possibly be filtered further by a content-quality module that explores \" social \" features such as the authority of users , etc . We show that the best ranking performance is obtained when several strategies are combined into a single model . We obtain the best results when similarity models are aggregated with features that model question-to-answer transformations , frequency and density of content , and correlation of QA pairs with external collections . While the features that model question-to-answer transformations provide most benefits , we show that the combination is crucial for improvement . 
Lastly , we show that syntactic dependency parsing and coarse semantic disambiguation yield a small , yet statistically significant performance increase on top of the traditional bag-of-words and n-gram representation . We obtain these results using only off-the-shelf NLP processors that were not adapted in any way for our task .", "challenge": "Existing works on question answering focus on factoid tasks leaving non-factoid questions untouched because of a lack of training data which is expensive to obtain.", "approach": "They propose an answer ranking engine for non-factoid questions using a large online community-generated question-answer collection which covers a large number of domains and languages.", "outcome": "They show that the proposed model can exploit large Web data without additional cost and the best performing model is a combination of several strategies."} +{"id": "N19-1008", "document": "Disfluencies in spontaneous speech are known to be associated with prosodic disruptions . However , most algorithms for disfluency detection use only word transcripts . Integrating prosodic cues has proved difficult because of the many sources of variability affecting the acoustic correlates . This paper introduces a new approach to extracting acoustic-prosodic cues using text-based distributional prediction of acoustic cues to derive vector z-score features ( innovations ) . We explore both early and late fusion techniques for integrating text and prosody , showing gains over a high-accuracy text-only model . Speech disfluencies are frequent events in spontaneous speech . The rate of disfluencies varies with the speaker and context ; one study observed disfluencies once in every 20 words , affecting up to one third of utterances ( Shriberg , 1994 ) . Disfluencies are important to account for , both because of the challenge that the disrupted grammatical flow poses for natural language processing of spoken transcripts and because of the information that they provide about the speaker . Most work on disfluency detection builds on the framework that annotates a disfluency in terms of a reparandum followed by an interruption point ( + ) , an optional interregnum ( { } ) , and then the repair , if any . A few simple examples are given below : Based on the similarity / differences between the reparandum and the repair , disfluencies are often categorized into three types : repetition ( the first example ) , rephrase ( the next example ) , and restart ( the last example ) . The interruption point is associated with a disruption in the realization of a prosodic phrase , which could involve cutting words off or elongation associated with hesitation , followed by a prosodic reset at the start of the repair . There may also be emphasis in the repair to highlight the correction . Researchers have been working on automatic disfluency detection for many years ( Lickley , 1994 ; Shriberg et al . , 1997 ; Charniak and Johnson , 2001 ; Johnson and Charniak , 2004 ; Lease et al . , 2006 ; Qian and Liu , 2013 ; Zayats et al . , 2016 ) , motivated in part by early work on parsing speech that assumed reliable detection of the interruption point ( Nakatani and Hirschberg , 1994 ; Shriberg and Stolcke , 1997 ; Liu et al . , 2006 ) . The first efforts to integrate prosody with word cues for disfluency detection ( Baron et al . , 2002 ; Snover et al . , 2004 ) found gains from using prosody , but word cues played the primary role . 
In subsequent work ( Qian and Liu , 2013 ; Honnibal and Johnson , 2014 ; Wang et al . , 2017 ) , more effective models of word transcripts have been the main source of performance gains . The success of recent neural network systems raises the question of what the role is for prosody in future work . In the next section , we hypothesize where prosody might help and look at the relative frequency of these cases and the performance of a high accuracy disfluency detection algorithm in these contexts . With the premise that there is a potential for prosody to benefit disfluency detection , we then propose a new approach to extracting prosodic features . A major challenge for all efforts to incorporate prosodic cues in spoken language understanding is the substantial variability in the acoustic correlates of prosody . For example , duration cues are expected to be useful -disfluencies are often associated with duration lengthening related to hesitation . However , duration varies with phonetic context , word function , prosodic phrase structure , speaking rate , etc . To account for some of this variability , various feature normalization techniques are used , but typically these account for only limited contexts , e.g. phonetic context for duration or speaker pitch range for fundamental frequency . In our work , we introduce a mechanism for normalization using the full sentence context . We train a sequential neural prediction model to estimate distributions of acoustic features for each word , given the word sequence of a sentence . Then , the actual observed acoustic feature is used to find the prediction error , normalized by the estimated variance . We refer to the resulting features as innovations , which can be thought of as a non-linear version of the innovations in a Kalman filter . The innovations will be large when the acoustic cues do not reflect the expected prosodic structure , such as during hesitations , disfluencies , and contrastive or emphatic stress . The idea is to provide prosodic cues that are less redundant with the textual cues . We assess the new prosodic features in experiments on disfluency detection using the Switchboard corpus , exploring both early and late fusion techniques to integrate innovations with text features . Our analysis shows that prosody does help with detecting some of the more difficult types of disfluencies . This paper has three main contributions . First , our analysis of a high performance disfluency detection algorithm confirms hypotheses about contexts where text-only models have high error rates . Second , we introduce a novel representation of prosodic cues , i.e. the innovation vector resulting from predicting prosodic cues given the whole sentence context . Analyses of the innovation distributions show expected patterns of prosodic cues at interruption points . Finally , we demonstrate improved disfluency detection performance on Switchboard by integrating prosody and textbased features in a neural network architecture , while comparing early and late fusion approaches . In this paper , we introduce a novel approach to extracting acoustic-prosodic cues with the goal of improving disfluency detection , but also with the intention of impacting spoken language processing more generally . 
Our initial analysis of a text-only disfluency detection system shows that despite the high performance of such models , there exists a large gap in the performance of text-based approaches for some types of disfluencies , such as restarts and non-trivial or long rephrases . Thus , prosody cues , which can be indicative of interruption points , have the potential to contribute towards detection of more difficult types of disfluencies . Since the acoustic-prosodic cues carry information related to multiple phenomena , it can be difficult to isolate the cues that are relevant to specific events , such as interruption points . In this work , we introduce a novel approach where we extract relevant acoustic-prosodic information using text-based distributional prediction of acoustic cues to derive vector z-score features , or innovations . The innovations point to irregularities in prosody flow that are not predicted by the text , helping to better isolate signals relevant to disfluency detection that are not simply redundant with textual cues . We explore both early and late fusion approaches to combine innovations with text-based features . Our experiments show that innovation features are better predictors of disfluencies compared to the original acoustic cues . Our analysis of the errors and of the innovation features points to a limitation of the current work , which is in the modeling of F0 features . The current model obtains word-based F0 ( and energy ) features by simply averaging the values over the duration of the word , which loses any distinctions between rising and falling F0 . By leveraging polynomial contour models , we expect to improve both intonation and energy features , which we hope will reduce some of the false detections associated with emphasis and unexpected fluent phrase boundaries . An important next step is to test the system using ASR rather than hand transcripts . It is possible that errors in the transcripts could hurt the residual prediction , but if prosody is used to refine the recognition hypothesis , this could actually lead to improved recognition . Finally , we expect that the innovation model of prosody can benefit other NLP tasks , such as sarcasm and intent detection , as well as detecting paralinguistic information .", "challenge": "Text-based approaches for disfluency detection in speech have performance gaps for some disfluency types, but the integration of prosodic cues is challenging due to their high variability.", "approach": "They propose a sequential neural model to estimate distributions of acoustic features for each word given the entire sentence, and use them to find prediction errors.", "outcome": "The proposed model with early and late fusion outperforms a text-only model on the Switchboard corpus but is still limited in its modeling of F0 features."} +{"id": "P08-1010", "document": "In this work , the problem of extracting phrase translation is formulated as an information retrieval process implemented with a log-linear model aiming for a balanced precision and recall . We present a generic phrase training algorithm which is parameterized with feature functions and can be optimized jointly with the translation engine to directly maximize the end-to-end system performance . Multiple data-driven feature functions are proposed to capture the quality and confidence of phrases and phrase pairs . Experimental results demonstrate consistent and significant improvement over the widely used method that is based on word alignment matrix only . 
Phrase has become the standard basic translation unit in Statistical Machine Translation ( SMT ) since it naturally captures context dependency and models internal word reordering . In a phrase-based SMT system , the phrase translation table is the defining component which specifies alternative translations and their probabilities for a given source phrase . In learning such a table from parallel corpus , two related issues need to be addressed ( either separately or jointly ): which pairs are considered valid translations and how to assign weights , such as probabilities , to them . The first problem is referred to as phrase pair extraction , which identifies phrase pairs that are supposed to be translations of each other . Methods have been proposed , based on syntax , that take advantage of linguistic constraints and alignment of grammatical structure , such as in Yamada and Knight ( 2001 ) and Wu ( 1995 ) . The most widely used approach derives phrase pairs from word alignment matrix ( Och and Ney , 2003 ; Koehn et al . , 2003 ) . Other methods do not depend on word alignments only , such as directly modeling phrase alignment in a joint generative way ( Marcu and Wong , 2002 ) , pursuing information extraction perspective ( Venugopal et al . , 2003 ) , or augmenting with modelbased phrase pair posterior ( Deng and Byrne , 2005 ) . Using relative frequency as translation probability is a common practice to measure goodness of a phrase pair . Since most phrases appear only a few times in training data , a phrase pair translation is also evaluated by lexical weights ( Koehn et al . , 2003 ) or term weighting ( Zhao et al . , 2004 ) as additional features to avoid overestimation . The translation probability can also be discriminatively trained such as in Tillmann and Zhang ( 2006 ) . The focus of this paper is the phrase pair extraction problem . As in information retrieval , precision and recall issues need to be addressed with a right balance for building a phrase translation table . High precision requires that identified translation candidates are accurate , while high recall wants as much valid phrase pairs as possible to be extracted , which is important and necessary for online translation that requires coverage . In the word-alignment derived phrase extraction approach , precision can be improved by filtering out most of the entries by using a statistical significance test ( Johnson et al . , 2007 ) . On the other hand , there are valid translation pairs in the training corpus that are not learned due to word alignment errors as shown in Deng and Byrne ( 2005 ) . We would like to improve phrase translation accuracy and at the same time extract as many as possible valid phrase pairs that are missed due to incorrect word alignments . One approach is to leverage underlying word alignment quality such as in Ayan and Dorr ( 2006 ) . In this work , we present a generic discriminative phrase pair extraction framework that can integrate multiple features aiming to identify correct phrase translation candidates . A significant deviation from most other approaches is that the framework is parameterized and can be optimized jointly with the decoder to maximize translation performance on a development set . Within the general framework , the main work is on investigating useful metrics . We employ features based on word alignment models and alignment matrix . We also propose information metrics that are derived from both bilingual and monolingual perspectives . 
All these features are data-driven and independent of languages . The proposed phrase extraction framework is general to apply linguistic features such as semantic , POS tags and syntactic dependency . In this paper , the problem of extracting phrase translation is formulated as an information retrieval process implemented with a log-linear model aiming for a balanced precision and recall . We have presented a generic phrase translation extraction procedure which is parameterized with feature functions . It can be optimized jointly with the translation engine to directly maximize the end-to-end translation performance . Multiple feature functions were investigated . Our experimental results on IWSLT Chinese-English corpus have demonstrated consistent and significant improvement over the widely used word alignment matrix based extraction method . 3", "challenge": "Existing approaches for the phrase pair extraction apply a filter to extracted pairs to obtain high precision but lose recall due to word alignment errors.", "approach": "They propose an algorithm parameterized with feature functions which can be jointly optimized to maximize final translation quality to obtain a balanced precision and recall.", "outcome": "The proposed algorithm shows consistent improvements over a common existing method which is only based on a word alignment matrix on the IWSLT Chinese-English corpus."} +{"id": "P18-1025", "document": "Embedding methods which enforce a partial order or lattice structure over the concept space , such as Order Embeddings ( OE ) ( Vendrov et al . , 2016 ) , are a natural way to model transitive relational data ( e.g. entailment graphs ) . However , OE learns a deterministic knowledge base , limiting expressiveness of queries and the ability to use uncertainty for both prediction and learning ( e.g. learning from expectations ) . Probabilistic extensions of OE ( Lai and Hockenmaier , 2017 ) have provided the ability to somewhat calibrate these denotational probabilities while retaining the consistency and inductive bias of ordered models , but lack the ability to model the negative correlations found in real-world knowledge . In this work we show that a broad class of models that assign probability measures to OE can never capture negative correlation , which motivates our construction of a novel box lattice and accompanying probability measure to capture anticorrelation and even disjoint concepts , while still providing the benefits of probabilistic modeling , such as the ability to perform rich joint and conditional queries over arbitrary sets of concepts , and both learning from and predicting calibrated uncertainty . We show improvements over previous approaches in modeling the Flickr and WordNet entailment graphs , and investigate the power of the model . Structured embeddings based on regions , densities , and orderings have gained popularity in recent years for their inductive bias towards the essential asymmetries inherent in problems such as image captioning ( Vendrov et al . , 2016 ) , lexical and textual entailment ( Erk , 2009 ; Vilnis and McCallum , 2015 ; Lai and Hockenmaier , 2017 ; Athiwaratkun and Wilson , 2018 ) , and knowledge graph completion and reasoning ( He et al . , 2015 ; Nickel and Kiela , 2017 ; Li et al . , 2017 ) . 
Models that easily encode asymmetry , and related properties such as transitivity ( the two components of commonplace relations such as partially ordered sets and lattices ) , have great utility in these applications , leaving less to be learned from the data than arbitrary relational models . At their best , they resemble a hybrid between embedding models and structured prediction . As noted by Vendrov et al . ( 2016 ) and Li et al . ( 2017 ) , while the models learn sets of embeddings , these parameters obey rich structural constraints . The entire set can be thought of as one , sometimes provably consistent , structured prediction , such as an ontology in the form of a single directed acyclic graph . While the structured prediction analogy applies best to Order Embeddings ( OE ) , which embeds consistent partial orders , other region-and density-based representations have been proposed for the express purpose of inducing a bias towards asymmetric relationships . For example , the Gaussian Embedding ( GE ) model ( Vilnis and Mc-Callum , 2015 ) aims to represent the asymmetry and uncertainty in an object 's relations and attributes by means of uncertainty in the representation . However , while the space of representations is a manifold of probability distributions , the model is not truly probabilistic in that it does not model asymmetries and relations in terms of prob-abilities , but in terms of asymmetric comparison functions such as the originally proposed KL divergence and the recently proposed thresholded divergences ( Athiwaratkun and Wilson , 2018 ) . Probabilistic models are especially compelling for modeling ontologies , entailment graphs , and knowledge graphs . Their desirable properties include an ability to remain consistent in the presence of noisy data , suitability towards semisupervised training using the expectations and uncertain labels present in these large-scale applications , the naturality of representing the inherent uncertainty of knowledge they store , and the ability to answer complex queries involving more than 2 variables . Note that the final one requires a true joint probabilistic model with a tractable inference procedure , not something provided by e.g. matrix factorization . We take the dual approach to density-based embeddings and model uncertainty about relationships and attributes as explicitly probabilistic , while basing the probability on a latent space of geometric objects that obey natural structural biases for modeling transitive , asymmetric relations . The most similar work are the probabilistic order embeddings ( POE ) of Lai ( Lai and Hockenmaier , 2017 ) , which apply a probability measure to each order embedding 's forward cone ( the set of points greater than the embedding in each dimension ) , assigning a finite and normalized volume to the unbounded space . However , POE suffers severe limitations as a probabilistic model , including an inability to model negative correlations between concepts , which motivates the construction of our box lattice model . Our model represents objects , concepts , and events as high-dimensional products-of-intervals ( hyperrectangles or boxes ) , with an event 's unary probability coming from the box volume and joint probabilities coming from overlaps . This contrasts with POE 's approach of defining events as the forward cones of vectors , extending to infinity , integrated under a probability measure that assigns them finite volume . 
One desirable property of a structured representation for ordered data , originally noted in ( Vendrov et al . , 2016 ) is a \" slackness \" shared by OE , POE , and our model : when the model predicts an \" edge \" or lack thereof ( i.e. P ( a|b ) = 0 or 1 , or a zero constraint violation in the case of OE ) , being exposed to that fact again will not update the model . Moreover , there are large degrees of freedom in parameter space that exhibit this slackness , giving the model the ability to embed complex structure with 0 loss when compared to models based on symmetric inner products or distances between embeddings , e.g. bilinear GLMs ( Collins et al . , 2002 ) , Trans-E ( Bordes et al . , 2013 ) , and other embedding models which must always be pushing and pulling parameters towards and away from each other . Our experiments demonstrate the power of our approach to probabilistic ordering-biased relational modeling . First , we investigate an instructive 2-dimensional toy dataset that both demonstrates the way the model self organizes its box event space , and enables sensible answers to queries involving arbitrary numbers of variables , despite being trained on only pairwise data . We achieve a new state of the art in denotational probability modeling on the Flickr entailment dataset ( Lai and Hockenmaier , 2017 ) , and a matching state-of-the-art on WordNet hypernymy ( Vendrov et al . , 2016 ; Miller , 1995 ) with the concurrent work on thresholded Gaussian embedding of Athiwaratkun and Wilson ( 2018 ) , achieving our best results by training on additional co-occurrence expectations aggregated from leaf types . We find that the strong empirical performance of probabilistic ordering models , and our box lattice model in particular , and their endowment of new forms of training and querying , make them a promising avenue for future research in representing structured knowledge . We have only scratched the surface of possible applications . An exciting direction is the incorporation of multi-relational data for general knowledge representation and inference . Secondly , more complex representations , such as 2n-dimensional products of 2-dimensional convex polyhedra , would offer greater flexibility in tiling event space . Improved inference of the latent boxes , either through better optimization or through Bayesian approaches is another natural extension . Our greatest interest is in the application of this powerful new tool to the many areas where other structured embeddings have shown promise .", "challenge": "While a probabilistic extension can complement Order Embeddings with more flexibility, it still lacks the ability to model the negative correlation between concepts.", "approach": "They first show that the existing probabilistic approach cannot capture negative correlation, and further propose a box lattice coupled with a probability measure.", "outcome": "The proposed probabilistic order-biased relational model achieves state-of-the-art on the Flickr entailment dataset and WordNet hypernymy matching dataset."} +{"id": "P11-1022", "document": "State-of-the-art statistical machine translation ( MT ) systems have made significant progress towards producing user-acceptable translation output . However , there is still no efficient way for MT systems to inform users which words are likely translated correctly and how confident it is about the whole sentence . We propose a novel framework to predict wordlevel and sentence-level MT errors with a large number of novel features . 
Experimental results show that the MT error prediction accuracy is increased from 69.1 to 72.2 in F-score . The Pearson correlation between the proposed confidence measure and the human-targeted translation edit rate ( HTER ) is 0.6 . Improvements between 0.4 and 0.9 TER reduction are obtained with the n-best list reranking task using the proposed confidence measure . Also , we present a visualization prototype of MT errors at the word and sentence levels with the objective to improve post-editor productivity . State-of-the-art Machine Translation ( MT ) systems are making progress to generate more usable translation outputs . In particular , statistical machine translation systems ( Koehn et al . , 2007 ; Bach et al . , 2007 ; Shen et al . , 2008 ) have advanced to a state that the translation quality for certain language pairs ( e.g. Spanish-English , French-English , Iraqi-English ) in certain domains ( e.g. broadcasting news , force-protection , travel ) is acceptable to users . However , a remaining open question is how to predict confidence scores for machine translated words and sentences . An MT system typically returns the best translation candidate from its search space , but still has no reliable way to inform users which word is likely to be correctly translated and how confident it is about the whole sentence . Such information is vital to realize the utility of machine translation in many areas . For example , a post-editor would like to quickly identify which sentences might be incorrectly translated and in need of correction . Other areas , such as cross-lingual question-answering , information extraction and retrieval , can also benefit from the confidence scores of MT output . Finally , even MT systems can leverage such information to do n-best list reranking , discriminative phrase table and rule filtering , and constraint decoding ( Hildebrand and Vogel , 2008 ) . Numerous attempts have been made to tackle the confidence estimation problem . The work of Blatz et al . ( 2004 ) is perhaps the best known study of sentence and word level features and their impact on translation error prediction . Along this line of research , improvements can be obtained by incorporating more features as shown in ( Quirk , 2004 ; Sanchis et al . , 2007 ; Raybaud et al . , 2009 ; Specia et al . , 2009 ) . Soricut and Echihabi ( 2010 ) developed regression models which are used to predict the expected BLEU score of a given translation hypothesis . Improvement also can be obtained by using target part-of-speech and null dependency link in a MaxEnt classifier ( Xiong et al . , 2010 ) . Ueffing and Ney ( 2007 ) introduced word posterior probabilities ( WPP ) features and applied them in the n-best list reranking . From the usability point of view , back-translation is a tool to help users to assess the accuracy level of MT output ( Bach et al . , 2007 ) . Literally , it translates backward the MT output into the source language to see whether the output of backward translation matches the original source sentence . However , previous studies had a few shortcomings . First , source-side features were not extensively investigated . Blatz et al.(2004 ) only investigated source ngram frequency statistics and source language model features , while other work mainly focused on target side features . Second , previous work attempted to incorporate more features but faced scalability issues , i.e. 
, to train many features we need many training examples and to train discriminatively we need to search through all possible translations of each training example . Another issue of previous work was that they are all trained with BLEU / TER score computing against the translation references which is different from predicting the human-targeted translation edit rate ( HTER ) which is crucial in post-editing applications ( Snover et al . , 2006 ; Papineni et al . , 2002 ) . Finally , the backtranslation approach faces a serious issue when forward and backward translation models are symmetric . In this case , back-translation will not be very informative to indicate forward translation quality . In this paper , we predict error types of each word in the MT output with a confidence score , extend it to the sentence level , then apply it to n-best list reranking task to improve MT quality , and finally design a visualization prototype . We try to answer the following questions : \u2022 Can we use a rich feature set such as sourceside information , alignment context , and dependency structures to improve error prediction performance ? \u2022 Can we predict more translation error types i.e substitution , insertion , deletion and shift ? \u2022 How good do our prediction methods correlate with human correction ? \u2022 Do confidence measures help the MT system to select a better translation ? \u2022 How confidence score can be presented to improve end-user perception ? In Section 2 , we describe the models and training method for the classifier . We describe novel features including source-side , alignment context , and dependency structures in Section 3 . Experimental results and analysis are reported in Section 4 . Section 5 and 6 present applications of confidence scores . In this paper we proposed a method to predict confidence scores for machine translated words and sentences based on a feature-rich classifier using linguistic and context features . Our major contributions are three novel feature sets including source side information , alignment context , and dependency structures . Experimental results show that by combining the source side information , alignment context , and dependency structure features with word posterior probability and target POS context ( Ueffing & Ney 2007 ; Xiong et al . , 2010 ) , the MT error prediction accuracy is increased from 69.1 to 72.2 in F-score . Our framework is able to predict error types namely insertion , substitution and shift . The Pearson correlation with human judgement increases from 0.52 to 0.6 . Furthermore , we show that the proposed confidence scores can help the MT system to select better translations and as a result improvements between 0.4 and 0.9 TER reduction are obtained . Finally , we demonstrate a prototype to visualize translation errors . This work can be expanded in several directions . First , we plan to apply confidence estimation to perform a second-pass constraint decoding . After the first pass decoding , our confidence estimation model can label which word is likely to be correctly translated . The second-pass decoding utilizes the confidence informa-tion to constrain the search space and hopefully can find a better hypothesis than in the first pass . This idea is very similar to the multi-pass decoding strategy employed by speech recognition engines . 
Moreover , we also intend to perform a user study on our visualization prototype to see if it increases the productivity of post-editors .", "challenge": "Methods for computing confidence scores and predicting errors for whole sentences in statistical machine translation systems are underexplored.", "approach": "They propose a framework to predict word and sentence-level errors with linguistic and context features and apply it to n-best list reranking for better translations.", "outcome": "The proposed method improves error prediction by predicting insertion, substitution and shift error types and also increases the Pearson correlation with human judgements."} +{"id": "2022.naacl-main.330", "document": "Figurative and metaphorical language are commonplace in discourse , and figurative expressions play an important role in communication and cognition . However , figurative language has been a relatively under-studied area in NLP , and it remains an open question to what extent modern language models can interpret nonliteral phrases . To address this question , we introduce Fig-QA , a Winograd-style nonliteral language understanding task consisting of correctly interpreting paired figurative phrases with divergent meanings . We evaluate the performance of several state-of-the-art language models on this task , and find that although language models achieve performance significantly over chance , they still fall short of human performance , particularly in zero- or few-shot settings . This suggests that further work is needed to improve the nonliteral reasoning capabilities of language models . All our words are but crumbs that fall down from the feast of the mind ( Gibran , 1926 ) . When humans read such a metaphorical phrase , how do they interpret it ? Conceptual metaphors structure our everyday language and are used to map everyday physical experiences and emotions onto abstract concepts ( Lakoff and Johnson , 1981 ) . They allow us to communicate complex ideas , to emphasize emotions , and to make humorous statements ( Fussell and Moss , 2008 ) . However , despite relating words in a way that differs from their accepted definition , these phrases are readily interpreted by human listeners , and are common in discourse ( Shutova , 2011 ) , occurring on average every three sentences ( Mio and Katz , 1996 ; Fussell and Moss , 2008 ) . The ability to interpret figurative language has been viewed as a bottleneck in natural language understanding , but it has not been studied as widely as literal language ( Shutova , 2011 ; Tong et al . , 2021 ) . Figurative language often relies on shared commonsense or cultural knowledge , and in some cases may be difficult to solve using language statistics . This presents a challenge to language models ( LMs ) , as strong LMs trained only on text may not be able to make sense of the physical world , nor the social or cultural knowledge that language is grounded in ( Bender and Koller , 2020 ; Bisk et al . , 2020 ) . Most previous work on figurative language focuses on metaphor detection , where a model is trained to identify the existence of metaphors in text ( Tsvetkov et al . , 2014 ; Stowe and Palmer , 2018 ; Leong et al . , 2020 ) , with datasets consisting mostly of conventionalized metaphors and idioms in wide use . However , identifying these common metaphors that already appear often in language may be an easy task for LMs , and not fully test their ability to interpret figurative language . 
The little work that exists on metaphor interpretation frames it as a task linking metaphorical phrases to literal rewordings , either through paraphrase detection ( Bizzoni and Lappin , 2018 ) or paraphrase generation ( Shutova , 2010 ; Su et al . , 2017 ; Mao et al . , 2018 ) ( details in \u00a7 7 ) . Another line of work probes for metaphorical understanding in LMs , but this is similar to the metaphor detection task , in that the LM is not actually asked to choose an interpretation for the metaphor ( Pedinotti et al . , 2021 ; Aghazadeh et al . , 2022 ) . While interesting , this work does not take into account the fact that metaphors are rich with different implications that may vary depending on the context . In this work , we ask whether or not LMs can correctly make inferences regarding creative , relatively novel metaphors generated by humans . This task is harder for two reasons : ( 1 ) inference is harder than identification or paraphrasing , as it requires understanding the underlying semantics , and ( 2 ) the metaphors in our dataset are novel creations , and many may not appear even once in the LMs ' training data . We propose a minimal task inspired by the Winograd schema ( Levesque et al . , 2012 ) , where LMs are tasked with choosing the entailed phrase from two opposite metaphorical phrases . An example of a paired sentence is \" Her commitment is as sturdy as ( plywood / oak ) \" . The correct answer would be either \" She was ( committed / uncommitted ) \" . This can also be seen as an entailment task , where input x is the premise , and the output y is the hypothesis . We crowdsource a benchmark Fig-QA , consisting of 10,256 such metaphors and implications ( \u00a7 2 ) , which can be used to evaluate the nonliteral reasoning abilities of LMs or for broader studies of figurative language in general ( we provide preliminary analyses in \u00a7 3 ) . Through extensive experiments over strong pre-trained LMs ( \u00a7 4 ) , we find that although they can be fine-tuned to do reasonably well , their few-shot performance falls significantly short of human performance ( \u00a7 5 ) . An in-depth analysis ( \u00a7 6 ) uncovers several insights : ( 1 ) LMs do not make use of the metaphorical context well , instead relying on the predicted probability of interpretations alone , ( 2 ) the task of associating a metaphor with an interpretation is more difficult than the reverse , ( 3 ) even strong models such as GPT-3 make inexplicable errors that are not well-aligned with human ones , indicating that further work is needed to properly model nonliteral language . We present a Winograd-like benchmark task to test the ability of LMs to reason about figurative language , based on a large-scale collection of creative metaphors written by humans . We find a large gap between LM zero-shot and human performance on this dataset , but show that models can be fine-tuned to perform well on this particular task . We hope that this work will encourage further study of nonliteral reasoning in LMs , especially in few-shot settings . Given that metaphorical reasoning may play a role in problem-solving and linguistic creativity , the development of models , training methods , or datasets that enable metaphorical reasoning may improve models ' abilities to reason creatively and draw analogies between situations that may appear to be different on the surface . 
One avenue we hope to investigate is multimodal metaphors , as this dataset currently includes only text-based metaphors . Nonliteral expressions also remain understudied cross-linguistically , but further work on identifying and interpreting metaphors in other languages may also improve the abilities of multilingual models .", "challenge": "Figurative language has been under-studied, and existing work on the metaphorical understanding of language models does not take the richness of such expressions into account.", "approach": "They introduce a Winograd-style nonliteral language understanding task to test if language models can make correct inferences on human-generated metaphors.", "outcome": "Evaluation of state-of-the-art models with the proposed dataset shows that they underperform humans in a zero-shot setting but can improve with fine-tuning."} +{"id": "W02-1036", "document": "In this paper , we propose a method for learning a classifier which combines the outputs of more than one Japanese named entity extractor . The proposed combination method belongs to the family of stacked generalizers , which is in principle a technique of combining outputs of several classifiers at the first stage by learning a second stage classifier to combine those outputs at the first stage . Individual models to be combined are based on maximum entropy models , one of which always considers surrounding contexts of a fixed length , while the other considers those of variable lengths according to the number of constituent morphemes of named entities . As an algorithm for learning the second stage classifier , we employ a decision list learning method . Experimental evaluation shows that the proposed method achieves improvement over the best known results with Japanese named entity extractors based on maximum entropy models . In recent corpus-based NLP research , system combination techniques have been successfully applied to several tasks such as parts-of-speech tagging ( van Halteren et al . , 1998 ) , base noun phrase chunking ( Tjong Kim Sang , 2000 ) , and parsing ( Henderson and Brill , 1999 ; Henderson and Brill , 2000 ) . The aim of system combination is to combine portions of the individual systems ' outputs which are partial but can be regarded as highly accurate . The process of system combination can be decomposed into the following two sub-processes : 1 . Collect systems which behave as differently as possible : it would help a lot if at least the collected systems tend to make errors of different types , because a simple voting technique can identify correct outputs . Previously studied techniques for collecting such systems include : i ) using several existing real systems ( van Halteren et al . , 1998 ; Brill and Wu , 1998 ; Henderson and Brill , 1999 ; Tjong Kim Sang , 2000 ) , ii ) bagging / boosting techniques ( Henderson and Brill , 1999 ; Henderson and Brill , 2000 ) , and iii ) switching the data expression and obtaining several models ( Tjong Kim Sang , 2000 ) . 2 . Combine the outputs of the several systems : previously studied techniques include : i ) voting techniques ( van Halteren et al . , 1998 ; Tjong Kim Sang , 2000 ; Henderson and Brill , 1999 ; Henderson and Brill , 2000 ) , ii ) switching among several systems according to confidence values they provide ( Henderson and Brill , 1999 ) , iii ) stacking techniques ( Wolpert , 1992 ) which train a second stage classifier for combining outputs of classifiers at the first stage ( van Halteren et al . 
, 1998 ; Brill and Wu , 1998 ; Tjong Kim Sang , 2000 ) . In this paper , we propose a method for combining outputs of ( Japanese ) named entity chunkers , which belongs to the family of stacking techniques . In the sub-process 1 , we focus on models which differ in the lengths of preceding / subsequent contexts to be incorporated in the models . As the base model for supervised learning of Japanese named entity chunking , we employ a model based on the maximum entropy model ( Uchimoto et al . , 2000 ) , which performed the best in IREX ( Information Retrieval and Extraction Exercise ) Workshop ( IREX Committee , 1999 ) among those based on machine learning techniques . Uchimoto et al . ( 2000 ) reported that the optimal number of preceding / subsequent contexts to be incorporated in the model is two morphemes to both left and right from the current position . In this paper , we train several maximum entropy models which differ in the lengths of preceding / subsequent contexts , and then combine their outputs . As the sub-process 2 , we propose to apply a stacking technique which learns a classifier for combining outputs of several named entity chunkers . This second stage classifier learns rules for accepting / rejecting outputs of several individual named entity chunkers . The proposed method can be applied to the cases where the number of constituent systems is quite small ( e.g. , two ) . Actually , in the experimental evaluation , we show that the results of combining the best performing model of Uchimoto et al . ( 2000 ) with the one which performs poorly but extracts named entities quite different from those of the best performing model can help improve the performance of the best model . This paper proposed a method for learning a classifier to combine outputs of more than one Japanese named entity chunkers . Experimental evaluation showed that the proposed method achieved improvement in F-measure over the best known results with an ME model ( Uchimoto et al . , 2000 ) , when a complementary model extracted named entities quite differently from the best performing model .", "challenge": "System combination techniques have been shown successful in corpus-based research which aims to combine portions of individual systems' outputs that are partially but highly accurate.", "approach": "They propose a decision list-based classifier which learns to combine outputs from multiple maximum entropy-based Japanese named entity extraction models with different context lengths.", "outcome": "They show that combining the existing best performing model with worse-performing models improves the performance of the original best model."} +{"id": "P18-1181", "document": "In this paper , we propose a joint architecture that captures language , rhyme and meter for sonnet modelling . We assess the quality of generated poems using crowd and expert judgements . The stress and rhyme models perform very well , as generated poems are largely indistinguishable from human-written poems . Expert evaluation , however , reveals that a vanilla language model captures meter implicitly , and that machine-generated poems still underperform in terms of readability and emotion . Our research shows the importance expert evaluation for poetry generation , and that future research should look beyond rhyme / meter and focus on poetic language . With the recent surge of interest in deep learning , one question that is being asked across a number of fronts is : can deep learning techniques be harnessed for creative purposes ? 
Creative applications where such research exists include the composition of music ( Humphrey et al . , 2013 ; Sturm et al . , 2016 ; Choi et al . , 2016 ) , the design of sculptures ( Lehman et al . , 2016 ) , and automatic choreography ( Crnkovic-Friis and Crnkovic-Friis , 2016 ) . In this paper , we focus on a creative textual task : automatic poetry composition . A distinguishing feature of poetry is its aesthetic forms , e.g. rhyme and rhythm / meter ( noting that there are many notable divergences from this in the work of particular poets ( e.g. Walt Whitman ) and poetry types ( such as free verse or haiku ) ) . In this work , we treat the task of poem generation as a constrained language modelling task , such that lines of a given poem rhyme , and each line follows a canonical meter and has a fixed number of stresses . Shall I compare thee to a summer 's day ? Thou art more lovely and more temperate : Rough winds do shake the darling buds of May , And summer 's lease hath all too short a date : Specifically , we focus on sonnets and generate quatrains in iambic pentameter ( e.g. see Figure 1 ) , based on an unsupervised model of language , rhyme and meter trained on a novel corpus of sonnets . Our findings are as follows : \u2022 our proposed stress and rhyme models work very well , generating sonnet quatrains with stress and rhyme patterns that are indistinguishable from human-written poems and rated highly by an expert ; \u2022 a vanilla language model trained over our sonnet corpus , surprisingly , captures meter implicitly at human-level performance ; \u2022 while crowd workers rate the poems generated by our best model as nearly indistinguishable from published poems by humans , an expert annotator found the machine-generated poems to lack readability and emotion , and our best model to be only comparable to a vanilla language model on these dimensions ; \u2022 most work on poetry generation focuses on meter ( Greene et al . , 2010 ; Ghazvininejad et al . , 2016 ; Hopkins and Kiela , 2017 ) ; our results suggest that future research should look beyond meter and focus on improving readability . In doing so , we develop a new annotation framework for the evaluation of machine-generated poems , and release both a novel dataset of sonnets and the full source code associated with this research . We propose a joint model of language , meter and rhyme that captures language and form for modelling sonnets . We provide quantitative analyses for each component , and assess the quality of generated poems using judgements from crowdworkers and a literature expert . Our research reveals that a vanilla LSTM language model captures meter implicitly , and our proposed rhyme model performs exceptionally well . 
Machine-generated poems , however , still underperform in terms of readability and emotion .", "challenge": "The effectiveness of deep learning techniques for creative purposes such as automatic poetry composition, which has its own aesthetic forms, remains unknown.", "approach": "They propose a joint architecture that captures language, rhyme and meter for sonnet modelling to generate quatrains, coupled with a corpus to train on.", "outcome": "Evaluation with crowd and expert judgement reveals that while models perform indistinguishably from human-written poems on stress and rhyme, they lack readability and emotion."} +{"id": "D14-1002", "document": "This paper presents a deep semantic similarity model ( DSSM ) , a special type of deep neural networks designed for text analysis , for recommending target documents to be of interest to a user based on a source document that she is reading . We observe , identify , and detect naturally occurring signals of interestingness in click transitions on the Web between source and target documents , which we collect from commercial Web browser logs . The DSSM is trained on millions of Web transitions , and maps source-target document pairs to feature vectors in a latent space in such a way that the distance between source documents and their corresponding interesting targets in that space is minimized . The effectiveness of the DSSM is demonstrated using two interestingness tasks : automatic highlighting and contextual entity search . The results on large-scale , real-world datasets show that the semantics of documents are important for modeling interestingness and that the DSSM leads to significant quality improvement on both tasks , outperforming not only the classic document models that do not use semantics but also state-of-the-art topic models . Tasks of predicting what interests a user based on the document she is reading are fundamental to many online recommendation systems . A recent survey is due to Ricci et al . ( 2011 ) . In this paper , we exploit the use of a deep semantic model for two such interestingness tasks in which document semantics play a crucial role : automatic highlighting and contextual entity search . Automatic Highlighting . In this task we want a recommendation system to automatically discover the entities ( e.g. , a person , location , organization etc . ) that interest a user when reading a document and to highlight the corresponding text spans , referred to as keywords afterwards . We show in this study that document semantics are among the most important factors that influence what is perceived as interesting to the user . For example , we observe in Web browsing logs that when a user reads an article about a movie , she is more likely to browse to an article about an actor or character than to another movie or the director . Contextual entity search . After identifying the keywords that represent the entities of interest to the user , we also want the system to recommend new , interesting documents by searching the Web for supplementary information about these entities . The task is challenging because the same keywords often refer to different entities , and interesting supplementary information to the highlighted entity is highly sensitive to the semantic context . For example , \" Paul Simon \" can refer to many people , such as the singer and the senator . Consider an article about the music of Paul Simon and another about his life .
Related content about his upcoming concert tour is much more interesting in the first context , while an article about his family is more interesting in the second . At the heart of these two tasks is the notion of interestingness . In this paper , we model and make use of this notion of interestingness with a deep semantic similarity model ( DSSM ) . The model , extending from the deep neural networks shown recently to be highly effective for speech recognition ( Hinton et al . , 2012 ; Deng et al . , 2013 ) and computer vision ( Krizhevsky et al . , 2012 ; Markoff , 2014 ) , is semantic because it maps documents to feature vectors in a latent semantic space , also known as semantic representations . The model is deep because it employs a neural network with several hidden layers including a special convolutional-pooling structure to identify keywords and extract hidden semantic features at different levels of abstractions , layer by layer . The semantic representation is computed through a deep neural network after its training by backpropagation with respect to an objective tailored to the respective interestingness tasks . We obtain naturally occurring \" interest \" signals by observing Web browser transitions , from a source document to a target document , in Web usage logs of a commercial browser . Our training data is sampled from these transitions . The use of the DSSM to model interestingness is motivated by the recent success of applying related deep neural networks to computer vision ( Krizhevshy et al . 2012 ; Markoff , 2014 ) , speech recognition ( Hinton et al . 2012 ) , text processing ( Collobert et al . 2011 ) , and Web search ( Huang et al . 2013 ) . Among them , ( Huang et al . 2013 ) is most relevant to our work . They also use a deep neural network to map documents to feature vectors in a latent semantic space . However , their model is designed to represent the relevance between queries and documents , which differs from the notion of interestingness between documents studied in this paper . It is often the case that a user is interested in a document because it provides supplementary information about the entities or concepts she encounters when reading another document although the overall contents of the second documents is not highly relevant . For example , a user may be interested in knowing more about the history of University of Washington after reading the news about President Obama 's visit to Seattle . To better model interestingness , we extend the model of Huang et al . ( 2013 ) in two significant aspects . First , while Huang et al . treat a document as a bag of words for semantic mapping , the DSSM treats a document as a sequence of words and tries to discover prominent keywords . These keywords represent the entities or concepts that might interest users , via the convolutional and max-pooling layers which are related to the deep models used for computer vision ( Krizhevsky et al . , 2013 ) and speech recognition ( Deng et al . , 2013a ) but are not used in Huang et al . 's model . The DSSM then forms the high-level semantic representation of the whole document based on these keywords . Second , instead of directly computing the document relevance score using cosine similarity in the learned semantic space , as in Huang et al . ( 2013 ) , we feed the features derived from the semantic representations of documents to a ranker which is trained in a supervised manner . 
As a result , a document that is not highly relevant to another document a user is reading ( i.e. , the distance between their derived feature vectors is big ) may still have a high score of interestingness because the former provides useful information about an entity mentioned in the latter . Such information and entity are encoded , respectively , by ( some subsets of ) the semantic features in their corresponding documents . In Sections 4 and 5 , we empirically demonstrate that the aforementioned two extensions lead to significant quality improvements for the two interestingness tasks presented in this paper . Before giving a formal description of the DSSM in Section 3 , we formally define the interestingness function , and then introduce our data set of naturally occurring interest signals . Modeling interestingness is fundamental to many online recommendation systems . We obtain naturally occurring interest signals by observing Web browsing transitions where users click from one webpage to another . We propose to model this \" interestingness \" with a deep semantic similarity model ( DSSM ) , based on deep neural networks with special convolutional-pooling structure , mapping source-target document pairs to feature vectors in a latent semantic space . We train the DSSM using browsing transitions between documents . Finally , we demonstrate the effectiveness of our model on two interestingness tasks : automatic highlighting and contextual entity search . Our results on large-scale , real-world datasets show that the semantics of documents computed by the DSSM are important for modeling interestingness and that the new model leads to significant improvements on both tasks . DSSM is shown to outperform not only the classic document models that do not use ( latent ) semantics but also state-of-the-art topic models that do not have the deep and convolutional architecture characterizing the DSSM . One area of future work is to extend our method to model interestingness given an entire user session , which consists of a sequence of browsing events . We believe that the prior browsing and interaction history recorded in the session provides additional signals for predicting interestingness . To capture such signals , our model needs to be extended to adequately represent time series ( e.g. , causal relations and consequences of actions ) . One potentially effective model for such a purpose is based on the architecture of recurrent neural networks ( e.g. , Mikolov et al . 2010 ; Chen and Deng , 2014 ) , which can be incorporated into the deep semantic model proposed in this paper .", "challenge": "Recommending interesting documents to users is a fundamental feature of many systems; however, current systems directly model relevance rather than interestingness.", "approach": "They propose a CNN-based model that models documents as a sequence of words in latent space and train it on millions of Web transitions.", "outcome": "They show that their model significantly outperforms classic and recent models on two tasks that evaluate the interestingness of documents."} +{"id": "N18-1063", "document": "Current measures for evaluating text simplification systems focus on evaluating lexical text aspects , neglecting its structural aspects . In this paper we propose the first measure to address structural aspects of text simplification , called SAMSA .
It leverages recent advances in semantic parsing to assess simplification quality by decomposing the input based on its semantic structure and comparing it to the output . SAMSA provides a reference-less automatic evaluation procedure , avoiding the problems that reference-based methods face due to the vast space of valid simplifications for a given sentence . Our human evaluation experiments show both SAMSA 's substantial correlation with human judgments , as well as the deficiency of existing reference-based measures in evaluating structural simplification . 1 Text simplification ( TS ) addresses the translation of an input sentence into one or more simpler sentences . It is a useful preprocessing step for several NLP tasks , such as machine translation ( Chandrasekar et al . , 1996 ; Mishra et al . , 2014 ) and relation extraction ( Niklaus et al . , 2016 ) , and has also been shown useful in the development of reading aids , e.g. , for people with dyslexia ( Rello et al . , 2013 ) or non-native speakers ( Siddharthan , 2002 ) . The task has attracted much attention in the past decade ( Zhu et al . , 2010 ; Woodsend and Lapata , 2011 ; Wubben et al . , 2012 ; Siddharthan and Angrosh , 2014 ; Narayan and Gardent , 2014 ) , but has yet to converge on an evaluation protocol that yields comparable results across different methods and strongly correlates with human judgments . This is in part due to the difficulty to combine the effects of different simplification operations ( e.g. , deletion , splitting and substitution ) . Xu et al . ( 2016 ) has recently made considerable progress towards that goal , and proposed to tackle it both by using an improved reference-based measure , named SARI , and by increasing the number of references . However , their research focused on lexical , rather than structural simplification , which provides a complementary view of TS quality as this paper will show . This paper focuses on the evaluation of the structural aspects of the task . We introduce the semantic measure SAMSA ( Simplification Automatic evaluation Measure through Semantic Annotation ) , the first structure-aware measure for TS in general , and the first to use semantic structure in this context in particular . SAMSA stipulates that an optimal split of the input is one where each predicate-argument structure is assigned its own sentence , and measures to what extent this assertion holds for the input-output pair in question , by using semantic structure . SAMSA focuses on the core semantic components of the sentence , and is tolerant towards the deletion of other units . 2For example , SAMSA will assign a high score to the output split \" John got home . John gave Mary a call . \" for the input sentence \" John got home and gave Mary a call . \" , as it splits each of its predicate-argument structures to a different sentence . Splits that alter predicate-argument relations such as \" John got home and gave . Mary called . \" are penalized by SAMSA . SAMSA 's use of semantic structures for TS evaluation has several motivations . First , it provides means to measure the extent to which the meaning of the source is preserved in the output . Second , it provides means for measuring whether the input sentence was split to semantic units of the right granularity . Third , defining a semantic measure that does not require references avoids the difficulties incurred by their non-uniqueness , and the difficulty in collecting high quality references , as reported by Xu et al . 
( 2015 ) and by Narayan and Gardent ( 2014 ) with respect to the Parallel Wikipedia Corpus ( PWKP ; Zhu et al . , 2010 ) . SAMSA is further motivated by its use of semantic annotation only on the source side , which allows to evaluate multiple systems using same source-side annotation , and avoids the need to parse system outputs , which can be garbled . In this paper we use the UCCA scheme for defining semantic structure ( Abend and Rappoport , 2013 ) . UCCA has been shown to be preserved remarkably well across translations ( Sulem et al . , 2015 ) and has also been successfully used for machine translation evaluation ( Birch et al . , 2016 ) ( Section 2 ) . We note , however , that SAMSA can be adapted to work with any semantic scheme that captures predicate-argument relations , such as AMR ( Banarescu et al . , 2013 ) or Discourse Representation Structures ( Kamp , 1981 ) , as used by Narayan and Gardent ( 2014 ) . We experiment with SAMSA both where semantic annotation is carried out manually , and where it is carried out by a parser . See Section 4 . We conduct human rating experiments and compare the resulting system rankings with those predicted by SAMSA . We find that SAMSA 's rankings obtain high correlations with human rankings , and compare favorably to existing referencebased measures for TS . Moreover , our results show that existing measures , which mainly target lexical simplification , are ill-suited to predict human judgments where structural simplification is involved . Finally , we apply SAMSA to the dataset of the QATS shared task on simplification evaluation ( \u0160tajner et al . , 2016 ) . We find that SAMSA obtains comparative correlation with human judgments on the task , despite operating in a more restricted setting , as it does not use human ratings as training data and focuses only on structural aspects of simplicity . Section 2 presents previous work . Section 3 discusses UCCA . Section 4 presents SAMSA . Section 5 details the collection of human judgments . Our experimental setup for comparing our human and automatic rankings is given in Section 6 , and results are given in Section 7 , showing superior results for SAMSA . A discussion on the results is presented in Section 8 . Section 9 presents experiments with SAMSA on the QATS evaluation benchmark . We presented the first structure-aware metric for text simplification , SAMSA , and the first evaluation experiments that directly target the structural simplification component , separately from the lexical component . We argue that the structural and lexical dimensions of simplification are loosely related , and that TS evaluation protocols should assess both . We empirically demonstrate that strong measures that assess lexical simplification quality ( notably SARI ) , fail to correlate with human judgments when structural simplification is performed by the evaluated systems . 
Our experiments show that SAMSA correlates well with human judgments in such settings , which demonstrates its usefulness for evaluating and tuning statistical simplification systems , and shows that structural evaluation provides a complementary perspective on simplification quality .", "challenge": "Current evaluation metrics for text simplification task only focuses on lexical but not on structural aspects and do not strongly correlate with human evaluation.", "approach": "They propose a reference-free metric that uses semantic parsers to decompose inputs and outputs before comparing to assess their semantic structures.", "outcome": "Human evaluation reveals that the proposed metric correlates better with human judgements and also complements existing reference-based metrics."} +{"id": "P14-1130", "document": "Accurate scoring of syntactic structures such as head-modifier arcs in dependency parsing typically requires rich , highdimensional feature representations . A small subset of such features is often selected manually . This is problematic when features lack clear linguistic meaning as in embeddings or when the information is blended across features . In this paper , we use tensors to map high-dimensional feature vectors into low dimensional representations . We explicitly maintain the parameters as a low-rank tensor to obtain low dimensional representations of words in their syntactic roles , and to leverage modularity in the tensor for easy training with online algorithms . Our parser consistently outperforms the Turbo and MST parsers across 14 different languages . We also obtain the best published UAS results on 5 languages . 1 Finding an expressive representation of input sentences is crucial for accurate parsing . Syntactic relations manifest themselves in a broad range of surface indicators , ranging from morphological to lexical , including positional and part-of-speech ( POS ) tagging features . Traditionally , parsing research has focused on modeling the direct connection between the features and the predicted syntactic relations such as head-modifier ( arc ) relations in dependency parsing . Even in the case of firstorder parsers , this results in a high-dimensional vector representation of each arc . Discrete features , and their cross products , can be further complemented with auxiliary information about words participating in an arc , such as continuous vector representations of words . The exploding dimensionality of rich feature vectors must then be balanced with the difficulty of effectively learning the associated parameters from limited training data . A predominant way to counter the high dimensionality of features is to manually design or select a meaningful set of feature templates , which are used to generate different types of features ( Mc-Donald et al . , 2005a ; Koo and Collins , 2010 ; Martins et al . , 2013 ) . Direct manual selection may be problematic for two reasons . First , features may lack clear linguistic interpretation as in distributional features or continuous vector embeddings of words . Second , designing a small subset of templates ( and features ) is challenging when the relevant linguistic information is distributed across the features . For instance , morphological properties are closely tied to part-of-speech tags , which in turn relate to positional features . These features are not redundant . Therefore , we may suffer a performance loss if we select only a small subset of the features . 
On the other hand , by including all the rich features , we face over-fitting problems . We depart from this view and leverage highdimensional feature vectors by mapping them into low dimensional representations . We begin by representing high-dimensional feature vectors as multi-way cross-products of smaller feature vectors that represent words and their syntactic relations ( arcs ) . The associated parameters are viewed as a tensor ( multi-way array ) of low rank , and optimized for parsing performance . By explicitly representing the tensor in a low-rank form , we have direct control over the effective dimensionality of the set of parameters . We obtain role-dependent low-dimensional representations for words ( head , modifier ) that are specifically tailored for parsing accuracy , and use standard online algorithms for optimizing the low-rank tensor components . The overall approach has clear linguistic and computational advantages : \u2022 Our low dimensional embeddings are tailored to the syntactic context of words ( head , modifier ) . This low dimensional syntactic abstraction can be thought of as a proxy to manually constructed POS tags . \u2022 By automatically selecting a small number of dimensions useful for parsing , we can leverage a wide array of ( correlated ) features . Unlike parsers such as MST , we can easily benefit from auxiliary information ( e.g. , word vectors ) appended as features . We implement the low-rank factorization model in the context of first-and third-order dependency parsing . The model was evaluated on 14 languages , using dependency data from CoNLL 2008 and CoNLL 2006 . We compare our results against the MST ( McDonald et al . , 2005a ) and Turbo ( Martins et al . , 2013 ) parsers . The low-rank parser achieves average performance of 89.08 % across 14 languages , compared to 88.73 % for the Turbo parser , and 87.19 % for MST . The power of the low-rank model becomes evident in the absence of any part-of-speech tags . For instance , on the English dataset , the low-rank model trained without POS tags achieves 90.49 % on first-order parsing , while the baseline gets 86.70 % if trained under the same conditions , and 90.58 % if trained with 12 core POS tags . Finally , we demonstrate that the model can successfully leverage word vector representations , in contrast to the baselines . Accurate scoring of syntactic structures such as head-modifier arcs in dependency parsing typically requires rich , high-dimensional feature representations . We introduce a low-rank factorization method that enables to map high dimensional feature vectors into low dimensional representations . Our method maintains the parameters as a low-rank tensor to obtain low dimensional representations of words in their syntactic roles , and to leverage modularity in the tensor for easy training with online algorithms . We implement the approach on first-order to third-order dependency parsing . Our parser outperforms the Turbo and MST parsers across 14 languages . Future work involves extending the tensor component to capture higher-order structures . In particular , we would consider second-order structures such as grandparent-head-modifier by increasing the dimensionality of the tensor . This tensor will accordingly be a four or five-way array . 
The online update algorithm remains applicable since each dimension is optimized in an alternating fashion .", "challenge": "To obtain a rich representation for parsing, manually designed templates have been used which can cause a lack of clear linguistic interpretation or design difficulties.", "approach": "They propose to use tensors to map high-dimensional feature vectors into low-dimensional representations which can also represent syntactic relations and use them for online training.", "outcome": "The proposed approach outperforms the Turbo and MST parsers on 14 languages CoNLL 2008 and 2006, and it performs especially well without part-of-speech tags."} +{"id": "D14-1024", "document": "This article describes a linguistically informed method for integrating phrasal verbs into statistical machine translation ( SMT ) systems . In a case study involving English to Bulgarian SMT , we show that our method does not only improve translation quality but also outperforms similar methods previously applied to the same task . We attribute this to the fact that , in contrast to previous work on the subject , we employ detailed linguistic information . We found out that features which describe phrasal verbs as idiomatic or compositional contribute most to the better translation quality achieved by our method . Phrasal verbs are a type of multiword expressions ( MWEs ) and as such , their meaning is not derivable , or is only partially derivable , from the semantics of their lexemes . This , together with the high frequency of MWEs in every day communication ( see Jackendoff ( 1997 ) ) , calls for a special treatment of such expressions in natural language processing ( NLP ) applications . Here , we concentrate on statistical machine translation ( SMT ) where the word-to-word translation of MWEs often results in wrong translations ( Piao et al . , 2005 ) . Previous work has shown that the application of dedicated methods to identify MWEs and then integrate them in some way into the SMT process often improves translation quality . Generally , automatically extracted lexicons of MWEs are employed in the identification step . Further , various integration strategies have been proposed . The so called static strategy suggests training the SMT system on corpora in which each MWE is treated as a single unit , e.g. call off . This improves SMT indirectly by improving the alignment between source and target sentences in the training data . Various versions of this strategy are applied in Lambert and Banchs ( 2005 ) , Carpuat and Diab ( 2010 ) , and Simova and Kordoni ( 2013 ) . In all cases there is some improvement in translation quality , caused mainly by the better treatment of separable PVs , such as in turn the light on . Another strategy , which is referred to as dynamic , is to modify directly the SMT system . Ren et al . ( 2009 ) , for example , treat bilingual MWEs pairs as parallel sentences which are then added to training data and subsequently aligned with GIZA++ ( Och and Ney , 2003 ) . Other approaches perform feature mining and modify directly the automatically extracted translation table . Ren et al . ( 2009 ) and Simova and Kordoni ( 2013 ) employ Moses1 to build and train phrase-based SMT systems and then , in addition to the standard phrasal translational probabilities , they add a binary feature which indicates whether an MWE is present in a given source phrase or not . 
Carpuat and Diab ( 2010 ) employ the same approach but the additional feature indicates the number of MWEs in each phrase . All studies report improvements over a baseline system with no MWE knowledge but these improvements are comparable to those achieved by static methods . In this article , we further improve the dynamic strategy by adding features which , unlike all previous work , also encode some of the linguistic properties of MWEs . Since it is their peculiar linguistic nature that makes those expressions problematic for SMT , it is our thesis that providing more linguistic information to the translation process will improve it . In particular , we concentrate on a specific type of MWEs , namely phrasal verbs ( PVs ) . We add 4 binary features to the translation table which indicate not only the presence of a PV but also its transitivity , separability , and idiomaticity . We found that PVs are very suitable for this study since we can easily extract the necessary informa-tion from various language resources . To prove our claim , we perform a case study with an English to Bulgarian SMT system . Bulgarian lacks PVs in the same form they appear in English . It is often the case that an English PV is translated to a single Bulgarian verb . Such manyto-one mappings cause the so called translation asymmetries which make the translation of PVs very problematic . We perform automated and manual evaluations with a number of feature combinations which show that the addition of all 4 features proposed above improves translation quality significantly . Moreover , our method outperforms static and dynamic methods previously applied to the same test data . A notable increase in performance is observed for separable PVs where the verb and the particle(s ) were not adjacent in the input English sentence as well as for idiomatic PVs . This clearly demonstrates the importance of linguistic information for the proper treatment of PVs in SMT . We would like to point out that we view the work presented here as a preliminary study towards a more general linguistically informed method for handling similar types of translation asymmetries . The experiments with a single phenomenon , namely PVs , serve as a case study the purpose of which is to demonstrate the validity of our approach and the crucial role of properly integrated linguistic information into SMT . Our work , however , can be immediately extended to other phenomena , such as collocations and noun compounds . The remainder of the paper is organised as follows . Section 2 describes the asymmetries caused by PVs in English to Bulgarian translation . Section 3 provides details about the resources involved in the experiments . Section 4 describes our method and the experimental setup . Section 5 presents the results and discusses the improvements in translation quality achieved by the method . Sections 6 concludes the paper . In this article , we showed that the addition of linguistically informative features to a phrase-based SMT model improves the translation quality of a particular type of MWEs , namely phrasal verbs . In a case study involving SMT from English to Bulgarian , we showed that adding features which encode not only the presence of a PV in a given phrase but also its transitiveness , separability , and idiomaticity led to better translation quality compared to previous work which employs both static and dynamic strategies . 
In future research , we will extend our method to other language pairs which exhibit the same type of translation asymmetries when it comes to PVs . Such language pairs include , among others , English-Spanish and English-Portuguese . Further , we will apply our linguistically informed method to other phenomena which cause similar issues for SMT . Immediate candidate phenomena include other types of MWEs , collocations , and noun compounds . When it comes to MWEs , we will pay special attention to the compositionality aspect since it seems to have contributed most to the good performance achieve by our method in the study presented here .", "challenge": "Although multiword expressions frequently appear in daily communication, statistical machine translation models often produce wrong translations, calling for special treatment of such expressions.", "approach": "They propose a linguistically informed method for integrating phrasal verbs by adding features such as the presence of phrasal verbs, transitivity, separability and idiomaticity.", "outcome": "The proposed method outperforms similar models on an English to Bulgarian translation task evaluated by automatic and manual metrics, showing the utility of the introduced features."} +{"id": "2021.naacl-main.410", "document": "Natural-language prompts have recently been used to coax pretrained language models into performing other AI tasks , using a fill-in-the-blank paradigm ( Petroni et al . , 2019 ) or a few-shot extrapolation paradigm ( Brown et al . , 2020 ) . For example , language models retain factual knowledge from their training corpora that can be extracted by asking them to \" fill in the blank \" in a sentential prompt . However , where does this prompt come from ? We explore the idea of learning prompts by gradient descent-either fine-tuning prompts taken from previous work , or starting from random initialization . Our prompts consist of \" soft words , \" i.e. , continuous vectors that are not necessarily word type embeddings from the language model . Furthermore , for each task , we optimize a mixture of prompts , learning which prompts are most effective and how to ensemble them . Across multiple English LMs and tasks , our approach hugely outperforms previous methods , showing that the implicit factual knowledge in language models was previously underestimated . Moreover , this knowledge is cheap to elicit : random initialization is nearly as good as informed initialization . x performed until his death in y . Pretrained language models , such as ELMo ( Peters et al . , 2018 ) , BERT ( Devlin et al . , 2019 ) , and BART ( Lewis et al . , 2020a ) , have proved to provide useful representations for other NLP tasks . Recently , Petroni et al . ( 2019 ) and Jiang et al . ( 2020 ) demonstrated that language models ( LMs ) also contain factual and commonsense knowledge that can be elicited with a prompt . For example , to query the date-of- , \" where we have filled the first blank with \" Mozart , \" and ask a cloze language model to fill in the second blank . The prompts used by Petroni et al . ( 2019 ) are manually created , while Jiang et al . ( 2020 ) use mining and paraphrasing based methods to automatically augment the prompt sets . Finding out what young children know is difficult because they can be very sensitive to the form of the question ( Donaldson , 1978 ) . Opinion polling is also sensitive to question design ( Broughton , 1995 ) .
We observe that when we are querying an LM rather than a human , we have the opportunity to tune prompts using gradient descent-the workhorse of modern NLP-so that they better elicit the desired type of knowledge . A neural LM sees the prompt as a sequence of continuous word vectors ( Baroni et al . , 2014 ) . We tune in this continuous space , relaxing the constraint that the vectors be the embeddings of actual English words . Allowing \" soft prompts \" consisting of \" soft words \" is not only convenient for optimization , but is also more expressive . Soft prompts can emphasize particular words ( by lengthening their vectors ) or particular dimensions of those words . They can also adjust words that are misleading , ambiguous , or overly specific . Consider the following prompt for the relation date-of-death : This prompt may work for the male singer Cab Calloway , but if we want it to also work for the female painter Mary Cassatt , it might help to soften \" performed \" and \" his \" so that they do not insist on the wrong occupation and gender , and perhaps to soften \" until \" into a weaker connective ( as Cassatt was in fact too blind to paint in her final years ) . Another way to bridge between these cases is to have one prompt using \" performed \" and another using \" painted . \" In general , there may be many varied lexical patterns that signal a particular relation , and having more patterns will get better coverage ( Hearst , 1992 ; Riloff and Jones , 1999 ) . We therefore propose to learn a mixture of soft prompts . We test the idea on several cloze language models , training prompts to complete factual and com-mon sense relations from 3 datasets . Comparing on held-out examples , our method dramatically outperforms previous work , even when initialized randomly . So when regarded as approximate knowledge bases , language models know more than we realized . We just had to find the right ways to ask . Well-crafted natural language prompts are a powerful way to extract information from pretrained language models . In the case of cloze prompts used to query BERT and BART models for single-word answers , we have demonstrated startlingly large and consistent improvements from rapidly learning prompts that work-even though the resulting \" soft prompts \" are no longer natural language . Our code and data are available at https:// github.com / hiaoxui / soft-prompts . How about few-shot prediction with pretrained generative LMs ? Here , Lewis et al . ( 2020b ) show how to assemble a natural language prompt for input x from relevant input-output pairs ( x i , y i ) selected by a trained retrieval model . 
Allowing fine-tuned soft string pairs is an intriguing future possibility for improving such methods without needing to fine-tune the entire language model .", "challenge": "When prompting language models as a knowledge base, existing methods use actual sentences as queries while models may be sensitive to the form of questions.", "approach": "They propose to treat prompts as sequences of vectors instead of words to achieve more flexibility and optimize them by gradient descent or fine-tuning.", "outcome": "The proposed approach outperforms previous methods across multiple English models and factual and commonsense tasks showing implicit factual knowledge has been underestimated."} +{"id": "N19-1217", "document": "A pun is a form of wordplay for an intended humorous or rhetorical effect , where a word suggests two or more meanings by exploiting polysemy ( homographic pun ) or phonological similarity to another word ( heterographic pun ) . This paper presents an approach that addresses pun detection and pun location jointly from a sequence labeling perspective . We employ a new tagging scheme such that the model is capable of performing such a joint task , where useful structural information can be properly captured . We show that our proposed model is effective in handling both homographic and heterographic puns . Empirical results on the benchmark datasets demonstrate that our approach can achieve new state-ofthe-art results . There exists a class of language construction known as pun in natural language texts and utterances , where a certain word or other lexical items are used to exploit two or more separate meanings . It has been shown that understanding of puns is an important research question with various real-world applications , such as human-computer interaction ( Morkes et al . , 1999 ; Hempelmann , 2008 ) and machine translation ( Schr\u00f6ter , 2005 ) . Recently , many researchers show their interests in studying puns , like detecting pun sentences ( Vadehra , 2017 ) , locating puns in the text ( Cai et al . , 2018 ) , interpreting pun sentences ( Sevgili et al . , 2017 ) and generating sentences containing puns ( Ritchie , 2005 ; Hong and Ong , 2009 ; Yu et al . , 2018 ) . A pun is a wordplay in which a certain word suggests two or more meanings by exploiting polysemy , homonymy , or phonological similarity to another sign , for an intended humorous or rhetorical effect . Puns can be generally categorized into two groups , namely hetero-graphic puns ( where the pun and its latent target are phonologically similar ) and homographic puns ( where the two meanings of the pun reflect its two distinct senses ) ( Miller et al . , 2017 ) . Consider the following two examples : ( 1 ) When the church bought gas for their annual barbecue , proceeds went from the sacred to the propane . ( 2 ) Some diets cause a gut reaction . The first punning joke exploits the sound similarity between the word \" propane \" and the latent target \" profane \" , which can be categorized into the group of heterographic puns . Another categorization of English puns is homographic pun , exemplified by the second instance leveraging distinct senses of the word \" gut \" . Pun detection is the task of detecting whether there is a pun residing in the given text . The goal of pun location is to find the exact word appearing in the text that implies more than one meanings . Most previous work addresses such two tasks separately and develop separate systems ( Pramanick and Das , 2017 ; Sevgili et al . , 2017 ) . 
Typically , a system for pun detection is built to make a binary prediction on whether a sentence contains a pun or not , where all instances ( with or without puns ) are taken into account during training . For the task of pun location , a separate system is used to make a single prediction as to which word in the given sentence in the text that trigger more than one semantic interpretations of the text , where the training data involves only sentences that contain a pun . Therefore , if one is interested in solving both problems at the same time , a pipeline approach that performs pun detection followed by pun location can be used . Compared to the pipeline methods , joint learning has been shown effective ( Katiyar and Cardie , 2016 ; Peng et al . , 2018 ) since it is able to re-duce error propagation and allows information exchange between tasks which is potentially beneficial to all the tasks . In this work , we demonstrate that the detection and location of puns can be jointly addressed by a single model . The pun detection and location tasks can be combined as a sequence labeling problem , which allows us to jointly detect and locate a pun in a sentence by assigning each word a tag . Since each context contains a maximum of one pun ( Miller et al . , 2017 ) , we design a novel tagging scheme to capture this structural constraint . Statistics on the corpora also show that a pun tends to appear in the second half of a context . To capture such a structural property , we also incorporate word position knowledge into our structured prediction model . Experiments on the benchmark datasets show that detection and location tasks can reinforce each other , leading to new state-of-the-art performance on these two tasks . To the best of our knowledge , this is the first work that performs joint detection and location of English puns by using a sequence labeling approach.1 2 Approach In this paper , we propose to perform pun detection and location tasks in a joint manner from a sequence labeling perspective . We observe that each text in our corpora contains a maximum of one pun . Hence , we design a novel tagging scheme to incorporate such a constraint . Such a scheme guarantees that there is a maximum of one word that will be tagged as a pun during the testing phase . We also found the interesting structural property such as the fact that most puns tend to appear at the second half of the sentences can be helpful for such a task , but was not explored in previous works . Furthermore , unlike many previous approaches , our approach , though simple , is generally applicable to both heterographic and homographic puns . Empirical results on the benchmark datasets prove the effectiveness of the proposed approach that the two tasks of pun detection and location can be addressed by a single model from a sequence labeling perspective . Future research includes the investigations on how to make use of richer semantic and linguistic information for detection and location of puns . 
Research on puns for other languages such as Chinese is still under-explored , which could also be an interesting direction for our future studies .", "challenge": "Existing pun detection works take a pipeline approach composed of detection and location, while joint learning approaches perform well on other NLP tasks.", "approach": "They propose to address pun detection and location jointly as sequence labelling, using a new tagging scheme that exploits structural information.", "outcome": "The proposed approach is effective for both homographic and heterographic puns, achieves state-of-the-art performance, and shows that most puns tend to appear in the second half of sentences."} \ No newline at end of file
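Each record above is a single JSON object per line, with an "id", a pre-tokenized "document", and three one-sentence summary fields: "challenge", "approach", and "outcome". Below is a minimal sketch of how such records could be loaded and inspected; the file path is a placeholder and the whitespace token count is only an illustrative statistic, neither of which is prescribed by the data itself.

```python
import json

# Placeholder path; point this at wherever the JSONL split is stored.
PATH = "val.jsonl"

with open(PATH, encoding="utf-8") as f:
    # JSON Lines: one JSON object per non-empty line.
    records = [json.loads(line) for line in f if line.strip()]

for rec in records[:3]:
    # The document text is pre-tokenized, so splitting on whitespace
    # gives a rough token count.
    n_tokens = len(rec["document"].split())
    print(f'{rec["id"]} ({n_tokens} tokens)')
    print("  challenge:", rec["challenge"])
    print("  approach: ", rec["approach"])
    print("  outcome:  ", rec["outcome"])
```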